Globally, financial institutions have accelerated digital business transformation programs in pursuit of long-term growth and profitability. Embedded AI is at the heart of this transformation, and its applications are gaining momentum across a wide range of activities, from back-end operations to customer-facing payment systems. According to McKinsey, AI can help banks unlock US$1 trillion in value annually, with gains flowing from AI-fueled customer experiences, operations, and products. N. Sathish, Dy. CPO, FSS Retail Payments, talks about the role of AI in payments, how banks can deploy the technology, and how FSS uses AI to improve customer experience.
Elaborate on the role of AI in payments.
As a services enabler, AI impacts payment service providers in three ways: one, it can boost revenues through new customer engagement models and contextual commerce experiences; two, it can create new operating models that improve the bottom line through efficiencies generated by higher process automation; and three, it can uncover new opportunities through an improved ability to process and generate insights from vast reams of payments data.
The extent to which payment service providers effectively leverage the potential of AI depends on their digital maturity and on the creativity and skills of their organizations. FSS is working on numerous AI-led products and collaborating with partners and financial institutions to develop exciting new use cases that power a future of intelligent payments.
How can banks deploy AI in their digital transformation journey?
AI is a journey and not a silver bullet. Financial institutions looking to master AI-led initiatives need to tightly integrate them with the broader digital transformation journey to create meaningful value and scale. According to BCG, companies that connect AI and digital initiatives are 12 percentage points more likely to see revenue impact, and 20 percentage points more likely to have seen either cost or revenue impact.
The ability to generate strategic business value with AI depends on having access to data. A strong data foundation is at the core of an AI-led enterprise. Companies like Google, Amazon, and Apple dominate their industries because they were the first to build data sets, which gave them a steep competitive edge. For FIs, the equation is similar. The sooner an FI builds a strong data foundation, the sooner it can execute advanced AI and ML models. With each iteration, it will put more distance between itself and the competition.
When beginning to adopt AI, banks need to ensure data liquidity: the ability to access, ingest, and consolidate data into a common system. Machine learning algorithms can then be applied to mobilize and interpret that data into previously untapped insights, performing tasks such as risk scoring or sentiment analysis, while continuously refining those insights through feedback loops.
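The score-and-feedback loop described above can be sketched in a few lines. This is a minimal illustration, not a production system: the feature names, weights, and learning rate are invented, and a real risk engine would use far richer features and models. It shows one step of online logistic regression, where each confirmed fraud outcome feeds back to sharpen subsequent scores.

```python
import math

# Hypothetical transaction features; names and values are illustrative only.
FEATURES = ["amount_zscore", "new_device", "foreign_ip"]
weights = {f: 0.0 for f in FEATURES}
bias = 0.0
LEARNING_RATE = 0.1  # an assumed tuning choice

def risk_score(txn):
    """Logistic risk score in [0, 1]; higher means riskier."""
    z = bias + sum(weights[f] * txn[f] for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-z))

def feedback(txn, was_fraud):
    """Feedback loop: nudge the model toward the confirmed outcome
    (one step of online logistic regression)."""
    global bias
    error = (1.0 if was_fraud else 0.0) - risk_score(txn)
    bias += LEARNING_RATE * error
    for f in FEATURES:
        weights[f] += LEARNING_RATE * error * txn[f]

# A transaction later confirmed as fraud: each feedback pass raises its score.
fraud_txn = {"amount_zscore": 3.0, "new_device": 1.0, "foreign_ip": 1.0}
before = risk_score(fraud_txn)
for _ in range(20):
    feedback(fraud_txn, was_fraud=True)
after = risk_score(fraud_txn)
```

The point of the sketch is the loop, not the model: confirmed outcomes flow back into the scorer, so the system's view of risk improves with every iteration.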
Simultaneously, greater application modernization and agility are needed to unlock the value of AI. Banks need to update current monolithic and inflexible application infrastructures to scalable, cloud-native app environments using modern technology stacks, ensuring the target state is scalable and flexible enough to support evolving business needs.
It is not enough to invest in cutting-edge technologies and algorithms alone. In conjunction, FIs need to rewire operating models, processes, and culture to extract value, and invest in human capabilities to make it stick.
The underpinning value of artificial intelligence comes from breaking down silos and fostering cross-functional cultures. In most organizations, the skill sets and success profiles of the workforce (and the talent pools from which they will come) will be materially different in the next decade than they are today. According to BCG, the pioneers of AI at scale, the companies that have scaled AI across the business and achieved meaningful value from their investments, typically dedicate 10% of their AI investment to algorithms, 20% to technologies, and 70% to embedding AI into business processes and new ways of working.
How is FSS using AI in their offerings and how is it helping their customers?
A series of interrelated technologies around machine learning and natural language processing underpins all of AI. To stay competitive in the AI-powered digital era, FSS is leveraging these fundamental technologies to help financial institutions shape individualized, invisible, and intelligent payment experiences. Specific use cases include:
- Customers expect an always-on, always-me banking experience. FSS's omni-channel, open banking solution delivers adaptive customer experiences that blend banking capabilities with customer transactional, location, and social data to anticipate latent needs and offer highly tailored services at the right time, through the right channel.
- Conversational banking and UPI payments to deliver convenient, engaging, and interactive transacting experiences to customers. Customers can conduct a range of financial and non-financial transactions using natural language capabilities.
- Adaptive risk-based authentication leveraging hundreds of behavioral and transaction data points to proactively learn normative transacting patterns and flag anomalous transactional behavior.
- AI-led ATM Monitoring – Predictive ATM maintenance and cash flow management to optimize costs and boost performance of ATM networks.
- AI-powered payments decisioning platform synthesizes data from discrete payment systems to provide banks a strong data foundation to optimize their payments strategy.
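The adaptive risk-based authentication use case above rests on behavioural baselining: learn what is normal for a given customer, then flag sharp departures for step-up authentication. The sketch below is an assumption-laden simplification using a single feature (transaction amount) and a z-score threshold; real systems combine hundreds of behavioural and transaction signals.

```python
import statistics

class BehaviouralProfile:
    """Per-customer baseline of transaction amounts (illustrative only)."""

    def __init__(self, threshold=3.0):
        self.amounts = []
        self.threshold = threshold  # z-score cut-off, an assumed tuning choice

    def observe(self, amount):
        """Record a legitimate transaction to refine the baseline."""
        self.amounts.append(amount)

    def is_anomalous(self, amount):
        """True if the amount deviates sharply from this customer's history."""
        if len(self.amounts) < 5:
            return False  # too little history to judge
        mean = statistics.mean(self.amounts)
        stdev = statistics.stdev(self.amounts) or 1e-9
        return abs(amount - mean) / stdev > self.threshold

# Build a baseline from a customer's typical transactions.
profile = BehaviouralProfile()
for amt in [42.0, 38.5, 45.0, 40.0, 41.5, 39.0]:
    profile.observe(amt)

in_pattern = profile.is_anomalous(43.0)    # within the customer's normal range
out_of_pattern = profile.is_anomalous(900.0)  # sharp departure: step-up auth
```

In practice the decision would trigger additional authentication rather than a hard decline, keeping friction low for in-pattern transactions.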
Leveraging AI comes with security and privacy concerns. How can banks mitigate these risks?
The deployment of AI within financial services raises many questions around data protection, security and most importantly, the ethical use of insights gained from personal data.
Digitisation and AI both harness big data. To borrow the words of former United States Secretary of Defense Donald Rumsfeld, people's data can be categorised into three main categories:
- “Known knowns” are data that we know exist and whose contents we know. For example, someone who opens a bank account provides their name, address, telephone number, gender, date of birth, and so on. These are hard data that we can check and validate and for which evidence can be provided to prove their authenticity.
- “Known unknowns” are data that arise from people's activities. They include health data recorded on digital health apps as well as data recorded during online shopping. We know that such data exist, but we do not know how they are packaged, anonymised, and used – or how they may be sold to data brokers.
- “Unknown unknowns” are data that are created without consumer knowledge. People have few – or no – opportunities to validate such data and whether they accurately describe them and their behaviour. For example, companies may have created a digital profile of each customer based on online behaviour such as YouTube searches and Netflix accounts.
AI comes into play in the second and third category. The challenge lies in AI’s ‘black box’ problem and the inability to see the inside of an algorithm and therefore understand how it arrives at a decision. People do not know what profiling buckets have been created to represent their digital attributes. For instance, AI systems have offered lower credit card limits to women than men despite similar financial profiles.
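One practical way to start opening the black box is to make each decision explainable in terms of feature contributions. The sketch below assumes a simple linear scoring model with invented weights and applicant attributes; for complex models, analogous per-feature attributions (e.g. SHAP values) serve the same purpose. Ranking contributions lets a reviewer see which attributes drove a decision and check for proxies that could encode bias, such as the gender disparity in credit limits mentioned above.

```python
# Invented weights for a toy credit-scoring model (illustration only).
WEIGHTS = {"income": 0.4, "utilisation": -0.6, "history_years": 0.3}

def score_with_explanation(applicant):
    """Return the score plus each feature's contribution (weight x value)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

# Hypothetical applicant with normalised attribute values.
applicant = {"income": 1.2, "utilisation": 0.9, "history_years": 2.0}
total, why = score_with_explanation(applicant)

# Sort by absolute contribution to surface the decision's main drivers.
drivers = sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

The decision is no longer a bare number: every output comes with a ranked account of what pushed it up or down, which is the kind of traceability the guidelines below call for.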
Financial institutions need to manage tradeoffs between ethics and privacy to leverage AI for good. In April 2019, the EU High-Level Expert Group on AI presented Ethics Guidelines for Trustworthy Artificial Intelligence:
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit, or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills, and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
Regulators as well as enterprises need to take steps to open the ‘black box’, establish regulations and policies to ensure transparency, and evaluate whether the data is fair and unbiased. Whilst these guidelines are a starting point, there are no straightforward solutions, and this is an area that will have to be constantly revisited by the AI ethics boards of FIs as well as by regulators.