AI for Fraud Prevention panel discussion
We have been covering some standout sessions from the India AI Impact Summit 2026, and this one brought the trust question into sharp focus for BFSI: as digital onboarding, instant payments and always-on credit scale up, fraud prevention cannot rely on manual oversight or fragmented, rule-based controls. The panel argued that AI must now be treated as core trust infrastructure—driving real-time risk intelligence and resilient controls without increasing friction or exclusion for first-time users and MSMEs.
AI is no longer an “add-on” in banking. It is increasingly the layer that decides whether a system is safe enough to scale. As BFSI expands through digital onboarding, instant payments and embedded credit, the panel on AI for fraud prevention and financial inclusion made a blunt point: trust can no longer depend on manual oversight or static rules. Fraud is connected and adaptive, while defences remain fragmented. The response, the discussion suggested, is to reframe AI as a trust infrastructure—continuous, real-time, and auditable.
Following the early summit sessions that framed India’s AI moment as a “scale test” defined by trust and governance, this BFSI conversation carried that logic into the financial system—where the costs of false positives, bad credit decisions and fraud losses show up immediately, and where “stronger controls” can easily become “higher friction” that excludes the very people digital finance is meant to include.
The session featured Bhuvan Lodha (Mahindra & Mahindra Limited), Manish Agarwal (Kotak811, Kotak Mahindra Bank), Neeraj Aggarwal (BCG), Saurabh Mittal (DBS India), Srijay Ghosh (Temasek), Suresh Sethi (Protean eGov Technologies), and Abhishek Singh (Ministry of Electronics & IT, Government of India).
Why trust can’t be rule-based anymore
The panel’s baseline assumption was simple: BFSI is moving from periodic checks to continuous risk. Digital onboarding, transactions and networked payments create a system where fraud patterns evolve faster than manual teams can respond. Traditional rule engines—static thresholds, hard-coded triggers—cannot detect subtle behavioural anomalies at scale, identify mule networks reliably, or surface systemic risk in real time.
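To make that contrast concrete, here is a minimal sketch (not from the panel) of the difference between a hard-coded amount threshold and a behavioural anomaly score. The toy transaction features and the use of scikit-learn's IsolationForest are illustrative assumptions, not a description of any bank's stack.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features: [amount, transactions_in_last_hour]
history = np.array([[500, 1], [1200, 2], [300, 1], [900, 3], [700, 2]] * 40)
new_txn = np.array([[950, 14]])   # modest amount, but an unusual burst of activity

# Static rule: only checks the amount threshold, so the behavioural burst slips through
print("rule flags it:", new_txn[0][0] > 50_000)

# Behavioural model: scores the transaction against learned normal patterns
model = IsolationForest(random_state=0).fit(history)
print("anomaly score:", model.score_samples(new_txn)[0])  # lower = more anomalous
```

The point of the sketch is the failure mode, not the model: a fixed threshold never sees the velocity pattern, while a model trained on normal behaviour does.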
AI, the panel argued, shifts the paradigm from after-the-fact detection to end-to-end intelligence. If designed correctly, it can reduce false positives, lower friction, and improve customer experience—without weakening controls. That “if designed correctly” became the centre of gravity: explainability, privacy, auditability, and fairness are not compliance add-ons; they are what makes AI deployable in a regulated, high-trust sector.
From pilot anxiety to production mindset
A recurring theme was that scaling AI is less a technology problem and more a mindset shift. Humans tolerate human error as “part of the job,” but hold machines to a higher standard, often declaring AI unfit for scale after early mistakes.
The panel argued for a more realistic framing: AI first catalyses capacity—doing in minutes what earlier took hours—before it creates entirely new capabilities. In BFSI, that capacity shift has real implications: a loan decision that includes fraud checks, risk signals and straight-through processing in minutes is not just a convenience; it is a scale requirement for India.
This also came with a caution: AI must be treated as a learning system that improves with data and feedback, not as a rule engine dressed up with AI labels.
Explainability as a three-part contract
When AI starts approving or denying credit, halting transactions or flagging identities, explainability becomes a trust contract between three stakeholders: customers, auditors and regulators. One practical framework discussed was JCT—justifiable, contestable, traceable.
Justifiable means the institution can explain why a decision was taken—why credit was denied, why a payment was stopped, why onboarding was rejected. Contestable means the affected party has a path to challenge the outcome and request reasons. Traceable means the institution can go back through model lineage, decision logs, and evidence trails to detect bias, drift, or flawed signals.
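The JCT idea lends itself to a simple data shape. The sketch below is a hypothetical decision-log record (field names and values are invented for illustration) showing how a single automated decision can carry its justification, its appeal route and its traceability metadata in one auditable object.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List

@dataclass
class CreditDecisionRecord:
    """One auditable record per automated decision (hypothetical schema)."""
    decision_id: str
    outcome: str                      # e.g. "approved", "declined", "manual_review"
    reason_codes: List[str]           # justifiable: human-readable reasons for the outcome
    appeal_channel: str               # contestable: where the customer can challenge it
    model_version: str                # traceable: exact model that produced the score
    input_snapshot: Dict[str, float]  # traceable: features as seen at decision time
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a declined application that an auditor or regulator can later replay
record = CreditDecisionRecord(
    decision_id="D-2026-000123",
    outcome="declined",
    reason_codes=["income_verification_failed", "device_risk_high"],
    appeal_channel="grievance-portal/credit-appeals",
    model_version="fraud-risk-model:3.4.1",
    input_snapshot={"declared_income": 42000.0, "device_risk_score": 0.87},
)
print(record.outcome, record.reason_codes)
```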
The panel underlined that human oversight still matters, especially in high-risk flows. A real-world example cited was video KYC—where automated checks may run, but human review remains part of the control design. The larger point was that “human in the loop” is not a philosophical preference; it is how regulated trust systems stay defensible.
Lower friction without unintended exclusion
The panel repeatedly returned to the hardest design problem: reducing friction without excluding new users, low-literacy users, or MSMEs that lack documentation depth. Here, the discussion leaned toward tiered trust models.
Low-risk onboarding can be fast and low-friction, while higher-risk journeys add progressive verification—video KYC, enhanced authentication, or physical verification where necessary. The idea is to match friction to risk rather than applying maximum friction to everyone.
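As a rough illustration of matching friction to risk, the sketch below maps an illustrative risk score to a progressive verification journey. The thresholds and step names are assumptions for the example, not regulatory guidance or any bank's actual policy.

```python
def verification_steps(risk_score: float) -> list[str]:
    """Map a risk score (0 = low, 1 = high) to a progressive verification journey."""
    steps = ["otp_ekyc"]                        # baseline, low-friction onboarding
    if risk_score >= 0.4:
        steps.append("video_kyc")               # add a human-reviewed video check
    if risk_score >= 0.7:
        steps.append("physical_verification")   # documents or in-person checks
    return steps

print(verification_steps(0.2))  # ['otp_ekyc'] for a low-risk applicant
print(verification_steps(0.8))  # the full journey for a high-risk applicant
```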
A striking example discussed was a consumer protection feature used by a fintech abroad—where a customer can mark “safe places” such as home and office, and when the device is used elsewhere, higher-value transfers trigger additional authentication and time delays. The broader argument was that context-aware controls can raise security without creating blanket friction.
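A simplified version of that safe-places control might look like the following; the amount threshold, location labels and 12-hour hold are invented for illustration rather than taken from the fintech in question.

```python
from datetime import timedelta

SAFE_PLACES = {"home", "office"}      # locations the customer has marked as trusted
HIGH_VALUE_THRESHOLD = 100_000        # illustrative amount

def transfer_controls(amount: float, location: str) -> dict:
    """Decide extra controls for a transfer based on amount and device location."""
    if location in SAFE_PLACES or amount < HIGH_VALUE_THRESHOLD:
        return {"step_up_auth": False, "hold": timedelta(0)}
    # Unfamiliar location plus high value: stronger auth and a cooling-off delay
    return {"step_up_auth": True, "hold": timedelta(hours=12)}

print(transfer_controls(250_000, "hotel_wifi"))
```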
Guardrails: every fast car needs brakes
If trust is the destination, guardrails are the operating system. The panel argued that any high-speed AI deployment needs guardrails that ask three questions: can we do it (regulatory permission), should we do it (purpose and fairness), and how do we do it (governance, monitoring and controls)?
One governance emphasis was bias: historical data can reproduce historical exclusion. Without safeguards, AI can scale bias more efficiently than humans ever could. “Respectfulness” and fairness are not abstract values in BFSI; they are the difference between inclusion and automated denial.
The data-sharing frontier: why fraud needs connected defences
A major pain point identified was that fraud is connected, while institutional defences are siloed. The discussion argued that regulators enabling data sharing across institutions—done responsibly—may be one of the most important levers for modern fraud prevention.
The panel leaned into an Indo–Singapore corridor lens as a case study: India brings population scale, digital public rails, diverse datasets and deep talent; Singapore brings strong governance frameworks and experience in operationalising advanced technology responsibly. The combined opportunity is to build shared intelligence infrastructure across borders.
One proposed starting point was a mule account registry and cross-border risk signals—built on anonymised patterns rather than customer-identifiable data. The claim was that shared, privacy-preserving intelligence can strengthen cross-border rails, disrupt fraud supply chains, and enable safer onboarding and payments for citizens and MSMEs operating across corridors.
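One simplified way to share signals without sharing identities is for each institution to contribute keyed hashes of account identifiers alongside behavioural flags. The sketch below assumes a hypothetical shared salt and is deliberately cruder than the privacy-preserving techniques a production registry would need; it only shows the shape of the idea.

```python
import hashlib
import hmac

SHARED_SALT = b"corridor-registry-salt"   # hypothetical key agreed by participating institutions

def registry_key(account_id: str) -> str:
    """Keyed hash of an account identifier, so the shared registry never holds raw IDs."""
    return hmac.new(SHARED_SALT, account_id.encode(), hashlib.sha256).hexdigest()

# Each institution contributes only hashed identifiers plus behavioural risk flags
local_report = {
    "key": registry_key("IN-ACCT-0099887766"),
    "signals": ["rapid_pass_through", "many_small_credits_one_large_debit"],
}
print(local_report["key"][:16], local_report["signals"])
```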
Why investors care: trusted infrastructure lowers the cost of capital
From the long-term capital lens, the panel argued that convergence in BFSI will be strongest not in consumer fintech apps but in infrastructure—because that is where network effects compound. Trusted data enables trusted models; trusted models enable better capital allocation; better capital allocation reduces systemic risk and improves inclusion outcomes.
The conversation also expanded “inclusion” beyond banking interfaces: the highest impact may come when finance is embedded in production networks—supply chains across agriculture, renewables and labour markets—where trust infrastructure connects identity, productivity and credit.
The closing reality: 10/20/70 and the people problem
The session ended on a human note: technology and algorithms matter, but people matter more. The panel invoked the familiar 10/20/70 framing (10% is technology, 20% is algorithms, 70% is people) and warned that large-scale change agendas over-focus on tools and under-invest in human factors: learning, unlearning, accountability, empathy, and institutional readiness.
That framing also shaped the final reflections. The upside was speed and productivity—moving up the value chain as individuals and enterprises. The risks centred on dilution of human accountability, loss of empathy as systems automate, and exclusion of those who adapt slowly. The answer, the panel suggested, is not to slow down—but to design assisted digital pathways so inclusion is protected even as systems become more AI-native.
Parting shot
For BFSI, AI is no longer a feature; it is emerging as the trust layer that determines whether digital finance can scale safely. The panel’s north star was clear: build AI as infrastructure—real-time, explainable, auditable and privacy-preserving—so stronger controls reduce fraud without increasing friction, and so inclusion expands without turning into automated exclusion.