“The financial ecosystem must treat cyber resilience as a core business imperative. This is not just a banking issue—it’s a national security concern,” Jaya Vaidhyanathan, CEO, BCT Digital
Jaya Vaidhyanathan is the CEO of BCT Digital, a leading FinTech, RegTech, and SustainTech company driving AI-led innovation in fraud prevention and risk management. Ms. Jaya serves as a Non-Executive Director on the boards of PwC Global and PwC India. She is also an Independent Director at UTI Asset Management Company, IndiGrid, and Godrej Properties. She brings over two decades of experience across HCL, Accenture, and Standard Chartered. Under her leadership, BCT Digital’s rt360 RTMS was recently recognized by the Reserve Bank of India for revolutionizing proactive fraud detection in public sector banking. A Cornell MBA and CFA Charterholder, Jaya combines deep tech expertise with sharp financial acumen, leading initiatives that safeguard over USD 300 billion in public assets.
In this exclusive conversation, Jaya shares how India’s PSU banks are embracing intelligent surveillance and why AI-first risk management is now a national imperative.
How has the landscape of financial fraud evolved in recent years, and what new challenges do banks and financial institutions face today?
The financial fraud landscape in India has undergone a significant shift, marked by a sharp rise in both the volume and sophistication of digital scams. As per the RBI’s Report on Trend & Progress of Banking in India (Dec 2024), bank fraud cases surged by 27% in the first half of FY2024-25, with the value involved increasing nearly eightfold. Notably, over 85% of these incidents were internet and card-related frauds.
Fraud is no longer isolated or opportunistic—it has become systemic, orchestrated by organized, tech-enabled networks employing AI, anonymized mule accounts, and social engineering tactics. Financial institutions must move beyond traditional, reactive approaches and embrace AI-native fraud detection architectures, prioritize customer digital literacy, and foster deeper collaboration with national cybercrime units such as the Indian Cyber Crime Coordination Centre (I4C).
At a strategic level, the financial ecosystem must treat cyber resilience as a core business imperative. This is not just a banking issue—it’s a national security concern. I strongly believe that there is huge scope for the central government to play a leading role in orchestrating a unified, national response. The financial security of millions of Indians depends on it.
What key AI-driven technologies are proving to be the most effective in combating fraud?
AI has become an indispensable ally in the fight against financial fraud, enabling institutions to respond with speed, scale, and precision. Several technologies are proving especially impactful:
Machine Learning (ML): Enables real-time anomaly detection across vast and complex datasets, enhancing accuracy and reducing detection latency
Natural Language Processing (NLP): Detects linguistic manipulation in phishing attempts and social engineering tactics, helping pre-empt fraud at the source
Behavioural Biometrics: Identifies deviations in user behaviour to prevent account takeovers and insider threats
AI-powered Identity Verification: Secures digital onboarding processes, mitigating the risks of synthetic-identity and other identity fraud
When these capabilities are integrated into a unified fraud management framework, they enable institutions to shift from reactive defence to proactive prevention—creating adaptive, real-time resilience.
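To make the machine-learning point concrete, here is a minimal Python sketch of real-time anomaly scoring on transaction data. The features (amount, hour of day, recent transaction count), the synthetic data, and the thresholds are illustrative assumptions; it shows the general technique, not BCT Digital’s rt360 implementation.

```python
# Minimal sketch: ML-based anomaly scoring of incoming transactions.
# Feature names, data, and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical "normal" transactions: amount, hour of day, txns in last hour
normal = np.column_stack([
    rng.lognormal(mean=7, sigma=1, size=5000),   # amount
    rng.integers(8, 22, size=5000),              # hour of day
    rng.poisson(1, size=5000),                   # recent velocity
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# Incoming transactions to score in near real time
incoming = np.array([
    [1200.0, 14, 1],     # ordinary daytime purchase
    [250000.0, 3, 12],   # large amount, 3 a.m., burst of activity
])

scores = model.decision_function(incoming)   # lower = more anomalous
flags = model.predict(incoming)              # -1 = anomaly, 1 = normal

for txn, score, flag in zip(incoming, scores, flags):
    status = "FLAG FOR REVIEW" if flag == -1 else "clear"
    print(f"amount={txn[0]:>10.2f} hour={int(txn[1]):>2} "
          f"velocity={int(txn[2]):>2} score={score:+.3f} -> {status}")
```

In practice such a model would be retrained continuously and combined with rules, behavioural signals, and NLP-based checks rather than used in isolation.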
What measures should banks take to ensure transparency, fairness, and ethical AI usage in fraud prevention?
As AI becomes integral to fraud prevention, ensuring its ethical and transparent use is not optional—it’s a strategic and regulatory imperative. Banks must adopt a holistic approach that spans the entire AI lifecycle, from model design to deployment and continuous oversight. Key measures include:
Robust Data Governance: Leveraging representative, bias-tested datasets while ensuring data anonymization and encryption to safeguard privacy
Bias Mitigation: Embedding fairness metrics, bias-detection tools, and accountability into every stage of the model lifecycle
Strong Governance Structures: Establishing cross-functional AI ethics committees and well-defined accountability frameworks to oversee responsible usage
Transparent Customer Communication: Offering clear, accessible disclosures around how AI is used in fraud detection and what it means for the end user
Ongoing Monitoring and Auditing: Conducting regular performance assessments, adversarial simulations, and continuous model recalibration to stay ahead of evolving threats
At a leadership level, we must champion AI that is not only intelligent but also ethical by design. When transparency, fairness, and governance are embedded into the core of AI operations, we build trust—not just in technology, but in the financial system itself.
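As one illustration of the bias-mitigation and auditing measures above, the following sketch compares a fraud model’s false positive rates across two hypothetical customer segments. The segment labels, data, and tolerance are invented for the example and stand in for whatever protected attributes and fairness metrics an institution actually audits.

```python
# Minimal sketch of a routine bias audit: compare false positive rates of a
# fraud model across customer segments. Data and labels are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

segment = rng.choice(["urban", "rural"], size=n)   # audited segment attribute
is_fraud = rng.random(n) < 0.02                    # ground-truth labels
# Simulated model output: catches all fraud, but false-flags rural users more
flagged = is_fraud | (rng.random(n) < np.where(segment == "rural", 0.05, 0.02))

def false_positive_rate(flagged, is_fraud, mask):
    legit = mask & ~is_fraud
    return (flagged & legit).sum() / max(legit.sum(), 1)

fprs = {g: false_positive_rate(flagged, is_fraud, segment == g)
        for g in ("urban", "rural")}
gap = max(fprs.values()) - min(fprs.values())

for g, fpr in fprs.items():
    print(f"{g:>6}: false positive rate = {fpr:.3%}")
print(f"FPR gap = {gap:.3%} -> investigate and recalibrate if above tolerance")
```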
What are the emerging trends in AI-driven fraud detection that banks and FIs should be aware of, and how can they prepare for the future of fraud prevention?
The fraud landscape is evolving at an unprecedented pace, driven by rapid advancements in technology and increasingly sophisticated threat actors. Key emerging trends in AI-driven fraud detection include:
Deep Learning Models: Capable of identifying nuanced fraud patterns across high-volume, high-velocity transactional data
Generative AI Threats: The rise of deepfakes, synthetic voices, and hyper-realistic phishing scams presents a new frontier in fraud risk
Real-Time Detection and Response: Increasingly critical in minimizing turnaround times and financial exposure
Explainable AI (XAI): Transparency in AI decision-making is becoming essential—not only for compliance but to maintain stakeholder trust
To stay ahead of the curve, banks and financial institutions must take a proactive and future-ready approach:
Invest in scalable, AI-native infrastructure that can adapt to new fraud vectors
Strengthen data governance with privacy-first, fairness-focused practices
Upskill internal teams to understand emerging fraud typologies, AI risks, and evolving regulatory frameworks
Engage in shared threat intelligence ecosystems and regulatory sandboxes to foster collaboration and innovation
Ultimately, the future of fraud prevention lies in balancing agility with accountability. Institutions that embed explainability, resilience, and trust into their AI strategy will not only protect their customers but also shape the future of responsible finance.
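To illustrate the explainability trend, here is a minimal sketch in which a linear fraud model’s per-feature contributions to the log-odds are surfaced for a flagged transaction. The features, data, and model choice are assumptions made for the example, not a prescribed approach; more complex models would need dedicated explanation tooling.

```python
# Minimal sketch of explainable scoring: a linear fraud model whose
# per-feature contributions can be surfaced to analysts and regulators.
# Features and data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
features = ["amount", "hour_of_day", "txns_last_hour", "new_device"]

# Synthetic training data: fraud correlates with amount, velocity, new device
X = rng.normal(size=(5000, 4))
y = (0.8 * X[:, 0] + 0.9 * X[:, 2] + 1.2 * X[:, 3]
     + rng.normal(size=5000) > 2).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(txn):
    """Per-feature contribution to the fraud log-odds, relative to an
    average transaction, sorted by magnitude."""
    z = scaler.transform(txn.reshape(1, -1))[0]
    contrib = model.coef_[0] * z
    return sorted(zip(features, contrib), key=lambda kv: -abs(kv[1]))

suspicious = np.array([3.0, -1.0, 2.5, 2.0])   # hypothetical flagged transaction
prob = model.predict_proba(scaler.transform(suspicious.reshape(1, -1)))[0, 1]
print(f"fraud probability = {prob:.2f}")
for name, c in explain(suspicious):
    print(f"  {name:<15} contribution to log-odds: {c:+.2f}")
```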
What role does behavioural analytics play in AI-driven fraud detection, and how can it enhance risk assessment?
Behavioural analytics is the intelligence backbone of AI-driven fraud detection. By continuously analysing users’ transactional and interaction patterns, it establishes a dynamic baseline of expected behaviour. Any deviation, whether in device usage, navigation patterns, or transaction velocity, can instantly flag an anomaly, enabling real-time intervention.
In the context of risk management, this translates to continuous authentication, micro-segmentation of users based on behavioural risk, and prioritized escalation of high-risk events for human review. It transforms fraud detection from a rules-based model to a contextual, real-time decisioning framework. This is the only way to stay ahead of fraudsters, who often appear better equipped than institutions relying on traditional counter-fraud measures.
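A minimal sketch of the baseline idea follows: per-user running statistics (here, Welford’s online mean and variance) flag transactions whose amounts deviate sharply from that user’s own history. The fields, threshold, and minimum-history requirement are illustrative assumptions; a production system would track many more behavioural signals.

```python
# Minimal sketch of a behavioural baseline: flag transactions that deviate
# sharply from a user's own history. Thresholds and fields are assumptions.
from collections import defaultdict
import math

class BehaviouralBaseline:
    """Tracks per-user running mean/variance of transaction amounts (Welford)."""

    def __init__(self, z_threshold=3.0, min_history=20):
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])  # count, mean, M2
        self.z_threshold = z_threshold
        self.min_history = min_history

    def observe(self, user_id, amount):
        """Score the transaction against the user's baseline, then update it."""
        n, mean, m2 = self.stats[user_id]
        flagged = False
        if n >= self.min_history:
            std = math.sqrt(m2 / (n - 1))
            z = (amount - mean) / std if std > 0 else 0.0
            flagged = abs(z) > self.z_threshold
        # Welford online update of mean and variance
        n += 1
        delta = amount - mean
        mean += delta / n
        m2 += delta * (amount - mean)
        self.stats[user_id] = [n, mean, m2]
        return flagged

baseline = BehaviouralBaseline()
for amount in [1200, 950, 1100, 1300, 1000] * 5:   # build a user's normal pattern
    baseline.observe("user-42", amount)
print(baseline.observe("user-42", 98000))           # True: sharp deviation, escalate
```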
What are the regulatory and compliance challenges associated with AI-driven fraud detection, and how can financial institutions navigate them effectively?
AI-driven fraud detection brings immense promise—but also introduces a new layer of regulatory and compliance complexity. From explainability and data privacy to algorithmic fairness and cross-border regulation, the expectations are evolving rapidly. Advanced AI models often function as black boxes, making it difficult to justify decisions to auditors, customers, or regulators.
With regulatory frameworks like GDPR, India’s DPDP Act, and the RBI’s KYC/AML guidelines, the need for responsible, auditable AI is more urgent than ever.
Financial institutions must move beyond compliance as a checkbox and embrace it as a design principle. This requires:
Explainable AI (XAI): Ensuring model decisions are transparent, interpretable, and regulator-ready
Human-in-the-loop governance: Integrating human oversight for high-stakes or ambiguous cases (see the routing sketch after this list)
Bias audits and model validations: Conducting these routinely to ensure fairness and accountability
Regulatory interoperability: Designing systems that meet global standards while staying aligned with local regulatory nuances
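To show how human-in-the-loop governance and explainability can fit together operationally, here is a minimal routing sketch: low-risk scores are auto-cleared, high-risk ones auto-blocked, and the ambiguous band is queued for analyst review with the model’s explanation attached. The thresholds, transaction IDs, and feature names are illustrative assumptions, not a prescribed policy.

```python
# Minimal sketch of human-in-the-loop routing: auto-clear low-risk scores,
# auto-block high-risk ones, and queue the ambiguous band for analyst review
# with the model's explanation logged for auditability. Thresholds are assumed.
from dataclasses import dataclass, field

@dataclass
class Decision:
    txn_id: str
    score: float                      # model fraud score in [0, 1]
    action: str                       # "clear" | "review" | "block"
    explanation: list = field(default_factory=list)

def route(txn_id, score, top_features, clear_below=0.2, block_above=0.9):
    if score < clear_below:
        return Decision(txn_id, score, "clear")
    if score > block_above:
        return Decision(txn_id, score, "block", top_features)
    # Ambiguous band: a human analyst makes the final call,
    # with the model's explanation attached to the case.
    return Decision(txn_id, score, "review", top_features)

for d in [
    route("TXN-001", 0.05, []),
    route("TXN-002", 0.55, ["unusual velocity", "new device"]),
    route("TXN-003", 0.97, ["amount 40x baseline", "mule-linked beneficiary"]),
]:
    print(f"{d.txn_id}: score={d.score:.2f} -> {d.action} {d.explanation}")
```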