Bengaluru, 24 March 2026: InCruiter, a leading AI-powered interview intelligence platform based in Bengaluru, uncovered a sophisticated deepfake impersonation attempt during a recruitment process conducted for a global fintech and private credit platform client.
The client operates in the global fintech and private credit ecosystem, providing technology solutions that power portfolio management, risk infrastructure, and financial data operations for some of the world’s largest banks, private debt funds, and asset managers. Given the sensitive nature of such systems, hiring decisions, particularly for technical roles, carry significant implications for security, compliance, and operational integrity. To streamline candidate screening for critical technology roles, the organisation had deployed InCruiter’s AI Interview Software, which enables fully automated interviews conducted by an AI interviewer, without a human recruiter present in the session.
During one such interview, conducted on InCruiter’s platform, a participant was answering technical questions and engaging naturally, and the interaction initially appeared normal. However, the session was flagged by InCruiter’s continuous deepfake detection system, which identified subtle visual anomalies suggesting the presence of a synthetic identity overlay. Further analysis revealed that the individual on screen was not the real candidate but an AI-generated avatar designed to replicate the candidate’s appearance and voice, likely intended to bypass the automated evaluation process. The platform detected behavioural and visual inconsistencies throughout the session and generated a detailed proctoring report containing timestamped evidence, trust scores, and flagged indicators of manipulation. This structured evidence enabled the client to quickly review the incident and take corrective action, and the candidate was rejected before advancing further in the hiring process.
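To illustrate the kind of structured evidence such a report might carry, the sketch below models a proctoring report as a simple data structure. It is purely illustrative: the field names, score ranges, and indicator labels are assumptions made for the example, not InCruiter’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FlaggedEvent:
    """One timestamped anomaly observed during the session (hypothetical schema)."""
    timestamp_s: float   # seconds from session start
    indicator: str       # e.g. "face_boundary_artifact", "audio_visual_desync"
    confidence: float    # detector confidence, 0.0-1.0 (assumed range)

@dataclass
class ProctoringReport:
    """Illustrative report pairing a session-level trust score with flagged evidence."""
    session_id: str
    trust_score: float   # aggregate integrity score, 0.0-1.0 (assumed range)
    events: List[FlaggedEvent] = field(default_factory=list)

    def is_suspicious(self, threshold: float = 0.5) -> bool:
        """Escalate the session for human review when trust falls below a threshold."""
        return self.trust_score < threshold

# Example: a session flagged for anomalies consistent with a synthetic overlay.
report = ProctoringReport(
    session_id="demo-session-001",
    trust_score=0.31,
    events=[
        FlaggedEvent(142.5, "face_boundary_artifact", 0.87),
        FlaggedEvent(318.0, "audio_visual_desync", 0.79),
    ],
)
print(report.is_suspicious())  # True -> escalate for review
```

Structuring the evidence this way, with per-event timestamps alongside an aggregate score, is what allows a reviewer to verify specific moments in the recording rather than relying on a single opaque verdict.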
Deepfake interview fraud is no longer a future threat; it is happening right now, in every hiring cycle. Across the industry, cheating in online interviews has typically been detected at a rate of 10 to 15%. However, that figure reflects only what was caught; the true rate of fraud has always been higher. When InCruiter launched its Deepfake Detection technology in early 2026, the data confirmed this gap: the system flagged fraudulent activity in 25 to 30% of suspicious interview sessions, nearly double what even expert human interviewers had been identifying before.
The increase is driven by how accessible these tools have become. A candidate today does not need technical expertise to run a deepfake avatar or AI voice tool during a live interview, making this a systemic risk rather than an edge case.
Based on InCruiter’s observations, IT and tech companies account for the largest share of such fraud at around 60%, followed by BFSI at 15%, BPOs and KPOs at 10%, startups at 10%, and manufacturing and core sectors at 5%. Deepfake fraud does not target small or large companies specifically; it targets the process, making every organisation conducting virtual interviews vulnerable.
Anil Agarwal, Founder and CEO of InCruiter, said, “AI-led interviews are rapidly becoming the future of hiring at scale. But as organisations adopt automation, they must also be prepared for increasingly sophisticated forms of fraud. Deepfake impersonation is a real and emerging risk. Our platform is designed to continuously monitor interview integrity and detect such anomalies, ensuring that companies can trust the outcomes of automated hiring processes. Over the years, we have facilitated over 15 million interview minutes across enterprise clients, and even before launching deepfake detection, 10-15% of online interviews showed clear signs of cheating. With the introduction of this technology, we are now able to identify fraudulent activity in 25-30% of suspicious sessions, highlighting the scale of AI-assisted fraud that was previously going undetected.”
The incident highlights the growing importance of advanced fraud detection and proctoring systems as organisations increasingly adopt automated, AI-led hiring models. It reflects an emerging challenge in the recruitment landscape, where advances in generative AI and deepfake technology are beginning to intersect with automated hiring systems; without strong verification mechanisms, such attempts could allow impersonation during remote or AI-led interviews.
As AI-powered recruitment continues to scale across industries, platforms that combine automation with robust identity verification and proctoring will play a critical role in safeguarding the integrity of hiring decisions.