
India moved from third to second position globally in phishing attack volumes, overtaking the UK compared to last year
Global phishing is down 20%, but attackers are striking deeper, not wider—targeting IT, HR, finance, and payroll teams with high-impact campaigns.
Telegram, Steam, and Facebook are top platforms for phishing – used for both impersonation and malware delivery.
Tech support and job scams surged, with 159M+ hits in 2024, preying on users across social platforms.
Zscaler, Inc. (NASDAQ: ZS), the leader in cloud security, today announced the release of its Zscaler ThreatLabz 2025 Phishing Report, analyzing over two billion phishing attempts blocked between January and December 2024 across the Zscaler Zero Trust Exchange™, the world’s largest cloud security platform. The annual report exposes how cybercriminals are using Generative AI to launch surgical, targeted attacks against high-impact business functions – and why a Zero Trust + AI defense strategy is mission critical. The report uncovers a shift from high-volume email blasts to targeted, AI-fueled attacks designed to evade defenses and exploit human behavior. It also offers actionable insight to help organizations defend against this evolving threat landscape.
Globally, phishing volumes dropped by 20%, in part due to increased adoption of email authentication protocols. However, the nature of threats has evolved rapidly, with cybercriminals now using generative AI to deliver personalized lures, deepfake content, and malware-laced fake services at scale. Microsoft remained the most exploited brand, impersonated in 51.7% of phishing attacks globally.
“The phishing game has changed. Attackers are using GenAI to create near-flawless lures and even outsmart AI-based defenses,” said Deepen Desai, CSO and Head of Security Research, Zscaler. “Cybercriminals are weaponizing AI to evade detection and manipulate victims, which means organizations must leverage equally advanced AI-powered defenses to outpace these emerging threats. Our research reinforces the importance of adopting a proactive, multi-layered approach combining robust zero trust architecture with advanced AI-driven phishing prevention to effectively combat the rapidly evolving threat landscape.”
India moved from third to second position globally in phishing attack volumes compared to last year, overtaking the UK, and retained its place as the number one target in Asia Pacific & Japan (APJ). In total, over 80 million phishing attempts were detected in India during 2024, accounting for about 33% of the region’s total phishing volume.
The technology sector experienced the highest number of phishing attacks in 2024, recording over 24.5 million incidents. This was followed closely by the services sector, which faced more than 19.1 million phishing attempts. The manufacturing sector ranked third with approximately 17.7 million attacks, while the finance and insurance sector encountered nearly 11.9 million.
While India ranked second globally among phishing targets, it was also the ninth-largest country of origin for phishing attacks.
“India’s digital acceleration has made it a ripe target for attackers leveraging advanced AI tools to deceive users and compromise systems,” said Suvabrata Sinha, CISO-in-Residence, India at Zscaler. “The new wave of phishing campaigns is not just opportunistic, it’s calculated, context-aware, and often linguistically tailored to trick even well-trained users. A Zero Trust strategy, reinforced with AI-driven threat detection and containment, is essential to mitigating these highly evolved threats.”
Phishing campaigns are increasingly abusing community-based platforms like Facebook, Telegram, Steam, and Instagram – not only spoofing their brands, but using them to distribute malware, mask C2 communications, gather target intel, and carry out social engineering attacks. Meanwhile, tech support scams, in which attackers pose as IT support teams to exploit victims’ sense of urgency and safety concerns, remain widespread, with 159,148,766 hits in 2024.
Cybercriminals are using GenAI to scale attacks, generate fake websites, and craft deepfake voice, video, and text for social engineering. New scams mimic AI tools such as resume generators and design platforms, tricking users into handing over credentials or payment data.
Critical departments like payroll, finance, and HR are prime targets, along with executives – as they hold the keys to sensitive systems, information, and processes, and can more easily approve fraudulent payments.
Cybercriminals are also creating fake “AI assistant” or “AI agent” websites, falsely offering services such as resume generation, graphic design, workflow automation, and more. As AI tools become increasingly integrated into daily life, attackers are capitalizing on the ease of use and trust around AI to drive unsuspecting users to fraudulent sites.