A new malware campaign is exploiting the popularity of Claude Code, targeting developers searching for AI coding tools online. Cybersecurity researchers warn that attackers have created fake download pages that distribute infostealer malware disguised as installers for the AI programming assistant.
The campaign reflects a growing trend of threat actors using popular AI tools as lures for unsuspecting victims.
Researchers identified several websites impersonating legitimate Claude Code download pages. The fakes closely mimic the real sites, making them difficult to distinguish at first glance.
A developer who downloads the installer, believing it to be the AI coding assistant, instead runs a malicious program. Rather than installing Claude Code, the installer silently executes scripts in the background that deploy infostealer malware on the machine without the user's knowledge.
According to researchers, the malware abuses mshta.exe, a legitimate Windows utility for running HTML applications. Because mshta.exe is a trusted system binary, scripts executed through it often pass unnoticed by malware detection tools.
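To illustrate the detection idea behind this technique, here is a minimal sketch of a rule that flags command lines where mshta.exe is invoked with a remote URL or inline script, a pattern commonly associated with abuse of trusted system binaries. The specific patterns are illustrative assumptions, not a complete or vendor-endorsed detection rule.

```python
import re

# Illustrative assumption: mshta.exe launched with a URL or script protocol
# is suspicious, while local .hta usage may be legitimate. Real detection
# rules (e.g., in an EDR product) would be considerably more nuanced.
SUSPICIOUS_MSHTA = re.compile(
    r"mshta(\.exe)?\s+(https?:|javascript:|vbscript:)",
    re.IGNORECASE,
)

def is_suspicious_mshta(cmdline: str) -> bool:
    """Return True if a process command line matches the mshta abuse pattern."""
    return bool(SUSPICIOUS_MSHTA.search(cmdline))
```

A monitoring script could apply such a check to process-creation logs; the point is that the binary itself is legitimate, so defenders must look at how it is invoked rather than what it is.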
Once activated, the malware begins collecting data from the infected computer.
The infostealer targets information that could give attackers deeper access to the developer's systems, including browser credentials, authentication tokens, saved passwords, cryptocurrency wallet data, and API keys.
For a developer, this information can link directly to codebases, cloud accounts, or deployment systems. That makes a compromised machine a serious concern: the attacker may gain access to far more than the victim's personal information.
Security experts note that by targeting developers, attackers open a path into the broader software ecosystem: if credentials for code repositories or cloud services are compromised, attackers could tamper with source code or production systems directly.
Researchers advise developers to download tools only from trusted sources and to verify website URLs before installing new software. As AI development tools continue to grow in popularity, attacks of this kind are expected to continue.
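Beyond checking the URL, one concrete safeguard is to verify a downloaded installer against the checksum the vendor publishes, when one is available. The sketch below is a generic SHA-256 comparison in Python; the file path and expected digest are placeholders the reader would substitute with the vendor's published values.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, expected_sha256: str) -> bool:
    """Compare a local file's digest against a vendor-published checksum."""
    return sha256_of(path) == expected_sha256.lower()
```

A checksum only helps if the expected value comes from the genuine vendor site, which is why verifying the URL and the checksum are complementary steps rather than alternatives.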