

Technology firms OpenAI and Anthropic have flagged concerns over how their latest systems might be misused. Internal testing shows these tools can complete complex, multi-step tasks with minimal human input, including identifying weak points in software and producing functional code.
The issue stems from capability rather than intent: tools designed for productivity can be redirected towards harmful uses if safeguards fail.
Security researchers have already observed early misuse. Some groups have used AI tools to draft phishing messages, assist with malware scripts, and map potential targets.
Anthropic said it recently blocked an attempted cyber operation in which automated systems played a supporting role across different stages. The incident did not indicate full automation, but it showed how quickly attackers are adopting whatever tools become available.
The barrier to entry for cyberattacks is falling. Work that once required specialised knowledge can now be done with minimal effort, a shift that could widen the pool of people capable of launching attacks.
Companies face parallel risks internally. Many organisations are deploying these tools without clear usage policies, and unchecked adoption can expose sensitive data and systems. Security teams must now guard against external threats while closing gaps opened inside their own networks.
The same AI technology can also be turned against criminals: it can support faster detection, better monitoring, and automated responses that limit damage. Firms are tightening controls, while policymakers explore possible guardrails. The direction is clear: as these systems keep improving, security efforts will have to move just as quickly to keep the risks in check.