Cybersecurity firms race to protect autonomous AI agents from rising threats

Autonomous AI agents are the new battleground for hackers. As breaches multiply, governments and tech giants scramble to lock down the next frontier of cyber threats.

[Image: cartoon of a police officer holding a sign reading "I suspect our AI is plotting something against us" while two robots stand in front of him, one holding a paper.]

The cybersecurity industry is shifting its focus to the risks posed by autonomous AI agents. At the RSA Conference 2026, major firms like Cisco and CrowdStrike unveiled new tools to secure these systems. The move follows rising reports of breaches linked to AI-driven operations.

Recent data shows one in eight companies has already faced security incidents involving AI agents. Governments and businesses are now racing to establish stronger defences as the technology spreads rapidly.

The push for better AI security gained momentum after a March 19 report by HiddenLayer revealed that 12.5% of businesses had suffered breaches tied to autonomous AI systems. While no global count exists, examples such as the 135,000 exposed OpenClaw instances, 63% of them vulnerable, highlight the scale of the problem. Many of these cases appeared in corporate networks, often involving IT, development, and unauthorised 'shadow AI' usage.

In response, the U.S. government released a national AI framework on March 20, setting uniform security standards for federal agencies. Meanwhile, the EU AI Act has raised liability concerns for businesses, forcing stricter compliance measures. These regulatory moves come as OpenAI's 'Operator' platform, launched in early 2025, speeds up the creation of autonomous AI agents.

At RSA Conference 2026, Cisco introduced 'DefenseClaw', an open-source tool that scans and sandbox-tests every function of an AI agent before execution. The company is also adding agent identity management to its Duo platform, treating AI systems as distinct, verifiable entities. CrowdStrike expanded its Falcon platform to detect unauthorised AI applications and shield LLM runtime environments from attacks.
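In rough terms, vetting an agent's actions before execution means intercepting each proposed tool call and checking it against policy. The sketch below illustrates the general idea with a simple allow-list and argument scan; the tool names, rules, and helper functions are hypothetical and do not reflect DefenseClaw's actual design or API.

```python
# Illustrative pre-execution vetting of an AI agent's tool calls.
# All names and rules here are hypothetical examples of the technique,
# not the API of any real product.

ALLOWED_TOOLS = {"search_docs", "summarise"}           # explicit allow-list
BLOCKED_ARG_PATTERNS = ("rm -rf", "DROP TABLE", "..")  # crude injection checks

def vet_tool_call(tool_name: str, args: dict) -> bool:
    """Return True only if the call passes the allow-list and argument scan."""
    if tool_name not in ALLOWED_TOOLS:
        return False
    for value in args.values():
        if any(pattern in str(value) for pattern in BLOCKED_ARG_PATTERNS):
            return False
    return True

# A benign call passes; a path-traversal attempt and an unknown tool do not.
print(vet_tool_call("search_docs", {"query": "quarterly report"}))  # True
print(vet_tool_call("search_docs", {"path": "../../etc/passwd"}))   # False
print(vet_tool_call("delete_files", {}))                            # False
```

Real systems add sandboxed dry runs and behavioural analysis on top of static checks, but the gatekeeping pattern is the same: no agent action runs until it has been inspected.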

Both firms emphasised the need to secure 'non-human identities'—a growing category in enterprise networks. With AI agents increasingly handling sensitive tasks, traditional security models are struggling to keep up.
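Treating an agent as a verifiable non-human identity typically means issuing it a credential that services check before honouring requests. A minimal sketch, assuming a shared-key HMAC token purely for illustration (production systems would use short-lived certificates or workload-identity federation instead):

```python
# Minimal sketch of a verifiable non-human identity: each agent presents a
# signed token, and services verify it before acting. Illustrative only.
import hashlib
import hmac

SECRET = b"demo-shared-secret"  # hypothetical; never hard-code keys in practice

def issue_token(agent_id: str) -> str:
    """Sign the agent's identifier so services can later verify it."""
    sig = hmac.new(SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    return f"{agent_id}.{sig}"

def verify_token(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    agent_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("billing-agent-01")
print(verify_token(token))                   # True
print(verify_token("billing-agent-01.bad"))  # False: forged signature rejected
```

The point of the pattern is that an AI agent, like a human user, gets its own auditable credential rather than borrowing the permissions of whoever launched it.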

The new tools and frameworks aim to reduce risks as AI agents become more common in workplaces. Companies now face pressure to adopt stricter controls, especially with regulations like the EU AI Act increasing accountability. Security experts warn that without proper safeguards, the number of AI-related breaches could climb further in the coming years.
