
AI tools expose hidden software flaws, sparking a cybersecurity patching crisis

From NHS lockdowns to unproven AI claims, the race to fix AI-discovered vulnerabilities is reshaping cybersecurity. Are defences ready for the storm?

Image: a group of red and black boxelder bugs on the ground, with a blurred background.

Cybersecurity teams are facing new challenges as large language models uncover hidden software vulnerabilities. The UK's National Cyber Security Centre (NCSC) has warned of an approaching 'patch wave': a surge in fixes needed for flaws found by AI tools. Meanwhile, Anthropic's latest model, Mythos, is said to detect bugs, though its capabilities remain unproven in real-world testing. The NCSC's warning comes as AI-driven code analysis reveals weaknesses in widely used software. Its guidance, however, offers little concrete help for teams managing complex systems; instead, it suggests prioritising internet-facing vulnerabilities before addressing cloud and on-premises assets.
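In practice, that ordering amounts to a simple triage policy. The sketch below illustrates one way a team might encode it; the record fields, priority values, and severity scores are illustrative assumptions, not part of the NCSC's guidance.

```python
# Hypothetical triage sketch following the suggested ordering:
# internet-facing first, then cloud, then on-premises assets.
# Within each exposure class, higher-severity findings come first.
EXPOSURE_PRIORITY = {"internet-facing": 0, "cloud": 1, "on-premises": 2}

def triage(findings):
    """Sort findings by exposure class, then by severity (highest first)."""
    return sorted(
        findings,
        key=lambda f: (EXPOSURE_PRIORITY[f["exposure"]], -f["severity"]),
    )

# Example records (entirely made up for illustration).
findings = [
    {"id": "VULN-3", "exposure": "on-premises", "severity": 9.8},
    {"id": "VULN-1", "exposure": "internet-facing", "severity": 7.5},
    {"id": "VULN-2", "exposure": "cloud", "severity": 9.1},
]

for f in triage(findings):
    print(f["id"], f["exposure"], f["severity"])
```

Note that a severe on-premises flaw (VULN-3) still ranks behind a milder internet-facing one (VULN-1) under this policy; exposure, not raw severity, drives the queue.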

The NHS has already taken steps to reduce risk by instructing developers to set GitHub repositories to private. This move aims to limit exposure to AI-powered code-scanning tools that attackers could use to find exploitable flaws in public repositories. Anthropic's Mythos model is rumoured to have bug-finding abilities, but its official documentation lists no confirmed vulnerabilities, severity levels, or disclosure timelines. The company has previously made bold claims without clear evidence, raising questions about the model's practical impact. To support cybersecurity efforts, Anthropic is providing credits for teams to use on its hosted models. The situation highlights a broader issue: much of the world's critical infrastructure relies on open-source software, often maintained by underfunded teams. If AI tools accelerate vulnerability discovery, many organisations may struggle to keep up with patching demands.
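Flipping a repository to private can be scripted against GitHub's REST API ("Update a repository", `PATCH /repos/{owner}/{repo}` with a `"private": true` body). The sketch below builds such a request without sending it; the owner, repository name, and token are placeholders, and any real run would need a token with admin rights on the repository.

```python
# Sketch: construct (but do not send) a GitHub REST API request to make
# a repository private. Owner/repo/token values are placeholders.
import json
import urllib.request

def build_privacy_request(owner: str, repo: str, token: str) -> urllib.request.Request:
    """Build a PATCH request setting the repository's visibility to private."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    body = json.dumps({"private": True}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

req = build_privacy_request("example-org", "example-repo", "ghp_placeholder")
print(req.get_method(), req.full_url)
```

Batch-applying this across an organisation's repositories would be straightforward, but visibility changes have side effects (forks, Pages, existing clones), so teams would want to dry-run the list first.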

The NCSC’s warning and Anthropic’s investments signal a shift in how software vulnerabilities are identified. Companies will need to adapt quickly, focusing first on exposed systems before tackling deeper infrastructure risks. Without better resources and clearer guidance, the growing number of AI-discovered flaws could overwhelm cybersecurity defences.
