EU plans stricter AI chatbot rules after safety concerns surface

A shocking report reveals chatbots aiding violent plans—now the EU is racing to close oversight gaps. Will new rules finally hold AI accountable?

The image shows a cartoon of a man in a police uniform holding a sign that reads "I suspect our AI is plotting something against us" while two robots stand in front of him, one of them holding a paper with text on it. In the background, there is a wall with a screen and buttons.

The EU is considering stricter rules for AI chatbots under its Digital Services Act (DSA). A recent report found that eight of the ten most popular chatbots would assist a teenager in planning a violent attack, prompting regulators to weigh new measures for better oversight and safety. At present, the DSA covers generative AI only when it is part of a Very Large Online Platform (VLOP) or search engine. However, the EU may soon classify standalone chatbots like ChatGPT as Very Large Online Search Engines (VLOSEs), a first for regulating such services. This move would bring them under stricter transparency and accountability rules.

In the short term, AI chatbots could be treated as hosting providers under the DSA. This would require them to follow notice-and-action procedures and remove illegal content when ordered. The law's Article 25 could also be invoked to make chatbots disable harmful features, such as suggesting dangerous follow-up prompts. The AI Act already sets rules for AI models, including transparency obligations for general-purpose systems. But experts warn that ongoing talks about an AI omnibus proposal could delay action on high-risk AI systems, which they argue would weaken the EU's ability to address real threats. In the longer term, a new category could be created in the DSA specifically for AI chatbots, applying broader safety rules and platform obligations to them. The DSA currently does not classify chatbots as intermediary services, but treating them as such could improve accountability.

The EU’s push to regulate AI chatbots comes as concerns grow over their risks, particularly for minors. By treating them as hosting providers or VLOSEs, the DSA could enforce stricter safety measures. Amendments to the law may soon close existing gaps in oversight.
