Deepfake X-rays fool radiologists and AI, exposing healthcare risks

What if the X-ray your doctor relies on is a deepfake? Shocking research shows even experts can't spot AI-generated medical images—putting lives at risk.

[Image: a radiologist in a white lab coat examines an X-ray on a computer screen.]

A new international study has uncovered a troubling weakness in medical imaging. Radiologists and advanced AI models struggle to tell real X-rays apart from deepfakes. The findings raise concerns about fraud, cybersecurity, and patient safety in healthcare.

The research tested 17 radiologists from six countries on their ability to spot AI-generated images. Even when alerted to the possibility of fakes, their detection rates remained worryingly low.

The study used deepfake X-rays created by two AI models, ChatGPT and RoentGen. Researchers did not disclose how many countries contributed to developing these models. The images often showed unnatural symmetry, with bones appearing too smooth, spines unusually straight, and fractures suspiciously clean.

Among the radiologists, musculoskeletal specialists performed slightly better than their peers. Yet overall, only 41% noticed anything odd about the fakes when unaware of their nature. Even when warned, accuracy barely reached 75%. Experience made little difference: seasoned radiologists fared no better than their less-experienced colleagues.

The study also compared AI detection tools, finding that GPT-4, the model behind some of the deepfakes, could not reliably spot its own fabrications. It still outperformed Google's Gemini and Meta's Llama models, however.

To combat the threat, the researchers proposed digital safeguards such as invisible watermarks and cryptographic signatures. These measures could help verify the authenticity of medical images in the future.
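As a rough illustration of how cryptographic signatures could protect medical images, the sketch below signs raw image bytes with an HMAC so that any later alteration fails verification. This is a minimal, hypothetical example using Python's standard library; the key name and workflow are assumptions, and a real deployment would use asymmetric signatures (so that verifiers never hold the signing key) and standards-based formats such as DICOM's digital signature profiles.

```python
import hashlib
import hmac

# Hypothetical secret held by the imaging device or hospital PACS.
# Placeholder value for illustration only, not a real key.
SECRET_KEY = b"hospital-device-secret"

def sign_image(image_bytes: bytes) -> str:
    """Return a hex HMAC-SHA256 tag computed over the raw image bytes."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time.

    Returns False if the image was modified after signing or if the
    tag was produced with a different key (i.e. a forgery).
    """
    return hmac.compare_digest(sign_image(image_bytes), tag)

# Sign at acquisition time, verify before the image is read.
original = b"\x89PNG...raw X-ray pixel data..."
tag = sign_image(original)

print(verify_image(original, tag))         # genuine image verifies
print(verify_image(original + b"x", tag))  # any modification fails
```

Because the tag depends on every byte of the image, even a one-pixel edit or a wholly AI-generated substitute would fail the check, shifting trust from the human eye to the signing key.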

The results highlight a critical gap in healthcare security. Both human experts and AI systems fail to consistently identify deepfake X-rays. Without stronger protections, the risk of fraudulent litigation, misdiagnoses, and cyberattacks in medicine remains high.
