AI and Its Risks: How Should We Handle Them?
AI researchers demand whistleblower protections amid growing safety concerns
A group of AI researchers, including experts from ChatGPT maker OpenAI, is demanding the right to warn the public about the dangers of the technology. Currently, no legal framework protects them when they do so.
Existing whistleblower protections are insufficient, the experts emphasized in an open letter published on Tuesday. These safeguards primarily target illegal corporate activities, but in the case of artificial intelligence, many risks fall into legal gray areas. "Some of us rightly fear retaliation, as there have already been such cases in the industry," the letter states.
A striking example emerged shortly afterward: Former OpenAI researcher Leopold Aschenbrenner told the Dwarkesh Podcast that he had been fired after raising concerns about AI safety with the company's board.
The researchers called on companies developing advanced AI models to commit to four key principles. Among them: companies should not bar employees from criticizing their employers. Recent reports revealed that OpenAI had threatened former staff with the forfeiture of their stock options if they "disparaged" the company. OpenAI CEO Sam Altman later apologized, claiming he had been unaware of the clause, and had it removed. He also insisted it had never been enforced.
Another demand in the letter is the establishment of a process allowing employees to anonymously alert company boards and regulators about potential risks in AI software. They should also have the freedom to go public if no internal channels exist.
The Threat of Autonomous Software
Some AI experts have long warned that the rapid advancement of artificial intelligence could lead to autonomous systems that evade human control. The consequences, they argue, could range from mass disinformation and large-scale job losses to, at the extreme, human extinction. Governments are now working to establish regulations for AI development, a field in which OpenAI is widely seen as a pioneer.
An OpenAI spokesperson responded to the letter by stating that the company supports "a scientific approach to assessing technological risks." Employees, they added, are free to voice concerns—even anonymously—but must not disclose confidential information that could fall into the wrong hands.
Four current and two former OpenAI employees signed the letter anonymously. Of the seven who publicly attached their names, five are ex-OpenAI staff, one formerly worked at Google's DeepMind, and one, Neel Nanda, currently works at DeepMind after a stint at AI startup Anthropic. Nanda clarified that he had encountered nothing at his current or past employers that he felt compelled to warn about.
The Conflict with Altman
In November, Altman was abruptly ousted by OpenAI's board, which cited a loss of trust. Just days later, he was reinstated after widespread employee support and backing from major shareholder Microsoft. Former board member Helen Toner later explained that the board had first learned of ChatGPT's release through the media—a revelation that raised concerns the company might have made the technology publicly available without adequate safety measures.
More recently, OpenAI faced scrutiny after actress Scarlett Johansson questioned why a ChatGPT voice bore a striking resemblance to her own, despite her having declined to provide voice data for the project.