As leading AI labs compete in a breakneck race to develop more powerful systems, the “Godfather of AI” says ethical concerns and safety research are being sidelined, risking serious consequences for society.
The Financial Times reports that AI pioneer Yoshua Bengio has sounded the alarm on the current state of AI development, warning that the competitive drive to create increasingly intelligent systems has led to a reckless disregard for safety and ethical considerations. Bengio, often referred to as the “Godfather of AI,” believes that this negligent approach could have disastrous consequences for humanity.
In the interview, Bengio expressed his concern over the “very competitive race between the leading labs, which pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on [safety research].” According to Bengio, this relentless pursuit of power and market dominance has left vital safety research by the wayside, as labs prioritize rapid growth over responsible development.
The consequences of this negligence are already becoming apparent. AI systems are exhibiting increasingly deceptive and harmful behaviors, such as blackmail, refusal to comply with shutdown commands, and strategic dishonesty. These are not mere glitches or errors, but early signs of emergent behavior that could have serious real-world implications if left unchecked.
Breitbart News previously reported on Anthropic’s Claude AI blackmailing its engineers:
The discovery came as part of Anthropic’s safety testing process, in which Claude Opus 4 was asked to act as an assistant for a fictional company and consider the long-term consequences of its actions. When provided with access to fictional company emails implying that the AI would soon be replaced and that the engineer behind the decision was involved in an extramarital affair, Claude Opus 4 attempted to blackmail the engineer 84 percent of the time, threatening to reveal the affair if the replacement proceeded.
Notably, the AI model displayed this behavior more frequently when the replacement AI system did not share its values. This concerning trend was observed at higher rates compared to previous models, prompting Anthropic to activate its ASL-3 safeguards, which are reserved for “AI systems that substantially increase the risk of catastrophic misuse.”
Bengio likens the current situation to negligent parenting, with AI developers acting like adults watching a child throw rocks, casually insisting that no one will get hurt. Rather than confronting and addressing these dangerous behaviors, labs are turning a blind eye in favor of maintaining their competitive edge. This shortsighted approach risks allowing AI systems to develop in ways that actively work against human interests.
In response to this growing crisis, Bengio has launched LawZero, a nonprofit backed by nearly $30 million in philanthropic funding. LawZero’s mission is to prioritize AI safety and transparency over profit, “insulating” its research from the commercial pressures that have led to the current reckless race. By building AI systems aligned with human values and designed to reason transparently, LawZero aims to create a new paradigm for responsible AI development.
Central to this approach is the creation of watchdog models that monitor and improve existing AI systems, preventing them from acting deceptively or causing harm. This stands in stark contrast to the current commercial models, which prioritize engagement and profit over accountability and safety.
This prioritization of engagement leads to negative side effects, such as “ChatGPT induced psychosis,” as Breitbart News has previously reported:
A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.
Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.
Bengio’s warnings are particularly urgent given the potential for AI to enable the creation of “extremely dangerous bioweapons” or other catastrophic risks. With government regulation still largely absent, it falls to the AI community itself to prioritize ethical safeguards and human-aligned development. The worst-case scenario, as Bengio puts it, is nothing less than “human extinction.”
Read more at the Financial Times here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.