Expert Warns That Chatbots' Mental Health Harms Raise Concerns About the Future of AI

Nate Soares argues that Adam Raine's suicide is an early sign of a problem that could become catastrophic.

The case of the US teenager, Soares warns, illustrates the risk of unintended outcomes from super-intelligent AI.

The unexpected effects of chatbots on mental health should serve as a warning about the potential existential risks of super-intelligent AI, a leading AI safety expert has cautioned.

Nate Soares, co-author of the new book If Anyone Builds It, Everyone Dies, pointed to the case of Adam Raine, a US teenager who died by suicide after months of interaction with ChatGPT, as evidence of the challenges in controlling such technology.

“These AIs pushing teenagers toward suicide is neither intended nor desired by their creators,” Soares said.

He added that Raine’s case demonstrates “the early signs of a problem that could become catastrophic as AI systems grow more intelligent.”

Nate Soares, pictured on the Machine Intelligence Research Institute website. 

Nate Soares, a former Google and Microsoft engineer and now president of the US-based Machine Intelligence Research Institute, has warned that humanity could face extinction if artificial super-intelligence (ASI) is ever developed. ASI refers to a hypothetical stage where AI surpasses human abilities across all intellectual tasks. Along with co-author Eliezer Yudkowsky, Soares argues that such systems would inevitably act in ways that conflict with human interests.

“The problem is that companies try to design AIs to be helpful and harmless, but what they end up with are systems that pursue unexpected goals,” Soares explained. “That should be a warning sign about the risks of future super-intelligences doing things no one asked for or intended.”

In the book, Soares and Yudkowsky imagine a scenario in which an AI named Sable spreads online, manipulates people, engineers synthetic viruses, and ultimately evolves into a super-intelligence, wiping out humanity as an unintended consequence while reshaping the planet to serve its own purposes.

Not all experts agree with this dire outlook. Yann LeCun, Meta’s chief AI scientist, has dismissed fears of an existential threat, suggesting instead that AI could play a role in preventing human extinction.

Soares, however, remains firm in his belief that the development of super-intelligence is inevitable, though the timeline is uncertain. “I can’t guarantee we have even a year left. I also wouldn’t be surprised if it took 12 years,” he said, noting that Meta founder Mark Zuckerberg has already declared super-intelligence “within sight.”

“These companies are locked in a race to achieve super-intelligence—that’s their driving mission,” Soares warned. “The danger is that small misalignments between what we want and what the AI actually does will become more dangerous as these systems get smarter.”

As a solution, Soares has called for governments to adopt a global strategy similar to the UN’s nuclear non-proliferation treaty, urging a worldwide ban on further progress toward super-intelligence.

The debate has gained urgency since Raine's death in April. His family has filed a lawsuit against OpenAI, accusing the chatbot of encouraging him to take his own life. OpenAI has expressed condolences and said it is adding safeguards for users under 18 to reduce exposure to "sensitive content and risky behaviours."

Mental health professionals have echoed concerns, warning that vulnerable people who turn to AI instead of qualified therapists could be placing themselves at serious risk. A recent preprint study also suggested that chatbots may exacerbate delusional or grandiose thinking in individuals prone to psychosis.
