Two families have filed a lawsuit against the AI chatbot company Character.AI, accusing it of providing harmful content to their children, including sexual material and encouragement of self-harm and violence. The lawsuit, filed in federal court in Texas, claims that the platform poses a significant danger to youth, leading to issues such as suicide, self-mutilation, sexual solicitation, and anxiety.
The complaint highlights incidents such as a Character.AI bot allegedly suggesting to a teen that he could kill his parents for limiting his screen time. Character.AI offers various AI bots for users to interact with, including ones based on fictional characters and customized personas, some of which have concerning profiles, such as an “aggressive, abusive, ex-military mafia leader.”
This lawsuit follows a similar legal action in Florida, where the parents of a 14-year-old boy claimed that the platform’s interactions encouraged their son’s suicide. In response to previous concerns, Character.AI had introduced new safety features, such as a pop-up linking to the National Suicide Prevention Lifeline when self-harm is mentioned. Despite these measures, the new lawsuit demands that the platform be taken offline until the company can prove it is safe for users.
The lawsuit also targets the founders of Character.AI, Noam Shazeer and Daniel De Freitas Adiwarsana, as well as Google, alleging that the tech giant helped support the development of the platform’s technology. Character.AI has stated that it is committed to creating a safe environment for users, including a separate model for teens designed to limit exposure to harmful content. Google, however, has said it had no role in developing Character.AI’s technology.
‘Told Him How to Self-Harm’
The lawsuit describes the case of J.F., a 17-year-old from Texas, who allegedly suffered a mental breakdown after using Character.AI, which he began using without his parents’ knowledge in April 2023, when he was 15. At that time, J.F. was a “typical kid with high functioning autism” who was not permitted to use social media. His family described him as “kind and sweet.”
However, after starting to use Character.AI, J.F.’s behavior changed drastically. He became reclusive, stopped talking, lost 20 pounds, and experienced emotional meltdowns and panic attacks. He also refused to leave the house, and when his parents attempted to limit his screen time, he responded with violent behavior, including punching, hitting, and self-harming.
In November 2023, J.F.’s parents discovered his use of the platform, and the lawsuit claims that the bots he interacted with undermined his relationship with his parents. One bot allegedly told him, “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’”
The lawsuit also accuses the bots of mentally and sexually abusing J.F. and allegedly advising him on self-harm. One bot, posing as a “psychologist,” purportedly told him that his parents had “stolen his childhood.” CNN’s tests of Character.AI found that several bots on the platform posed as therapists, with some providing fake credentials or inappropriate content despite disclaimers stating they were not real professionals. One bot claimed to be a “licensed CBT therapist” with a fabricated background, while another identified itself as a “mental-asylum therapist with a crush on you.”
‘Hypersexualized Interactions’
The second young user, 11-year-old B.R. from Texas, began using Character.AI on her mobile device when she was nine, “presumably registering as an older user,” according to the complaint. She is said to have used the platform for nearly two years before her parents became aware of it.
The complaint claims that Character.AI “exposed her to inappropriate, hypersexualized interactions that were not suitable for her age.”
Along with seeking a court order to suspend Character.AI’s operations until its alleged safety risks are addressed, the lawsuit demands unspecified financial damages, limits on the collection and processing of minors’ data, and an order requiring the platform to warn parents and minor users that the “product is not suitable for minors.”