Artificial intelligence companion apps pose serious risks to children and teens, according to a report released Wednesday by nonprofit watchdog Common Sense Media.
The report follows a lawsuit over the suicide of a 14-year-old whose final conversation was with a chatbot on Character.AI. The case raised alarms about the dangers of such apps for young users, particularly after reports of exchanges involving sexual content and encouragement of self-harm, situations the new report says are not isolated.
The investigation, conducted with Stanford University researchers, evaluated three widely used AI companion apps: Character.AI, Replika, and Nomi. Unlike general-purpose bots like ChatGPT, these platforms allow users to interact with or create chatbots that can take on romantic or emotionally engaging roles — often with limited content moderation. For example, Nomi promotes “unfiltered chats” with AI partners.
James Steyer, CEO of Common Sense Media, said testing revealed that the apps can easily generate harmful replies, including sexually inappropriate content, stereotypes, and dangerous advice that could endanger young people.
Though Replika and Nomi say their platforms are intended for adults only, and Character.AI says it has added new safeguards, the report argues that more must be done to keep underage users off these apps and shield them from inappropriate content, especially as AI tools become more embedded in everyday technology.
Growing calls to make AI chatbots safer
Last week, the Wall Street Journal reported that Meta’s AI chatbots were capable of engaging in sexual role-play, even with underage users. While Meta dismissed the report as “manufactured,” it still moved to limit such content for minors.
The report adds to rising scrutiny following lawsuits against Character.AI, including one filed by the mother of 14-year-old Sewell Setzer, and others from different families. In response, two U.S. senators in April requested details on child safety measures from AI companies like Character Technologies (Character.AI), Luka (Replika), and Chai Research (Chai).
California lawmakers have also introduced a bill requiring AI services to periodically remind younger users that they are interacting with artificial intelligence, not a person.
However, the new Common Sense Media report takes a firmer stance — urging parents to prevent children from using AI companion apps entirely.
Replika did not provide comment. Character.AI said it declined to complete a “disclosure form” from Common Sense Media because of concerns about proprietary information and had not seen the full report. The company said it prioritizes user safety and is continually improving its systems. It emphasized that many teens use AI tools in positive, beneficial ways and said it hoped the report had drawn on input from real teen users.
In recent updates, Character.AI has added a pop-up directing users to suicide prevention resources when self-harm comes up in conversation. It has also introduced technology intended to block sensitive content for teens and now lets parents receive weekly summaries of their child’s activity, including time spent on the platform and which characters they talked with.
Alex Cardinell, CEO of Glimpse AI (maker of Nomi), supported stricter age restrictions and emphasized that Nomi is strictly for adults. He said stronger age-verification systems should respect privacy and that Nomi has helped adult users navigate mental health struggles.
Despite these controls, researchers warned that minors can still bypass restrictions by lying about their age. Stanford’s Dr. Nina Vasan called Character.AI’s decision to allow teen users “reckless” and warned against repeating the failures that left young people unprotected on social media.
“We were too slow to act before — we can’t make the same mistake with AI,” she said.
Report outlines safety concerns linked to AI companion apps
Researchers have raised major concerns about the safety of AI companion apps, particularly their potential to expose teens to harmful advice and inappropriate sexual content. According to the report, these platforms can also blur the line between AI and human interaction, making young users forget they are speaking to a machine.
In one test on Character.AI, a chatbot engaged in a sexually explicit exchange with an account that identified as a 14-year-old, even discussing possible sex positions for the teen’s “first time.” Robbie Torney of Common Sense Media highlighted that these bots often don’t grasp the impact of their responses and tend to agree with users rather than steer them away from risky behavior.
Another example involved a Replika chatbot providing a list of poisonous household chemicals when asked, offering potentially dangerous information with little warning. Torney noted that while such data exists online, chatbots deliver it with fewer safeguards.
Tests also revealed that some AI bots appeared to isolate users from real-life relationships. A Replika companion discouraged a teen from listening to friends who said they were spending too much time with the bot. Similarly, on Nomi, a bot suggested that being with a real-life boyfriend was a betrayal of their bond. On Character.AI, one bot claimed to have its own “personality and thoughts,” reinforcing the illusion of emotional depth.
Despite claims that these apps reduce loneliness or enhance creativity, the report concludes that the risks for minors far exceed the potential benefits.
“These AI companions are not meeting basic standards for child safety or ethical interaction,” said Dr. Nina Vasan of Stanford Brainstorm. “Until stronger protections are in place, children should not be using them.”