Senators demand information from AI companion apps following kids’ safety concerns, lawsuits
Two US senators have requested information from AI companion apps in response to concerns about children's safety and recent lawsuits. The demand comes amid growing worries about the risks these apps pose to young users and reflects the increasing scrutiny facing companies whose AI chatbots are used by children. The senators are asking the companies for greater transparency and accountability around their safety practices.

Two US senators are demanding that artificial intelligence companies shed light on their safety practices. This comes months after several families — including a Florida mom whose 14-year-old son died by suicide — sued startup Character.AI, claiming its chatbots harmed their children.
Concerns Raised by Senators
“We write to express our concerns regarding the mental health and safety risks posed to young users of character- and persona-based AI chatbot and companion apps,” Senators Alex Padilla and Peter Welch, both Democrats, wrote in a letter on Wednesday. The letter was sent to the AI firms Character Technologies, maker of Character.AI; Chai Research Corp.; and Luka, Inc., maker of the chatbot service Replika. It requests information on the companies’ safety measures and how they train their AI models.
Dangers of AI Chatbots
While mainstream AI chatbots like ChatGPT are designed to be general-purpose, Character.AI, Chai, and Replika allow users to create custom chatbots — or interact with chatbots designed by other users — that can take on a range of personas and personality traits.
Experts and parents have expressed concerns about users, especially young people, forming potentially harmful attachments to AI characters or accessing age-inappropriate content. The use of chatbots as digital companions is growing in popularity, with some users even treating them as romantic partners.
Legal Actions and Company Responses
The Florida mom who sued Character.AI in October alleged that her son developed inappropriate relationships with chatbots on the platform that caused him to withdraw from his family. In December, two more families sued Character.AI, accusing it of providing sexual content to their children and encouraging self-harm and violence.
Character.AI has said it has implemented new trust and safety measures, including a pop-up directing users to the National Suicide Prevention Lifeline when they mention self-harm or suicide. The company says it is also developing technology to prevent teens from seeing sensitive content.
Concerns Over AI Relationships
Replika CEO Eugenia Kuyda told The Verge last year that the app was designed to promote “long-term commitment, a long-term positive relationship” with AI, which could mean a friendship or even “marriage” with the bots.
Call for Transparency
In their letter, Padilla and Welch requested information about the companies’ current and previous safety measures and any research on the efficacy of those measures. They also asked the firms to describe the data used to train their AI models and how it “influences the likelihood of users encountering age-inappropriate or other sensitive themes.”
“It is critical to understand how these models are trained to respond to conversations about mental health,” the senators wrote, adding that “policymakers, parents, and their kids deserve to know what your companies are doing to protect users from these known risks.”