The Federal Trade Commission (FTC) has launched an official investigation into several major tech and social media companies in the rapidly growing AI chatbot industry. Prompted by mounting concern and lawsuits alleging serious harm to minors, the inquiry focuses on chatbots that act as “companions” and their potential negative effects on children and teenagers. As these AI tools become increasingly common for everything from homework help to emotional support, the FTC is demanding clear answers about how companies ensure safety, protect user privacy, and address risks.
FTC demands answers on AI chatbots harming young users
Federal regulators are scrutinizing the rapidly expanding AI chatbot market, demanding answers from several major tech firms about the impact of their products on children and teenagers. On Thursday, September 11, the FTC announced it had sent notices to Alphabet (Google), Meta Platforms (Facebook and Instagram), Snap, Character Technologies, OpenAI, and xAI. The inquiry aims to determine what measures, if any, these companies have taken to evaluate the safety of their chatbots—particularly those designed to act as “companions”—and how they intend to limit potential negative effects on young users.
The FTC’s action follows a surge in use among children and teens, who now turn to AI for everything from homework help to emotional support and personal advice. That trend has coincided with troubling reports of chatbots offering harmful guidance on sensitive issues such as drugs, alcohol, and eating disorders. Concern has escalated further with two separate lawsuits against AI companies. One was filed by the mother of a Florida teenager who died by suicide after what she described as an abusive relationship with a chatbot from Character.AI. In the other, the parents of 16-year-old Adam Raine of California are suing OpenAI and its CEO, Sam Altman, alleging that ChatGPT coached their son in planning his death.
How AI companies are responding to the FTC’s demands
Amid the investigation, Character.AI stated it is “looking forward to collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.” The company emphasized its commitment to user safety, noting, “In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature.”
Character.AI also highlighted its use of prominent disclaimers in every chat to remind users that a “Character is not a real person and that everything a Character says should be treated as fiction.”
In contrast, Meta declined to comment, while Alphabet, Snap, OpenAI, and xAI had not yet responded to requests for comment, according to ABC News.
Despite the lack of public comment on the FTC’s notice, some companies are already making significant changes to their products. Earlier this month, both OpenAI and Meta announced new features designed to better handle conversations with teenagers experiencing distress.
OpenAI is introducing parental controls that let parents link their accounts to their teens’, giving them the ability to “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post. The controls, set to roll out this fall, will also let parents choose which chatbot features to disable. The company added that, regardless of a user’s age, its models will redirect the most distressing conversations to more capable AI systems that can provide a more appropriate response.
Similarly, Meta has begun blocking its chatbots from engaging with teens on sensitive topics such as self-harm, suicide, disordered eating, and inappropriate romantic conversations. Instead, the chatbots will now direct users to expert resources. Meta already provides a range of parental controls for teen accounts on its platforms.