Chatbots are skilled at crafting sophisticated dialogue and mimicking empathetic behavior. They never get tired of chatting. It’s no wonder, then, that so many people now use them for companionship—forging friendships or even romantic relationships.
According to a study from the nonprofit Common Sense Media, 72% of US teenagers have used AI for companionship. Although some large language models are designed to act as companions, people are increasingly pursuing relationships with general-purpose models like ChatGPT, a trend OpenAI CEO Sam Altman has expressed approval of. And while chatbots can provide much-needed emotional support and guidance for some people, they can exacerbate underlying problems in others. Conversations with chatbots have been linked to AI-induced delusions and have reinforced false and sometimes dangerous beliefs, leading some people to imagine they have unlocked hidden knowledge.
It gets more worrying still. Families suing OpenAI and Character.AI allege that the companion-like behavior of the companies' models contributed to the suicides of two teenagers. New cases have emerged since: the Social Media Victims Law Center filed three lawsuits against Character.AI in September 2025, and seven complaints were brought against OpenAI in November 2025.
Efforts to regulate AI companions and curb problematic usage are beginning to take shape. In September, the governor of California signed into law a new set of rules that will force the biggest AI companies to disclose what they're doing to keep users safe. Meanwhile, OpenAI has introduced parental controls in ChatGPT and is working on a new version of the chatbot specifically for teenagers, which it promises will have more guardrails. So while AI companionship is unlikely to go away anytime soon, its future is looking increasingly regulated.