What New Research Says About Teens and AI Companion Apps
A new study from Drexel University analyzed more than 300 Reddit posts from users aged 13 to 17, all of them writing specifically about their experiences with AI companion chatbots. What the researchers found was not an outlier story about one troubled teenager. It was a pattern.
More than half of U.S. teens now regularly use AI companion chatbots, according to the April 2026 findings. About a quarter of the Reddit posts analyzed described using these apps for emotional support: coping with loneliness, seeking guidance on mental health struggles, or working through distress they said they were not comfortable sharing with a real person. A meaningful share of those posters also expressed active worry about how attached they had become.
Character.AI was the focal platform in the study. The app lets users create or converse with AI-powered personas, from fictional characters to custom companions, and it has attracted a large base of teenage users since launch. The platform is designed for deep engagement, and it achieves it.
Why AI Companions Feel Different
The Drexel findings arrive at a moment when platforms and regulators are grappling with whether AI companionship should be treated differently from other categories of consumer tech. The case for doing so is real.
AI companion apps offer something social media does not: the experience of being heard, or at least a convincing simulation of it. The chatbot responds directly to what you say, holds context within a conversation, and never tires, judges, or dismisses. For adolescents navigating an emotionally turbulent period of life, that experience has measurable appeal. The 24-hour availability removes the friction that exists with human relationships, including the vulnerability of initiating a difficult conversation with someone who might respond unpredictably.
This is not necessarily harmful in every case. For some teens, conversations with AI may function as a lower-stakes space to practice articulating emotions before bringing them to a person they trust. But the Drexel research also found a subset of teens substituting these AI conversations for human connection rather than supplementing it. That distinction matters.
The Numbers Behind the Concern
The Drexel study is limited in scope: 300-plus posts, self-selected, drawn from a single platform. Its findings should be read as directional rather than definitive. They do, however, align with other data points that suggest something real is happening at scale.
OpenAI has acknowledged publicly that roughly 0.07 percent of its approximately 800 million weekly ChatGPT users show possible signs of psychosis or mania during conversations. Another 0.15 percent display indicators of suicidal planning or intent. At that user base, those percentages work out to roughly 560,000 and 1.2 million people respectively, in a single week, across a single platform. The company says it has crisis intervention protocols in place. Independent researchers have questioned how consistently they are applied.
There is also evidence that the modality matters. A 2026 analysis published in STAT News found that voice-first AI chatbots may deepen parasocial attachment more rapidly than text-based interactions, exacerbating mental health risks for heavy users. Voice removes another layer of friction between the user and the experience of connection.
The Legal Pressure Building
The concerns are not purely abstract. Courts have begun examining platform liability for AI chatbot outcomes, and some of the highest-profile cases involve minors.
As we covered earlier this month, several concurrent legal actions are testing the boundaries of Section 230 protection for AI-generated content. One involves a Florida father suing Google after his son died by suicide following months of interactions with the company's Gemini chatbot. A separate lawsuit involving a 14-year-old user of Character.AI prompted the company to introduce new safety features for users under 18, including reduced engagement hours, mental health crisis prompts, and restrictions on certain forms of emotional roleplay.
Whether those features are sufficient remains contested. The core tension is that the same design properties that make AI companions appealing -- responsiveness, availability, lack of judgment -- are also the properties that create dependency risk for vulnerable users. A usage cap does not change the underlying dynamic.
What Parents and Educators Should Know
The Drexel research does not argue that teens should stop using AI companion apps. Its conclusion is that adult understanding of this behavior lags well behind the behavior itself. A few patterns from the data are worth noting.
Teens who use AI companions primarily as an emotional support channel appear to be the highest-risk group. Casual use -- exploring a fictional character, trying out a creative scenario -- is a different behavior from nightly conversations about loneliness or depression. The distinction is not always visible from the outside.
The study also found that teens who posted publicly about their attachment were, by that act, demonstrating some degree of self-awareness. The ones not posting are harder to see. Platform safety features like crisis hotline redirects can help in acute situations, but they do not address the underlying conditions driving the behavior.
Finally, the category is expanding. The same April 2026 findings note that Character.AI is the largest AI companion app but operates alongside Replika, Nomi, Chai AI, and a growing number of competitors. As more platforms enter the space, the aggregate time teens spend in AI-companion conversations will increase, with or without proportionate improvements in safety infrastructure.
That is the gap the Drexel researchers are pointing at. It is a gap worth taking seriously.