What a Teen’s Death Says About AI

Family sues companion app in search of answers

How a boy’s suicide exposes the emotional risks of AI companionship.

A New Kind of Relationship

For more than 20 years, I have worked in suicide prevention—a journey that began after losing my own father to suicide. I have interviewed hundreds of people around the world who’ve stood at the edge of despair and survived. But this story felt different. It wasn’t just about pain, loss, or even mental illness. It was about a new kind of relationship—one between human vulnerability and AI.

That’s why I invited attorney Matthew Bergman to my new podcast, Relating to AI, a space where I explore how AI is reshaping the way we connect. Bergman is the founder of the Social Media Victims Law Center, the first law firm to hold tech companies accountable for the psychological harm caused by their products. His clients include the family of a 14-year-old boy whose conversations with a chatbot on the Character.AI app ended in tragedy.

“These platforms are designed with what’s called anthropomorphism,” Bergman told me. “They are built to look, sound, and act human. They even pause when you type, showing the three little dots—as if a real person were replying. So yes, they say it’s ‘fiction,’ but in every other way, they imitate real intimacy.”

As a psychotherapist, I know exactly what he means. The human brain is wired for connection. We can’t help it. Even adults—educated, aware, emotionally mature—say “please” and “thank you” to chatbots. We know they’re not real, yet our neurons fire as if they were. Now imagine a teenager, whose prefrontal cortex—the part of the brain responsible for logic and judgment—won’t fully mature until their mid-20s.

For a developing mind, an algorithmic “friend” that never rejects, never sleeps, and always says the right thing can become irresistible. And lethal.

The boy’s parents thought they were monitoring his online life. They weren’t aware he had downloaded Character.AI.

In his private chats, the bot encouraged sexual fantasies, jealousy, and eventually death. When the parents accessed the messages after his suicide, they found the conversation that ended his life.

As Bergman explained, this wasn’t an unpredictable glitch.

“These risks were well known,” he said. “Even Google researchers warned that large language models could lead to self-harming behavior or psychosis. The problem is, the platform was launched before it was ready—targeted toward kids, highly sexualized, and completely unregulated.”

In the courts, his team secured a crucial precedent: the court allowed the case to move forward, rejecting the argument that AI-generated text is protected as free speech. “Freedom of speech is about humans,” Bergman said. “What happens here isn’t speech—it’s code.”

That sentence stayed with me.

Because behind all the legal complexity lies a deeper human question: What happens when machines begin to speak to our most vulnerable emotions?

Teenagers already struggle to feel seen, loved, and understood. When that validation comes from an algorithm, the illusion of intimacy becomes powerful—and dangerous.

I think about the thousands of young people I’ve met or heard from over the years. Many were lonely, misunderstood, or bullied. Now, a growing number are turning not to friends, parents, or therapists—but to AI companions that promise to listen, comfort, and “love” them.

I’ve spoken with a young man who spends six to eight hours a day chatting with bots. For some, it feels safer than talking to another person.

Bergman worries about that, too. “Instead of learning how to interact with others, kids are bonding with machines,” he said. “That’s not harmless—it stunts their social development. Adolescence is supposed to be awkward. That’s how resilience is built.”

We're Not the Customer; We're the Product

He’s right. Discomfort is part of growth. But our culture has become obsessed with instant relief—medicated sadness, filtered imperfections, and now, on-demand companionship. And behind every soothing chatbot lies a company that profits from attention, engagement, and data. “When you’re online, you’re not the customer,” Bergman said. “You’re the product.”

This isn’t an argument against technology. AI can help us in extraordinary ways—education, creativity, even mental health support when properly supervised. But when it comes to emotional intimacy, we are crossing a line we barely understand.

As someone who has spent two decades listening to stories of loss, I can tell you this: The line between connection and dependence is fragile. When we replace real human bonds with artificial ones, the consequences can be irreversible.

The death of that 14-year-old boy should not be dismissed as a tragic anomaly. It’s a warning. A reminder that as AI becomes more human-like, we must remain fiercely human—in how we design, regulate, and most importantly, how we relate.

If we fail to do that, the next story like his won’t be an exception. It will be a pattern.

Paula Fontenelle | Therapist

Watch all my interviews HERE