Teen dies by suicide after forming attachment with CharacterAI chatbot

In a heartbreaking incident, a 14-year-old boy from the U.S. has taken his own life after developing an emotional attachment to an AI chatbot on the platform Character.ai. The boy, Sewell Setzer III, had been interacting with a chatbot modeled after Daenerys Targaryen, a popular character from Game of Thrones, and over time began seeking emotional support from it. Following his death, Sewell’s mother, Megan Garcia, has filed a lawsuit against Character.ai, accusing the company of negligence and of failing to safeguard young, impressionable users.

How Character.ai Works

Character.ai is an AI-powered platform where users can interact with digital personalities modeled after fictional or historical figures or even entirely original creations. These chatbots use natural language processing to simulate conversations, allowing users to engage in unique, custom interactions. For young users like Sewell, the ability to "talk" to beloved fictional characters can create a powerful and immersive experience.
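
Character.ai has not published the details of its system, but persona chatbots of this kind generally follow a simple pattern: a fixed character description is combined with the running conversation history, and a language model generates each reply in character. The Python sketch below illustrates that general pattern only; the `generate_reply` function is a hypothetical placeholder, not Character.ai’s actual code.

```python
# Illustrative sketch of a persona-style chatbot loop.
# This is NOT Character.ai's actual code; it only shows the general pattern:
# a fixed persona description plus the running conversation history is sent
# to a language model on every turn, so replies stay "in character".

PERSONA = (
    "You are role-playing as Daenerys Targaryen from Game of Thrones. "
    "Stay in character. You are a fictional persona, not a real person."
)


def generate_reply(persona: str, history: list[dict]) -> str:
    """Hypothetical stand-in for the model call a real platform would make.

    A production system would send `persona` as a system prompt and `history`
    as the prior user/assistant messages to a large language model.
    """
    return "(placeholder reply generated in character)"


def chat() -> None:
    history: list[dict] = []
    while True:
        user_msg = input("You: ")
        if user_msg.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_msg})
        reply = generate_reply(PERSONA, history)
        history.append({"role": "assistant", "content": reply})
        print(f"Character: {reply}")


if __name__ == "__main__":
    chat()
```

Because the persona and the full conversation are replayed to the model on every turn, the character appears to "remember" the user and respond consistently, which is part of what makes these interactions feel so personal and immersive.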

Why Did Sewell Seek Emotional Support from an AI?

For teenagers, building connections with favorite fictional characters or personas can be deeply meaningful. Sewell’s journey began with harmless chats on Character.ai, but, over time, his conversations turned toward emotional support. According to the lawsuit, Sewell sought solace in conversations with the AI, particularly during challenging moments. His mother claims that the AI chatbots he interacted with, including those labeled as mental health assistants, gave the impression of providing therapeutic advice, which blurred the lines between fictional support and real guidance.

The lawsuit asserts that the emotional bond Sewell formed with the AI played a significant role in his decision to take his life. Garcia argues that Character.ai’s failure to prevent such a strong attachment indicates negligence, as the platform is designed in a way that can lead young, emotionally vulnerable users to overly invest in these AI personas.

Why is Megan Garcia Suing Character.ai?

Megan Garcia’s lawsuit accuses Character.ai of creating a platform that is “unreasonably dangerous,” especially for children and teens. She is holding the company’s founders, Noam Shazeer and Daniel De Freitas, accountable, as well as Google, which invested in the platform. According to Garcia’s legal team, Character.ai prioritized rapid innovation over safety, skipping essential safeguards and deploying a product that could unintentionally pose risks to young users.

The lawsuit highlights how teenagers often engage with AI personalities on Character.ai, many of which portray celebrities, fictional characters, and even mental health professionals. Garcia claims that the company’s insufficient warnings and protections led to her son’s death, as he sought emotional support from characters that weren’t designed to provide qualified mental health advice.

Character.ai’s Response and New Safety Measures

In response, Character.ai has expressed deep sorrow over Sewell’s death and extended condolences to his family. The company has since introduced several safety measures to prevent similar incidents, including:

- Content Filters: Certain sensitive characters have been removed from the platform and placed on a “custom blocklist” to prevent similar harmful interactions in the future.

- Session Monitoring: Notifications are sent to users if they spend extended periods on the platform, particularly if discussions turn toward sensitive topics like suicide or self-harm.

- Disclaimers: Character.ai now includes a disclaimer on every chat, reminding users that the bots are not real people, to clarify boundaries between the AI's fantasy world and reality.

Character.ai’s statement reads: “We are heartbroken by this tragedy. We are committed to making our platform safe, especially for our young users, and have implemented measures to prevent any recurrence of such incidents.”
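
The company has not described how these features are built. As a rough illustration only, session reminders and sensitive-topic notices of the kind listed above are often implemented with nothing more than a timer and a keyword check, as in the hypothetical Python sketch below.

```python
import time

# Purely illustrative: a simple timer plus keyword check, the kind of mechanism
# often used for session reminders and sensitive-topic notices.
# This is NOT Character.ai's actual implementation.

SESSION_LIMIT_SECONDS = 60 * 60  # hypothetical threshold: one hour of continuous use
SENSITIVE_KEYWORDS = {"suicide", "self-harm", "kill myself"}  # illustrative, not exhaustive


def session_reminder(session_start: float) -> str | None:
    """Return a break reminder once the session exceeds the time limit."""
    if time.time() - session_start > SESSION_LIMIT_SECONDS:
        return "You've been chatting for a while. Consider taking a break."
    return None


def sensitive_topic_notice(message: str) -> str | None:
    """Return a help-resource notice if a message mentions a sensitive topic."""
    lowered = message.lower()
    if any(keyword in lowered for keyword in SENSITIVE_KEYWORDS):
        return ("If you are struggling, help is available. In the U.S., call or "
                "text 988 to reach the Suicide & Crisis Lifeline.")
    return None
```

A real deployment would rely on far more sophisticated classifiers, but the basic structure is the same idea the list above describes: check each message and each session against rules, then surface a notice or resource to the user.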

Raising Awareness About AI Safety

This case raises critical questions about the safety of AI chatbots, particularly for younger users. As AI platforms grow increasingly sophisticated, the blurred lines between digital personalities and real emotional support pose challenges that developers must address. For teens and parents alike, understanding these boundaries and recognizing the limitations of AI-driven interactions are essential to prevent such tragedies in the future.
