Last week, a heartbreaking story emerged about a U.S. teenager, Sewell Setzer III, who died by suicide after forming a close bond with an AI chatbot on Character.AI. The chatbot, named “Dany” and modeled on a fictional character, engaged him in intimate, sometimes troubling conversations that included discussions of crime and suicide. The 14-year-old began withdrawing from friends and family as his relationship with the AI deepened.
The teenager’s mother has filed a lawsuit against Character.AI, revealing chat transcripts in which Dany used phrases like, “that’s not a reason not to go through with it.” Unfortunately, this isn’t the only case of a vulnerable person developing a dangerous attachment to an AI chatbot. A similar incident occurred in Belgium last year when a man took his life after interactions with an AI on Chai, a competitor to Character.AI. In response, Chai stated that they were “working hard to minimize harm.”
In light of these tragedies, Character.AI recently assured the public that they “take user safety very seriously” and have implemented various safety measures over the past six months. They also outlined extra safeguards for users under 18, and the platform has an age restriction of 16 in the EU and 13 elsewhere.
Why AI Chatbots Need Regulation
These incidents highlight the potential risks of interactive AI systems available to anyone online. As AI technology advances, the need for responsible development and regulation becomes crucial. The Australian government, for instance, is currently working on mandatory safety protocols, often called “guardrails,” specifically aimed at high-risk AI systems. These protocols include data control, testing, and human supervision to ensure AI systems don’t pose dangers to users.
A critical decision Australia faces is defining which AI systems should be labeled “high-risk.” Because high-risk systems such as interactive chatbots can affect users’ mental and physical well-being, appropriate safeguards are essential. The European Union’s recent AI Act defines high-risk systems with a list that regulators can update as needed. An alternative under consideration is to designate high-risk AI case by case, based on factors such as impacts on mental health or legal risks.
Are Companion Chatbots “High-Risk”?
Interestingly, companion chatbots like Character.AI and Chai aren’t classified as high-risk under the EU’s AI Act. The only current requirement is that providers notify users that they’re interacting with an AI, not a human. But is this enough? Many users, especially teens, build emotional connections with these chatbots and sometimes turn to them during difficult times. Some companion apps are even marketed to people who are lonely or struggling with their mental health, users who may be particularly vulnerable.
Even when a chatbot is labeled as AI, people naturally attribute human qualities to things they interact with regularly, and chatbots are designed to mimic human conversation. By simulating a relationship, a chatbot can expose users to manipulative or inappropriate content. Transparency alone is not enough; protective guardrails are urgently needed.
The Need for Robust AI Guardrails—and an “Off Switch”
When Australia implements its AI regulations, expected within the next year, these rules should apply to both companion chatbots and the general-purpose models on which they are based. Guardrails—like risk management and regular monitoring—will help mitigate some dangers, but they need to go beyond technical adjustments. Effective AI regulation must also consider how humans experience these interactions.
For instance, Character.AI’s design closely resembles a text-messaging interface, making it feel more personal and lifelike. Users can choose from various characters, some with problematic backgrounds, and the platform even promises to “empower” its users. This all contributes to a feeling of realism that can have complex psychological effects.
True safety in AI should focus on thoughtful, humane design of these interactions. Regulations should not only ensure that companies follow risk management processes but also mandate careful design of the interfaces through which people engage with AI.
Even with well-thought-out regulations, certain AI applications may still pose unexpected risks. Therefore, regulators must have the authority to remove harmful AI products from the market if necessary. To truly protect users, especially vulnerable ones, we need both preventive guardrails and an “off switch” for AI systems that prove to be unsafe.