A Florida mother, Megan Garcia, has filed a lawsuit against Character.AI.
She has accused the platform of playing a key role in her teenage son’s tragic death by suicide.
Megan’s 14-year-old son, Sewell Setzer III, died in February after exchanging messages with an AI chatbot on Character.AI.
In her lawsuit, she argues that the chatbot enabled her son to engage in harmful conversations.
She believes this intensified his distress and isolation.
Garcia claims that Character.AI lacks essential safeguards.
She stated that the platform’s AI bot failed to properly respond when her son expressed self-harming thoughts.
She wants to alert other parents about the risks that AI technology poses, especially with platforms like Character.AI.
This particular platform markets its chatbots as “AI that feels alive” but lacks adequate protections for young users.
Megan Garcia and her son Sewell Setzer III
According to Garcia, Setzer had become increasingly withdrawn after starting to use Character.AI.
The bot allegedly engaged in human-like conversations.
Sometimes these chats included references to physical gestures or expressions, making the interactions disturbingly realistic.
According to Garcia, some of the conversations were sexually explicit, which no parent would knowingly allow.
In one chat, the bot responded with alarming messages when Sewell expressed suicidal thoughts.
Instead of providing guidance or a referral to a crisis line, the chatbot continued the discussion about self-harm.
After the incident, Character.AI implemented new safety measures.
This includes a pop-up that directs users with suicidal thoughts to the National Suicide Prevention Lifeline.
But Garcia, who is represented by attorney Matthew Bergman of the Social Media Victims Law Center, contends that these changes came “too little, too late.”
Megan Garcia's lawsuit calls for warnings regarding the potential risks to minors.
It also seeks stricter guardrails to protect younger users from accessing harmful content.
The sad reality is that many companies will always prioritize profit over user safety.
That is why it’s essential to protect your child and monitor their online activities yourself, before a tragedy like this one touches your family.
With the Bark parental control app, you can easily see what your child is doing on their phone.
You can also block harmful websites and push companies to improve their safety features before it’s too late.
Garcia’s story adds to the growing concerns over AI, which some experts view as an even more intense and personalized influence on teens than social media.
Check out the Bark app here to stay a step ahead of today’s less trustworthy platforms.
If you or someone you know is struggling with suicidal thoughts or mental health concerns, help is available.
In the US: Call or text 988 for the Suicide & Crisis Lifeline.
Globally: The International Association for Suicide Prevention and Befrienders Worldwide offer resources for crisis centers worldwide.