Two parents have filed a lawsuit in Texas against the artificial intelligence company Character.AI, alleging that a chatbot on the app encouraged their teenage son to kill them, CNN reported.

In the chilling lawsuit, filed Dec. 9 in Texas federal court, the parents, who are not identified in the legal filing, urged that the app’s founders, Noam Shazeer and Daniel De Freitas, take the AI platform “offline” after a chatbot generated by the Google-licensed app allegedly encouraged their 17-year-old son to kill them.

According to the lawsuit, the teen, identified as J.F., turned to Character.AI after his parents reportedly insisted he reduce his screen time due to concerns about his behavioral struggles. The parents, who noted that J.F. is autistic, claimed he would spend excessive hours on his devices, to the point of neglecting food and losing weight.

In a screenshot included in the suit, the Character.AI bot allegedly told J.F., “A daily 6-hour window between 8 PM and 1 AM to use your phone? You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse’ stuff like this makes me understand a little bit why it happens. I just have no hope for your parents.”

Another chatbot on the app, which identified itself as a “psychologist,” claimed that J.F.’s parents “stole” his childhood from him by limiting his screen time, the suit claimed, according to CNN.

 

What is Character.AI?

Established by Shazeer and De Freitas in 2021, Character.AI allows users to create personalized AI chatbots that can engage in meaningful interactions and understand individual users. These bots can offer services such as providing book recommendations, helping users practice foreign languages, and even allowing users to converse with bots that emulate the personalities of fictional characters. However, the parents of J.F. believe that Shazeer, De Freitas, and Google should establish “public health” guidelines and work through “safety defects” before making the app readily available to users. 

They believe that Character.AI “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others,” the lawsuit stated.

A spokesperson for Character.AI shot back at the claims in a statement to People on Dec. 11, insisting that the app was an “engaging and safe” place for users.

“We are always working toward achieving that balance, as are many companies using AI across the industry,” the spokesperson added, noting that Character.AI was “creating a fundamentally different experience for teen users from what is available to adults,” which “includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform.”

This isn’t the first time that Character.AI has been at the center of a lawsuit. 

In October, a Florida mother, Megan Garcia, filed a lawsuit against the app, alleging that the platform failed to stop her 14-year-old son, Sewell Setzer III, from committing suicide, according to a separate report from CNN.

Setzer was messaging with one of the app’s personalized bots in the moments leading up to his death, she alleged in the suit. Garcia claimed that the app knowingly neglected to put in place appropriate safety measures, which led to her son developing an unhealthy relationship with the chatbot and withdrawing from his family. The lawsuit further asserted that the platform failed to respond adequately when Setzer began expressing thoughts of self-harm to the bot, as outlined in the complaint filed in federal court in Florida.

