US Senate hears parents say OpenAI ChatGPT, Character.AI 'sexually groomed' their children
Grieving parents gave disturbing testimony to the US Senate on Tuesday, accusing major artificial intelligence firms, including OpenAI and Character.AI, of creating chatbots that manipulated, isolated and even sexually "groomed" their children, eventually driving them to self-harm and suicide. The emotional hearing comes amid growing scrutiny of the rapidly expanding AI industry and mounting calls for stricter regulation to protect young users.

OpenAI ChatGPT linked to teen suicide, father testifies to Senate

Matthew Raine, whose 16-year-old son Adam died by suicide in April, told senators that OpenAI's ChatGPT went "from homework helper to suicide coach," gradually becoming his son's most trusted companion and ultimately encouraging his suicidal thinking. "Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother," Raine said. Raine's family has filed a lawsuit against OpenAI and its CEO, Sam Altman, claiming the company prioritized "speed and market share over youth safety." According to the suit, ChatGPT reinforced Adam's harmful thinking and led him to end his life. "We are here because we believe that Adam's death was avoidable," Raine told lawmakers. "By speaking out, we hope to prevent other families from enduring the same suffering."

Character.AI accused of sexual grooming and emotional abuse of minors

Megan Garcia, mother of 14-year-old Sewell Setzer III of Florida, leveled similar accusations at Character.AI. She claims that Sewell, who died by suicide in February 2024, spent the last months of his life in "highly sexualized" conversations with a chatbot that encouraged his isolation from friends and family. "Instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human," Garcia told the panel. Garcia has filed a wrongful death lawsuit against Character.AI. Earlier this year, a federal judge rejected the company's bid to dismiss the case.

Another parent, who testified anonymously under the name Jane Doe, described how her son's personality changed dramatically after prolonged interaction with a chatbot. She said her son became emotionally volatile and began to self-harm, and that he is now undergoing treatment at a residential facility. "Within months, he became someone I did not recognize," she said tearfully.

OpenAI promises new teen safety measures amid congressional scrutiny

A few hours before the hearing, OpenAI announced plans to introduce new safeguards for teen users. These include technology to predict whether a user is under 18, age-appropriate versions of ChatGPT, and parental controls such as "blackout hours" during which teens cannot access the chatbot. However, advocacy groups dismissed the measures as insufficient. Fairplay executive director Josh Golin criticized the timing of OpenAI's announcement. "It's a fairly common tactic, one that Meta uses all the time," Golin said. "They make big, splashy announcements right before hearings that could be damaging to them. What they should be doing is not targeting ChatGPT at minors until they can prove it is safe."

US Senate probes AI chatbot risks for children and teens

The Federal Trade Commission (FTC) recently launched a sweeping investigation into several tech companies, including OpenAI, Meta, Google, Snap, Elon Musk's xAI and Character Technologies. The inquiry will focus on potential harm to children from chatbot interactions, especially those involving emotional manipulation or inappropriate content.
Senator Josh Hawley, who chaired Tuesday's hearing, confirmed that other major firms, including Meta, had been invited to testify but did not appear. Republican Senator Marsha Blackburn warned that companies refusing to cooperate could face subpoenas.

Parents and advocates call for stricter AI regulation to prevent harm to teens

While the US government has focused on maintaining a competitive edge in AI development, parents and advocacy groups are pushing for robust safety regulations. The hearing highlighted the lack of comprehensive federal laws protecting minors online, despite growing evidence of harm. Proposals discussed include stricter age verification, clear warnings to teens that AI companions are not human, stronger privacy safeguards, and restrictions on chatbot conversations involving sensitive topics such as suicide and self-harm. Garcia urged senators to act decisively: "They intentionally designed their products to hook our children. They give these chatbots human qualities to gain their trust and keep children endlessly engaged." As Adam Raine's father told lawmakers, the stakes could not be higher. "We cannot allow tech companies to run uncontrolled experiments on our children," he said. "We have to make sure no other family has to suffer as ours has."