In lawsuit over teen's death, judge rejects arguments that AI chatbots have free speech rights
TALLAHASSEE, Fla. (AP) – A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment – at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging that the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is one of the latest constitutional tests of artificial intelligence.

The suit was filed by a Florida mother, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot. Meetali Jain of the Tech Justice Law Project, one of Garcia's attorneys, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."

The case, which also names Google and individual developers as defendants, has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships, despite what experts warn are potentially existential risks.

"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show "Game of Thrones." In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving that message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry.

In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she is "not prepared" to hold that the chatbots' output constitutes speech "at this stage."

Conway did find, however, that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots. She also determined that Garcia can move forward with allegations that Google can be held liable for its alleged role in helping develop Character.AI.

"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."

No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies."

"It's a warning to parents that social media and generative AI devices are not always harmless," she said.

___

Kate Payne is a corps member of The Associated Press/Report for America Statehouse News Initiative.
Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.