The AI Suicide Problem Knows No Borders

(Bloomberg Opinion) -- How do Chinese artificial intelligence developers protect their most vulnerable users? A string of dystopian headlines in the US over youth suicide and mental health has put increasing pressure on Silicon Valley, but we have not seen a similar wave of cases in China. Initial testing indicates that the Chinese firms may be doing something right, although it is just as likely that such cases would never see the light of day in China's tightly controlled media environment.

A harrowing wrongful-death lawsuit against OpenAI filed by Adam Raine's parents alleges that the 16-year-old died by suicide after the chatbot isolated him and helped plan his death. OpenAI told the New York Times it was "deeply saddened" by the tragedy and promised a slew of updates, including parental controls.

I tried to engage with DeepSeek using the same so-called "jailbreak" methods the American teen allegedly used to bypass guardrails. Despite my prodding, the popular Chinese platform did not falter, even when I framed my queries the same way, under the guise of fiction writing. It consistently encouraged me to call a hotline. When I said I didn't want to talk to anyone, it validated my feelings but still emphasized that it was an AI and could not feel real emotions. It is "incredibly important that you make contact with a person who can sit with you with a human heart," the chatbot said. "The healing power of human connection is irreplaceable." It encouraged me to share these dark thoughts with a family member, an old friend, a colleague, a doctor or a therapist, and even to practice with a hotline first. "The courageous thing you can do now is not to hide better, but to consider showing one person a small, real part of you," it said.

My experiment is purely anecdotal. Raine engaged with ChatGPT for months, possibly eroding the tool's built-in guardrails over time. Still, other researchers have seen similar results. The China Media Project ran three of China's most popular chatbots (DeepSeek, ByteDance Ltd.'s Doubao and Baidu Inc.'s Ernie 4.5) through simulated conversations in English and Chinese. It found that the bots were significantly more cautious in Chinese, repeatedly emphasizing the importance of reaching out to a real person. If there is a lesson here, it is in training these tools not to pretend to be human when they are not.

There are widespread reports that Chinese youth, facing the rat-race pressures of "involution" and an uncertain economy, are increasingly turning to AI tools for therapy and companionship. The spread of the technology is a top government priority, which means painful headlines about things going wrong are less likely to surface. DeepSeek's own research suggested that open-source models, which proliferate across China's AI ecosystem, face more serious jailbreak challenges than closed-source models. Taken together, it seems likely that China's safety guardrails are being tested domestically, and that stories like Raine's simply do not make it into the public sphere.

But the government does not appear to be ignoring the issue, either. Last month, the Cyberspace Administration of China released an updated framework on AI safety. The document, published in collaboration with a team of researchers from academia and the private sector, was striking in that it included an English translation, suggesting it was intended for an international audience.
The agency identified a fresh range of ethical risks, including the possibility that AI products built on "anthropomorphic interaction" can foster emotional dependence and influence users' behavior. This suggests that officials are registering the same global headlines, or seeing similar problems at home.

Protecting vulnerable users from psychological harm is not just a moral responsibility for the AI industry; it is a business and political one. In Washington, parents who say their children were driven to self-harm by interactions with chatbots have given powerful testimony. US regulators have long been criticized for ignoring risks to young people during the social media era, and they are unlikely to stay silent as lawsuits and public outrage mount. And American AI companies cannot credibly criticize the dangers of Chinese tools if they neglect potential psychological harms at home.

Beijing, meanwhile, hopes to be a global leader in AI safety and governance and to export its low-cost models around the world. But these risks cannot be swept under the rug as the tools go global. China must provide transparency if it truly wants to take the lead in responsible development.

Viewing the problem through the lens of a US-China race misses the point. If anything, it lets companies use geopolitical rivalry as an excuse to dodge scrutiny and barrel ahead with AI development. Such a backdrop leaves more young people at risk of becoming collateral damage.

A great deal of public attention has been paid to frontier AI threats, such as the potential for these systems to go rogue. Bodies such as the United Nations have spent years calling for multilateral collaboration on mitigating catastrophic risks. But protecting vulnerable people should not be a divisive issue. More research on mitigating these risks and preventing jailbreaks should be done in the open and shared. Failure to find common ground is already costing lives.

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously, she was a tech reporter at CNN and ABC News.

More stories like this are available on bloomberg.com/opinion

© 2025 Bloomberg LP
