California Governor Gavin Newsom has signed pioneering legislation requiring the world's largest artificial intelligence companies to disclose their safety protocols and publicly report critical incidents, state lawmakers announced on Monday.

Senate Bill 53 is California's most significant step yet toward regulating Silicon Valley's rapidly advancing AI industry while preserving the state's position as a global technology hub.

"With a technology as transformative as AI, we have a responsibility to support that innovation while putting guardrails in place," Senator Scott Wiener, the bill's sponsor, said in a statement.

The new law marks a successful second attempt by Wiener to establish AI safety regulations, after Newsom vetoed his previous bill, SB 1047, following fierce pushback from the technology industry. It also comes after a failed attempt by the Trump administration to block states from enacting their own AI regulations, on the grounds that such rules would create regulatory chaos and slow US innovation in the race with China.

Under the new law, large AI companies must disclose their safety and security protocols, in redacted form to protect intellectual property. They must also report critical safety incidents, including weapons threats, major cyberattacks, or loss of control over a model, to state officials within 15 days. The legislation also establishes whistleblower protections for employees who reveal evidence of dangers or violations.

According to Wiener, California's approach differs from that of the European Union's landmark AI law, which requires private disclosures to government agencies; SB 53 instead mandates public disclosure to ensure greater accountability.

In what advocates describe as a world-first provision, the law requires companies to report cases in which AI systems display dangerous deceptive behavior during testing. For example, if an AI system lies about the effectiveness of controls designed to prevent it from helping to build a bioweapon, developers must disclose the incident if it significantly increases the risk of catastrophic harm.

The working group behind the law was led by prominent experts, including Stanford University's Fei-Fei Li, known as the 'godmother of AI'.

ARP/JGC
California enacts AI safety law targeting tech giants
