Opinion

Zooming in on AI – #3: California SB 1047 – The potential new frontier of more stringent AI regulation?

Published: Sep 9, 2024
Helen Christakos and Sonya Aggarwal of our U.S. privacy and data security practice, and Eva Wang of our technology transactions practice, look at California’s new AI bill, which aims to balance AI development with public safety, security, and accessibility and is awaiting Governor Newsom’s signature.

The "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (the “Act”), which was passed by the California legislation on August 29, 2024, and is awaiting Governor Newsom’s signature, is a proposed bill aimed at regulating the development and deployment of advanced artificial intelligence (“AI”) models. The Act aims to balance the promotion of AI development with ensuring public safety, security, and accessibility. It acknowledges the potential benefits of AI in fields like medicine, climate science, and creativity, while also recognizing risks such as the potential for misuse in creating weapons of mass destruction or cyber threats.

If enacted, the requirements of the Act would come into effect in stages:

  • On or before January 1, 2026, the Government Operations Agency (described in more detail below) must submit a report from the consortium to the California Legislature containing the CalCompute framework (a framework to be developed by the Government Operations Agency for the creation of a public cloud computing cluster to advance the development and deployment of AI that is safe, ethical, and sustainable).
  • Beginning January 1, 2026, developers of covered AI models would be required to:
    • Annually retain a third-party auditor to perform an independent audit of their safety and security protocols and to produce an audit report.
    • Retain an unredacted copy of the audit report for as long as the covered model is available for commercial, public, or foreseeable public use plus five years.
    • Submit to the Attorney General a statement of compliance with these provisions and report AI safety incidents to the Attorney General.
  • On or before January 1, 2027, and annually thereafter, the Board of Frontier Models within the Government Operations Agency (described in more detail below) would require the Government Operations Agency to issue regulations to, among other things, update the definition of “covered model,” and those regulations would have to be approved before taking effect.

Key provisions of the Act

  1. Definitions and scope: The Act introduces key definitions for understanding its scope:
    • “Covered model”: refers to any AI model (and derivatives thereof) that meets certain criteria based on computing power, cost (over $100 million), and the extent of training. (The Act provides further detailed definitions for derivatives of such models and the conditions under which they are deemed to be covered models.)
    • “Critical harm”: refers to severe damages that an AI model could cause, such as mass casualties from cyberattacks or grave public safety threats.
    • “Advanced persistent threats”: describes sophisticated adversaries capable of using multiple attack vectors (including, but not limited to, cyber, physical, and deception) to compromise AI models.
    • “Board of Frontier Models”: a nine-member board within the Government Operations Agency that operates independently of the Department of Technology. The California Governor may appoint an executive officer of the board, subject to Senate confirmation.
    • “Government Operations Agency”: the California state agency within which the Board of Frontier Models is established.
  2. Safety and security protocol requirements: Developers of covered AI models must implement comprehensive written safety and security protocols to manage risks throughout the model’s lifecycle. Among other things, developers must:
    • Describe in detail their protections and procedures to prevent the model from posing unreasonable risks of causing or enabling critical harm;
    • State compliance requirements objectively and with sufficient detail and specificity to allow the developer or a third party to readily ascertain whether the requirements of the safety and security protocol have been followed;
    • Include testing procedures to assess the risks associated with modifications to the model after its initial training;
    • Retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeable public use plus five years; and
    • Review and update the protocol annually to reflect changes in the model's capabilities and industry best practices.
  3. Cybersecurity protections: Before training any covered AI model, developers are required to implement administrative, technical, and physical cybersecurity measures to prevent unauthorized access, misuse, or modifications. This includes developing the capacity for a full shutdown of the model if necessary, and ensuring safeguards against advanced persistent threats or other malicious actors.
  4. Full shutdown procedures: Developers must establish and document the conditions under which a “full shutdown” of the model or its derivatives would be enacted to prevent potential harm. This includes considering the impact of a shutdown on critical infrastructure.
  5. Compliance and third-party auditing requirements: Beginning January 1, 2026, developers of covered AI models must conduct annual third-party audits of their safety and security protocols. Developers are also required to publish redacted versions of their safety and security protocols and the results of their audits, and submit full versions of their audits to the California Attorney General upon request. Additionally, developers must submit annual compliance statements, signed by a senior corporate officer, detailing any risks and measures taken to prevent critical harm.
  6. Incident reporting: Any AI safety incidents involving covered models must be reported to the California Attorney General within 72 hours of the developer becoming aware of the incident. The report should detail the nature of the incident and the steps taken to address the risks associated with it.
  7. Coexistence with federal contracts and preemption: The Act does not apply to products or services to the extent that its requirements would strictly conflict with contracts with federal government entities. The Act’s provisions do not supersede existing federal laws and may be adjusted or supplemented based on federal regulations or evolving technological standards. If any part of the Act is held invalid, the remaining provisions remain enforceable.
  8. Guidance and best practices: Developers are encouraged to follow industry best practices and consider guidance from organizations such as the U.S. Artificial Intelligence Safety Institute and the National Institute of Standards and Technology.
  9. Civil penalties and enforcement actions: The Act grants the Attorney General authority to initiate civil actions for violations, including:
    • Penalties for violations: Fines are imposed based on the severity of the violation:
      1. For violations causing death, bodily harm, property damage, theft, or imminent public safety threats, fines are capped at 10% of the cost of the computing power used to train the AI model (calculated using average market prices at the time of training) for a first violation, increasing to 30% for subsequent violations (see the illustrative calculation following this list); and
      2. Additional penalties are prescribed for violations related to labor laws, safety protocols, and other specific sections of the Act.
    • Injunctive relief and monetary damages: Courts may issue injunctions, award compensatory and punitive damages, and grant attorney fees and costs to enforce the Act’s provisions.
    • Contractual limitations on liability: Any contract or agreement that attempts to waive, limit, or shift liability for violations is deemed void. Courts are empowered to impose joint and several liability on affiliated entities if they attempt to limit or avoid liability through corporate structuring.
  10. Assessment of developer conduct: In determining whether a developer exercised reasonable care, regulators may consider the quality and implementation of the developer’s safety and security protocols, the thoroughness of risk management practices, and comparisons to industry standards.
  11. Whistleblower protections: The Act protects employees of AI developers and their contractors/subcontractors who disclose information to the Attorney General or Labor Commissioner regarding non-compliance with safety standards or risks of critical harm. The Act prohibits retaliation against whistleblowers and mandates clear communication of employee rights. Additionally, developers must establish an internal process for employees to report violations anonymously.
  12. Public disclosure and transparency: The Attorney General and Labor Commissioner may release complaints or summaries thereof if doing so serves the public interest, with sensitive information redacted to protect public safety and privacy.
  13. Creation of the Board of Frontier Models: The Act establishes the Board of Frontier Models within the Government Operations Agency, which will regulate AI models posing significant public safety risks:
    • The Board consists of nine members, including experts from AI safety, cybersecurity, and other fields. Members are appointed by the Governor, Senate, and Assembly.
    • The Board will oversee the establishment of thresholds for defining AI models subject to regulation, auditing requirements, and guidance for preventing critical harms.
  14. Establishment of CalCompute: The Act proposes the creation of CalCompute, a public cloud computing cluster designed to foster safe, ethical, and equitable AI development:
    • CalCompute would support research and innovation in AI and expand access to computational resources.
    • It would be established within the University of California system, if feasible, with funding options including private donations.
    • The Act outlines a framework for the creation and operation of CalCompute, including its governance structure, funding, and equitable access parameters.
  15. Public access and confidentiality: While the Act imposes some limitations on public access to safety protocols and auditors' reports to protect proprietary information and public safety, it is designed to balance transparency with the need for confidentiality.
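
As a purely illustrative aid to the penalty caps described in item 9 above, the short Python sketch below computes the maximum fine as a percentage of the training compute cost. The dollar figure and the function name are hypothetical assumptions for illustration only and do not appear in the Act.

    # Illustrative sketch only: maximum civil penalty cap under the Act's 10% / 30% rule.
    # The training compute cost below is a hypothetical assumption, not a figure from the Act.
    def max_penalty_cap(training_compute_cost_usd: float, prior_violation: bool) -> float:
        """Return the maximum fine: 10% of the training compute cost for a first
        violation, 30% for subsequent violations."""
        rate = 0.30 if prior_violation else 0.10
        return training_compute_cost_usd * rate

    # Example: a covered model whose training compute cost $150 million (hypothetical).
    print(max_penalty_cap(150_000_000, prior_violation=False))  # 15000000.0 (first violation)
    print(max_penalty_cap(150_000_000, prior_violation=True))   # 45000000.0 (subsequent violations)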

This detailed regulatory framework, if enacted, is intended to hold AI technologies developed and deployed in California to high standards of safety, accountability, and ethical practice, while also promoting innovation and equitable access to technological resources.
