Opinion

Zooming in on AI - #9: Understanding California's New AI Legislation

Read time: 8 mins
Published: Oct 23, 2024
Helen Christakos and Sonya Aggarwal of our U.S. privacy and data security practice and Eva Wang of our technology transactions practice look at the recent flurry of artificial intelligence (AI) bills that have passed in California and their impact on AI governance.

California Governor Gavin Newsom recently signed several AI-related bills into law, which address the application of AI across several industries and clarify key definitions regarding AI. Below, we provide an overview of some of the recently signed AI-related bills.

AI training data transparency

On September 28, 2024, Governor Newsom signed Assembly Bill 2013, “Artificial Intelligence Training Data Transparency.” This bill creates developer disclosure obligations for generative AI systems or services[1] in certain circumstances. Assembly Bill 2013 will require developers of the system or service to post on their internet websites documentation regarding the data used to train the generative AI system or service, in as much detail as possible. The documentation must be posted on or before January 1, 2026, and before each time that a generative AI system or service released on or after January 1, 2022 is made available to Californians for use, regardless of whether the terms of use include compensation.

The documentation must include a high-level summary of the datasets used in the development of the system or service, including (but not limited to): 

  • The sources or owners of the datasets;
  • A description of how the datasets further the intended purpose of the AI system or service;
  • The number of data points included in the datasets, which may be in general ranges, and with estimated figures for dynamic datasets;
  • A description of the types of data points within the datasets;
  • Whether the datasets include any data protected by copyright, trademark, or patent, or whether the datasets are entirely in the public domain;
  • Whether the datasets were purchased or licensed by the developer;
  • Whether the datasets include personal information, as defined in subdivision (v) of Section 1798.140;
  • Whether the datasets include aggregate consumer information, as defined in subdivision (b) of Section 1798.140;
  • Whether there was any cleaning, processing, or other modification to the datasets by the developer, including the intended purpose of those efforts in relation to the AI system or service;
  • The time period during which the data in the datasets were collected, including a notation that the data collection is ongoing, if applicable;
  • The dates the datasets were first used in the development of the AI system or service; and 
  • Whether the generative AI system or service used or continuously uses synthetic data generation in its development.

A developer is not required to post documentation regarding the data used to train a generative AI system or service if the generative AI system or service is developed: (i) solely to help ensure security and integrity; (ii) solely for the operation of aircraft in the national airspace; or (iii) for national security, military, or defense purposes and is made available only to a federal entity. 

AI risk management

On September 29, 2024, Governor Newsom signed Senate Bill 896, “Generative Artificial Intelligence Accountability Act,” into law. This bill will require the California Office of Emergency Services to perform a risk analysis of potential threats posed by generative AI to California’s critical infrastructure (including uses of generative AI that could lead to mass casualty events) and provide a high-level summary of the analysis to the California legislature.

Additionally, Senate Bill 896 will require any state agency or department that utilizes generative AI to directly communicate with a person regarding government services and benefits to ensure that those communications include both: (i) a notice that indicates to the person that the communication was generated by generative AI; and (ii) information describing how the person may contact a human employee of the department.

AI and health care

On September 28, 2024, Governor Newsom signed Assembly Bill 3030, “Artificial Intelligence in Health Care Services,” into law. Under this bill, health facilities, clinics, physician’s offices, or group practices that use generative AI to generate written or verbal patient communications relating to patient clinical information will be required to include in those communications both: (i) a disclaimer indicating to the patient that the communication was generated by generative AI; and (ii) clear instructions describing how the patient may contact a human health care provider, employee, or other appropriate person. These disclosure requirements do not apply, however, if the communication generated by generative AI is read and reviewed by a human licensed or certified health care provider. 

Any violation of the provisions of Assembly Bill 3030 will subject the physician to the jurisdiction of the Medical Board of California or the Osteopathic Medical Board of California. 

Governor Newsom also signed Senate Bill 1120, which will require health care service plans or disability insurers, including specialized health care service plans or specialized health insurers, that use AI, algorithms, or other software tools for utilization review or utilization management functions, or that contract with an entity that uses those tools, to ensure compliance with specified requirements, including that the AI, algorithm, or other software tool bases its determinations on specified information and is fairly and equitably applied. The AI, algorithm, or other software tool must base its determinations on the patient’s medical or other clinical history, individual clinical circumstances as presented by the requesting provider, or other relevant clinical information contained in the patient’s medical or clinical record. It must not base its determinations on a group dataset, supplant health care provider decision-making, or discriminate, directly or indirectly, against patients in violation of state or federal law.

AI and personal information

On September 28, 2024, Governor Newsom signed Assembly Bill 1008, “California Consumer Privacy Act of 2018 (CCPA): personal information,” into law. Under this bill, the CCPA is amended to introduce new privacy obligations for AI systems trained on personal information by: (i) expanding the definition of personal information to include AI system outputs, such as model weights and tokens derived from personal information, and biometric data, such as fingerprints and facial recognition data collected without a consumer’s knowledge; and (ii) expanding the definition of sensitive personal information to include neural data, such as information generated from measuring the activity of a consumer’s central or peripheral nervous system. 

AI and automated dialing systems and voice messages

On September 20, 2024, Governor Newsom signed Assembly Bill 2905, “Telecommunications: automatic dialing-announcing devices: artificial voices,” into law. The bill amends regulations concerning the use of automatic dialing-announcing devices (ADADs) in telecommunications. Pursuant to such existing regulations, when an ADAD is used for telephone calls, the call must begin with a natural, unrecorded voice stating the nature of the call, the business or organization represented, and asking for the recipient's consent to hear the prerecorded message. Under this bill, if the prerecorded message includes a voice that is generated or significantly altered using AI, the caller must also inform the recipient of the use of such voice upfront. 

Uniform definition of AI

On September 28, 2024, Governor Newsom signed Assembly Bill 2885, “Artificial intelligence,” into law. Pursuant to this bill, a uniform definition of AI is established under various California laws: an “engineered or machine-based system with varying levels of autonomy that can infer from the input it receives how to generate outputs that influence physical or virtual environments.” Key provisions of existing laws relating to AI that are affected by this uniform definition include:

  • Government Operations Agency: Requires the Secretary of Government Operations to create a plan to evaluate the impact of AI-generated or manipulated deepfakes on state government, businesses and residents (see Section 11547.5 of the Government Code); 
  • Department of Technology: Requires the Department of Technology to conduct a comprehensive inventory of high-risk automated decision systems used by state agencies, which rely on machine learning and AI to assist or replace human decision-making (see Section 11546.45.5 of the Government Code);
  • Local Agencies: Requires each local agency to report publicly on economic development subsidies and any job losses or replacements due to AI or automation (see Section 53083.1 of the Government Code);
  • California Online Community College: Utilizes AI and related technologies to build student support systems and industry-valued online education programs (see Section 75002 of the Education Code); and 
  • Social Media Companies: Required to submit semiannual reports to the Attorney General detailing how content on their platforms is managed, including the role of AI in such management (see Section 22675 of the Business and Professions Code).

California AI Transparency Act

On September 19, 2024, Governor Newsom signed Senate Bill No. 942, “California AI Transparency Act,” into law. Under this bill, which comes into effect on January 1, 2026, new regulations for generative AI are created to ensure transparency in AI-generated content. Key provisions include:

  • AI Detection Tools: Any person that creates, codes, or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users and is publicly accessible within the geographic boundaries of the state (Covered Providers) must offer a free, publicly accessible AI detection tool that meets specified criteria.
  • Disclosures: Covered Providers using any AI system that can generate derived synthetic content, including text, images, video, and audio that emulates the structure and characteristics of such system’s training data (GenAI System) must offer users the option to include a manifest disclosure in AI-generated content (images, videos, audio, or combinations thereof) that clearly and conspicuously identifies the content as AI-generated. Covered Providers must also include, where technically feasible, a latent disclosure in AI-generated content that conveys information about such content’s origin, either directly or through a link to a permanent website.
  • License Revocation: If a third-party licensee modifies any GenAI System such that it can no longer provide these disclosures: (i) Covered Providers must revoke the applicable license to the GenAI System within 96 hours; and (ii) such third-party licensee must cease using such GenAI System.
  • Penalties: Violations by Covered Providers can result in civil penalties of $5,000 per incident, while third-party licensees can face civil actions, including injunctive relief and attorney's fees, if they continue using a revoked GenAI System.

Conclusion

The enactment of this AI-related legislation in California is a clear example of how states are taking proactive steps to regulate certain uses of AI. In the absence of federal legislation, it is reasonable to expect that other states will enact similar legislation.

Footnotes

[1] “Artificial intelligence system or service” means a machine-based system or service that can, for a given set of human-defined objectives, generate content and make predictions, recommendations, or decisions influencing a real or virtual environment.
