Opinion

Biden administration secures voluntary commitments from leading AI developers to prioritize public safety, security, and trust.

Published Date
Jul 24 2023
Allen & Overy’s AI practice continues to track the latest developments in artificial intelligence. Daren Orzechowski, Will Wray, and Jasmine Shao of our US Technology Team summarize an important recent announcement from the Biden administration and leading AI companies regarding the industry’s commitment to certain AI guidelines and principles.

On Friday, July 21, 2023, the White House announced that the Biden administration secured voluntary commitments from seven leading artificial intelligence companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI (the “AI Developers”)—concerning the responsible development of AI.

Our main legal takeaways are that: (1) these commitments probably cannot be enforced directly by anyone, as they are neither a statute nor a regulation; (2) they nonetheless show that the AI Developers are willing to engage with the Biden administration and government generally, and, presumably, one another, in the process of developing AI-related laws and regulations; and (3) the AI Developers unanimously agreed to watermark (either literally or with digital identifiers) AI-generated content. We describe the commitments below.

The Biden administration secured eight voluntary commitments, each of which it categorized within its three fundamental principles of “safety, security, and trust”. The AI Developers promised to:

  • Guarantee product safety before public introduction through rigorous internal and external security testing of AI systems. This includes sharing risk management information with industry peers, governments, civil society, and academia.
  • Put security first by investing in cybersecurity and safeguards against insider threats, protecting essential AI system components like model weights. The companies also commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
  • Earn public trust by developing robust mechanisms for AI transparency, including watermarking AI-generated content and publicly reporting AI systems’ capabilities and limitations. The companies further pledge to prioritize research into the societal risks posed by AI, including harmful bias and privacy violations.

The AI Developers stated that they intend for these commitments – which are effective “immediately” – to remain in place until regulations covering similar issues take effect.

The Biden administration’s statement references its ongoing efforts to craft further executive order(s), advance legislation, and work with international allies to establish a global framework for the safe development and use of AI. We can therefore expect more activity in this area.

This latest effort builds upon the administration’s Blueprint for an AI Bill of Rights, which was published last year. At the time, the White House’s Office of Science and Technology Policy identified five principles that should “guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” Those principles are: (1) safe and effective systems, (2) algorithmic discrimination protections, (3) data privacy, (4) notice and explanation, and (5) human alternatives, consideration, and fallback. Elements of these principles can be seen in last week’s announcement and the overall AI policy focus on “safety, security, and trust”.

In more detail, our three legal takeaways are as follows:

First, neither the administration nor the AI Developers suggest that these commitments have the force of law. Given the context in which they were offered and the broad language in which they are phrased, it is doubtful that the commitments could be directly legally enforced, as they constitute neither a law nor a regulation. Whether they might be used indirectly is another issue. It is worth drawing a parallel to the FTC’s enforcement of companies’ privacy policies on the grounds that a company’s failure to abide by its own privacy policy violates Section 5 of the FTC Act. See Federal Trade Commission, Privacy and Security Enforcement (“When companies tell consumers they will safeguard their personal information, the FTC can and does take law enforcement action to make sure that companies live up to these promises.”).

Second, the commitments demonstrate that the major generative AI players are willing to engage with the administration and one another as they continue to develop their products. The administration and its agencies have made clear that regulating AI is a priority, and these AI Developers have signaled a preference to engage in the lawmaking process.

Third, one of the most concrete commitments is the promise to “[d]evelop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content.” The AI Developers unanimously agreed that “it is important for people to be able to understand when audio or visual content is AI-generated,” and promised to “develop tools or APIs to determine if a particular piece of content was created with their system.” While some may greet this news with a sigh of relief, others may wonder if this diminishes the utility of AI tools or interferes with users’ rights to use and modify AI output freely.
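
For readers curious what such a mechanism might look like, below is a minimal, illustrative sketch of one statistical approach to text watermarking discussed in the research literature (a “green list” scheme in the spirit of Kirchenbauer et al., 2023). It is not the method any of the AI Developers have committed to, and the function names and parameters are our own assumptions: the idea is that a generator biases its sampling toward a hash-derived subset of the vocabulary at each step, and a detector then checks whether suspect text lands in that subset more often than chance would allow.

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Deterministically select a "green" subset of the vocabulary, keyed on
    # the preceding token. A watermarking generator would bias its sampling
    # toward this subset at each step. (Illustrative sketch only.)
    def score(token: str) -> str:
        return hashlib.sha256(f"{prev_token}:{token}".encode()).hexdigest()
    ranked = sorted(vocab, key=score)
    return set(ranked[: int(len(ranked) * fraction)])

def detection_z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Count how often each token falls in the green list implied by its
    # predecessor, then compare against the hit rate expected of
    # unwatermarked text. A large positive z-score suggests a watermark.
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(
        1
        for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab, fraction)
    )
    expected = n * fraction
    stddev = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / stddev
```

In a deployed system, a detector of this kind could be exposed as the sort of tool or API the commitment describes: a service that accepts a piece of content and reports the statistical evidence that it was generated with the provider’s system.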

Additional government activity in the United States will follow. For companies that are either developing or licensing AI technology, it is advisable to consider the principles from the Blueprint for an AI Bill of Rights as well as this latest commitment from the AI Developers, as these guidelines will likely form the basis for future laws and regulations.

Content Disclaimer

This content was originally published by Allen & Overy before the A&O Shearman merger.
