Opinion

Key AI Actions in Response to President Biden's Landmark Executive Order

Published Date
Feb 6, 2024
Daren Orzechowski, Alex Touma, and Jack Weinert examine the federal government's progress toward achieving the directives set forth in President Biden's landmark Executive Order on AI.

Three months have passed since President Biden issued a landmark Executive Order that advances a coordinated, federal government-wide approach toward the safe and responsible development of AI.

The Executive Order included a wide range of federal regulatory principles and priorities regarding AI and directed a variety of federal agencies to promulgate technical standards and guidelines, with deadlines ranging from 90 days to 365 days from the date of the Order. For further information on the Executive Order, refer to our prior Tech Talk blog post here.

Last week, the Biden Administration published a Fact Sheet, touting “substantial progress in achieving the Executive Order’s mandate to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond.” According to the Fact Sheet, the respective federal agencies have completed every action subject to a 90-day deadline under the Order.

The key achievements of the agencies are as follows:

  • pursuant to authority granted under the Defense Production Act, developers of the most powerful AI systems must now report vital information, especially AI safety test results, to the Department of Commerce;
  • a draft rule proposed by the Department of Commerce would compel U.S. cloud companies that provide computing power for foreign AI training to report that they are doing so;
  • risk assessments covering AI’s use in every critical infrastructure sector were completed by nine agencies;
  • a pilot of the National AI Research Resource (an effort to connect the government, researchers, and educators to data and information to advance AI research) was launched, catalyzing broad-based innovation, competition, and more equitable access to AI research;
  • an AI Talent Surge (an AI talent task force) was launched to accelerate hiring AI professionals across the federal government, including through a large-scale hiring action for data scientists;
  • the EducateAI initiative was launched to help fund educators creating high-quality, inclusive AI educational opportunities at the K-12 through undergraduate levels;
  • funding of new Regional Innovation Engines (NSF Engines) was announced, including with a focus on advancing AI; and
  • an AI Task Force at the Department of Health and Human Services was established to develop policies to provide regulatory clarity and catalyze AI innovation in health care.

While these key achievements reflect quick advancements in federal involvement and oversight of AI, the federal government is primarily still in the fact-finding and industry consultation stages of regulating AI. Behind the scenes, however, we are tracking more than seventy unique federal bills relating to the regulation of AI. These federal bills address the risks of harmful deepfakes, mandatory watermarking of AI-generated output, the application of Section 230 platform immunity to AI providers, protecting children and other “at risk” individuals, and ensuring transparency when AI is used in key decision-making processes.

The states, particularly California and New York, have followed the federal government's lead. In particular:

  • The New York City Automated Employment Decision Tool Law, which took effect in July 2023, requires employers or employment agencies that want to use an Automated Employment Decision Tool to ensure a bias audit was done before using the tool and to make a variety of related disclosures to job candidates.
  • Gov. Gavin Newsom issued Executive Order N-12-23 in September 2023. Its purpose is to study the development, use, and risks of AI technology throughout the state and to develop a deliberate and responsible process for evaluation and deployment of AI within state government.
  • Senate Bill 896 (known as the Artificial Intelligence Accountability Act) was introduced in California in January 2024 and builds upon Gov. Gavin Newsom’s Executive Order N-12-23 by guiding the decision-making of state agencies, departments, and subdivisions in the review, adoption, management, governance, and regulation of automated decision-making technologies.

It is clear that regulatory oversight of AI technology by the federal and state governments is imminent. As Daren Orzechowski recently stated in an interview on Nasdaq TradeTalks, a balanced approach to introducing AI regulation is important in order to give new technology the room it needs to grow. What remains to be seen is whether regulations will be able to protect Americans’ privacy, advance equity and civil rights, and look out for the interests of consumers and workers, all while promoting innovation and competition and advancing American leadership around the world.

Content Disclaimer

This content was originally published by Allen & Overy before the A&O Shearman merger.