Daren Orzechowski,
Alex Touma, and
Jack Weinert examine the progress made by US federal agencies towards achieving the directives set forth in President Biden’s landmark Executive Order on AI (“AI Executive Order”) in the nine months since it was issued.
Background:
Biden’s landmark AI Executive Order advanced a coordinated, federal government-wide approach toward the safe and responsible development of AI. For further information on the AI Executive Order, refer to our prior A&O Shearman on Tech blog post here.
The AI Executive Order included a wide range of federal regulatory principles and priorities regarding AI and directed a number of federal agencies to promulgate technical standards and guidelines, with deadlines ranging from 90 days to 365 days from the date of the AI Executive Order.
On January 29, 2024, the Biden Administration published a Fact Sheet (“January Fact Sheet”) describing “substantial progress in achieving the AI Executive Order’s mandate to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond.” For further information on the January Fact Sheet, refer to our prior A&O Shearman on Tech blog post here.
On July 26, 2024, the Biden Administration published a new Fact Sheet (“July Fact Sheet”) reporting on the progress made by federal agencies towards achieving the directives set forth in the AI Executive Order. In addition to meeting all 90-day deadlines (as reported in the January Fact Sheet), the July Fact Sheet reports that the respective federal agencies have now also met all 270-day deadlines set forth in the AI Executive Order.
Key achievements:
Below we highlight some of the key achievements of the federal agencies as reported in the July Fact Sheet. For complete details on all the achievements, please see the July Fact Sheet.
Managing risks to safety and security:
- The U.S. AI Safety Institute released for public comment new technical guidelines to help leading AI developers manage the evaluation of misuse of dual-use foundation models.
- The National Institute of Standards and Technology (“NIST”) published final frameworks on managing generative AI risks and securely developing generative AI systems and dual-use foundation models.
- The Department of Energy (“DoE”), in coordination with interagency partners, developed and expanded AI testbeds and model evaluation tools.
- The Department of Defense (“DoD”) and Department of Homeland Security (“DHS”) reported findings from their AI pilots to protect vital government software.
- The Gender Policy Council and Office of Science and Technology Policy issued a call to action to combat image-based sexual abuse, including synthetic content generated by AI.
Bringing AI talent into government:
- The federal government increased its AI capacity for both national security and non-national security missions by hiring over 200 individuals to date, including through the Presidential Innovation Fellows AI cohort and the DHS AI Corps.
- The White House Office of Science and Technology Policy announced new commitments from across the technology ecosystem, including nearly USD100 million in funding, to bolster the broader public interest technology ecosystem and build infrastructure for bringing technologists into government service.
Advancing responsible AI innovation:
- The Department of Commerce prepared and will soon release a report on the potential benefits, risks, and implications of dual-use foundation models (for which the model weights are widely available), including related policy recommendations.
- The National AI Research Resource pilot awarded over 80 research teams access to computational and other AI resources to support the nation’s AI research and education community.
- The Department of Education released a guide for designing safe, secure, and trustworthy AI tools for use in education.
- The U.S. Patent and Trademark Office published guidance on evaluating the eligibility of patent claims involving inventions related to AI technology, as well as other emerging technologies.
- The National Science and Technology Council issued a report on federal research and development to advance trustworthy AI over the past four years.
- The National Science Foundation (“NSF”) launched a USD23 million initiative to promote the use of privacy-enhancing technologies to solve real-world problems, including related to AI.
- The NSF announced millions of dollars in further investments to advance responsible AI development and use throughout society through its ExpandAI program, which helps build capacity in AI research at minority-serving institutions while fostering the development of a diverse, AI-ready workforce.
Advancing U.S. leadership abroad:
- NIST issued a comprehensive plan for U.S. engagement on global AI standards.
- The Department of State (“DoS”), in close coordination with NIST and the U.S. Agency for International Development, developed guidance for managing risks to human rights posed by AI.
- NIST launched a global network of AI Safety Institutes and other government-backed scientific offices to advance AI safety at a technical level.
- The DoS launched a landmark United Nations General Assembly resolution on the promotion of “safe, secure and trustworthy” AI systems.
- The DoS, in collaboration with the DoD, expanded global support for the U.S.-led Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy.
Next steps:
And there is more to come: the final deadline for the federal agencies to achieve the remaining 13 directives in the AI Executive Order is October 30, 2024. Stay tuned for our next A&O Shearman on Tech blog post.