The United States leads the world in the development of artificial intelligence. More AI startups raised first-time capital in the United States than in the next seven countries combined, and the United States is home to the best-known companies advancing AI technology.
Now the United States government intends to take the lead in governing AI. Compelled by the "rapid speed at which AI capabilities are advancing," President Biden issued a lengthy, far-ranging Executive Order on Monday setting out a "coordinated, Federal Government-wide" effort to govern the development and use of AI.
The Executive Order sets the stage for substantial federal oversight of the development and use of AI. Because it is an Executive Order—rather than legislation passed by Congress—the Order’s immediate effect is largely limited to the government agencies, contracts, projects, and benefit programs over which the President has authority as head of the Executive Branch. But the Order lays the groundwork for a broader policy framework that the White House aims to implement through legislation. Further, the Order does, in some instances, direct federal agencies to apply their existing legal powers to regulate AI. By way of example, the Order directs the Secretary of Commerce, pursuant to the Defense Production Act, to require developers of large, sophisticated AI models to provide information concerning the ownership, training, development, and production of the models to the government on an ongoing basis.
As its justification, the Executive Order notes that AI could, among other things, "pose risks to national security," "exacerbate … bias" and "displace and disempower workers." Elsewhere in the Order, President Biden remarks that AI is what it is because of us: "In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built."
The Order states that a "society-wide effort" must take place to "mitigat[e] [AI’s] substantial risks" and "[h]arness AI for good." President Biden places the "highest urgency" on governing the "development and use of AI." Among other things, the Order suggests that:
- The federal government will require that AI developers ensure that Americans can tell when content is generated using AI and when it is not.
- To promote innovation, the government will stop "unlawful collusion and address[] risks from dominant firms’ use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors … ."
- The government will ensure that workers have a seat at the table, including through collective bargaining, to "ensure that they benefit from" the opportunities of AI. The government will also examine the impact AI will have on the American workforce.
- The Biden Administration will not "tolerate the use of AI to disadvantage" already-disadvantaged groups.
- The government, through the USPTO, will publish guidance on the use of AI in the inventive process and on how to analyze inventorship issues.
- The Department of Homeland Security will develop a program to mitigate AI-related IP risks and theft.
- The Administration will work with other countries to "develop[] a framework to manage AI's risk" and "promote common approaches" to the technology.
- The Department of Homeland Security “will capitalize on AI’s potential to improve U.S. cyber defense.”
The Order requires multiple government agencies to develop further guidance, plans, and regulations concerning AI within the next 30 to 365 days. Other key actions required by the Order include:
- Creating a new interagency AI Policy Committee to coordinate and review federal AI policies and initiatives.
- Developing and implementing a national AI research and development strategy and roadmap.
- Enhancing the AI workforce and talent pipeline in the United States, including through immigration reforms and training programs.
- Establishing reporting and auditing requirements for large AI models and cloud service providers that could pose significant risks to national security, cybersecurity, or public safety.
- Developing and deploying AI capabilities to detect and remediate vulnerabilities in critical software, systems, and networks.
- Evaluating and mitigating the potential for AI to be misused to enable the development or use of chemical, biological, radiological, and nuclear threats, especially biological weapons.
- Developing standards, tools, and best practices for authenticating, labeling, and detecting synthetic content, such as deepfakes, and preventing the generation of child sexual abuse material or non-consensual intimate imagery.
- Strengthening the protection of privacy and civil rights in the use of AI by federal agencies and regulated entities, including through updating guidance and regulations.
- Engaging with allies and partners to promote and develop AI standards and norms that reflect democratic values and human rights worldwide.
While in some places the Order speaks in broad terms and delegates judgment to agency heads, elsewhere it dives into the details. It strongly implies, for example, that the Administration will closely regulate AI that attempts to forecast where crimes might occur based on historical crime data.
The Order represents a substantial first step by the United States government into the regulation of AI. In the end, though, the Executive Branch is limited in what it can do without Congressional legislation. The White House has expressly sought the assistance and support of Congress, where a number of AI-related pieces of legislation have already been introduced. Additionally, in advancing its AI principles of security, safety, and trust, the Administration signaled its intention to continue working with the technology industry in shaping policy, as it did when it announced the voluntary commitments of AI developers in late July.
We will address the Order and its impact in further detail in future posts, and analyze the draft rules proposed by the government agencies charged with implementing this new, far-reaching regulatory framework.