National Security and Government Contractor Implications of Biden AI Executive Order

Understanding the Biden Administration’s First Robust Attempt to Shape the Development of the Emerging AI Industry

At the end of October 2023, the Biden administration issued a widely anticipated executive order on artificial intelligence (AI). The Executive Order on the Safe, Secure, and Trustworthy Development of Artificial Intelligence (the EO) addresses a multitude of issues reflecting an emerging national policy on AI.

Generative and other emerging AI applications have myriad implications for U.S. national and global security, and several of the EO’s provisions address those implications directly.

First, the EO requires that developers of the most powerful AI systems, so-called “dual-use foundation models,” conduct safety testing, report the results, and share other critical information with the federal government. These models implicate national security, economic security, and public health and safety. Companies will also be required to report planned activities in training, developing, or producing dual-use AI systems and to outline the precautions they are taking during the development process.

The administration invokes the Defense Production Act as the authority for compelling disclosure of this information, much of which will be proprietary. Although several leading AI companies already share such information voluntarily, this provision seeks greater disclosure regarding companies’ deployment of AI and the testing and risk assessments underpinning AI models.

This disclosure requirement is related to one of the EO’s many agency directives. Specifically, the EO directs the National Institute of Standards and Technology (NIST) to develop standards to verify that AI systems are safe, secure, and trustworthy, in the form of companion guidance to already-existing NIST publications, such as the AI Risk Management Framework (NIST AI 100-1).

Second, the EO calls for regulations requiring U.S. Infrastructure as a Service (IaaS) providers to report transactions with foreign persons to train large AI models with potential capabilities that could be used in malicious cyber activity. The forthcoming regulations will also require IaaS providers to prohibit foreign resellers from providing services unless those resellers provide details about the end users, end uses, and underlying applications.

This requirement addresses similar concerns that the Department of Commerce flagged in its October 17, 2023, advanced semiconductor rules concerning cloud-based access to advanced computing and AI training models.

Finally, the EO recognizes the potential for misuse of AI in a manner that might allow non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons. As such, the EO directs the Department of Homeland Security, in consultation with the Department of Energy, the Office of Science and Technology Policy, private AI laboratories, and academia, to evaluate CBRN threats from AI models and develop means to mitigate these risks. Various government agencies are also directed to consider what government data might present security risks with respect to CBRN weapons and to ensure that such data is restricted from public access and not used to train AI systems.

The EO also includes numerous developments and takeaways for government contractors, as the administration, as it often does, leverages its purchasing power to effect policy goals.

First, the EO provides guidance for the procurement of AI products and services by federal agencies. The EO directs the Office of Management and Budget (OMB) to specify minimum risk-management practices for governmental use of AI. These requirements include:

  • establishing a Chief Artificial Intelligence Officer (CAIO) charged with AI implementation in the agency;
  • defining the CAIO’s roles and responsibilities;
  • requiring certain agencies to create an AI governance board;
  • implementing minimum risk management practices;
  • identifying AI uses that impact individual rights or safety;
  • recommending ways to reduce barriers to AI use;
  • requiring certain agencies to develop AI strategies and pursue advantageous use of AI;
  • requiring external testing for generative AI;
  • establishing safeguards against discriminatory use or other misuse of AI;
  • watermarking AI-generated content;
  • requiring independent assessment of vendors’ effectiveness and risk-mitigation claims;
  • documenting and overseeing AI use;
  • maximizing the value of contracted AI services and incentivizing continuous improvement of AI;
  • training agency employees on AI; and
  • public reporting on compliance with these requirements.

Additionally, OMB is tasked with requiring that agencies make sure that any contracts for AI services address: privacy, civil rights, and civil liberty concerns; ownership and security of data; and means to prevent misuse, unauthorized use, or corruption of AI systems.

Second, Section 4.5(d) of the EO directs the Federal Acquisition Regulatory Council to consider amending the Federal Acquisition Regulation (FAR) to reduce risks posed by “synthetic content” and to require identification of synthetic content produced by AI systems used by the federal government or on its behalf. The aim is to promote trust in the integrity and authenticity of U.S. government digital content by establishing transparency regarding the provenance of generated content and preventing generation of inappropriate or inaccurate content.

Third, in line with this goal, the EO directs the Secretary of Commerce (in consultation with other agencies) to develop standards, tools, methods, and practices for use by federal government agencies and contractors: (1) to authenticate and track the provenance of AI-generated material; (2) to label AI content using methods such as “watermarking”; (3) to detect synthetic content; (4) to prevent AI from producing certain abusive, explicit materials, such as nonconsensual, AI-generated representations of real people (i.e., “deepfakes”); (5) to test and (6) to audit software for these purposes.

Pending release of this guidance, agencies seeking to obtain AI products or services are required to implement “minimum risk-management practices” defined in Section 10.1(b)(iv). These practices are derived from the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework, and they include: (1) public consultation; (2) review of data quality; (3) assessing and mitigating discriminatory impacts from AI; (4) providing notice when an agency employs AI; (5) continuously monitoring and evaluating AI in use; and (6) granting separate, “human” consideration and remedies for adverse decisions made by AI systems.

Section 7.2 also requires agencies to “use their respective civil rights and civil liberties offices and authorities…to prevent and address unlawful discrimination and other harms that result from uses of AI in Federal Government programs and benefits administration.”

Finally, beyond the directives and proposed regulatory requirements, the EO suggests business opportunities for potential recipients of federal grant and contract funding. It directs the General Services Administration to facilitate government-wide acquisition solutions for AI services and products, thereby creating future consolidated contracting opportunities to provide AI tools to the federal government.

Specifically, the EO encourages accelerating grants awarded through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity program, and through the Advanced Research Projects Agency-Infrastructure (ARPA-I), to explore transportation-related opportunities and challenges of AI, including software-defined AI enhancements that affect autonomous mobility ecosystems.

The EO also proposes a pilot project to “identify, develop, test, evaluate, and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical United States Government software, systems, and networks.” It also seeks to promote competition and innovation in the semiconductor industry by working in concert with the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022 to promote the use of AI in the industry, provide other assistance (particularly for small businesses), and share data for CHIPS research and development programs.

The EO requires implementation in the form of agency-issued guidance and potential legislation to effectuate some of its more ambitious aspects. Given the EO’s tight deadlines, in the coming months we expect to see new agency-level AI policies, as well as requests for information and requests for comments on proposed rules.

IaaS providers and developers of “dual-use” AI should anticipate a rollout of reporting requirements and requests for information. Similarly, government contractors should expect that reporting on their AI models may become part of proposal evaluation and may be embedded as contract requirements, particularly with respect to the safety of AI products, routine testing for bias, and data security and privacy protections. Contractors should also anticipate that AI transparency and provenance requirements will become a feature of government AI procurement.

Although many of the policy details are still under development, the EO represents the Biden administration’s first robust attempt to shape development of the AI industry.

For additional articles on this Executive Order, visit the Morrison Foerster AI Resource Center, where this article was originally published.

Tina D. Reynolds is co-chair of Morrison Foerster’s Government Contracts & Public Procurement practice. She counsels a wide variety of government contractors on compliance with federal acquisition and ethics regulations. She drafts and negotiates contracts on their behalf and has been involved with numerous internal investigations and compliance reviews, and with bid protest, contract claims, and False Claims Act litigation. She also advises on M&A transactions involving government contractors.

Charles L. Capito is a partner in both Morrison Foerster’s National Security and Government Contracts + Public Procurement practices. In the National Security space, he has significant experience counseling clients on the complex and evolving considerations related to the Committee on Foreign Investment in the United States (CFIUS). Charles frequently helps investors and U.S. businesses through every aspect of the CFIUS process, including understanding and allocating CFIUS risk.

Brandon L. Van Grack co-chairs Morrison Foerster’s National Security and Global Risk + Crisis Management groups. His practice focuses on investigations, criminal defense, and compliance matters involving sanctions and export controls, foreign investment, and cyber incidents. Brandon’s arrival to Morrison Foerster follows more than a decade of service at the U.S. Department of Justice (DOJ), where he held multiple senior national security positions.

Lyle F. Hedgecock is an associate in the Government Contracts practice group in the Washington, D.C. office. Prior to joining Morrison & Foerster, Lyle served as a law clerk at the U.S. Court of Federal Claims and had a decorated military career in the U.S. Air Force. Following his active-duty service, he provided legal services to the Air Force Legal Office, Joint Base Andrews, to resolve high-profile white-collar investigations of suspected Federal Acquisition Regulation violations and counseled government clients on Federal Acquisition Regulation compliance issues.
