On October 24, 2024, President Biden signed the first-ever National Security Memorandum (“NSM”) focused on artificial intelligence (“AI”), pursuant to subsection 4.8 of Executive Order 14110. The NSM provides guidance on developing, employing, and strengthening AI usage within the federal government.
The NSM outlines three main objectives that serve as guideposts in directing the U.S. Government in “appropriately harnessing AI models and AI-enabled technologies . . .”
Below are the NSM’s requirements that support each objective.
Objective 1: Leading the world’s development of safe, secure, and trustworthy AI.
- The Department of Defense (“DoD”) and the Department of Homeland Security (“DHS”) must assist in attracting and bringing individuals with relevant AI experience to the U.S.
- Within 180 days of the NSM (April 22, 2025), the Chair of the Council of Economic Advisers must prepare an analysis of the AI talent market in the U.S. and other countries.
- The DoD, Department of Energy (“DoE”), and the Intelligence Community (“IC”) must consider the applicability of large-scale AI in achieving their respective missions.
- The National Science Foundation (“NSF”) must use the National AI Research Resource (“NAIRR”) pilot project and any future NAIRR efforts to ensure AI research in the U.S. remains competitive.
- Within 180 days of the NSM (April 22, 2025), the DoE must launch a pilot project to evaluate the performance and efficiency of federated AI and data sources for frontier AI-scale training, fine-tuning, and inference.
- The AI Safety Institute (“AISI”) will serve as the primary U.S. Government point of contact with private sector AI developers.
- Within 180 days of the NSM (April 22, 2025), AISI will issue guidance for AI developers on how to test, evaluate, and manage risks to safety, security, and trustworthiness arising from dual-use foundation models.
- All agencies that conduct or fund safety testing and evaluations of AI systems must share the results of those evaluations with AISI within 30 days of completion.
Objective 2: Responsibly harness powerful AI to achieve national security objectives.
- Covered agencies must revise their hiring and retention policies and strategies to advance responsible AI adoption.
- Within 120 days of the NSM (February 21, 2025), covered agencies must identify education and training opportunities to increase AI competency within their agencies.
- Each agency that uses AI on national security systems (“NSS”) must take all appropriate steps to accelerate responsible approval of AI systems.
- Within 150 days of the NSM (March 23, 2025), the DoD must evaluate the feasibility of advancing, increasing, and promoting co-development and shared use of AI and AI-enabled assets with select allies and partners.
- Heads of covered agencies must monitor, assess, and mitigate risks directly tied to their agency’s development and use of AI. These risks include privacy harms, risks to physical safety, discrimination, and data spillage.
- The Framework to Advance AI Governance and Risk Management in National Security (“AI Framework”), introduced in the NSM, imposes certain requirements on covered agencies. These include, but are not limited to, designating a Chief AI Officer, establishing an AI Governance Board, and maintaining sufficient training programs to enable the proper use of AI systems.
- Within 150 days of the NSM (March 23, 2025), the National Manager for NSS must issue minimum cybersecurity guidance and direction for AI used as a component of NSS.
Objective 3: Fostering a stable, responsible, and globally beneficial international AI governance landscape.
- Within 120 days of the NSM (February 21, 2025), the U.S. Government must produce a strategy for advancing international AI governance norms. The strategy must align with safe, secure, and trustworthy AI as well as democratic values, and must address an approach to working within international institutions.
Companies in the AI space should keep a close eye on how the departments and agencies mentioned above implement the NSM’s requirements, and in particular on how the administration defines “safe, secure, and trustworthy AI,” as that definition may portend how the government regulates AI in the years to come.