The Biden administration publicly unveiled its first-ever National Security Memorandum (NSM) on Artificial Intelligence (AI) today, elevating the National Institute of Standards and Technology’s (NIST) AI Safety Institute (AISI) to serve as industry’s “primary port of contact” in the Federal government.
The NSM was required under President Biden’s AI executive order (EO) released nearly one year ago and fills a critical gap in AI guidance for the intelligence community.
Up until this point, the White House’s AI guidance has only covered non-national security systems.
The NSM directs the government to implement concrete and impactful steps to ensure that the U.S. leads the world’s development of safe, secure, and trustworthy AI; to harness cutting-edge AI technologies to advance the government’s national security mission; and to advance international consensus and governance around AI.
“The NSM is designed to galvanize federal government adoption of AI to advance the national security mission, including by ensuring that such adoption reflects democratic values and protects human rights, civil rights, civil liberties and privacy,” the fact sheet reads. “In addition, the NSM seeks to shape international norms around AI use to reflect those same democratic values and directs actions to track and counter adversary development and use of AI for national security purposes.”
As part of formally designating NIST’s AISI, the NSM lays out strengthened and streamlined mechanisms for the institute to partner with national security agencies, including the intelligence community, the Department of Defense, and the Department of Energy.
The memo calls on the AISI to collaborate with industry to test frontier AI models that might pose a threat to national security and to issue guidance on a variety of topics, including how to address the misuse of AI to harass or impersonate individuals. The AISI is already ahead of the curve on the NSM, having announced earlier this summer that it had signed AI testing pacts with Anthropic and OpenAI, as well as releasing its first set of guidance on dual-use models.
Today’s NSM also doubles down on the National Science Foundation’s (NSF) National AI Research Resource (NAIRR), ensuring that researchers at universities, from civil society, and in small businesses can conduct technically meaningful AI research.
To ensure that the U.S. leads the world’s development of safe, secure, and trustworthy AI, the NSM directs actions to improve the security and diversity of chip supply chains, and to ensure that, as the U.S. supports the development of the next generation of government supercomputers and other emerging technology, it does so with AI in mind.
The NSM also makes collection on competitors’ operations against the AI sector a top-tier intelligence priority and directs relevant Federal agencies to provide AI developers with the timely cybersecurity and counterintelligence information necessary to keep their inventions secure.
On the international front, the memo directs the Federal government to collaborate with allies and partners to establish a stable, responsible, and rights-respecting governance framework to ensure the technology is developed and used in ways that adhere to international law while protecting human rights and fundamental freedoms.
National Security AI Framework Accompanies Biden’s NSM
Alongside the NSM, the White House released the first-ever guidance for AI governance and risk management for use in national security missions, complementing the administration’s previous guidance for non-national security missions.
The Framework to Advance AI Governance and Risk Management in National Security provides further detail and guidance to implement the NSM, including requiring mechanisms for risk management, evaluations, accountability, and transparency. The guidance requires agencies to monitor, assess, and mitigate AI risks related to invasions of privacy, bias and discrimination, the safety of individuals and groups, and other human rights abuses.
The framework will be updated regularly to keep pace with technical advances and ensure future AI applications are responsible and rights-respecting, the White House said.
According to a senior administration official, the accompanying framework document identifies both prohibited and high-impact AI use cases, based on the risk they pose to national security, international norms, democratic values, human rights, civil rights, civil liberties, privacy, and safety.
For example, there are clear prohibitions on use of AI to unlawfully suppress or burden the right to free speech or the right to legal counsel.
The framework also prohibits use cases that would remove a human from the loop for actions critical to informing and executing decisions by the President to initiate or terminate nuclear weapons employment, the official said.
The official concluded, “With a lack of policy clarity and a lack of legal clarity about what can and cannot be done, we are likely to see less experimentation and less adoption than with a clear path for use, which is what the NSM and the framework tries to provide.”