The Biden Administration released its 111-page AI Executive Order (EO) one year ago on Oct. 30, 2023, and since then other large-scale AI moves have kicked into gear across the Federal government, including standing up the National Institute of Standards and Technology’s (NIST) AI Safety Institute (AISI).

According to a White House accounting of that activity released this week, Federal agencies have completed on schedule every action the EO called for during the past year – more than one hundred in all.

The EO directed sweeping actions to manage AI’s safety and security risks, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.

Both the administration and private-sector commenters gave the overall effort high marks this week.

“One year ago marked the most consequential week in the history of AI policymaking,” said ITI President and CEO Jason Oxman. “Governments worldwide must continue to advance this work to fully leverage AI’s transformative power.”

The AI EO launched a whole-of-government effort to advance the responsible development and use of AI, including key inaugural guidance for the Federal government’s own use and acquisition of the technology:

  • In March, the Office of Management and Budget (OMB) issued guidance for Federal agencies’ use of AI, requiring agencies to put safeguards in place and publicly document how they are mitigating the risks of AI systems that affect the safety and the human and civil rights of the public;
  • Earlier this month, OMB issued further guidance about responsible government procurement of AI, directing agencies to ensure that third-party vendors implement needed safeguards for any AI product purchased by the government; and
  • Just last week, the White House issued a memo and framework on the use of AI in national security contexts.

Pursuant to the EO, individual agencies also released much-needed guidance to help companies institute common-sense governance measures that protect consumers from AI risks:

  • The Department of Labor released best practices for developers and employers to protect workers’ legal rights, establish AI governance (and human oversight), ensure transparency, and ensure the proper use of worker data when companies use AI;
  • The Department of Education published a toolkit for safe, equitable AI integration in K-12 public schools, detailing the long-standing civil rights laws that apply when districts acquire and use AI systems; and
  • As part of its March guidance to Feds, OMB required agencies to designate chief AI officers (CAIOs) and establish a CAIO council.

“The federal government has an obligation to ensure that industry practices and its own conduct don’t harm people’s rights, and that tax dollars are used effectively and responsibly. Through this order – and the work that has followed over the past year – the Administration has taken significant steps towards making that a reality when it comes to artificial intelligence,” said Alexandra Reeve Givens, president and CEO of the Center for Democracy & Technology.

“The work builds on bipartisan progress that’s been made in recent years – and it rightly acknowledges that sustainable AI innovation can only be achieved with appropriate governance and guardrails,” Givens said.

Notably, NIST’s AISI was stood up immediately following Biden’s AI EO, and last week the administration moved to bolster the institute through its AI national security memo by formally designating the AISI as “industry’s primary port of contact” in the government.

Under NIST, the AISI will release critical AI guidance for industry and test frontier AI models ahead of their release.

“This one-year anniversary of the AI Executive Order isn’t the end of the road on AI governance – we’re still at the beginning,” said Givens. “The Biden Administration should be praised for its cross-cutting approach to advancing responsible AI across the many different areas where AI is (or soon will be) used. Regardless of who is next in the White House, this work must continue to ensure America remains a leader in AI innovation that is responsible, trustworthy, and respects people’s rights.”

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.