While generative AI technology has come a long way in the past couple of years, Federal AI experts explained today that it is not yet ready for the government to “fire and forget” – meaning it still requires a human in the loop after deployment.

At GovLoop’s AI in Action event in Washington, D.C., today, one AI expert explained that the key to successful AI adoption within the Federal government starts with purchasing simple, ubiquitous tools that are easy for everyone to use.

“I don’t think that government should be asking the question: ‘Is the workforce ready for AI?’” said Brian Morrison, former large language model specialist at the Department of the Air Force Chief Data and AI Office. “It really should be: ‘Is the technology ready for the workforce?’”

Morrison explained that this framing turns a training problem into an acquisition problem. As an example, he noted that no one ever needed Microsoft Office training because the tool is “so easy to use.”

“I simply look for a tool that employs the technology in a way that is so simple, anyone can use it,” he said. “It’s not about taking these giant leaps in capability by employing alien space lasers or something. It’s about taking small stair steps that I can afford as an organization bit by bit as risk tolerance grows time over time, and then they’re just used.”

U.S. Navy Capt. Manuel Xavier Lugo, Task Force Lima commander within the Chief Digital and Artificial Intelligence Office (CDAO), added that GenAI in its current state is not quite there yet.

The Department of Defense established Task Force Lima in August 2023 to assess and integrate generative AI capabilities across the Pentagon.

“If there’s a takeaway from this conversation, it’s that this technology is not ready for ‘fire and forget.’ This technology requires humans in the loop at its current state,” Lugo said. “It must have a human – and even better if it’s a subject matter expert inserted in that particular case.”

Lugo explained that GenAI is still not always accurate and requires a human to fact-check its output. Additionally, he said there is still no concrete way to measure the amount of bias in a model or algorithm. “We’re not there. We don’t know,” he said.

“I don’t know that we’ll get to a percentage of confidence,” Lugo added. “But by the same token, if I have a second lieutenant give me a response in a high-threat situation, I’ll take it as ‘this comes from a second lieutenant.’ So, you have to consider that in your decision matrix.”

Lugo concluded by giving the audience a piece of advice when it comes to AI: “Don’t be scared, be skeptical.”

“This technology has a lot of potential, but also know, learn, and understand what its challenges are so you can use that in your decisions,” he said.

Grace Dille
Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.