As the use of artificial intelligence technologies continues to grow, Federal and industry officials on Tuesday pointed to industry-led regulatory development, workforce training, and the creation of limited-purpose AI applications as three ways to ensure that AI tech is used safely and responsibly.

Prioritizing those high-level ethical considerations, the officials said during an Oct. 1 panel discussion at the Chief Digital Officer Summit in Washington, will help address current and future risks and challenges of AI implementation. 

In the nearer term, panelists said, demystifying AI and grounding its outputs in good data are key considerations.

“It’s like a child with a boogeyman under the bed, and you just don’t understand it,” said Paul Evangelista, chief data officer at the U.S. Military Academy at West Point, of the need to educate users on AI’s limitations. “You’re imagining what it can potentially do, as opposed to making decisions, intelligent decisions about risk.”

Delester Brown, chief data officer for the National Guard, echoed Evangelista’s sentiment on the importance of educating the workforce to achieve responsible AI use, remarking, “I can’t be the only one that has a responsibility in how we interact with the data.” 

In addition to educating the workforce, panelists called for less focus on the AI itself and more focus on data use.  

“Without analytics, there’s no AI, right?” said Thomas Sasala, deputy director of the U.S. Army’s Office of Enterprise Management. “If you’re the public sector [or] private sector, don’t focus on the AI as much as the analytics and the use of the data in the business of your business.” 

While panelists agreed that AI is not yet powerful enough to pose major threats on its own, they looked to the future and discussed the regulation and oversight that may come from industry as AI tech continues to advance.

Mark Brady, deputy chief data officer for the Test Resource Management Center at KBR, Inc., described “six laws of safe AI,” which focus on creating specific-use AI, developing and maintaining safety and objective functions as AI evolves, and providing mechanisms for consistent human oversight.

“What do emotions do for humans? Emotions are what motivates us through work,” Brady said while discussing how to minimize the risks of AI as its applications become more powerful. “In AI, you also have something akin to that, it’s called an objective function.”

“However, there’s also the intent to the objective of AI, I could have one AI that has bad intent, and I could have another one which is very safe and has good intent,” he said, adding that future regulation will need to focus on promoting good AI use and intent, and possibly employ “defensive AI” to detect and counter AI that has bad intent.  
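Brady’s term has a concrete meaning in machine learning: an objective function is the score an AI system is built to maximize. As a rough illustration only (this sketch is not from the panel, and the actions, scores, and penalty values are hypothetical), building a safety term directly into that score is one way to motivate safe behavior, in the spirit of Brady’s point about maintaining safety within the objective itself:

```python
# Illustrative sketch (hypothetical values): an "objective function" is the
# scalar score an AI system is designed to maximize -- playing the
# motivational role that, in Brady's analogy, emotions play for humans.

def objective(action: str) -> float:
    """Score a candidate action; all numbers here are made up."""
    task_value = {"ship_fast": 10.0, "ship_carefully": 8.0}[action]
    safety_penalty = {"ship_fast": 5.0, "ship_carefully": 0.5}[action]
    # Keeping a safety term inside the objective itself means the system
    # is rewarded for safe behavior, not just for raw task performance.
    return task_value - safety_penalty

# The system "prefers" whatever its objective rewards most:
best = max(["ship_fast", "ship_carefully"], key=objective)
print(best)  # -> "ship_carefully", because safety is part of the score
```

Under this framing, an AI with “bad intent” is one whose objective rewards harmful behavior, which is why Brady argues regulation should focus on the intent encoded in these objectives.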

Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.