The U.S. Air Force (USAF) faces growing challenges in artificial intelligence (AI) adoption as systems become more powerful but less explainable, according to the service’s innovation chief.

Joe Chapa, USAF’s director of innovation, spoke at SAP’s Public Sector Summit in Washington, D.C., on Dec. 16, emphasizing the need to build trust through leadership, culture, and risk management rather than relying solely on technical transparency.

Those challenges, Chapa said, are rooted in the rapid evolution of modern AI.

Chapa said modern AI systems, especially those based on deep learning, have become increasingly complex as they rely on vast amounts of data, computing power, and multiple layers of artificial neurons. While that complexity has improved performance, it has also created “black box” systems whose decision-making processes cannot be fully understood by humans.

“When those models started to become deeper … the math behind the way that those models arrived at an output became harder to do,” Chapa said. “Once the scale of those math problems exceeded the capacity of humans, there is an inherent explainability problem.”

Researchers initially developed explainable AI tools to make such systems more transparent, Chapa said, but the rapid rise of generative AI caused model complexity to grow faster than those tools could keep up. As a result, requiring full explainability would effectively prevent organizations from using many of today’s most advanced AI systems.

“We wouldn’t be able to use any of the generative AI tools of the last five years or so,” he said. “How do we get to trustworthy AI, given that constraint? That’s a hard problem.”
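Chapa did not point to specific techniques, but a minimal sketch of the kind of post-hoc attribution that explainable AI tooling builds on (a hypothetical NumPy toy model, with invented weights and feature names) illustrates both the idea and its limits: the scores describe how the output shifts when an input is removed, not why the model learned the behavior it did, and the approach becomes far harder to interpret as models grow.

```python
# Illustrative only, not from Chapa's remarks: a toy two-layer "black box"
# network and a simple perturbation-based attribution score for each input,
# the kind of post-hoc check explainable AI tools build on.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: two dense layers with random weights and a ReLU in between.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def model(x: np.ndarray) -> float:
    """Forward pass through the toy network; returns a single score."""
    hidden = np.maximum(0.0, x @ W1)  # ReLU hidden layer
    return (hidden @ W2).item()

def occlusion_attribution(x: np.ndarray) -> np.ndarray:
    """Rate each input feature by how much zeroing it out shifts the output."""
    baseline = model(x)
    scores = np.zeros_like(x)
    for i in range(x.size):
        perturbed = x.copy()
        perturbed[i] = 0.0           # "occlude" one feature at a time
        scores[i] = baseline - model(perturbed)
    return scores

x = rng.normal(size=4)
print("model output:", model(x))
print("per-feature attribution:", occlusion_attribution(x))
```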

Rather than full explainability, Chapa said organizations must rely on governance, guardrails, and risk management to establish trust. He added that innovation and responsible AI principles – including integrity, safeguards, and oversight – can coexist, but only if leadership aligns incentives across large institutions.

Chapa described tension between innovation leaders, who are rewarded for moving quickly, and cybersecurity leaders, who are incentivized to prevent failure by slowing adoption. He said this dynamic has repeatedly delayed efforts to deploy generative AI tools within the USAF.

“The solution to that problem is for leaders to accept more risk,” Chapa said, adding that risk should be recognized and mitigated rather than ignored, with senior leaders ultimately owning those decisions.

He said policies and oversight alone are insufficient to ensure responsible AI use. Instead, organizations must see observable changes in behavior that reflect both innovation and safeguards in practice.

Chapa also argued that successful AI adoption is primarily a people and culture challenge, not a technical one. He described how hidden use of generative AI can undermine trust among employees and between experts and users.

“We have a little bit of a fear around being found out that you use generative AI,” he said. “That is the opposite culture of the culture that we would want to build.”

A healthy AI culture, Chapa said, is one in which employees openly discuss when and how they use AI tools and remain accountable for the outcomes. He said an organization is “winning at AI” not when tools are widely deployed, but when trust is built through transparency and shared norms.

“It’s not trust in the systems,” Chapa said. “It’s trust between the people.”

Lisbeth Perez
Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.