The National Geospatial-Intelligence Agency (NGA) is launching a new program designed to elevate the standards and reliability of artificial intelligence (AI) models in geospatial intelligence (GEOINT).
The initiative, dubbed the Accreditation of GEOINT AI Models (AGAIM), aims to improve the performance and accountability of AI technologies within the National System for Geospatial Intelligence (NSG).
“AGAIM will expand the responsible use of GEOINT AI models and posture NGA and the GEOINT enterprise to better support the warfighter and create new intelligence insights,” the GEOINT agency explained in a press release.
The accreditation effort aims to provide a standardized evaluation framework, implement risk management, promote a responsible AI culture, enhance AI trustworthiness, accelerate AI adoption and interoperability, recognize high-quality AI, and identify areas for improvement.
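The release does not describe what AGAIM's standardized evaluation actually checks. In broad terms, frameworks of this kind score a candidate model against fixed metric thresholds and record a pass/fail result per criterion. The sketch below is purely illustrative; the metric names, thresholds, and `accredit` function are hypothetical and are not drawn from NGA documentation.

```python
from dataclasses import dataclass

# Hypothetical accreditation thresholds -- illustrative only, not NGA criteria.
THRESHOLDS = {"precision": 0.90, "recall": 0.85, "false_alarm_rate": 0.05}


@dataclass
class EvaluationResult:
    metric: str
    value: float
    threshold: float
    passed: bool


def accredit(metrics: dict[str, float]) -> tuple[bool, list[EvaluationResult]]:
    """Compare a model's measured metrics against fixed thresholds and
    return an overall pass/fail plus a per-metric report."""
    results = []
    for name, threshold in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            # A missing metric fails by default.
            results.append(EvaluationResult(name, float("nan"), threshold, False))
            continue
        # Lower is better for error-style metrics, higher is better otherwise.
        passed = value <= threshold if name == "false_alarm_rate" else value >= threshold
        results.append(EvaluationResult(name, value, threshold, passed))
    return all(r.passed for r in results), results


if __name__ == "__main__":
    # Example scores for a hypothetical GEOINT detection model.
    ok, report = accredit({"precision": 0.93, "recall": 0.88, "false_alarm_rate": 0.03})
    for r in report:
        status = "pass" if r.passed else "fail"
        print(f"{r.metric}: {r.value:.2f} (threshold {r.threshold:.2f}) -> {status}")
    print("Accredited" if ok else "Not accredited")
```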
NGA, a key player in U.S. national security, has a long history of leveraging computer vision and machine learning to manage and analyze vast amounts of satellite imagery data.
Advances in structured observation management, automated reporting, and modeling have significantly enhanced the agency’s analytical capabilities. One of NGA’s key missions today is overseeing the AI development pipeline for the military’s advanced computer vision program, Maven.
The agency’s AI capabilities are integrated into platforms such as the Analytic Services Production Environment for the NSG (ASPEN) and Maven. NGA has also ensured that its AI models are compatible with various machine learning platforms to enhance “versatility and innovation in GEOINT technologies.”
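The agency has not said how that compatibility is achieved. One common approach across the broader machine learning community is exporting models to a framework-neutral format such as ONNX so the same weights can run under different runtimes. The sketch below illustrates that idea with a toy PyTorch model; ONNX, the model, and all names are assumptions chosen for illustration, not details of NGA's actual pipeline.

```python
# Minimal sketch of cross-platform model portability via ONNX export.
import torch
import torch.nn as nn


class TinyClassifier(nn.Module):
    """Stand-in for an imagery model; real GEOINT models are far larger."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = TinyClassifier().eval()
dummy_input = torch.randn(1, 3, 224, 224)  # batch of one RGB image chip

# Exporting to ONNX lets the same weights run under different runtimes
# (e.g., ONNX Runtime, TensorRT) rather than being tied to one framework.
torch.onnx.export(
    model,
    dummy_input,
    "tiny_classifier.onnx",
    input_names=["image"],
    output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}, "logits": {0: "batch"}},
)
```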
In addition to AGAIM, NGA is rolling out a new training program, GEOINT Responsible AI Training (GREAT), to educate coders and users on the ethical use of AI.
The GREAT program, which held pilot classes in April and May, aims to address the specific challenges faced throughout the AI lifecycle. Participants will be required to sign a pledge committing to the responsible development and use of AI.
The training initiative aligns with the Department of Defense’s guidance on ethical AI, promoting a culture of responsibility and transparency in AI practices.