The General Services Administration’s (GSA) Federal Risk and Authorization Management Program (FedRAMP) is exploring AI technologies to reduce the time needed to review and assess authorization packages.

The 12-year-old FedRAMP program is administered by GSA and provides a standardized, government-wide approach to security assessment, authorization, and continuous monitoring for cloud products and services used by Federal government agencies.

“One item that we want to focus on in reducing is how much time things spend in process,” Ryan Palmer, a senior technical and strategic advisor at FedRAMP, said on Tuesday during the AI/Automation Workshop hosted by Nextgov/FCW. “We oftentimes see packages go through up to three rounds of review.”

“So, our reviewers will review a package, they’ll find issues, have questions, find potential areas that need additional clarity, and that goes back to the CSP [cloud service provider]. The CSP might do some updates, it goes back to the reviewers. That cycle – and going through that cycle multiple times – delays time to authorization,” he explained. “That means CSPs and agencies don’t have access to those cloud services as quickly as they would like and we would like to provide. So, we want to get through that in-process stuff as quickly as possible.”

Palmer hopes AI can help to prevent multiple rounds of review and “address those questions earlier on in the process.”

“Ideally, most issues have already been detected, identified, and resolved prior to getting into that in-process step. We want to decrease the time to authorization in general,” he said.

That way, Palmer said, the FedRAMP team can focus on “the most important controls” and package areas.

Architecturally, Palmer said the FedRAMP team could implement this with two different large language models (LLMs): a private LLM trained on reviewer feedback data, and a public LLM trained on published FedRAMP policy and guides, knowledge base articles, and examples of actual implementations, among other public resources.

For example, if CSPs were looking to get feedback about a specific control through a user interface or API calls, the public LLM could point them to “a certain white paper that is relevant to the control,” Palmer said.

As for the private LLM, Palmer said it might produce feedback based on what reviewers have frequently said, such as “Reviewers frequently ask for more technical detail on this control response.”
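Palmer did not describe an implementation, but the idea maps to a simple pre-submission check: route a draft control response to both models and return their feedback to the CSP before the package enters formal review. The sketch below is purely illustrative; the function names, control IDs, and canned responses are assumptions standing in for the public and private LLMs, not FedRAMP's actual design.

```python
# Illustrative sketch of the two-LLM review-assist flow described above.
# All names and responses here are hypothetical stand-ins, not FedRAMP's design.

from dataclasses import dataclass

@dataclass
class Feedback:
    source: str   # "public-guidance" or "private-reviewer-history"
    message: str

def query_public_llm(control_id: str, response_text: str) -> Feedback:
    """Stand-in for a public LLM grounded in published FedRAMP policy,
    guides, and knowledge base articles. A real system would retrieve
    relevant documents and have the model point to them."""
    # Static mapping used in place of a real model call.
    guidance = {
        "AC-2": "See published guidance on account management for expected evidence.",
        "SC-13": "A white paper on FIPS-validated cryptography is relevant to this control.",
    }
    return Feedback("public-guidance", guidance.get(control_id, "No specific guidance found."))

def query_private_llm(control_id: str, response_text: str) -> Feedback:
    """Stand-in for a private LLM trained on PMO reviewer feedback.
    Here it simply flags a pattern reviewers commonly comment on."""
    if len(response_text.split()) < 30:
        msg = "Reviewers frequently ask for more technical detail on this control response."
    else:
        msg = "No common reviewer concerns detected for this response."
    return Feedback("private-reviewer-history", msg)

def pre_submission_check(control_id: str, response_text: str) -> list[Feedback]:
    """Give a CSP early feedback before the package enters formal review."""
    return [
        query_public_llm(control_id, response_text),
        query_private_llm(control_id, response_text),
    ]

if __name__ == "__main__":
    draft = "Access is restricted."  # deliberately thin control response
    for fb in pre_submission_check("AC-2", draft):
        print(f"[{fb.source}] {fb.message}")
```

In practice, as Palmer notes later, the models' output could be summarized or delivered through a user interface or API rather than exposed directly to the CSP.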

“So, we’re providing some additional content to the CSP early on in the process that might aid them in meeting expectations and addressing PMO [Program Management Office] reviewer comments,” Palmer said. “Note that all this wouldn’t have to be direct use of LLMs, right? We could have LLMs summarize content and this could be provided in a lot of different ways.”

“We’re really looking at AI as a way to give them additional information and to speed that review process,” he added. “I’m really excited about the potential of AI in FedRAMP.”

Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.