Artificial intelligence (AI) capabilities are advancing rapidly. Today, government agencies are using AI to identify fraud and cybersecurity threats, automate manual tasks, improve healthcare outcomes and disaster response, enable predictive maintenance, and much more. Stanford’s “Artificial Intelligence Index Report 2023” says that from 2021 to 2022, total AI spending within the Federal government increased from $2.7 billion to $3.3 billion.

Despite this progress, as Federal agencies adopt AI, they face mountains of data, workforce uncertainty, and difficulty moving from pilot to enterprise integration. MeriTalk recently sat down with Kathleen Featheringham, vice president of artificial intelligence and machine learning at Maximus, to discuss these issues and how Federal agencies can successfully move AI use cases from concept to design, pilot, and scalable enterprise deployment to achieve mission success in areas ranging from citizen services to national security.

MeriTalk: Across the Federal government, agencies are pursuing AI implementations. Many people use AI in their daily lives, even if they don’t realize it. How far away are we from a reality in which the majority of people use AI at work – in general and in the Federal government specifically?

Featheringham: That reality isn’t that far off. As you noted, many people are already using AI; they just may not be aware of it because the definition of AI is so broad. The real question is not when people will use AI at agencies, but to what degree. The government will be a key driver in AI innovation and integration, including the creation of policies and regulations with guardrails that protect human interests and account for potential impacts. The government is actively seeking industry involvement, as it’s important to include the creators of the technology when developing regulations so that you don’t unnecessarily inhibit innovation.

MeriTalk: There are many unknowns around AI, and, understandably, some employees are concerned that their jobs will change substantially because of it. You’re a change management expert. How have you worked with Federal agencies to adjust expectations about what AI can and can’t do, and to help workers understand how AI will affect them?

Featheringham: It’s human nature to be fearful of change, especially when it’s uncertain how the change will affect you. But as with any change, education is important – understanding what AI can and can’t do and how it can help people do their jobs. AI is not a logical being with critical thinking skills the way humans are; it’s algorithms built from math and code that are designed to do specific things.

For example, government analysts need to analyze data quickly to derive insights for the mission. In many cases, going through all the data requires a lot of time-consuming manual effort. AI can be leveraged to compile and summarize the data, giving analysts more time to apply their critical thinking skills and derive better insights to achieve mission outcomes. Connect that with improving the citizen experience: AI can analyze massive amounts of data to empower agencies to provide more personalized and faster services.

The design of AI also plays a role in acceptance, because design shapes experience – including the experience of customers interacting with AI tools. A poorly designed tool will not be easily accepted by users. To gain real acceptance of AI, leaders need to ensure that the human experience is always at the forefront of the design and use of AI models, and that AI tools are created with human-centered design (HCD) principles so teams truly understand how humans will meaningfully interact with them.

MeriTalk: What advice do you have for agencies as they plan for AI and consider their mission requirements, the employee experience, and the user experience?

Featheringham: Whether you are designing AI for employees or for citizens accessing services, it is crucial to design for outcomes. This is true of all technology. If you don’t design for outcomes, then the person using the AI will have a bad or ineffective experience.

This circles back to HCD, which should be utilized at the start of any AI development. Through this process, designers will really understand what AI will be used for, how the stakeholder will interact with it, and what outcomes the stakeholder will expect when using it. It’s not just about writing the right code or ensuring the right data sets are included – it’s understanding the workflow, the desired outcomes you are trying to achieve, and the experience from a user perspective.

MeriTalk: How should agencies plan ahead, so they’re preparing for eventual enterprise-scale AI, even at the pilot stage?

Featheringham: We’ve touched on designing for outcomes. That is critical for AI success. Another key element is to design and build AI in either the production environment or an environment that mirrors production. From the research and development stage through the pilot, AI needs to be designed, monitored, tested, and validated in an ecosystem that represents the real one. Doing it this way increases the chances of moving AI out of the lab and into operations safely. The learning is more authentic and gives better insight into how the AI will behave when it moves into operation and runs on live data and scenarios. To achieve a seamless transition from pilot to enterprise deployment at scale, the AI needs to be built in an environment that behaves just like the one where it will ultimately live.

MeriTalk: To be successful, AI implementations need enough data – and that data must be trustworthy and timely. How should agencies evaluate their data sources to know when they have the right data, and enough of it?

Featheringham: Trust is an inherently human quality that means different things to different people, so data trustworthiness does too. Rather than talking about data trustworthiness, we should consider the appropriateness and source of the data and how it will be used. For example, data from a survey about office work from the 1990s may not be appropriate for designing AI to support today’s office environment. It’s important to really know the lineage and pedigree of the data. How was it collected? If it was a survey, were the questions developed with best practices in mind? What are the properties of the data? Has the data been properly secured, or could it have been tampered with? Creating good AI models depends on fully understanding where the data sets came from.

MeriTalk: Tell us a bit about Maximus’ AI and advanced analytics practice. How do you help agencies evaluate opportunities for AI, build AI solutions, and successfully deploy them?

Featheringham: Through our work with large enterprises, Maximus sits at the intersection of data, technology, and experience, helping agencies build the tools, software, and systems that will have real and immediate mission impact. From our early days, we have focused on business process services that have always involved taking in large amounts of data, understanding customers’ needs, and getting them positive results quickly. We have taken that legacy and expanded further.

As an example of our AI and data analytics work, we’ve been a longstanding partner of the Centers for Disease Control and Prevention (CDC). During the COVID-19 pandemic, the CDC ran a vaccination hotline to provide critical information and answer questions about the vaccines. Through our AI and advanced analytics tools, we determined that reasons other than vaccine hesitancy were stopping individuals from getting vaccinated, including logistical issues. People needed rides to vaccination clinics. They needed childcare while they were getting vaccinated. Those issues could be resolved by connecting them to local services. With this knowledge, the CDC found ways to solve these problems and fulfill the mission.

For Maximus, it’s not about getting from A to B with a project. We take a holistic approach to achieving meaningful outcomes, one that often includes integration with cyber, digital, and cloud capabilities to support agencies on their journey to mission success.
