The Veterans Health Administration’s (VHA’s) use of generative artificial intelligence (AI) chat tools for clinical care presents “a potential patient safety risk,” according to a new analysis from the Department of Veterans Affairs (VA) Office of Inspector General (OIG).

The Jan. 15 review reveals that “VHA does not have a formal mechanism to identify, track, or resolve risks associated with generative AI,” raising concerns about how patient safety is protected as the technology is deployed in clinical settings.

According to the OIG, VHA’s AI efforts for health care are driven through “an informal collaboration” between the acting director of VA’s National AI Institute and the chief AI officer within the VA’s Office of Information and Technology. The OIG said those officials did not coordinate with the National Center for Patient Safety when authorizing AI chat tools for clinical use.

The review notes that VHA authorizes two AI chat tools for use with patient health information: Microsoft 365 Copilot Chat and VA GPT, a newly launched internal chat tool designed for VA employees.

While the tools are intended to support clinical decision-making, the OIG cautioned that GenAI systems can produce inaccurate or incomplete outputs, including omissions that could affect diagnoses or treatment decisions.

“The OIG is concerned about VHA’s ability to promote and safeguard patient safety without a standardized process for managing AI-related risks,” the review says. “Moreover, not having a process precludes a feedback loop and a means to detect patterns that could improve the safety and quality of AI chat tools used in clinical settings.”

“Given the critical nature of the issue, the OIG is broadly sharing this preliminary finding so that VHA leaders are aware of this risk to patient safety,” it adds.

The VA OIG’s review is ongoing, so it did not issue any formal recommendations. The OIG said it will continue to engage with VHA leaders and include an in-depth analysis of this finding, along with any additional findings, in its final report.

The OIG’s analysis aligns with findings in a recent Kiteworks report, which warns that government agencies are operating in 2026 without the operational governance needed to manage AI safely.

Kiteworks surveyed governments globally and found that just 10% have centralized AI governance. One-third have no dedicated AI controls, 76% lack automated mechanisms to shut down or revoke high-risk AI systems, and 76% lack AI anomaly detection.
