The Department of Homeland Security (DHS) is warning of the “peril” of generative AI as malicious use of the emerging technology increases the cyber threat to U.S. networks in 2025 and beyond.

DHS released its annual threat assessment for 2025 on Wednesday, highlighting that generative AI will have the “unintended consequence of adding layers of complexity to the threats we face.”

“In 2025, we expect malicious cyber actors will continue to use advancements in generative AI to incrementally enhance their ability to develop malware, vulnerability scanning, and exploit tools and to improve their social engineering tactics and operations,” DHS said.

The threat assessment warned that generative AI is helping threat actors generate, manipulate, and disseminate synthetic media, such as deepfakes, making it a particularly useful tool for malign influence campaigns. Specifically, DHS said it expects China and Russia to continue to use generative AI to “improve the scale and scope of their influence activities that target US audiences.”

“Adversarial states will continue to use AI in their malign influence campaigns as the technology lowers technical thresholds and improves adversaries’ abilities to effectively personalize and scale more credible messages for target audiences,” the document says.

DHS also warned that foreign terrorist organizations and domestic violent extremists will use the technology “to support their operations, messaging, and recruitment.”

The document notes that the recent surge of publicly available generative AI technologies has given malign cyber actors easily accessible tools that enhance the speed and scale of malicious cyber tactics, cutting the time and resources needed to generate phishing e-mails and conduct vulnerability scanning.

“As AI enables an increase in the scope of an already rising volume of cyber attacks, further automation of organizational cybersecurity efforts may help to more effectively counter persistent attacks and reduce the strain on human network defenders,” DHS said.

Finally, DHS warned that threat actors have demonstrated the ability to bypass safeguards meant to keep generative AI chatbots from returning dangerous information, such as bomb-making instructions or sensitive data.

“This may have implications for US entities charged with securing personally identifiable information, especially sectors like healthcare and financial services,” the threat assessment says. “Threat actors have also manipulated open-source data used to train AI chatbots and AI models, which could lead to biased or erroneous outputs, highlighting the potential hazard of integrating AI into critical systems and operations.”

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.