As voters prepare to head to the polls this November, the Federal government and other influential people – most recently, Taylor Swift – are warning voters about the risks of artificial intelligence and the emerging technology’s role in election-related misinformation.

This election marks the first presidential race in which deepfakes – which use a form of AI called deep learning to create fake images or videos – have become mainstream. Government agencies such as the FBI, National Security Agency (NSA), and Cybersecurity and Infrastructure Security Agency (CISA) have warned that deepfakes are a top political threat.

In a Sept. 10 Instagram post officially endorsing Vice President Kamala Harris for president, Swift wrote that her endorsement was partly driven by former President Donald Trump sharing an AI-generated image of her.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift said.

“It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth,” the singer added.

Swift is likely referring to AI-generated images that Trump shared on Truth Social last month depicting women wearing “Swifties for Trump” T-shirts. The former president’s post also featured an Uncle Sam-inspired, AI-generated image of Swift with the words, “Taylor wants you to vote for Donald Trump.” Trump captioned the post, “I accept!”

Swift is not the first public figure to be depicted in a deepfake used for political messaging. Trump has also shared AI-generated images of himself, as well as of his opponent, Harris – including, just last month, an image that appeared to show Harris speaking at a communist rally.

Bad actors have created many convincing deepfakes of candidates to try to disrupt elections – such as the deepfake robocalls of President Joe Biden that voters received ahead of the New Hampshire primary in January.

The robocall featured an AI-generated voice of what sounded like President Biden advising New Hampshire residents not to vote in the presidential primary and to save their vote for the November general election.

Members of Congress have warned that deepfakes and other AI-generated content could influence voters ahead of the election.

Last fall, senators introduced a bill that aims to ban the use of AI to generate deceptive content to influence Federal elections.

Additionally, members of the House’s bipartisan Task Force on AI introduced a bill in March that aims to protect Americans from AI-generated content during the 2024 election cycle by setting standards for identifying AI content – such as watermarking.

While no such bill has yet become law, Federal agencies do have resources for voters and organizations looking for the best way to identify AI-generated content and misinformation.

In a fact sheet published last fall, the FBI, NSA, and CISA recommended that one of the best ways to spot AI-generated content is to use technologies that can detect deepfakes and determine the media’s origin.

For example, the agencies noted that Microsoft introduced the Microsoft Video Authenticator prior to the 2020 elections, and that in 2023, Google rolled out its “About this Image” tool, which gives users more context on the authenticity of images.

Earlier this year, CISA Director Jen Easterly testified before Congress on the threat AI poses to the 2024 elections. Easterly said CISA “continues to provide guidance on the tactics used by adversaries,” and it maintains and develops resources to protect and support state and local election officials, such as the ‘Rumor vs. Reality’ website to combat false narratives.

“I have confidence in the integrity of our elections, and the American people should as well,” Easterly said. “[Our] election infrastructure has never been more secure.”

“Foreign adversaries have targeted U.S. elections in the current and previous election cycles and we expect these threats to continue,” Cait Conley, a senior advisor at CISA, said on Monday in a statement to MeriTalk. “CISA provides state and local election officials, and the public, with guidance on foreign adversary tactics and mitigations, including the risks from generative AI.”

“We maintain a Rumor vs. Reality webpage on our #Protect2024 site to bolster voter education on infrastructure security issues and civic literacy,” Conley continued. “And we emphasize that the authoritative sources for election information are state and local election officials.”

Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.