
DOJ cracks down on AI threats ahead of 2024 election

May 15



Federal prosecutors are increasing efforts to combat election-related crimes involving artificial intelligence. Deputy Attorney General Lisa Monaco announced that the Justice Department will seek tougher sentences for crimes using AI, including threats against election workers and voter suppression.


Artificial intelligence continues to be a significant concern as the 2024 election approaches. According to a new Elon University Poll, more than three-quarters of Americans believe AI abuses will impact the election’s outcome.

About 73% of Americans believe it’s “very” or “somewhat” likely AI will infiltrate and manipulate social media, including through the use of fake accounts and bots to spread misinformation.

Seventy percent suspect that AI-generated fake video and audio information will blur the lines between truth and deception. Meanwhile, 62% say AI will be used to convince certain voters to skip voting. Overall, 78% say at least one of these AI abuses will be used and over half think all of them are likely to happen.

Federal prosecutors are stepping up their efforts. Monaco announced on Monday, May 13, that a Justice Department task force will seek tougher sentences for crimes where AI is used, including threats against election workers and voter suppression.

“These advanced tools are providing new avenues for bad actors to hide their identities and obscure sources of violent threats,” Monaco said. “They’re providing new avenues to misinform and threaten voters through deepfakes, spreading altered video or cloned audio impersonating trusted voices. And they’re providing new avenues to recruit and radicalize with incendiary social media content that accelerates online hate and harassment.”

The policy change aims to address the growing challenges posed by AI tools in the lead-up to the 2024 presidential election. These tools can easily mimic politicians’ voices and likenesses, spreading false information more effectively. The new guidelines will target cases where AI makes these crimes more dangerous and impactful.

A recent example involved an AI-generated robocall in New Hampshire imitating President Joe Biden and urging voters to skip the primary. The robocall was created by a New Orleans magician for a political consultant working for Minnesota Rep. Dean Phillips, a Democratic challenger to Biden.

U.S. officials are also concerned about foreign adversaries using AI to spread election disinformation. In December, senior officials simulated a scenario in which Chinese operatives created an AI-generated video showing a Senate candidate destroying ballots.

“Fulfilling that charge means confronting the full range of threats to our elections,” Attorney General Merrick Garland said. “That includes continuing our work through this task force, our U.S. Attorney’s Offices and our FBI offices across the country to investigate, disrupt and combat unlawful threats against those who administer our elections.”

The Justice Department faces pressure from election officials to investigate the surge of threats and harassment they have received, many of which stem from false claims of fraud in the 2020 election. In March, an Ohio man was sentenced to over two years in prison for making death threats to an Arizona election official.


LISA MONACO
DEPUTY ATTORNEY GENERAL

“Our democratic process and the public servants who protect it have been under attack like never before. As threats evolve and spread today, those threats are being supercharged by advanced technologies.”

[LAUREN TAYLOR]

Artificial intelligence continues to be a significant concern as the 2024 election approaches. According to a new Elon University Poll, more than three-quarters of Americans believe AI abuses will impact the election’s outcome.

73% of Americans believe it’s “very” or “somewhat” likely AI will infiltrate and manipulate social media, including through the use of fake accounts and bots to spread misinformation.

70% suspect that AI-generated fake video and audio information will blur the lines between truth and deception.

62% say AI will be used to convince certain voters to skip voting.

78% say at least one of these AI abuses will be used, and more than half think all of them are likely to happen.

AI could be used for good, but Americans, by an eight-to-one margin, believe it will harm elections rather than help.

As the stakes rise, so do the consequences.

Federal prosecutors are increasing their efforts to combat election-related crimes involving AI. Deputy Attorney General Lisa Monaco announced this week a Justice Department task force will seek tougher sentences for crimes where AI is used, including threats against election workers and voter suppression.

LISA MONACO
DEPUTY ATTORNEY GENERAL
“These advanced tools are providing new avenues for bad actors to hide their identities and obscure sources of violent threats. They’re providing new avenues to misinform and threaten voters through deepfakes, spreading altered video or cloned audio, impersonating trusted voices. And they’re providing new avenues to recruit and radicalize with incendiary social media content that accelerates online hate and harassment.”

[LAUREN TAYLOR]

This policy change aims to address the growing challenges posed by AI tools in the lead-up to the 2024 presidential election. These tools can easily mimic politicians’ voices and likenesses, spreading false information more effectively. The new guidelines will target cases where AI makes these crimes more dangerous and impactful.

A recent example involved an AI-generated robocall in New Hampshire, imitating President Joe Biden and urging voters to skip the primary. The robocall was created by a New Orleans magician for a political consultant working for Minnesota Rep. Dean Phillips, a Democratic challenger to Biden.

MERRICK GARLAND
ATTORNEY GENERAL
“Fulfilling that charge means confronting the full range of threats to our elections. That includes continuing our work through this task force, our U.S. Attorney’s offices, and our FBI offices across the country to investigate, disrupt, and combat unlawful threats against those who administer our elections.”

[LAUREN TAYLOR]

U.S. officials are also concerned about foreign adversaries using AI to spread election disinformation. In December, senior officials simulated a scenario in which Chinese operatives created an AI-generated video showing a Senate candidate destroying ballots.

The Justice Department faces pressure from election officials to investigate the surge of threats and harassment they have received, many of which stem from false claims of fraud in the 2020 election. In March, an Ohio man was sentenced to over two years in prison for making death threats to an Arizona election official.

To stay up to date on 2024 election coverage, download the Straight Arrow News app or visit us at SAN.com.

For Straight Arrow News, I’m Lauren Taylor.