Generative AI threatens 2024 elections; false Israel-Hamas images spread

The tech world is preparing for what some say has already begun disrupting democratic processes: artificial intelligence. More specifically, the focus is on generative AI, a type of AI that creates fake but convincingly realistic images, audio and text.

At the Reuters NEXT conference in New York during the week of Nov. 5, Gary Marcus, an AI entrepreneur and professor emeritus of psychology and neural science at New York University, said the peril AI poses to democracy is the technology's most substantial risk.

“There are a lot of elections around the world in 2024, and the chance that none of them will be swung by deepfakes and things like that is almost zero,” Marcus said.

Politicians have been particularly vulnerable to these threats. Meta has taken preemptive measures, prohibiting advertisers from using its generative AI tools for political ads on Facebook and Instagram.

Starting next year, the use of third-party AI software for political, electoral, or social ads will require disclosure. Failure to comply may lead to ad rejection, and repeated violations could incur penalties.

While deepfake detection has historically been imperfect, DeepMedia claims its product identifies deepfakes with 99% accuracy.

“The thing that makes our deepfake detection highly accurate, really fast and easy to use, is the fact that we both do generation and detection, these are kind of two sides to the same coin,” COO and co-founder Emma Brown said.

Brown cautioned against focusing solely on entirely fabricated content, noting instances in which only a brief segment of a video is manipulated. Such alterations are difficult to detect, even for highly trained analysts, which makes them a critical concern.

“One thing that we’ve found is, you know, there are certain situations where only three seconds of a video are faked, and it might be a 20-minute video, and it might change the meaning of something,” Brown said. “But it’s only three seconds.”

Beyond the domestic effects, deepfakes are further complicating international issues.

“One of the things that we’re doing is we’re working directly with platforms to make sure that it’s integrated for all users,” Brown said. “And we’ve actually recently come out with a Twitter bot in response to Israel, Hamas.” 

Recent revelations that Adobe is selling AI-generated images depicting scenes of war, including explosions and destroyed homes in Gaza, further underscore the challenges. Adobe labeled the images to indicate they were generated with AI.

Experts, including Brown, anticipate that the prevalence of deepfakes will only increase, flooding social media platforms with more manipulated video and audio content.

SHANNON LONGWORTH: WE’RE LESS THAN A YEAR OUT FROM THE 2024 ELECTION.

THE TECH WORLD IS GEARING UP FOR WHAT, SOME SAY, IS ALREADY DISRUPTING THE DEMOCRATIC PROCESS.

ARTIFICIAL INTELLIGENCE.

GENERATIVE AI, SPECIFICALLY. TECHNOLOGY THAT CREATES FAKE, BUT CONVINCINGLY REALISTIC IMAGES, AUDIO, AND TEXT.

AT THE REUTERS NEXT CONFERENCE IN NEW YORK THIS WEEK, NYU PROFESSOR GARY MARCUS SAID AI’S THREAT TO DEMOCRACY IS THE BIGGEST RISK.

 

GARY MARCUS:

“THERE ARE A LOT OF ELECTIONS AROUND THE WORLD IN 2024, AND THE CHANCE THAT NONE OF THEM WILL BE SWUNG BY DEEP FAKES AND THINGS LIKE THAT IS ALMOST ZERO.”

 

LONGWORTH:

POLITICIANS WILL CONTINUE TO BE TARGETS.

META IS BANNING ADVERTISERS FROM USING ITS GENERATIVE AI FOR POLITICAL ADS ON FACEBOOK AND INSTAGRAM.

USING THIRD-PARTY AI SOFTWARE FOR POLITICAL, ELECTORAL, OR SOCIAL ADS WILL REQUIRE DISCLOSURE STARTING NEXT YEAR. FAILURE TO DO SO MAY MEAN THE AD IS REJECTED, AND REPEATED VIOLATIONS COULD INCUR PENALTIES.

DETECTING DEEP FAKES HAS HISTORICALLY BEEN IMPERFECT; HOWEVER, WE SPOKE WITH DEEPMEDIA, WHICH CLAIMS TO DO SO WITH 99% ACCURACY.

 

EMMA BROWN:

“WE WORK WITH TOP TECH COMPANIES, AS WELL AS THE DOD TO DO DEEP FAKE DETECTION. SO THERE ARE A COUPLE OF STRONG USERS.”

 

LONGWORTH:

EMMA BROWN IS DEEPMEDIA’S COO AND CO-FOUNDER.

 

BROWN:

“THE THING THAT MAKES OUR DEEP FAKE DETECTION HIGHLY ACCURATE, REALLY FAST AND EASY TO USE, IS THE FACT THAT WE BOTH DO GENERATION AND DETECTION, THESE ARE KIND OF TWO SIDES TO THE SAME COIN. SO TO BE ABLE TO IDENTIFY WHAT A DEEP FAKE IS REALLY, REALLY WELL, YOU HAVE TO BE ABLE TO MAKE REALLY GOOD DEEP FAKES.”

 

LONGWORTH:

BROWN SAYS SOMETHING TO KEEP AN EYE OUT FOR IS NOT JUST WHEN AN ENTIRE PIECE OF CONTENT IS FAKED, BUT WHEN ONLY A PART OF IT IS.

 

BROWN:

“ONE THING THAT WE’VE FOUND IS, YOU KNOW, THERE ARE CERTAIN SITUATIONS WHERE ONLY THREE SECONDS OF A VIDEO ARE FAKED, AND IT MIGHT BE A 20-MINUTE VIDEO, AND IT MIGHT CHANGE THE MEANING OF SOMETHING. BUT IT’S ONLY THREE SECONDS. SO FIRST OF ALL, VERY HARD FOR A HUMAN TO PICK UP ON EVEN IF THEY’RE A HIGHLY TRAINED ANALYST, BUT ALSO, REALLY, REALLY IMPORTANT.”

 

LONGWORTH:

AND, BEYOND DOMESTIC CONCERNS, DEEP FAKES ARE FURTHER COMPLICATING INTERNATIONAL ISSUES.

 

BROWN:

“ONE OF THE THINGS THAT WE’RE DOING IS WE’RE WORKING DIRECTLY WITH PLATFORMS TO MAKE SURE THAT IT’S INTEGRATED FOR ALL USERS. AND WE’VE ACTUALLY RECENTLY COME OUT WITH A TWITTER BOT IN RESPONSE TO ISRAEL, HAMAS.”

 

LONGWORTH: JUST THIS WEEK, IT CAME OUT THAT ADOBE IS SELLING FAKE IMAGES – SCENES OF WAR DEPICTING EXPLOSIONS AND DESTROYED HOMES IN GAZA. SOME LOOK MORE CONVINCING THAN OTHERS, BUT THEY’RE NOT REAL.

EVEN AUTHENTIC CONTENT FACES CHALLENGES AS PEOPLE ARE SUSPICIOUS, UNSURE OF WHAT TO TRUST.

BROWN–AND OTHER EXPERTS– SEE THIS ISSUE ONLY BECOMING MORE COMMON, WITH MORE VIDEO AND AUDIO FLOODING THE SOCIAL MEDIA PLATFORMS WE USE EVERY DAY.