
OpenAI: Cyber actors exploiting ChatGPT to influence elections


OpenAI identified and disrupted more than 20 attempts to use its artificial intelligence models to generate fake content aimed at influencing elections worldwide, the company revealed in a report published Wednesday, Oct. 9. The report highlights how cyber actors, including state-linked entities, have exploited OpenAI’s tools, such as ChatGPT, to create AI-generated articles, social media posts and comments intended to manipulate public opinion.

The 54-page report details efforts to spread misinformation in elections across the United States, Rwanda, India and the European Union.

In one case, an Iranian operation in August used OpenAI’s models to generate long-form articles and comments related to the U.S. election.

OpenAI also took action in July to ban ChatGPT accounts in Rwanda that were involved in posting election-related comments on social media platform X.


Despite these efforts, OpenAI stated that none of the operations were able to gain viral traction or build lasting audiences. The company said it acted quickly to neutralize the attempts, often resolving the issue within 24 hours of detection.

The report comes as concerns grow over the potential use of AI-generated content to interfere in upcoming elections.

According to the U.S. Department of Homeland Security, foreign actors, including Russia, Iran and China, are expected to attempt to use AI to influence the U.S. presidential election.

OpenAI emphasized the need for greater awareness and vigilance as generative AI becomes more widely adopted. The report noted a significant increase in the creation of deepfakes and other AI-generated content.

There was a 900% rise in such material over the past year, according to data from Clarity, a machine learning firm.


[Craig Nigrelli]

Cyber actors are exploiting ChatGPT to influence elections worldwide, using AI-generated content to spread misinformation and sway public opinion. OpenAI has disrupted more than 20 operations in which its AI tools were used to create fake content aimed at influencing global elections, according to a report published by the company.

The 54-page document outlines how cyber actors, including state-linked entities, have increasingly turned to AI-generated content to mislead the public and manipulate election discourse.

The report highlights that these operations targeted elections in the U.S., Rwanda, India and the European Union, among others. OpenAI flagged deceptive activities involving AI-generated social media posts, fake accounts and website articles, which were part of broader attempts to influence the political landscape in these regions.

One incident involved an Iranian operation in August, which used OpenAI’s products to generate content about the U.S. election. In another case, an Israeli company used ChatGPT to generate social media posts about elections in India, which OpenAI managed to address in less than 24 hours.

The company also revealed that it had banned a number of accounts from Rwanda in July for generating election-related content for social media platform X.

Despite the rise in AI-driven election misinformation, OpenAI says none of the identified efforts gained viral traction or built lasting audiences. The company acted quickly, neutralizing attempts such as an August Iranian operation targeting the U.S. election. OpenAI says most interventions were effective within 24 hours, preventing significant impact.

The report comes ahead of the U.S. presidential election and at a time when more than 40 countries, representing over 4 billion people, are holding elections. With deepfakes and AI-generated content on the rise, lawmakers and officials are increasingly focused on the potential for AI tools to spread misinformation.

According to the U.S. Department of Homeland Security, there is growing concern about the role of foreign actors, particularly from Russia, Iran and China, in using AI to influence the November election.

For more of our Unbiased, Straight Facts reporting, download the Straight Arrow News app or visit us at SAN.com.