Do we need new laws for AI-generated political ads?
By Straight Arrow News
It’s the Wild West when it comes to regulating AI-generated political advertising. As new technology explodes, many are questioning whether we need more oversight of ads made with artificial intelligence. Right now, campaign ads don’t have to disclose if they were created or manipulated by AI, and some Democratic lawmakers are hoping to change that.
Straight Arrow News contributor John Fortier urges caution. He believes any new law that regulates AI-generated ads would “risk sweeping up long-established and legitimate uses of modified images and video.”
Is artificial intelligence dangerous to our political culture? And what, if anything, should we do to regulate political ads that present a false picture of reality? With the increased public availability of artificial intelligence tools, every aspect of life is under examination.
In the political world, many worry that AI will contribute to our already fractured and polarized politics, spreading disinformation and stirring up resentments with damaging political narratives.
Some have pointed to a recent ad run by the Republican National Committee that employed AI to generate images to illustrate the dangers of a re-election of President Biden and Vice President Harris.
The ad asked, “What if the weakest president we’ve ever had were reelected?” and then sketched out potential crises in a Biden second term: a Chinese invasion of Taiwan, failures of U.S. banks, a surge in border crossings and out-of-control crime.
What was unique about this ad is that the images of these hypothetical crises were generated by AI. To its credit, the RNC included a printed disclosure in the ad that the video was “built entirely with AI imagery.”
Some lawmakers have cited this ad in proposing legislation that would require all uses of AI in political ads to be disclosed. They point to the ad to highlight the dangers of AI, but also praise its makers for the written disclosure that AI had been employed.
Why shouldn’t every ad that employs AI be required to include such a disclosure? While the dangers of manipulated video and the creation of false reality are concerns, there are four reasons why we should proceed with caution in seeking such regulation.
First, what is AI? And is it really AI that is the issue? While AI can be employed to create and manipulate images and video, the ability to create lifelike generated images existed long before our recent interest in AI. AI itself may have good or bad uses: it might help campaigns reach new audiences, manage their operations more effectively and optimize their advertising spending. A focus on regulating AI is far too broad an aim, and Congress would struggle to put forth a clear definition of what constitutes AI.
Second, technology changes quickly, and any law in this area would have a hard time keeping up; it would likely be outdated by the time the next election cycle came around.
Third, even if the focus is narrowed to AI-generated video, any law would risk sweeping up long-established and legitimate uses of modified images and video. Take, for example, the morphing of one person’s face into another’s. TIME magazine published a cover image of President Trump morphed into Vladimir Putin. And for many years, campaign ads have tried to tie a candidate to a less popular figure by morphing one person into another. And what about satire or cartoons?
Fourth, while regulating the spread of manipulated images by private individuals poses separate challenges, the current regulation of political advertising already provides good protection. The campaigns, parties and groups that run ads are subject to disclosure and disclaimer requirements: they must state in the ad who paid for it and disclose their campaign spending to various institutions.
The current system already polices problematic ads. These requirements allow for robust criticism of a campaign that uses misleading video or messages. Government wisely stays out of judging the truth or falsehood of ads, yet the disclosure requirements often lead campaigns to retract ads or face political backlash for their messages. While AI will only grow in significance, we should not overreact and blame it for the ills of our political culture; instead, we should be cautious in regulating AI and political advertising.