Do we need new laws for AI-generated political ads?
By Straight Arrow News
It’s the Wild West when it comes to regulating AI-generated political advertising. As new technology explodes, many are questioning whether we need more oversight of ads made with artificial intelligence. Right now, campaign ads don’t have to disclose if they were created or manipulated by AI, and some Democratic lawmakers are hoping to change that.
Straight Arrow News contributor John Fortier urges caution. He believes any new law that regulates AI-generated ads would “risk sweeping up long-established and legitimate uses of modified images and video.”
Is artificial intelligence dangerous to our political culture? And what, if anything, should we do to regulate political ads that present a false picture of reality? With the increased public availability of artificial intelligence tools, every aspect of life is under examination.
In the political world, many worry that AI will contribute to our already fractured and polarized politics, spreading disinformation and stirring up resentments with damaging political narratives.
Some have pointed to a recent ad run by the Republican National Committee that used AI-generated images to illustrate the dangers of re-electing President Biden and Vice President Harris.
The ad asked, “What if the weakest president we’ve ever had were reelected?” and then sketched out potential crises in a Biden second term: a Chinese invasion of Taiwan, the failure of U.S. banks, a surge in border crossings and out-of-control crime.
What was unique about this ad is that the images of these hypothetical crises were generated by AI. And to its credit, the RNC included a printed disclosure in the ad stating that the video was “built entirely with AI imagery.”
Some lawmakers have cited this ad in proposing legislation that would require all use of AI in political ads to be disclosed. They point to the ad to highlight the dangers of AI, but they also praise the ad’s makers for their written disclosure that AI had been employed.
Why shouldn’t every ad that employs AI be required to include such a disclosure? While the dangers of manipulated video and the creation of false reality are concerns, there are four reasons why we should proceed with caution in seeking such regulation.
First, what is AI? And is it really AI that is the issue? While AI can be employed to create and manipulate images and video, the ability to create lifelike generated images existed long before our recent interest in AI. AI itself may have good or bad uses: it might help campaigns reach new audiences, manage their operations more effectively or optimize their advertising spending. But a focus on regulating AI is far too broad an aim, and Congress would struggle to put forth a clear definition of what constitutes AI.
Second, technology changes quickly, and any law in this area would have a hard time keeping up; it would likely be outdated by the time the next election cycle came around.
Third, even if the focus is on AI-generated video, any law would risk sweeping up long-established and legitimate uses of modified images and video. Take, for example, the idea of morphing the face of one person into another. TIME magazine published an image of President Trump morphed into Vladimir Putin on its cover. And for many years, campaign ads have tried to tie a candidate to another, less popular figure with video changing one person into another. And what about satire or cartoons?
Fourth, while preventing private individuals from spreading manipulated images poses its own challenges, the current regulation of political advertising already provides good protection. Currently, the campaigns, parties and groups that run ads are subject to disclosure and disclaimer regulations under which they must state in the ad who paid for it and disclose their campaign spending to various institutions.
The current system already polices problematic ads. These requirements allow for robust criticism of a campaign if it uses misleading video or messages. Government wisely stays out of judging the truth or falsehood of ads, but the disclosure requirements often lead to campaigns retracting ads or facing political backlash for their messages. While AI will only grow in significance, we should not overreact or blame it for the ills of our political culture, and we should be cautious in regulating AI and political advertising.