Do we need new laws for AI-generated political ads?
By Straight Arrow News
It’s the Wild West when it comes to regulating AI-generated political advertising. As new technology explodes, many are questioning whether we need more oversight of ads made with artificial intelligence. Right now, campaign ads don’t have to disclose if they were created or manipulated by AI, and some Democratic lawmakers are hoping to change that.
Straight Arrow News contributor John Fortier urges caution. He believes any new law that regulates AI-generated ads would “risk sweeping up long-established and legitimate uses of modified images and video.”
Is artificial intelligence dangerous to our political culture? And what, if anything, should we do to regulate political ads that present a false picture of reality? With the increased public availability of artificial intelligence tools, every aspect of life is under examination.
In the political world, many worry that AI will contribute to our already fractured and polarized politics, spreading disinformation and stirring up resentments with damaging political narratives.
Some have pointed to a recent ad run by the Republican National Committee that employed AI-generated images to illustrate the purported dangers of re-electing President Biden and Vice President Harris.
The ad asked the question, “What if the weakest president we’ve ever had were reelected?” and then proceeded to sketch out potential crises in a Biden second term: a Chinese invasion of Taiwan, failure of U.S. banks, accelerated border crossings and out-of-control crime.
What was unique about this ad is that the images of these hypothetical crises were generated by AI. And to its credit, the RNC included a printed disclosure in the ad stating that the video was “built entirely with AI imagery.”
This ad has been cited by some lawmakers who have proposed legislation that would require that all use of AI be disclosed in political ads. The lawmakers cite the ad to highlight the dangers of AI, but also praise the ad makers for their written disclosure that AI had been employed.
Why shouldn’t every ad that employs AI be required to include such a disclosure? While the dangers of manipulated video and the creation of false reality are concerns, there are four reasons why we should proceed with caution in seeking such regulation.
First, what is AI? And is it really AI that is the issue? While AI can be employed to create and manipulate images and video, the ability to create lifelike generated images existed long before our recent interest in AI. AI itself may have good or bad uses. AI might help campaigns reach new audiences, manage their operations more effectively and optimize advertising spending. But a focus on regulating AI is far too broad an aim, and Congress would struggle to put forth a clear definition of what constitutes AI.
Second, technology changes quickly, and any law in this area would have a hard time keeping up; it would likely be outdated by the time the next election cycle came around.
Third, even if the focus is on AI generation of video, any law would risk sweeping up long-established and legitimate uses of modified images and video. Take, for example, the idea of morphing the face of one person into another. TIME magazine published an image of President Trump morphed into Vladimir Putin on its cover. And for many years, campaign ads have tried to tie a candidate to another, less popular figure with video changing one person into another. And what about satire or cartoons?
Fourth, while there are separate challenges in regulating private individuals who spread manipulated images, the current regulation we have on political advertising already provides good protection. Currently, the campaigns, parties and groups that run ads are subject to disclosure and disclaimer regulations: they must state in the ad who paid for it and disclose campaign spending to various institutions.
The current system already polices problematic ads. These requirements allow for robust criticism of campaigns that use misleading video or messages. Government wisely stays out of judging the truth or falsehood of ads, but the disclosure requirements often lead to campaigns retracting ads or facing political backlash for their messages. While AI will only grow in its significance, we should not overreact or blame it for the ills of our political culture, and we should be cautious in regulating AI and political advertising.