Users must have rights in Big Tech’s AI race
By Straight Arrow News
Big Tech companies across the board, including Meta, X, Amazon, Microsoft and Alphabet, are racing to draft policies that let them use user-generated content to train advanced generative AI models without user permission or compensation. The Federal Trade Commission is investigating where it can, but government regulation lags far behind the rapid pace of technological progress in the emerging AI industry.
Straight Arrow News contributor Adrienne Lawrence says that users should demand rights and the freedom to decide whether or not their own content will be used to develop AI programs that they might oppose.
Another AI deepfake has been weaponized to our detriment. A voting advocacy group is suing the political consultant behind an AI-generated robocall that went out back in January. The call impersonated Joe Biden and told New Hampshire voters not to participate in the state’s presidential primary. According to The Washington Post, this robocall is the first known major use of AI to interfere with a presidential election.
Now, who knows how many voters were duped by this call or by other forms of AI-generated deception? The reality is that artificial intelligence is getting better, and it has many benefits. That being said, AI also brings many detriments to society as a whole. So it makes sense that not everyone wants to be a part of this upcoming AI revolution.
For that reason, tech companies should need explicit authorization from users before they can sell any of our content to train AI models.
Indeed, Microsoft, Reddit, Amazon, Google’s parent company Alphabet and other companies are seeking to license, share or sell user-generated content to third parties for training AI models. Said another way, tech giants want to sell what we created on their platforms to make artificial intelligence more advanced.
You don’t have to be Sarah Connor to sense that something is wrong here. I never created content on social media with the intention or express knowledge of it being used to make the T-800 self-aware. Yet now our online content will be used to that hyperbolic end. And the end result will be products that advance capitalism, yes, but that will also wreak havoc. Meanwhile, corporate America will collect a check and keep it moving as it lays off thousands and thousands of workers. That does not sit well with me.

I realize that AI is the future and that it has many benefits. Generative AI has the power to create content and to synthesize ideas. Essentially, it can do the kind of knowledge work that millions of people now do behind computers, which is why Google, Duolingo, UPS and other major players are replacing white-collar workers with AI. According to an Oliver Wyman study, more than half of senior white-collar managers surveyed said they thought their jobs could be automated by generative AI, compared with some 43% of middle managers and 38% of first-line managers.

AI is taking jobs. It’s also taking away the democratization of knowledge. Before generative AI, people played a role in knowledge building: They would engage, share, collectively build and create, and then articulate that information. Replacing this process with machines will change human interactions on a larger level and curb the knowledge base of the community. For example, if Wikipedia is run entirely by AI, then AI decides what’s uplifted rather than members of the community, effectively determining what is centered, what becomes the focal point of our conversations and what enters the knowledge banks of humans.

Moreover, AI will ultimately hamper human innovation, because by replacing human knowledge, it won’t have new sources from which to learn. Remember, AI does not innovate; rather, it reproduces based on what it learns from humans. So what happens when innovation is necessary but there are no humans in that role, because maybe they’ve been laid off or replaced by AI? AI won’t know the answer, and it won’t have the knowledge bank from humans from which to learn. Why? Because there are no humans there to get the answers. It’s going to be a hot mess.

Every person who’s created content that is used to advance AI should have a say in the matter. Do you want to be a part of something that is likely going to cause society significant problems? Microsoft, Amazon and Alphabet shouldn’t make that decision for us. But they’re already making plans to do so. In late March, Reddit said that it was exploring data licensing opportunities where it believes doing so would not conflict with its values or the rights of its users. But I expect every company to say that; no tech giant is going to come out and admit that it’s looking to turn a buck on user-generated content in violation of user agreements. Really, though, these companies have plenty of incentive to do so. User agreements often bar class-action lawsuits, which take years to resolve if they are even viable. And while government fines can run in the billions, seldom is that ever the case. These data licensing opportunities can mean big bucks for tech giants, possibly making it all the more worthwhile to trample user rights in the process.
In fact, we wouldn’t even know about these data licensing efforts by tech companies but for corporate securities filings that caught the attention of the Federal Trade Commission. These federal regulators are not just ordering the disclosure of documents; they’re also asking to meet with these tech companies to learn more about their efforts to license user data to train AI models. The FTC is out here doing its job of protecting America’s consumers. It’s even proposing a rule that bans the use of AI tools to impersonate individuals, like the robocall made to New Hampshire voters. But the FTC cannot do it all. Even with the best of public servants, we cannot guarantee that our interests as users will be protected. What we can do, however, is use our collective voices and demand that we be afforded individual agency in deciding whether our content should be used to contribute to the future of AI. Because we deserve dignity. After all, we’re human beings, not computers or machines.