
ChatGPT launched an AI revolution. Here’s where we stand nearly 1 year on.


Artificial intelligence hit the mainstream like a firestorm following the release of OpenAI’s ChatGPT. Technology companies scrambled to join the AI arms race, led by Microsoft’s $10 billion investment in OpenAI. At the same time, Capitol Hill sprang into action, holding hearing after hearing over safety and regulation.

The overnight sensation of generative AI is not likely to burn out as quickly as it came on. The endless possibilities are expected to transform technology, the workforce and society at large. At this pivotal juncture, humans will shape where artificial intelligence goes from here, but many fear the direction it will take.

AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness.

Przegalińska is a senior research associate at Harvard University analyzing AI, robots and the future of work. She has a doctorate in the philosophy of artificial intelligence from the University of Warsaw and is an associate professor at Kozminski University.


The AI Revolution

Interest in artificial intelligence exploded when ChatGPT first hit the masses in November 2022. While AI has technically been around for decades, the sheer accessibility of directly interacting with a chatbot led to a surge in chatter, as evidenced by Google search trend data.


But it wasn’t just talk. Companies were quick to put money on the table. Nothing comes close to Microsoft’s $10 billion OpenAI investment, but tech companies, health care firms and venture capitalists were quick to ink their own deals in the first quarter of 2023. Microsoft’s move also triggered an AI search-engine race, pushing Google to release Bard, its experimental AI-powered search tool.


The Fear Factor

As humans reckon with the future of artificial intelligence capabilities, Aleksandra Przegalińska, who holds a doctorate in the philosophy of AI, says the most prevalent emotion is fear.

It is mostly a story that is infused with fear, with a sense of threat; where artificial intelligence can reach a level where it figures out that it’s also as smart as we are, perhaps even smarter, and then becomes our enemy. And I think it’s in many ways a story about our history.

Aleksandra Przegalińska, AI expert

Przegalińska said many factors play into this fear, from movies like “The Terminator” to fear spread by AI developers themselves.

This past spring, AI leaders and public figures attached their names to the following statement. Key signatories include OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google DeepMind CEO Demis Hassabis and Bill Gates.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety

“Sam Altman is obviously telling the Congress that we should all be scared but then again, he’s incubating GPT-5 as we speak,” Przegalińska said. “This to me seems a bit strange. Either you say, ‘Okay, there is a chance that this technology will be misused and this is the way I would like to address these concerns,’ or you’re saying, ‘Well, it’s ultimately the worst thing that can happen to humanity and I just simply stop building it at all.'”

I think maybe he has some regrets being left with Twitter instead of surfing this big AI wave.

Aleksandra Przegalińska on Elon Musk advocating for an AI ‘pause,’ citing risks to society. Musk was an early investor in OpenAI.

The history of civilization may explain the innate fear of AI. Dive into the philosophy of it here.


Replaced by AI?

Perhaps the biggest fear of AI is the possibility that it could replace so many livelihoods. In March, investment bank Goldman Sachs predicted that AI could automate the equivalent of 300 million full-time jobs across the U.S. and Europe.

Przegalińska, whose research at Harvard University focuses on AI and the future of work, says developers should focus on how humans can collaborate with AI to increase productivity rather than on replacing humans altogether.

Many things can go wrong if you decide to choose that pathway of full automation.

Aleksandra Przegalińska, AI expert

“But our jobs will change and some jobs will probably disappear because of artificial intelligence,” Przegalińska said. “And I do think that politicians have to look at that as well.”

In May 2023, AI was responsible for 3,900 job cuts in the U.S., according to data from Challenger, Gray & Christmas, Inc.

Is the future a work-optional utopia where AI does all the work? Learn more.


Regulating AI

When it comes to regulating AI, the U.S. is not the one laying the global groundwork. This summer, the European Union passed a draft law known as the AI Act, legislation years in the making. But it's just a start.

“I do regret a bit that this regulation happened this late,” Przegalińska said. “Many people from the AI field have been calling for regulation before ChatGPT and way before ChatGPT. We knew already that there would be some problems because some of these systems are just not explainable. They’re like black boxes; they’re very difficult to understand and yet we use them.”

Meanwhile, lawmakers on Capitol Hill have held several hearings about risks posed by artificial intelligence and ways to regulate its use. However, American efforts are considered to be in the early stages, and lawmakers have been criticized for not understanding the technology they aim to regulate, as during past Big Tech hearings.

“There was a bit of a mismatch in terms of digital competencies,” Przegalińska said.

I do hope that this time around, the politicians will come prepared, that they will be better prepared for these types of discussions.

Aleksandra Przegalińska, AI expert

How should AI be regulated to combat deepfakes and bad actors? Click here for more.


The Uncanny Valley

How easy is it to tell what is real and what is artificial? AI today has some serious quirks, like generating eight or nine fingers on one hand. But as technology advances, it’ll get more and more difficult to separate fact from fiction.

I have my own deepfake, and it’s so good that for me, it’s even sometimes hard to figure out whether it’s her speaking or myself. Really, that’s so uncanny.

Aleksandra Przegalińska, AI expert

In real life and in movies, roboticists have pursued building robots that look and act human, coming close to crossing the uncanny valley.

“The uncanny valley is this concept that tells us that if something resembles a human being, but not fully, then we are scared of it,” Przegalińska said. “So our probably very deeply rooted biological response to something that looks like a human and is not a human, and we know that, is to just have this eerie sensation that this is not something we should be interacting with.”

What are the psychological effects of crossing into the uncanny valley? Click here to watch.

Full interview time stamps:

0:00-2:22 Introduction
2:23-5:00 My Unconventional Path To AI Research
5:01-9:42 How The Terminator, Media Drive Our AI Fears
9:43-13:01 Sam Altman, AI Developers Spreading Fear
13:02-14:00 Elon Musk’s Big Regret?
14:01-18:55 How ChatGPT Changed Everything
18:56-25:01 Do Politicians Know Enough About AI To Regulate?
25:02-31:48 The Dangers Of The Uncanny Valley, Deepfakes
31:49-39:27 Will AI Cause Massive Unemployment?
39:28-43:49 Answering Most-Searched Questions About AI
