
Why we fear AI, from a PhD in philosophy of artificial intelligence


Conversations around artificial intelligence are often filled with fear and threat. Much of it can be traced back to movies, news stories, and even comments by those developing the technology, according to a Harvard senior research associate with a Ph.D. in the philosophy of artificial intelligence.

“Instead of focusing on things that are to be solved and some challenges ahead, we are just clearly falling into that Terminator narrative,” Aleksandra Przegalińska said.

Przegalińska said experts in the field are among those most afraid of the technology. Many have signed on to statements warning of the risks associated with AI but continue developing it.

AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.

Below is an excerpt from that interview.

Simone Del Rosario: What are some of the emotions we struggle with when we’re reckoning with ongoing AI development?

Aleksandra Przegalińska: I think the most prevalent emotion is probably fear. When you think about it, the way the story of artificial intelligence is being told to us by pop culture, by media, by movies that we like, like ‘The Terminator,’ it is mostly really a story that is infused with fear, with a sense of threat, where artificial intelligence can reach a level where it figures out that it’s also as smart as we are, or perhaps even smarter, and then becomes our enemy.

And I think it’s in many ways a story about our history, how we’ve struggled, and how there were so many conflicts and revolutions as we were moving forward as a civilization. And I do think that we put many of these fears into AI because this is something we know.

On the other hand, we are absolutely intrigued, right? It’s an intriguing field. We don’t have any other technology that is so close to us that can also communicate with us, have a perception of reality, see something, hear something, respond, make inferences, reason. So I think in that way, we are challenged by it, but we are also very interested and intrigued by it and how it’s going to evolve in the future is a very intriguing question here.

Brent Jabbour: I have an interesting question about that, too, because you talk about the fear and the concerns about Terminator and Skynet. Do you think we, as a society, do ourselves a disservice because every time something interesting in AI comes up, we immediately go to the bad rather than the possible good?

Aleksandra Przegalińska: Well, yes, I absolutely agree with that. So I’m definitely on the, I hope, rational side here. So very often, I would just say, ‘Hey, let’s not panic. It’s just technology. AI is a statistical model. It’s very good at what it does, and it can be very helpful to us, but nonetheless, it’s just a tool.’ But there are many other experts also, you know, people who are so prominent in the field who are clearly afraid of this technology. The current discourse around artificial intelligence, including generative AI, something we will probably talk about, is absolutely full of panic, which is unnecessary and perhaps it is a bit of a disservice because instead of focusing on things that are to be solved and some challenges ahead, we are just clearly falling into that Terminator narrative immediately, right. And that does not help us in rational thinking and planning, strategizing around this technology. So that I think is a problem.
