
Is it alive? How AI’s uncanny valley could threaten human interaction


The uncanny valley as a concept has been around for decades. But as artificial intelligence develops, technology is several steps closer to tricking human brains and manipulating emotions.

The term uncanny valley is used to describe the emotional response from humans when encountering robots that appear human-like. AI expert and philosopher Aleksandra Przegalińska said humans have a common biological response to this interaction: an eerie sensation.

We do see avatars that look almost exactly like humans, where that immediate response of your body is just acceptance… But then there’s suddenly a glitch.

Aleksandra Przegalińska, AI senior research associate, Harvard University

“In the era of deepfakes and also in the context of the fact that we are mostly interacting with the digital world, not necessarily with physical robots, this uncanny valley idea is very, very problematic,” Przegalińska said.

In the video above, she details how encounters with human lookalikes could make people afraid of actual human interaction.

An AI-generated image of two avatars that have human-like features.
Adobe Firefly / Straight Arrow News / AI-Generated Image

AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.

Below is an excerpt from that interview.

Simone Del Rosario: I was hoping that you could explain for me this concept of the uncanny valley. I’ve heard you talk on it before and I just thought it was a really fascinating look at where people should be designing AI versus where they should be steering away from.

Aleksandra Przegalińska: This is a concept that my team and I have been researching for the past couple of years. The research mainly focused on building robots, and on how not to build them.

The uncanny valley is the concept that if something resembles a human being, but not fully, then we are scared of it. Our deeply rooted biological response to something that looks like a human but is not one, and we know it, is to have this eerie sensation that this is not something we should be interacting with.

I’m not sure if you’re familiar with a robot called Sophia. It’s very popular on social media, and it gives you that sensation, that effect of the uncanny valley: it’s very confusing to figure out whether you’re really talking to something that is alive or not. Is it healthy or is it sick? What’s going on with it? Why are its expressions so strange? Why are the eyes rolling so slowly?

So it does resemble a human, but then again, it’s not a human. And that is interesting because now in the era of deepfakes and also in the context of the fact that we are mostly interacting with the digital world, not necessarily with physical robots, this uncanny valley idea is very, very problematic.

We do see avatars that look almost exactly like humans, where that immediate response of your body is just acceptance. You’re seeing something that looks like a human and it talks and it’s all good. But then there’s suddenly a glitch and that glitch is that moment when you realize that this may not be a human.

Then who knows? Maybe in the future, when there are more deepfakes, we will become very cautious and afraid of interactions with others, because it will be very hard to tell who it is that we’re dealing with.

