Uncanny Valley, or: Are Robots the New Humans?

In her column about the blessings and perils of robot technology this week on Netopia, Birgit Hütten talks about human-like robots. That brings to mind the idea of the “uncanny valley”, which also appears in digital simulation and artistic experiments. It is the phenomenon that a human-like artefact can fool the eye but not the mind. The moment when the eye believes it is looking at a human while the mind senses that something is wrong is the uncanny moment: standing at the edge of the “uncanny valley”, the viewer gets a bad feeling, almost like a bodily reaction. For this reason, robot design moves away from human-like appearances, and CGI artists understand that photo-realistic visualisation cannot make the leap across the uncanny valley.

Some video game theorists compare this with the moment in art history, circa 1850, when photo-realism was (almost) achieved and, rather than further perfection, new styles developed: impressionism, surrealism, cubism, and so on. Of course, the invention of photography must also have influenced that process. In a similar way, video game design is going in novel directions thanks to the uncanny valley: the spherical Mii characters of Nintendo’s Wii, the blocks of Minecraft, the retro-design flavour of many app games.

At the same time, the uncanny valley has an impact on art (in the traditional meaning of the word), because the same reflex that robot and game designers want to avoid, artists can explore. Tove Kjellmark’s experiments with plush toys minus the fur are one expression of the uncanny: the moment our mind realises that the moving mass of objects we see is actually a swarm of skinned toy pets, that is the uncanny moment.

Murray Shanahan is professor of Cognitive Robotics at Imperial College London. He has taken part in two Netopia panels (in Brussels and Paris) and was interviewed for the Netopia report on digital ethics. I spoke to him about artificial intelligence and self-awareness: you know, the famous moment in The Terminator when the computer network “Skynet” becomes self-aware. Skynet then starts a war on humans; in fact, this is a common theme in sci-fi: sentient machines killing people to protect them from themselves (see also Asimov’s I, Robot, for example). Extrapolations of Moore’s law predict that by circa 2045 computers will have processing power equal to the human brain, and if development continues some decades beyond that, super-AIs may have the power of all the brains on the planet. This would spell a disastrous future for mankind if the sci-fi dystopias are to be believed (and the predictions are right), so I asked Shanahan about it.

His answer is that nothing suggests that self-awareness comes as a consequence of increased processing power. There is no critical mass of transistors at which the chip wakes up to its own existence. Rather, self-awareness is a consequence of other factors, such as having senses and limbs that can interact with the outside world; perhaps relationships with others are also a prerequisite. So no, Skynet will not become self-aware, at least not if Professor Shanahan is right. But what if there were a technology with senses and limbs? Would that qualify? Sounds to me like the robots Birgit Hütten writes about. That would be a whole new level of uncanny.