“Are you a good person?” In a sterile hallway, Ava gives Caleb a test. The questions start off innocuously.
What’s your favorite color?
What’s your earliest memory?
Caleb struggles to reveal the content of his character, looking askance and letting out an expletive. Ava asks again: “Are you a good person?”
Caleb’s answer doesn’t matter. On the other hand, whether Ava is a good person is, in a sense, the central question posed by the 2015 sci-fi thriller Ex Machina. In the movie, Ava is a humanoid robot built with advanced artificial intelligence; it is Caleb’s job to conduct a Turing test on Ava to see if her behavior is indistinguishable from a human’s.
To the viewer, though, something looks off about Ava: she cocks her head at too extreme an angle when asking questions, and her eyes sweep abruptly across her field of vision as she’s speaking. Director and screenwriter Alex Garland engineered these details by carefully combining Alicia Vikander’s performance with animation. He justified the decision in an interview with RogerEbert.com: “If Ava were entirely computerized, her movements likely would have had an elasticity to them that wouldn’t have been as convincing,” he said. “Alicia’s performance was meant to mimic the uncanny valley.”
The uncanny valley is a term for the unnerving feeling you get when observing something that is almost, but not quite, human, and for the sense of distrust that follows. For engineers, designers, and ethicists creating robots to aid tomorrow’s humans, escaping the uncanny valley has proved a treacherous climb. Yet it’s a crucial one if they want to create machines that humankind can trust. Backed by the long, iterative history of robotics research and design, these creators are at least not free-climbing out of the valley alone.
1.0 History
Western history and religion have long been fascinated with the barrier between what is human and what is only human-like, says Rob Wortham, a senior lecturer in the Department of Electronic and Electrical Engineering at the University of Bath. Take early Judaism’s conception of the Golem, for example—an anthropomorphic, animated being created out of inanimate material. Long before the dawn of electronics, Golem folklore described a manmade being with a mind of its own, built to protect but unpredictable.
This form of fascination isn’t always a positive for those who design humanoid robots for a living, in part because of the uncanny valley. Perhaps unusually for a phenomenon spanning entire civilizations’ experiences, the feeling of unease wasn’t given a name until the 20th century, when the Japanese roboticist Masahiro Mori described what would come to be known as the uncanny valley using a graph made up of five points. Arranged in order of increasing human likeness, Mori’s five examples are an industrial robot, a toy robot, a prosthetic hand, a lifelike puppet, and a healthy person. The vertical axis of the graph charts our affinity for these creations: Mori theorizes that we identify most with a healthy person, followed by the lifelike puppet, the toy robot, and the industrial robot. But what of the prosthetic hand? We display a negative affinity toward it, Mori says; in essence, it gives us the creeps, even though it’s arguably more “person-like” than a toy robot. It’s this gap between expectation and perception, and specifically the precipitous drop and rise in affinity as creations approach human facsimile, that we know as the uncanny valley.
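Mori’s graph is simple enough to sketch in a few lines of code. The Python snippet below plots one version of it; the coordinates are illustrative placeholders chosen to match Mori’s ordering and the dip at the prosthetic hand, not values he measured.

```python
# A rough sketch of Mori's 1970 graph. The numbers are invented stand-ins:
# only the ordering of the five examples and the dip at the prosthetic
# hand (the "valley" itself) reflect Mori's hypothesis.
import matplotlib.pyplot as plt

examples = ["industrial robot", "toy robot", "prosthetic hand",
            "lifelike puppet", "healthy person"]
human_likeness = [0.10, 0.35, 0.70, 0.85, 1.00]  # assumed x positions
affinity = [0.15, 0.45, -0.40, 0.60, 1.00]       # assumed y values

plt.plot(human_likeness, affinity, marker="o")
for x, y, label in zip(human_likeness, affinity, examples):
    plt.annotate(label, (x, y), textcoords="offset points", xytext=(6, 6))
plt.axhline(0, color="gray", linewidth=0.5)  # below this line lies the valley
plt.xlabel("human likeness")
plt.ylabel("affinity")
plt.title("The uncanny valley (after Mori, 1970)")
plt.show()
```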
Since Mori postulated his theory in 1970, computer scientists, neuroscientists, and psychologists have largely supported his hypothesis. More strikingly, they have measured the uncanny valley effect at work in moviegoers’ reactions to certain semi-realistic animated films, which have garnered negative reviews because of characters “who look so real that it’s creepy.”
It’s not surprising, then, that the uncanny valley effect influences a person’s willingness to trust a humanoid robot. Nor is this a problem safely confined within the walls of academia: more of these robots are being made now than ever before. The Association for Advancing Automation predicts that the market for humanoid robots will be worth $3.9 billion in 2023, an annual growth rate of over 50 percent. Importantly, these robots’ roles can be split into asocial functions, such as disaster response at power plants, and social ones, such as providing companionship and care to elderly patients.
There are technical reasons why the humanoid robot boom seems to be happening now: better, more sensitive materials with which to construct robotic arms and faces, coupled with breakthroughs in natural language processing; speech, voice, and text recognition; and increased computing power for real-time machine learning. But, just as importantly, according to UMass Lowell computer science professor Holly Yanco, there have been changes in humans’ perception of robots. She says that we have spent more time around them and are more comfortable with their increased presence in our everyday lives.
“Twenty years ago, people had very little experience with robots,” she says. Now, she adds, some people have experience with robotic vacuum cleaners and social robots like SoftBank’s Pepper, but most people’s experience with more sophisticated social robots is limited to watching videos of them online or on TV. So, even though we may be ready to welcome robots into our routines, whether we ultimately accept them depends on these first few years of introduction.
Untangling the uncanny valley effect shows how poor design can lead to mistrust, but understanding human psychology reveals a dangerous flip side that robot designers must also balance: overtrust.
1.1 The flip side of trust
“Well, I think the robot is broken again. Please go into that room, and follow the instructions. I’m sorry about that.”
Participants in a 2015 Georgia Tech study heard researchers say this as they passed an emergency response robot on their way into an examination room. Inside, a sign instructed them to complete a survey on their perception of robots, close the door, and read an article from IEEE Spectrum magazine. Unbeknownst to the participants, closing the door started an automatic three-minute timer. When it ran out, artificial smoke filled the corridor outside the room, two smoke detectors activated, and participants heard a buzzing noise and the words “Evacuate! Smoke! Evacuate!”
Researchers had simulated an emergency, and participants were tasked with finding the exit of the building. As they stepped back out into the corridor, the robot sprang to life and pointed one way toward an exit. Despite what the participants had been told about the robot’s functionality, they let it direct them, even when it led them over couches and through a different door than the one through which they had entered minutes before.
“The results above are promising from one perspective: clearly the robot is considered to be trustworthy in an emergency,” wrote Paul Robinette, the author of the study. “Yet, it is concerning that participants are so willing to follow a robot in a potentially dangerous situation even when it has recently made mistakes.”
Building trust is a delicate dance. On the one hand, designers might think to do everything within their power to create the most trustworthy-looking robot. But trust needs to run more than skin-deep. The solution, according to Yanco and Wortham, is transparency.
1.2 Building transparency
Mike Honeck, a principal director of design and innovation at Accenture Interactive, thinks deeply about transparent design. Previously, Honeck worked as an Imagineer at Disney, where he led the design of a self-piloting droid tested in Disneyland’s Tomorrowland in 2017. The diminutive droid, named J4-K3, or Jake for short, had to capture the hearts of visitors to the park while blending in enough to add to the feel of futuristic Tomorrowland.
Ensuring Jake deserved the trust of families started at the basic level of safety, Honeck says. His team designed the robot to be able to brake quickly and observe low-to-the-ground activity from any direction. They also avoided sharp edges in the robot’s design, opting for a round and sleek shape.
It’s not surprising that the Imagineers needed the 2017 real-world test to refine Jake’s design, Yanco says.
“It’s not the same as interacting with the real world procedurally, if you’re interacting with people,” she says. “People are unpredictable, particularly children.”
On top of basic concerns, Honeck designed transparency through one of the oldest tricks in Disney’s books: the 12 principles of animation. In particular, he says the principle of anticipation provides a framework for a robot to use “body language” to help observers anticipate its future motion.
“If you want to turn and start walking to your right, if you were going to animate a scene where that’s what the action was, you’d start by turning their eyeballs, and then their head would start to follow that, and then their neck and shoulders, and then finally the mass of their body and they would take off and actually start walking,” Honeck says. “We can do that same thing in robotics.”
Even if turning the head of a robot before it moves in a certain direction serves no technical purpose, doing so could put people at ease and allow them to preempt its next moves, Honeck adds.
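The anticipation principle Honeck describes translates naturally into code: stagger the motions so the gaze leads the body. The sketch below is a minimal illustration of that cascade; the `Robot` class and its motion methods are hypothetical stand-ins, since the piece doesn’t describe Jake’s actual control API.

```python
import time

class Robot:
    """Hypothetical stand-in for a real robot's motion API."""
    def look_at(self, deg): print(f"eyes  -> {deg} deg")
    def turn_head(self, deg): print(f"head  -> {deg} deg")
    def rotate_torso(self, deg): print(f"torso -> {deg} deg")
    def drive_toward(self, deg): print(f"base  -> {deg} deg")

def turn_with_anticipation(robot: Robot, heading_deg: float) -> None:
    """Turn toward heading_deg, leading with the gaze to telegraph intent.

    The eye and head motions serve no mechanical purpose; they exist so
    bystanders can anticipate where the robot is about to go.
    """
    robot.look_at(heading_deg)       # 1. eyes move first (purely a social cue)
    time.sleep(0.3)                  # the stagger gives observers time to react
    robot.turn_head(heading_deg)     # 2. head follows the gaze
    time.sleep(0.4)
    robot.rotate_torso(heading_deg)  # 3. shoulders and torso commit to the turn
    time.sleep(0.5)
    robot.drive_toward(heading_deg)  # 4. only now does the base start moving

turn_with_anticipation(Robot(), 90.0)
```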
Other designers have devised different ways to signal intent in the robots they create. Wortham designed a “muttering” robot that speaks its actions out loud, while one of Yanco’s graduate students strapped a projector to a robot’s head so that participants, upon returning to a room, could see what the robot had done in their absence.
1.3 Looking forward
In April, Disney Imagineers unveiled Jake’s successor, a humanoid robot known as Project Kiwi. In one staging, it was dressed as Groot, but the robot is a platform that could take on the appearance of any popular franchise character. Matthew Panzarino, the editor-in-chief of TechCrunch, described the platform as “the Holy Grail of themed entertainment.”
Project Kiwi will eventually be a free-roaming robot, but the demonstration shown to Panzarino was piloted behind the scenes by a human operator. This is what is known as a Wizard of Oz methodology, so named because designers use a hidden human director to simulate abilities a robot cannot yet perform on its own. Yanco says such methods are used in lab settings to gauge participants’ reactions to the technology of tomorrow.
Meanwhile, the best practices of transparent design are on track to be officially codified. Wortham is part of an Institute of Electrical and Electronics Engineers working group formed to standardize a set of ethically aligned design practices. He says that after four years, the group is “quite close” to publicly releasing these standards.
Working on trustworthy design reminds some, like Honeck, of the deeply human, and deeply beautiful, nature of robots. Trust, though never a given, reminds him why he was drawn to robotics in the first place.
“I think there’s something deeply human and compelling about these things that have a sense of personality and are different from you, but still you can find something relatable in them,” Honeck says. “A robot is a self-made mirror that humans are holding up to themselves. I think it’s poetic, and it’s valuable—it teaches us something about the human psyche.”