Robots and psychology
Mapping the uncanny valley
Why androids are scary
ARTIFICIALLY created beings, whether they be drawn or sculpted, are warmly accepted by viewers when they are distinctively inhuman. As their appearances are made more real, however, acceptance turns to discomfort until the point where the similarity is almost perfect, when comfort returns. This effect, called "the uncanny valley" because of the dip in acceptance between clearly inhuman and clearly human forms, is well known, particularly to animators, but why it happens is a mystery. Some suggest it is all about outward appearance, but a study just published in Cognition by Kurt Gray at the University of North Carolina and Daniel Wegner at Harvard argues that there can be something else involved as well: the apparent presence of a mind where it ought not to be.
According to some philosophers the mind is made up of two parts, agency (the capacity to plan and do things) and experience (the capacity to feel and sense things). Both set people apart from robots, but Dr Gray and Dr Wegner speculated that experience in particular was playing a crucial role in generating the uncanny-valley effect. They theorised that adding human-like eyes and facial expressions to robots conveys emotion where viewers do not expect emotion to be present. The resulting clash of expectations, they thought, might be where the unease was coming from.
To test this idea, the researchers presented 45 participants recruited from subway stations and campus dining halls in Massachusetts with a questionnaire about the "Delta-Cray supercomputer". A third were told this machine was "like a normal computer but much more powerful". Another third heard it was capable of experience, by being told it could feel "hunger, fear and other emotions". The remainder were told it was capable of "self-control and the capacity to plan ahead", thus suggesting it had agency. Participants were asked to rate how unnerved they were by the supercomputer on a scale where one was "not at all" and five was "extremely".
Dr Gray and Dr Wegner found that those presented with the idea of a supercomputer that was much more powerful than other computers, or was capable of planning ahead, were not much unnerved. They gave it scores of 1.3 and 1.4 respectively. By contrast, those presented with the idea of a computer capable of experiencing emotions gave the machine an average of 3.4. These findings are consistent with the researchers' hypothesis. There seems to be something about finding emotion in a place where it is not expected that upsets people. This led Dr Gray and Dr Wegner to wonder if the reverse, discovering a lack of experience in a place where it was expected, might prove just as upsetting.
To explore this, they presented a further 44 volunteers, recruited in the same manner as those in the earlier experiment, with a picture of a man. A third were told that he was normal; a third that he was unable to plan; and a third that he was unable to feel pain, pleasure or fear. As in the first experiment, participants rated how unnerved they were by the man on a five-point scale.
Those who were told the man was normal, or was incapable of planning, gave scores that averaged 1.8 and 1.9 respectively. Those told he could not feel pain, pleasure or fear were much more unnerved. They gave an average score of 3.0.
Dr Gray and Dr Wegner believe their findings suggest that a big part of the uncanny-valley effect stems from expectations not being met. Robots are not expected to have feelings, and when such feelings are found, something seems wrong. Humans, by contrast, are expected to have feelings, and when such feelings are not found, the effect is equally unnerving. Their conclusions will perhaps give pause to those who see the ultimate robot as something that physically resembles a human being.