Mind Perception in Humanoid Agents Has Negative Effects on Cognitive Processing
When interacting with other entities, we make inferences about their internal states using our own minds as models (i.e., mentalizing). This process relies on the rapid, automatic perception of a mind. In two experiments, we investigated whether these automatic processes of mind perception are impaired when interacting with agents that are not easily classified as human or robot. We hypothesized that an agent falling on the category boundary between human and non-human (i.e., a humanoid) would be difficult to categorize and that this ambiguity would result in increased cognitive load. In Experiment 1, participants rated agents with varying degrees of humanness in terms of their ability to have internal states. The humanoid agent was perceived as categorically ambiguous, as evidenced by intermediate ratings of internal states. In Experiment 2, participants categorized each agent as human or non-human in a forced-choice task. The humanoid agent produced less consensus and longer reaction times, indicating that its ambiguous mind status creates a cognitive conflict. These findings suggest that cognitive processing is negatively impacted by the presence of agents that cannot be easily classified as human or non-human.