Do You Forgive Past Mistakes of Animated Agents? A Study of Instances of Assistance by Animated Agents
Many studies on human–computer interaction have demonstrated that the visual appearance of an agent or robot significantly influences people's perceptions and behaviors. Several studies on agent/robot appearance have concluded that consistency between the expectations raised by an agent's or robot's appearance and its actual performance is an important factor in the continued use of these agents/robots, because users stop interacting with agents/robots whose actual behavior does not meet their predictions. However, previous studies have mainly focused on the consistency between an initial expectation and the performance of a single instance of a task; the influence of the order of successes and failures across multiple instances of a task has not been examined in detail. Therefore, in this study, we investigate order effects: how the timing of sufficient or insufficient results from animated agents affects user evaluations. This contributes to filling the gap in human–computer interaction research on tasks with multiple instances and to designing agents/robots that users continue to use for as long as possible rather than abandoning. We created a simulated retrieval website and conducted an experiment using retrieval assistant agents that showed both sufficient and insufficient results over multiple instances of retrieval tasks. The experimental results demonstrated a recency effect: users revised their evaluations of the animated agents significantly more on the basis of new information than on their initial evaluations. Investigating repeated instances of a task and the influence of successes and failures is important for designing intelligent agents that may produce incomplete results on intelligent tasks.
Furthermore, the results of this study will contribute to building strategies for designing the behavior of agents/robots that receive high or low advance evaluations based on their appearance, so as to prevent users from abandoning them.