Cognitive load increases anthropomorphism of humanoid robot. The automatic path of anthropomorphism.

2021 ◽  
Author(s):  
Nicolas Spatola ◽  
Thierry Chaminade

Humanoid robots are predicted to be increasingly present in the everyday life of millions of people worldwide. Humans make sense of these artificial agents’ actions mainly through the attribution of human characteristics, a process called anthropomorphism. However, despite a large number of studies, how the representation of artificial agents is constructed remains an open question. Here, we aim to integrate the process of anthropomorphism into cognitive control theory, which postulates that we adapt resource management for information processing according to the current situation. In three experiments, we manipulated participants’ cognitive load while they were being observed by a humanoid robot, to investigate how load impacts the online adaptation of the mental representation of the robot. The first two experiments indicated an online control of demanding resources that allowed a switch from an intentional to a physical representation, thereby inhibiting anthropomorphic, i.e. social, inferences. The third experiment investigated how the goals of the observing robot (“what” versus “why” is the robot observing) influence the effect of cognitive load, showing that an explicit focus on the robot’s intentionality automatically biases cognitive processes towards anthropomorphism. Together, these results yield insights into how we mentally represent interacting robots when cognitive control theory and robots’ anthropomorphism are considered jointly.

2013 ◽  
Vol 14 (3) ◽  
pp. 329-350 ◽  
Author(s):  
Alessandra Sciutti ◽  
Ambra Bisio ◽  
Francesco Nori ◽  
Giorgio Metta ◽  
Luciano Fadiga ◽  
...  

Understanding the goals of others is fundamental for any kind of interpersonal interaction and collaboration. From a neurocognitive perspective, intention understanding has been proposed to depend on an involvement of the observer’s motor system in the prediction of the observed actions (Nyström et al. 2011; Rizzolatti & Sinigaglia 2010; Southgate et al. 2009). An open question is whether a similar understanding of the goal, mediated by motor resonance, can occur not only between humans, but also with humanoid robots. In this study we investigated whether goal-oriented robotic actions can induce motor resonance by measuring the appearance of anticipatory gaze shifts to the goal during action observation. Our results indicate a similar implicit processing of humans’ and robots’ actions and suggest the use of anticipatory gaze behaviour as a tool for the evaluation of human-robot interactions.
Keywords: Humanoid robot; motor resonance; anticipation; proactive gaze; action understanding


2008 ◽  
Vol 5 (4) ◽  
pp. 235-241 ◽  
Author(s):  
Rajesh Elara Mohan ◽  
Carlos Antonio Acosta Calderon ◽  
Changjiu Zhou ◽  
Pik Kong Yue

In the field of human-computer interaction, the Natural Goals, Operators, Methods, and Selection rules Language (NGOMSL) model is one of the most popular methods for modelling knowledge and cognitive processes for rapid usability evaluation. The NGOMSL model is a description of the knowledge that a user must possess to operate the system, represented as elementary actions for effective usability evaluations. In the last few years, mobile robots have been exhibiting a stronger presence in commercial markets, yet very little work has been done with NGOMSL modelling for usability evaluations in the human-robot interaction discipline. This paper focuses on extending the NGOMSL model for usability evaluation of human-humanoid robot interaction in the soccer robotics domain. The NGOMSL-modelled human-humanoid interaction design of Robo-Erectus Junior was evaluated, and the results of the experiments showed that the interaction design was able to find faults in an average time of 23.84 s. Moreover, the interaction design was able to detect the fault within 60 s in 100% of the cases. The evaluated interaction design was adopted by our Robo-Erectus Junior humanoid robots in the RoboCup 2007 humanoid soccer league.


2021 ◽  
Vol 8 ◽  
Author(s):  
Serena Marchesi ◽  
Francesco Bossi ◽  
Davide Ghiglino ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency of adopting the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants’ pupil dilation during the completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.



2018 ◽  
Author(s):  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Francesca Ciardo ◽  
Jairo Pérez-Osorio ◽  
Ebru Baykara ◽  
...  

In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others’ behaviour with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple’s Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether adopting the intentional stance can also occur with respect to artificial agents. We propose a new tool to explore whether people adopt the intentional stance towards an artificial agent (humanoid robot). The tool consists of a questionnaire that probes participants’ stance by requiring them to choose between a mentalistic and a mechanistic explanation of a behaviour of the robot iCub depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased towards the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance towards artificial agents, at least in some contexts.


2021 ◽  
Author(s):  
Serena Marchesi ◽  
Davide De Tommaso ◽  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

Humans interpret and predict others’ behaviors by ascribing intentions or beliefs to them, or in other words, by adopting the intentional stance. Since artificial agents are increasingly populating our daily environments, the question arises whether (and under which conditions) humans would apply this “human model” to understand the behaviors of these new social agents. Thus, in a series of three experiments, we tested whether embedding humans in a social interaction with a humanoid robot displaying either human-like or machine-like behavior would modulate their initial bias towards adopting the intentional stance. Results showed that humans are indeed more prone to adopt the intentional stance after having interacted with a more socially available and human-like robot, while no modulation of the adoption of the intentional stance emerged towards a mechanistic robot. We conclude that short experiences with humanoid robots, presumably inducing a “like-me” impression and social bonding, increase the likelihood of adopting the intentional stance.


Author(s):  
Giorgio Metta

This chapter outlines a number of research lines that, starting from the observation of nature, attempt to mimic human behavior in humanoid robots. Humanoid robotics is one of the most exciting proving grounds for the development of biologically inspired hardware and software—machines that try to recreate billions of years of evolution with some of the abilities and characteristics of living beings. Humanoids could be especially useful for their ability to “live” in human-populated environments, occupying the same physical space as people and using tools that have been designed for people. Natural human–robot interaction is also an important facet of humanoid research. Finally, learning and adapting from experience, the hallmark of human intelligence, may require some approximation to the human body in order to attain similar capacities to humans. This chapter focuses particularly on compliant actuation, soft robotics, biomimetic robot vision, robot touch, and brain-inspired motor control in the context of the iCub humanoid robot.


2010 ◽  
Vol 07 (01) ◽  
pp. 157-182 ◽  
Author(s):  
HAO GU ◽  
MARCO CECCARELLI ◽  
GIUSEPPE CARBONE

In this paper, design problems for an anthropomorphic robot arm are addressed for an application in a humanoid robot, with the specific features of cost-oriented design and user-friendly operation. A one-DOF solution is proposed using a suitable combination of gearing systems, clutches, and linkages. Models and dynamic simulations are used both for designing the system and for checking the feasibility of its operation.


2020 ◽  
Vol 12 (1) ◽  
pp. 58-73
Author(s):  
Sofia Thunberg ◽  
Tom Ziemke

Interaction between humans and robots will benefit if people have at least a rough mental model of what a robot knows about the world and what it plans to do. But how do we design human-robot interactions to facilitate this? Previous research has shown that one can change people’s mental models of robots by manipulating the robots’ physical appearance. However, this has mostly not been done in a user-centred way, i.e. with a focus on what users need and want. Starting from theories of how humans form and adapt mental models of others, we investigated how the participatory design method PICTIVE can be used to generate design ideas about how a humanoid robot could communicate. Five participants went through three phases based on eight scenarios from the state-of-the-art tasks in the RoboCup@Home social robotics competition. The results indicate that participatory design can be a suitable method to generate design concepts for robots’ communication in human-robot interaction.


2010 ◽  
Vol 22 (3) ◽  
pp. 437-446 ◽  
Author(s):  
Jane Klemen ◽  
Christian Büchel ◽  
Mira Bühler ◽  
Mareike M. Menz ◽  
Michael Rose

Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences that disruptions to attention can have. According to the load theory of cognitive control, processing of task-irrelevant stimuli is increased by attending in parallel to a relevant task with high cognitive demands. This is due to the relevant task engaging cognitive control resources that are, hence, unavailable to inhibit the processing of task-irrelevant stimuli. However, it has also been demonstrated that a variety of types of load (perceptual and emotional) can result in a reduction of the processing of task-irrelevant stimuli, suggesting a uniform effect of increased load irrespective of the type of load. In the present study, we concurrently presented a relevant auditory matching task [n-back working memory (WM)] of low or high cognitive load (1-back or 2-back WM) and task-irrelevant images at one of three object visibility levels (0%, 50%, or 100%). fMRI activation during the processing of the task-irrelevant visual stimuli was measured in the lateral occipital cortex and found to be reduced under high, compared to low, WM load. In combination with previous findings, this result is suggestive of a more generalized load theory, whereby cognitive load, as well as other types of load (e.g., perceptual), can result in a reduction of the processing of task-irrelevant stimuli, in line with a uniform effect of increased load irrespective of the type of load.
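The low- versus high-load manipulation above rests on the n-back paradigm: a trial is a target when the current item matches the one presented n trials earlier, so 1-back demands less working memory than 2-back. A minimal Python sketch of the target rule follows; the letter sequence and scoring function are illustrative assumptions, not the stimuli or code used in the study:

```python
# Illustrative n-back target scoring (hypothetical stimuli, not the study's own):
# item i is a target when it matches the item presented n trials earlier.
def nback_targets(sequence, n):
    """Return indices of items matching the item n positions back."""
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

letters = ["A", "B", "A", "B", "B"]
print(nback_targets(letters, 1))  # 1-back (low load): [4]
print(nback_targets(letters, 2))  # 2-back (high load): [2, 3]
```

Note that the same sequence yields different targets under the two rules, which is why load can be varied while the stimulus stream is held comparable.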

