Emergence of Agent Gaze Behavior using Interactive Kinetics-Based Gaze Direction Model

Author(s): Riki Satogata, Mitsuhiko Kimoto, Shun Yoshioka, Masahiko Osawa, Kazuhiko Shinozawa, ...
2019 · Vol 52 (3) · pp. 1044-1055

Author(s): Marie-Luise Brandi, Daniela Kaifel, Juha M. Lahnakoski, Leonhard Schilbach

Abstract: Sense of agency describes the experience of being the cause of one’s own actions and their resulting effects. In a social interaction, one’s actions may also have a perceivable effect on the actions of others. In this article, we refer to the experience of being responsible for the behavior of others as social agency, which has important implications for the success or failure of social interactions. Gaze-contingent eye-tracking paradigms provide a useful tool for analyzing social agency in an experimentally controlled manner, but current methods lack ecological validity. We applied this technique in a novel task using video stimuli of real gaze behavior to simulate a gaze-based social interaction. This enabled us to create the impression of a live interaction with another person while continuously manipulating the gaze contingency and congruency shown by the simulated interaction partner. Behavioral data demonstrated that participants believed they were interacting with a real person and that systematic changes in the responsiveness of the simulated partner modulated the experience of social agency. More specifically, gaze contingency (temporal relatedness) and gaze congruency (gaze direction relative to the participant’s gaze) influenced the explicit sense of being responsible for the behavior of the other. In general, our study introduces a new naturalistic task to simulate gaze-based social interactions and demonstrates that it is suitable for studying the explicit experience of social agency.
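As a rough illustration of how a gaze-contingent manipulation of this kind can be parameterized, the Python sketch below picks the simulated partner's next gaze target given the participant's current one. All names, the target set, and the two probability knobs are assumptions for illustration, not the authors' implementation:

```python
import random

def partner_gaze(participant_gaze, contingency=0.8, congruency=0.8, rng=random):
    """Choose the simulated partner's next gaze target.

    contingency: probability that the partner reacts to the participant at all
                 (temporal relatedness of the response)
    congruency:  given a reaction, probability of looking at the same target
                 rather than averting gaze
    Targets are hypothetical labels standing in for screen regions.
    """
    targets = ["left", "right", "center"]
    if rng.random() > contingency:
        # partner ignores the participant and looks anywhere
        return rng.choice(targets)
    if rng.random() < congruency:
        # congruent response: follow the participant's gaze
        return participant_gaze
    # incongruent response: look at any other target
    return rng.choice([t for t in targets if t != participant_gaze])
```

Sweeping the two parameters independently, as the study does with its contingency and congruency manipulations, lets responsiveness vary continuously between a fully reactive and a fully indifferent partner.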


2020 · Vol 2020 (9) · pp. 288-1-288-8
Author(s): Anjali K. Jogeshwar, Gabriel J. Diaz, Susan P. Farnand, Jeff B. Pelz

Eye tracking is used by psychologists, neurologists, vision researchers, and many others to understand the nuances of the human visual system and to provide insight into a person’s allocation of attention across the visual environment. When tracking the gaze behavior of an observer immersed in a virtual environment displayed on a head-mounted display, estimated gaze direction is encoded as a three-dimensional vector extending from the estimated location of the eyes into the 3D virtual environment. Additional computation is required to detect the target object at which gaze was directed. These methods must be robust to calibration error and eye-tracker noise, which may cause the gaze vector to miss the target object and hit an incorrect object at a different distance. Thus, the straightforward solution involving a single vector-to-object collision can misidentify the gazed object. More involved metrics that rely on an estimate of the angular distance from the ray to the center of the object must account for an object’s angular size as a function of distance, or for irregularly shaped edges, information that is not made readily available by popular game engines (e.g., Unity/Unreal) or rendering pipelines (OpenGL). The approach presented here avoids this limitation by projecting many rays distributed across an angular space centered on the estimated gaze direction.


2013 · Vol 14 (3) · pp. 351-365
Author(s): Yuko Okumura, Yasuhiro Kanakogi, Takayuki Kanda, Hiroshi Ishiguro, Shoji Itakura

Previous research has shown that although infants follow the gaze direction of robots, robot gaze does not facilitate infants’ learning for objects. The present study examined whether robot gaze affects infants’ object learning when the gaze behavior was accompanied by verbalizations. Twelve-month-old infants were shown videos in which a robot with accompanying verbalizations gazed at an object. The results showed that infants not only followed the robot’s gaze direction but also preferentially attended to the cued object when the ostensive verbal signal was present. Moreover, infants showed enhanced processing of the cued object when ostensive and referential verbal signals were increasingly present. These effects were not observed when mere nonverbal sound stimuli instead of verbalizations were added. Taken together, our findings indicate that robot gaze accompanying verbalizations facilitates infants’ object learning, suggesting that verbalizations are important in the design of robot agents from which infants can learn.
Keywords: gaze following; humanoid robot; infant learning; verbalization; cognitive development


2021 · Author(s): Christopher Maymon

Three experiments investigated efficient belief tracking as described by the two-systems theory of human mindreading (Apperly & Butterfill, 2009), according to which mindreading involves the operation of a flexible system that is slow to develop and cognitively effortful, and an efficient system that develops early but is subject to signature limits. Signature limits have been evidenced by children’s and adults’ difficulty anticipating how someone with a false belief (FB) about an object’s identity will act. In a recent investigation of signature limits, erroneous pre-activation of the motor system was detected when adults predicted the actions of an agent with an identity FB, suggesting that efficient mindreading and motor processes are linked (Edwards & Low, 2017). Moreover, young children differentiated between true and false beliefs about an object’s location, but not its identity, as revealed by the object children retrieved in an active helping task (Fizke et al., 2017). The aim of the present thesis was to provide new evidence of signature limits in adults, and of the recent conjecture that efficient mindreading and motor processes interact. In helping tasks, participants’ interpretation of another’s actions is crucial to how they coordinate their helping response. Therefore, an ecologically valid helping task was adapted to investigate the proposed interface between efficient mindreading and motor processes. The present work measured adults’ eye movements made prior to helping, and their helping actions across a set of distinct directional full-body movements (around which side of a desk they swerved, which compartment they approached, toward which compartment they reached, and which object they retrieved). In this way, it was possible to investigate whether gaze direction correlated with full-body movements and whether adults’ gaze differed when the agent’s FB was about an object’s location or identity.
Results from Experiment 1 indicated that efficient belief tracking is equipped to process location but not identity FBs, and that, in the location scenario, gaze direction correlated with the immediate stage of participants’ helping action (the direction they swerved). To investigate this correlation further, Experiment 2 drew upon research suggesting that temporarily tying an observer’s hands behind their back impairs their ability to predict the outcome of hand actions (Ambrosini et al., 2012). Results showed that tying adults’ hands behind their backs had a negative effect on their gaze behavior and severed the correlation between gaze and swerving, suggesting that the link between efficient mindreading and motor processes is fragile. Experiment 3 tested an alternative interpretation of Experiment 2’s findings (that restraining participants’ hands applied a domain-general distraction rather than a specific detriment to belief tracking) by tying up participants’ feet instead. Results were ambiguous: the gaze behavior of participants whose feet were tied did not differ from that of those who were unrestrained, nor from that of those whose hands were bound. These findings support the two-systems theory and provide suggestive evidence of a connection between efficient mindreading and motor processes. However, the investigation highlights new methodological challenges for designing naturalistic helping tasks for adult participants.


2021 · Vol 15
Author(s): Jan Drewes, Sascha Feder, Wolfgang Einhäuser

How vision guides gaze in realistic settings has been researched for decades. Human gaze behavior is typically measured in laboratory settings that are well controlled but feature-reduced and movement-constrained, in sharp contrast to real-life gaze control, which combines eye, head, and body movements. Previous real-world research has shown environmental factors such as terrain difficulty to affect gaze; however, real-world settings are difficult to control or replicate. Virtual reality (VR) offers the experimental control of a laboratory yet approximates the freedom and visual complexity of the real world (RW). We measured gaze data in eight healthy young adults during walking in the RW and during simulated locomotion in VR. Participants walked along a pre-defined path inside an office building, which included different terrains such as long corridors and flights of stairs. In VR, participants followed the same path in a detailed virtual reconstruction of the building. We devised a novel hybrid control strategy for movement in VR: participants did not physically translate; forward movements were controlled by a hand-held device, while rotational movements were executed physically and transferred to the VR. We found significant effects of terrain type (flat corridor, staircase up, and staircase down) on gaze direction, on the spatial spread of gaze direction, and on the angular distribution of gaze-direction changes. The factor world (RW vs. VR) affected the angular distribution of gaze-direction changes, saccade frequency, and head-centered vertical gaze direction. The latter effect vanished when referencing gaze to a world-fixed coordinate system and was likely due to specifics of headset placement, which cannot confound any other analyzed measure. Importantly, we did not observe a significant interaction between the factors world and terrain for any of the tested measures, indicating that differences between terrain types are not modulated by the world. The overall dwell time on navigational markers did not differ between worlds. The similar dependence of gaze behavior on terrain in the RW and in VR indicates that our VR captures real-world constraints remarkably well. High-fidelity VR combined with naturalistic movement control therefore has the potential to narrow the gap between the experimental control of a lab and ecologically valid settings.
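Re-referencing a head-centered gaze vector to world-fixed coordinates, as done for the vertical gaze analysis above, amounts to rotating it by the head's orientation. A minimal sketch follows; the axis conventions (y up, +z forward), the yaw-then-pitch order, and all function names are assumptions for illustration, not the study's analysis pipeline:

```python
import math

def rot_x(deg):
    # rotation about the x-axis: positive pitch tilts the forward (+z) axis upward (+y)
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]]

def rot_y(deg):
    # rotation about the vertical (y) axis: head yaw
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def gaze_in_world(gaze_in_head, head_yaw_deg, head_pitch_deg):
    """Re-express a head-centered gaze vector in world-fixed coordinates."""
    head_rot = matmul(rot_y(head_yaw_deg), rot_x(head_pitch_deg))
    return matvec(head_rot, gaze_in_head)

def elevation_deg(v):
    """Vertical angle of a 3D direction above the horizontal plane, in degrees."""
    norm = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return math.degrees(math.asin(v[1] / norm))
```

Under these conventions, a gaze directed 10° below the head's forward axis combined with a 10° upward head pitch yields a world-fixed elevation of 0°, which illustrates why a headset sitting slightly tilted on the head shifts head-centered vertical gaze but leaves world-fixed vertical gaze unchanged.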


2004 · Author(s): Melanie Lunsford, Sheena Rogers, Lars Strother, Michael Kubovy