Can infants use robot gaze for object learning?

2013
Vol 14 (3)
pp. 351-365
Author(s):  
Yuko Okumura ◽  
Yasuhiro Kanakogi ◽  
Takayuki Kanda ◽  
Hiroshi Ishiguro ◽  
Shoji Itakura

Previous research has shown that although infants follow the gaze direction of robots, robot gaze does not facilitate infants' learning about objects. The present study examined whether robot gaze affects infants' object learning when the gaze behavior was accompanied by verbalizations. Twelve-month-old infants were shown videos in which a robot gazed at an object while producing verbalizations. The results showed that infants not only followed the robot's gaze direction but also preferentially attended to the cued object when an ostensive verbal signal was present. Moreover, infants showed enhanced processing of the cued object when ostensive and referential verbal signals were increasingly present. These effects were not observed when mere nonverbal sound stimuli, rather than verbalizations, were added. Taken together, our findings indicate that robot gaze accompanied by verbalizations facilitates infants' object learning, suggesting that verbalizations are important in the design of robot agents from which infants can learn.

Keywords: gaze following; humanoid robot; infant learning; verbalization; cognitive development

2019
Vol 52 (3)
pp. 1044-1055
Author(s):  
Marie-Luise Brandi ◽  
Daniela Kaifel ◽  
Juha M. Lahnakoski ◽  
Leonhard Schilbach

Sense of agency describes the experience of being the cause of one's own actions and the resulting effects. In a social interaction, one's actions may also have a perceivable effect on the actions of others. In this article, we refer to the experience of being responsible for the behavior of others as social agency, which has important implications for the success or failure of social interactions. Gaze-contingent eyetracking paradigms provide a useful tool to analyze social agency in an experimentally controlled manner, but the current methods are lacking in terms of their ecological validity. We applied this technique in a novel task using video stimuli of real gaze behavior to simulate a gaze-based social interaction. This enabled us to create the impression of a live interaction with another person while being able to manipulate the gaze contingency and congruency shown by the simulated interaction partner in a continuous manner. Behavioral data demonstrated that participants believed they were interacting with a real person and that systematic changes in the responsiveness of the simulated partner modulated the experience of social agency. More specifically, gaze contingency (temporal relatedness) and gaze congruency (gaze direction relative to the participant's gaze) influenced the explicit sense of being responsible for the behavior of the other. In general, our study introduces a new naturalistic task to simulate gaze-based social interactions and demonstrates that it is suitable for studying the explicit experience of social agency.


2020
Vol 2020 (9)
pp. 288-1-288-8
Author(s):  
Anjali K. Jogeshwar ◽  
Gabriel J. Diaz ◽  
Susan P. Farnand ◽  
Jeff B. Pelz

Eye tracking is used by psychologists, neurologists, vision researchers, and many others to understand the nuances of the human visual system and to provide insight into a person's allocation of attention across the visual environment. When tracking the gaze behavior of an observer immersed in a virtual environment displayed on a head-mounted display, estimated gaze direction is encoded as a three-dimensional vector extending from the estimated location of the eyes into the 3D virtual environment. Additional computation is required to detect the target object at which gaze was directed. These methods must be robust to calibration error and eye-tracker noise, which may cause the gaze vector to miss the target object and hit an incorrect object at a different distance. Thus, the straightforward solution of a single vector-to-object collision test can misidentify the gazed-at object. More involved metrics that rely on an estimate of the angular distance from the ray to the center of the object must account for an object's angular size as a function of distance, or for irregularly shaped edges, information that is not made readily available by popular game engines (e.g., Unity/Unreal) or rendering pipelines (OpenGL). The approach presented here avoids this limitation by projecting many rays distributed across an angular space centered on the estimated gaze direction.
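The multi-ray idea described above can be sketched independently of any particular engine. The following is a minimal, hypothetical Python illustration, not the authors' implementation: the function names, the 3° cone half-angle, and the use of sphere proxies for scene objects are all assumptions for demonstration. Rays are sampled uniformly within a cone around the gaze vector, each ray is intersected with every object, and each ray votes for the nearest object it hits.

```python
import numpy as np

def cone_ray_directions(gaze_dir, half_angle_deg=3.0, n_rays=64, seed=0):
    """Sample unit ray directions uniformly within a cone around the gaze vector."""
    rng = np.random.default_rng(seed)
    g = np.asarray(gaze_dir, dtype=float)
    g /= np.linalg.norm(g)
    # Build an orthonormal basis (u, v, g) around the gaze direction.
    tmp = np.array([1.0, 0.0, 0.0]) if abs(g[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(g, tmp); u /= np.linalg.norm(u)
    v = np.cross(g, u)
    half = np.deg2rad(half_angle_deg)
    # Uniform sampling on the spherical cap subtended by the cone.
    cos_t = 1.0 - rng.random(n_rays) * (1.0 - np.cos(half))
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.random(n_rays) * 2.0 * np.pi
    return (np.outer(cos_t, g)
            + np.outer(sin_t * np.cos(phi), u)
            + np.outer(sin_t * np.sin(phi), v))

def ray_hits_sphere(origin, direction, center, radius):
    """Closest-hit test of a unit-direction ray against a sphere.
    Returns the hit distance, or None when the ray misses."""
    oc = np.asarray(origin, float) - np.asarray(center, float)
    b = np.dot(direction, oc)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0 else None

def vote_for_gazed_object(origin, gaze_dir, objects, **kw):
    """Cast many rays in a cone around the gaze; each ray votes for the
    nearest object (name -> (center, radius)) that it hits."""
    votes = {}
    for d in cone_ray_directions(gaze_dir, **kw):
        best, best_t = None, np.inf
        for name, (center, radius) in objects.items():
            t = ray_hits_sphere(origin, d, center, radius)
            if t is not None and t < best_t:
                best, best_t = name, t
        if best is not None:
            votes[best] = votes.get(best, 0) + 1
    return votes
```

Because many rays vote, a small calibration offset that would make a single central ray miss the target still leaves most rays hitting it, which is the robustness property the abstract is after.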


2019
Author(s):  
H. Ramezanpour ◽  
P. Thier

Faces attract the observer's attention towards objects and locations of interest for the other, thereby allowing the two agents to establish joint attention. Previous work has delineated a network of cortical "patches" in the macaque cortex, processing faces, eventually also extracting information on the other's gaze direction. Yet, the neural mechanism that links information on gaze direction, guiding the observer's attention to the relevant object, has remained elusive. Here we present electrophysiological evidence for the existence of a distinct "gaze-following patch" (GFP) with neurons that establish this linkage in a highly flexible manner. The other's gaze and the object, singled out by the gaze, are linked only if this linkage is pertinent within the prevailing social context. The properties of these neurons establish the GFP as a key switch in controlling social interactions based on the other's gaze.

One Sentence Summary: Neurons in a "gaze-following patch" in the posterior temporal cortex orchestrate the flexible linkage between the other's gaze and objects of interest to both the other and the observer.


2020
Vol 117 (5)
pp. 2663-2670
Author(s):  
Hamidreza Ramezanpour ◽  
Peter Thier

Faces attract the observer’s attention toward objects and locations of interest for the other, thereby allowing the two agents to establish joint attention. Previous work has delineated a network of cortical “patches” in the macaque cortex, processing faces, eventually also extracting information on the other’s gaze direction. Yet, the neural mechanism that links information on gaze direction, guiding the observer’s attention to the relevant object, has remained elusive. Here we present electrophysiological evidence for the existence of a distinct “gaze-following patch” (GFP) with neurons that establish this linkage in a highly flexible manner. The other’s gaze and the object, singled out by the gaze, are linked only if this linkage is pertinent within the prevailing social context. The properties of these neurons establish the GFP as a key switch in controlling social interactions based on the other’s gaze.


2021
Author(s):  
Alex S. Mearing ◽  
Judith M. Burkart ◽  
Jacob Dunn ◽  
Sally E. Street ◽  
Kathelijne Koops

Primate gaze-following behaviors are of great interest to evolutionary scientists studying social cognition. The ability of an organism to determine a conspecific's likely intentions from its gaze direction may confer an advantage on individuals in a social group. This advantage could be cooperative and/or competitive. Humans are unusual in possessing depigmented sclerae, whereas most other extant primates, including the closely related chimpanzee, possess dark scleral pigment. The origins of these divergent scleral morphologies are currently unclear, though human white sclerae are often assumed to underlie our hyper-cooperative behaviors. Here, we use phylogenetic generalized least squares (PGLS) analyses with previously generated species-level scores of proactive prosociality, social tolerance (both n=15 primate species), and conspecific lethal aggression (n=108 primate species) to provide the first quantitative, comparative test of three complementary hypotheses. The cooperative eye and self-domestication explanations predict white sclerae to be associated with cooperative, rather than competitive, environments. The gaze camouflage hypothesis predicts that dark scleral pigment functions as gaze direction camouflage in competitive social environments. We show that white sclerae in primates are associated with increased cooperative behaviors, whereas dark sclerae are associated with reduced cooperative behaviors and increased intraspecific lethal aggression. Our results lend support to all three hypotheses of scleral evolution, suggesting that primate scleral morphologies evolve in relation to variation in social environment.
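A PGLS fit of the kind described above reduces to generalized least squares with a phylogenetic covariance matrix. The sketch below is a minimal, hypothetical Python illustration, not the study's actual analysis: the toy balanced tree, its Brownian-motion covariance values, and the "scleral score" predictor are invented for demonstration.

```python
import numpy as np

def pgls_fit(X, y, C):
    """Phylogenetic generalized least squares: solve for beta in
    y = X beta + e, where cov(e) is proportional to C and C encodes shared
    evolutionary history (shared branch lengths under Brownian motion)."""
    Ci = np.linalg.inv(C)
    XtCi = X.T @ Ci
    return np.linalg.solve(XtCi @ X, XtCi @ y)

# Toy covariance for 4 species on a balanced tree ((A,B),(C,D)) with unit
# branch lengths: total depth 2, shared path length 1 within each pair.
C = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [1.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 1.0],
    [0.0, 0.0, 1.0, 2.0],
])
# Design matrix: intercept column plus an invented scleral depigmentation score.
X = np.column_stack([np.ones(4), [0.1, 0.9, 0.2, 0.8]])
beta_true = np.array([0.5, 2.0])
y = X @ beta_true            # noise-free response for a deterministic check
beta_hat = pgls_fit(X, y, C)
```

In practice one would build C from a fitted phylogeny with a comparative-methods package rather than by hand; the point of the weighting is that closely related species do not count as independent data points.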


2013
Vol 9 (2)
pp. 173-186
Author(s):  
Mari Wiklund

Asperger syndrome (AS) is a form of high-functioning autism characterized by qualitative impairment in social interaction. People with AS typically show atypical nonverbal behaviors, often manifested as avoidance of eye contact. Gaze constitutes an important interactional resource, and an AS person's tendency to avoid eye contact may affect the fluidity of conversations and cause misunderstandings. For this reason, it is important to know the precise ways in which this avoidance is done, and in what ways it affects the interaction. The objective of this article is to describe the gaze behavior of preadolescent AS children in institutional multiparty conversations. Methodologically, the study is based on conversation analysis and a multimodal study of interaction. The findings show that three main patterns are used for avoiding eye contact: 1) fixing one's gaze straight ahead; 2) letting one's gaze wander around; and 3) looking at one's own hands when speaking. The informants of this study do not look at the interlocutors at all at the beginning or in the middle of their turn. However, they sometimes turn to look at the interlocutors at the end of their turn. This indicates that these children are able to use gaze as a source of feedback. When listening, looking at the speaker also seems to be easier for them than looking at the listeners when speaking.


Sensors
2021
Vol 21 (15)
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are presented to the analysts as separate views or merged views. However, analysts become frustrated when they need to memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore difficult to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret the visual attention. We analyze gaze behavior with the proposed visualization to demonstrate that embedding saliency features in the visualization helps analysts understand the visual attention of an observer.
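One simple way to attach saliency "clues" to gaze data, in the spirit described above, is to sample the saliency map at each fixation so every gaze sample carries an attention value instead of being plotted as an opaque point over the map. The following is a minimal, hypothetical Python sketch; the function name and the nearest-neighbor sampling are assumptions, not the authors' method.

```python
import numpy as np

def saliency_at_fixations(saliency, fixations):
    """Sample a 2-D saliency map at (x, y) fixation points using
    nearest-neighbor lookup, clamping points to the map bounds.
    Returns one saliency value per fixation."""
    h, w = saliency.shape
    values = []
    for x, y in fixations:
        r = min(max(int(round(y)), 0), h - 1)  # row index from y
        c = min(max(int(round(x)), 0), w - 1)  # column index from x
        values.append(float(saliency[r, c]))
    return values

# Example: a map with one salient cell; the first fixation lands on it.
sal_map = np.zeros((4, 4))
sal_map[1, 2] = 1.0
clues = saliency_at_fixations(sal_map, [(2, 1), (0, 0)])
```

These per-fixation values could then drive the size or color of the plotted gaze markers, letting the saliency information show through rather than being occluded.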


Author(s):  
Yuki Okafuji ◽  
Takahiro Wada ◽  
Toshihito Sugiura ◽  
Kazuomi Murakami ◽  
Hiroyuki Ishida

Drivers’ gaze behaviors in naturalistic and simulated driving tasks have been investigated for decades. Many studies focus on the driving environment to explain a driver’s gaze. However, when there is a strong need for compensatory steering to keep the lane, drivers may preferentially acquire the information directly required for that task. Therefore, we hypothesized that a driver’s gaze behavior is influenced not only by the environment but also by the vehicle position, especially the lateral position. To verify this hypothesis, we carried out a long-duration driving simulator experiment and analyzed the gaze behaviors of two participating drivers. Results showed that gaze behavior, namely the fixation distance and the lateral deviation of the fixation, was influenced by the lateral deviation of the vehicle. Finally, we discuss the processes that determine drivers’ gaze behaviors.

