Theory of mind affects the interpretation of another person's focus of attention

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jessica Dawson ◽  
Alan Kingstone ◽  
Tom Foulsham

People are drawn to social, animate things more than inanimate objects. Previous research has also shown gaze following in humans, a process that has been linked to theory of mind (ToM). In three experiments, we investigated whether animacy and ToM are involved when making judgements about the location of a cursor in a scene. In Experiment 1, participants were told that this cursor represented the gaze of an observer and were asked to decide whether the observer was looking at a target object. This task is similar to that carried out by researchers manually coding eye-tracking data. The results showed that participants were biased to perceive the gaze cursor as directed towards animate objects (faces) compared to inanimate objects. In Experiments 2 and 3 we tested the role of ToM, by presenting the same scenes to new participants but now with the statement that the cursor was generated by a 'random' computer system or by a computer system designed to seek targets. The bias to report that the cursor was directed toward faces was abolished in Experiment 2, and minimised in Experiment 3. Together, the results indicate that people attach minds to the mere representation of an individual's gaze, and this attribution of mind influences what people believe an individual is looking at.
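As a rough illustration of the coding decision described above, the sketch below checks whether a gaze cursor falls within a target object's region, the way a manual coder (or a participant in Experiment 1) would judge it. The regions, coordinates, and tolerance margin are hypothetical and not taken from the study.

```python
# Illustrative sketch (not the authors' materials): deciding whether a gaze
# cursor falls within a target object's region, as a manual coder might.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box for an object in the scene (pixels)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

def cursor_on_target(cursor_xy, box: Box, margin: float = 20.0) -> bool:
    """Return True if the cursor lies inside the box, expanded by a tolerance
    margin to allow for eye-tracker noise (the margin value is an assumption)."""
    x, y = cursor_xy
    return (box.x_min - margin <= x <= box.x_max + margin and
            box.y_min - margin <= y <= box.y_max + margin)

# Example: a hypothetical face region and a hypothetical inanimate-object region.
face = Box(300, 120, 380, 220)
mug = Box(520, 400, 600, 470)
print(cursor_on_target((310, 150), face))  # True
print(cursor_on_target((310, 150), mug))   # False
```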

2020 ◽  
Vol 10 (13) ◽  
pp. 4508 ◽  
Author(s):  
Armel Quentin Tchanou ◽  
Pierre-Majorique Léger ◽  
Jared Boasen ◽  
Sylvain Senecal ◽  
Jad Adam Taher ◽  
...  

Gaze convergence of multiuser eye movements during simultaneous collaborative use of a shared system interface has been proposed as an important albeit sparsely explored construct in the human-computer interaction literature. Here, we propose a novel index for measuring the gaze convergence of user dyads and address its validity through two consecutive eye-tracking studies. Eye-tracking data of user dyads were synchronously recorded while they simultaneously performed tasks on shared system interfaces. Results indicate the validity of the proposed gaze convergence index for measuring the gaze convergence of dyads. Moreover, as expected, our gaze convergence index was positively associated with dyad task performance and negatively associated with dyad cognitive load. These results suggest theoretical and practical applications, such as synchronized gaze convergence displays, in diverse settings. Further research, particularly into the construct's nomological network, is warranted.
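The abstract does not spell out how the proposed index is computed, so the following is only a plausible sketch of a dyadic gaze convergence measure: the mean proximity of time-synchronized gaze samples, scaled to the 0-1 range. The distance cutoff and the synthetic data are assumptions for illustration.

```python
# Hypothetical gaze convergence index for a dyad: maps the mean distance
# between time-synchronized gaze samples of two users to a 0-1 score.
import numpy as np

def gaze_convergence(gaze_a: np.ndarray, gaze_b: np.ndarray,
                     radius: float = 100.0) -> float:
    """gaze_a, gaze_b: (n_samples, 2) arrays of synchronized screen coordinates.
    radius: distance (pixels) beyond which two gaze points count as fully
    divergent (an arbitrary assumption for illustration)."""
    dist = np.linalg.norm(gaze_a - gaze_b, axis=1)          # per-sample distance
    return float(np.mean(np.clip(1.0 - dist / radius, 0.0, 1.0)))

# Example with synthetic data: user B's gaze stays close to user A's.
rng = np.random.default_rng(0)
a = rng.uniform(0, 1000, size=(500, 2))
b = a + rng.normal(0, 30, size=(500, 2))                    # mostly convergent
print(round(gaze_convergence(a, b), 2))
```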


2008 ◽  
Vol 35 (1) ◽  
pp. 207-220 ◽  
Author(s):  
RECHELE BROOKS ◽  
ANDREW N. MELTZOFF

We found that infant gaze following and pointing predict subsequent language development. At ages 0;10 or 0;11, infants saw an adult turn to look at an object in an experimental setting. Productive vocabulary was assessed longitudinally through two years of age. Growth curve modeling showed that infants who gaze followed and looked longer at the target object had significantly faster vocabulary growth than infants with shorter looks, even with maternal education controlled; adding infant pointing strengthened the model. We highlight the role of social cognition in word learning and emphasize the communicative-referential functions of early gaze following and pointing.
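As a hedged illustration of the growth-curve approach mentioned above (not the authors' analysis or data), the sketch below fits a linear mixed model to simulated vocabulary trajectories, testing whether an infant-level looking measure moderates growth over age with maternal education controlled. All variable names and values are fabricated for the example.

```python
# Simulated growth-curve sketch: vocabulary over age, with looking time as a
# moderator of the age slope and maternal education as a covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for infant in range(30):
    look = rng.uniform(0.5, 3.0)            # seconds looking at the target (simulated)
    edu = int(rng.integers(10, 20))         # maternal education in years (simulated)
    slope = 5 + 3 * look + rng.normal(0, 1) # steeper growth for longer lookers
    for age in (14, 18, 24):
        rows.append({"infant": infant, "age_months": age, "look_dur": look,
                     "mat_edu": edu,
                     "vocab": max(0.0, slope * (age - 12) + rng.normal(0, 10))})
df = pd.DataFrame(rows)

# Random intercept and age slope per infant; age_months:look_dur tests whether
# longer looking predicts faster vocabulary growth.
model = smf.mixedlm("vocab ~ age_months * look_dur + mat_edu",
                    df, groups=df["infant"], re_formula="~age_months")
print(model.fit().summary())
```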


2011 ◽  
Vol 40 (594) ◽  
Author(s):  
Susanne Bødker

Dual eye-tracking (DUET) is a promising methodology to study and support collaborative work. The method consists of simultaneously recording the gaze of two collaborators working on a common task. The main themes addressed in the workshop are eye-tracking methodology (how to translate gaze measures into descriptions of joint action, how to measure and model gaze alignment between collaborators, how to address task specificity inherent to eye-tracking data) and, more generally, future applications of dual eye-tracking in CSCW. The DUET workshop will bring together scholars who currently develop the approach as well as a larger audience interested in applications of eye-tracking in collaborative situations. The workshop format will combine paper presentations and discussions. The papers are available online as PDF documents at http://www.dualeyetracking.org/DUET2011/.
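One commonly used way to quantify gaze alignment between two collaborators (a question this workshop raises) is a cross-recurrence profile: the proportion of samples on which the two gaze points fall close together, computed over a range of time lags. The sketch below is generic and illustrative; the distance threshold and lag window are assumptions rather than anything prescribed by the workshop.

```python
# Cross-recurrence profile for dual eye-tracking data (illustrative sketch).
import numpy as np

def cross_recurrence(gaze_a, gaze_b, threshold=80.0, max_lag=50):
    """Proportion of samples where the two gaze points are within `threshold`
    pixels, for each time lag applied to gaze_b. A peak at a positive lag means
    gaze_b reproduces gaze_a's locations that many samples later."""
    n = len(gaze_a)
    profile = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = gaze_a[:n - lag], gaze_b[lag:]
        else:
            a, b = gaze_a[-lag:], gaze_b[:n + lag]
        dist = np.linalg.norm(a - b, axis=1)
        profile[lag] = float(np.mean(dist < threshold))
    return profile

# Synthetic demo: collaborator B looks where A looked 10 samples earlier.
rng = np.random.default_rng(2)
a = rng.uniform(0, 1000, size=(600, 2))
b = np.roll(a, 10, axis=0) + rng.normal(0, 20, size=(600, 2))
profile = cross_recurrence(a, b)
print(max(profile, key=profile.get))  # lag of the recurrence peak; ~10 here (B trails A)
```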


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e3459 ◽  
Author(s):  
Christian Nawroth ◽  
Egle Trincas ◽  
Livio Favaro

Gaze following is widespread among animals. However, the corresponding ultimate functions may vary substantially. It is therefore important to study previously understudied species to develop a better understanding of the ecological contexts that foster certain cognitive traits. Penguins (family Spheniscidae), despite their wide interspecies ecological variation, have previously not been considered for cross-species comparisons. Penguin behaviour and communication have been investigated over the last decades, but less is known about how groups are structured, how social hierarchies are established, and how coordination for hunting and predator avoidance may occur. In this article, we investigated how African penguins (Spheniscus demersus) respond to gaze cues of conspecifics using a naturalistic setup in a zoo environment. Our results provide evidence that members of the family Spheniscidae follow the gaze of conspecifics into distant space. However, further tests are necessary to examine whether the observed behaviour serves solely one specific function (e.g. predator detection) or is displayed in a broader context (e.g. eavesdropping on relevant stimuli in the environment). In addition, our findings can serve as a starting point for future cross-species comparisons with other members of the penguin family, to further explore the role of aerial predation and social structure in gaze following in social species. Overall, we also suggest that zoo-housed animals represent an ideal opportunity to extend the range of species studied and to test phylogenetic families that have not been the focus of animal cognition research.


Autism ◽  
2021 ◽  
pp. 136236132110619
Author(s):  
Emilia Thorup ◽  
Pär Nyström ◽  
Sven Bölte ◽  
Terje Falck-Ytter

Children with autism spectrum disorder (ASD) display difficulties with response to joint attention in natural settings but often perform comparably to typically developing (TD) children in experimental studies of gaze following. Previous work comparing infants at elevated likelihood for ASD versus TD infants has manipulated aspects of the gaze cueing stimulus (e.g. eyes only versus head and eyes together), but the role of the peripheral object being attended to is not known. In this study of infants at elevated likelihood of ASD (N = 97) and TD infants (N = 29), we manipulated whether or not a target object was present in the cued area. Performance was assessed at 10, 14, and 18 months, and diagnostic assessment was conducted at age 3 years. The results showed that although infants with later ASD followed gaze to the same extent as TD infants in all conditions, they displayed faster latencies back to the model's face when (and only when) a peripheral object was absent. These subtle atypicalities in gaze behavior directly after gaze following may implicate a different appreciation of the communicative situation in infants with later ASD, despite their ostensibly typical gaze following ability. Lay abstract: During the first year of life, infants start to align their attention with that of other people. This ability is called joint attention and facilitates social learning and language development. Although children with autism spectrum disorder (ASD) are known to engage less in joint attention compared to other children, several experimental studies have shown that they follow others' gaze (a requirement for visual joint attention) to the same extent as other children. In this study, infants' eye movements were measured at age 10, 14, and 18 months while they watched another person look in a certain direction. A target object was either present or absent in the direction of the other person's gaze. Some of the infants were at elevated likelihood of ASD, due to having an older autistic sibling. At age 3 years, infants were assessed for a diagnosis of ASD. Results showed that infants who met diagnostic criteria at 3 years followed gaze to the same extent as other infants. However, they then looked back at the model faster than typically developing infants when no target object was present. When a target object was present, there was no difference between groups. These results may be in line with the view that, directly after gaze following, infants with later ASD are less influenced by other people's gaze when processing the common attentional focus. The study adds to our understanding of both the similarities and differences in looking behaviors between infants who later receive an ASD diagnosis and other infants.
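As a minimal, hypothetical sketch of the latency measure discussed above (the data format is assumed, not the authors' pipeline), the function below takes a trial's fixation sequence labelled by area of interest and returns the time from first fixating the cued area to returning to the model's face.

```python
# Latency from first fixation in the cued area back to the model's face.
def latency_back_to_face(fixations):
    """fixations: list of (onset_ms, aoi) tuples in temporal order, with aoi in
    {'face', 'cued_area', 'other'}. Returns latency in ms, or None if gaze
    never reaches the cued area or never returns to the face."""
    cued_onset = None
    for onset, aoi in fixations:
        if aoi == "cued_area" and cued_onset is None:
            cued_onset = onset
        elif aoi == "face" and cued_onset is not None:
            return onset - cued_onset
    return None

# Example trial with made-up timings.
trial = [(0, "face"), (420, "cued_area"), (980, "other"), (1450, "face")]
print(latency_back_to_face(trial))  # 1030 (ms)
```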


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7668
Author(s):  
Niharika Kumari ◽  
Verena Ruf ◽  
Sergey Mukhametov ◽  
Albrecht Schmidt ◽  
Jochen Kuhn ◽  
...  

Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can, in comparison to stationary eye trackers, extend these opportunities to real settings such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as it is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the use of object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three different convolutional neural networks (CNNs), a Faster Region-Based CNN (Faster R-CNN), You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user's gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
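To make the gaze-to-object assignment step concrete, here is a minimal sketch (not the authors' implementation) that labels a gaze point with the detected object it falls on, assuming detections in a simple (label, confidence, bounding box) format such as one might derive from a detector's output for a scene-camera frame.

```python
# Assign a mobile eye-tracking gaze point to a detected object (illustrative).
def assign_gaze_to_objects(gaze_xy, detections, min_confidence=0.5):
    """Return the label of the highest-confidence detection containing the gaze
    point, or None if the gaze falls on no detected object.
    detections: iterable of (label, confidence, (x_min, y_min, x_max, y_max))."""
    x, y = gaze_xy
    hits = [(conf, label) for label, conf, (x0, y0, x1, y1) in detections
            if conf >= min_confidence and x0 <= x <= x1 and y0 <= y <= y1]
    return max(hits)[1] if hits else None

# Hypothetical detections for one frame of a physics lab course recording.
frame_detections = [
    ("beaker", 0.91, (100, 200, 220, 380)),
    ("multimeter", 0.84, (400, 150, 560, 300)),
]
print(assign_gaze_to_objects((450, 210), frame_detections))  # 'multimeter'
print(assign_gaze_to_objects((10, 10), frame_detections))    # None
```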


PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0251674
Author(s):  
Thomas A. Busey ◽  
Nicholas Heise ◽  
R. Austin Hicklin ◽  
Bradford T. Ulery ◽  
JoAnn Buscaglia

Latent fingerprint examiners sometimes come to different conclusions when comparing fingerprints, and eye-gaze behavior may help explain these outcomes. Missed identifications (missed IDs) are inconclusive, exclusion, or No Value determinations reached when the consensus of other examiners is an identification. To determine the relation between examiner behavior and missed IDs, we collected eye-gaze data from 121 latent print examiners as they completed a total of 1,444 difficult (latent-exemplar) comparisons. We extracted metrics from the gaze data that serve as proxies for underlying perceptual and cognitive capacities. We used these metrics to characterize potential mechanisms of missed IDs: Cursory Comparison and Mislocalization. We find that missed IDs are associated with shorter comparison times, fewer regions visited, and fewer attempted correspondences between the compared images. Latent print comparisons resulting in erroneous exclusions (a subset of missed IDs) are also more likely to have fixations in different regions and less accurate correspondence attempts than those comparisons resulting in identifications. We also use our derived metrics to describe one atypical examiner who made six erroneous identifications, four of which were on comparisons intended to be straightforward exclusions. The present work helps identify the degree to which missed IDs can be explained using eye-gaze behavior, and the extent to which missed IDs depend on cognitive and decision-making factors outside the domain of eye-tracking methodologies.
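A hedged sketch of the kind of proxy metrics described above, using a hypothetical fixation format rather than the study's actual feature extraction: total comparison time and the number of distinct image regions visited.

```python
# Proxy metrics from a comparison's fixation sequence (illustrative format).
def comparison_metrics(fixations):
    """fixations: list of dicts with 'onset_ms', 'duration_ms', and 'region'
    (a grid-cell or AOI identifier). Returns (total_time_ms, n_regions_visited)."""
    if not fixations:
        return 0, 0
    last = fixations[-1]
    total_time = last["onset_ms"] + last["duration_ms"] - fixations[0]["onset_ms"]
    n_regions = len({f["region"] for f in fixations})
    return total_time, n_regions

# Made-up fixations spanning regions of the latent and exemplar images.
fixs = [
    {"onset_ms": 0,   "duration_ms": 300, "region": "latent_core"},
    {"onset_ms": 350, "duration_ms": 250, "region": "exemplar_core"},
    {"onset_ms": 650, "duration_ms": 400, "region": "latent_delta"},
]
print(comparison_metrics(fixs))  # (1050, 3)
```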


2019 ◽  
Author(s):  
H. Ramezanpour ◽  
P. Thier

Faces attract the observer's attention towards objects and locations of interest for the other, thereby allowing the two agents to establish joint attention. Previous work has delineated a network of cortical "patches" in the macaque cortex, processing faces, eventually also extracting information on the other's gaze direction. Yet, the neural mechanism that links information on gaze direction, guiding the observer's attention to the relevant object, has remained elusive. Here we present electrophysiological evidence for the existence of a distinct "gaze-following patch" (GFP) with neurons that establish this linkage in a highly flexible manner. The other's gaze and the object, singled out by the gaze, are linked only if this linkage is pertinent within the prevailing social context. The properties of these neurons establish the GFP as a key switch in controlling social interactions based on the other's gaze. One Sentence Summary: Neurons in a "gaze-following patch" in the posterior temporal cortex orchestrate the flexible linkage between the other's gaze and objects of interest to both the other and the observer.


2019 ◽  
Vol 47 (3) ◽  
pp. 533-555 ◽  
Author(s):  
Rowena GARCIA ◽  
Jens ROESER ◽  
Barbara HÖHLE

We investigated whether Tagalog-speaking children incrementally interpret the first noun as the agent, even if verbal and nominal markers for assigning thematic roles are given early in Tagalog sentences. We asked five- and seven-year-old children and adult controls to select which of two pictures of reversible actions matched the sentence they heard, while their looks to the pictures were tracked. Accuracy and eye-tracking data showed that agent-initial sentences were easier to comprehend than patient-initial sentences, but the effect of word order was modulated by voice. Moreover, our eye-tracking data provided evidence that, by the first noun phrase, seven-year-old children looked more to the target in the agent-initial compared to the patient-initial conditions, but this word order advantage was no longer observed by the second noun phrase. The findings support language processing and acquisition models which emphasize the role of frequency in developing heuristic strategies (e.g., Chang, Dell, & Bock, 2006).


2020 ◽  
Vol 117 (5) ◽  
pp. 2663-2670 ◽  
Author(s):  
Hamidreza Ramezanpour ◽  
Peter Thier

Faces attract the observer’s attention toward objects and locations of interest for the other, thereby allowing the two agents to establish joint attention. Previous work has delineated a network of cortical “patches” in the macaque cortex, processing faces, eventually also extracting information on the other’s gaze direction. Yet, the neural mechanism that links information on gaze direction, guiding the observer’s attention to the relevant object, has remained elusive. Here we present electrophysiological evidence for the existence of a distinct “gaze-following patch” (GFP) with neurons that establish this linkage in a highly flexible manner. The other’s gaze and the object, singled out by the gaze, are linked only if this linkage is pertinent within the prevailing social context. The properties of these neurons establish the GFP as a key switch in controlling social interactions based on the other’s gaze.

