The influence of emotional salience on gaze behavior in low and high trait empathy: an exploratory eye-tracking study

Author(s):  
Emine Nebi ◽  
Tobias Altmann ◽  
Marcus Roth


2016 ◽
Vol 45 (4) ◽  
pp. 584-599 ◽  
Author(s):  
Jonna K. Vuoskoski ◽  
Eric F. Clarke ◽  
Tia DeNora

Recent empirical evidence suggests that – like other synchronized, collective actions – making music together with others fosters affiliation and pro-social behaviour. However, it is not yet known whether these effects are limited to active, interpersonal musical participation, or whether solitary music listening can also produce similar effects. This study examines the hypothesis that listening to music from a specific culture can evoke implicit affiliation towards members of that culture more generally. Furthermore, we hypothesized that listeners with high trait empathy would be more susceptible to the effects. Sixty-one participants listened to a track of either Indian or West African popular music, and subsequently completed an Implicit Association Test measuring implicit preference for Indian versus West African people. A significant interaction effect revealed that listeners with high trait empathy were more likely to display an implicit preference for the ethnic group to whose music they were exposed. We argue that music has particular attributes that may foster affective and motor resonance in listeners.


2021 ◽  
Vol 11 (12) ◽  
pp. 5546
Author(s):  
Florian Heilmann ◽  
Kerstin Witte

Visual anticipation is essential for performance in sports. This review provides information on the differences between stimulus presentations and motor responses in eye-tracking studies and considers virtual reality (VR), a new possibility for presenting stimuli. A systematic literature search on PubMed, ScienceDirect, IEEE Xplore, and SURF was conducted. The number of studies examining the influence of stimulus presentation (in situ vs. video) is small but still sufficient to describe differences in gaze behavior. The seven reviewed studies indicate that the mode of stimulus presentation can cause differences in gaze behavior. Further research should focus on displaying game situations via VR. The advantages of a scientific approach using VR are experimental control and repeatability. In addition, game situations could be standardized and movement responses could be included in the analysis.


2021 ◽  
Vol 12 ◽  
Author(s):  
Ulrich Max Schaller ◽  
Monica Biscaldi ◽  
Anna Burkhardt ◽  
Christian Fleischhaker ◽  
Michael Herbert ◽  
...  

Face perception and emotion categorization are widely investigated under laboratory conditions that are devoid of real social interaction. Using mobile eye-tracking glasses in a standardized diagnostic setting while applying the Autism Diagnostic Observation Schedule (ADOS-2), we had the opportunity to record the gaze behavior of children and adolescents with and without Autism Spectrum Conditions (ASCs) during social interaction. The objective was to investigate differences in eye-gaze behavior during social interaction with an adult interviewer in a standard diagnostic situation using the ADOS-2 between three groups of children and adolescents: (1) with ASC, (2) with an unconfirmed diagnosis of ASC, or (3) with neurotypical development (NTD). In a case-control study, we used mobile eye-tracking glasses in an ecologically valid and highly standardized diagnostic interview to investigate suspected cases of ASC. After completion of the ASC diagnostic gold standard including the ADOS-2, the participants were assigned to two groups based on their diagnosis (ASC vs. non-ASC) and compared with a matched group of neurotypically developed controls. The primary outcome measure was the percentage of total dwell time assessed for different areas of interest (AOIs) with regard to the face and body of the diagnostic interviewer and the surrounding space. Overall, 65 children and adolescents within an age range of 8.3–17.9 years were included in the study. The data revealed significant group differences, especially in the central-face area. Previous investigations under laboratory conditions reported preferential attention to the eye region during face perception when describing differences between ASC and NTD. In this study, which used an ecologically valid setting within a standard diagnostic procedure, the results indicate that neurotypically developed controls seem to process faces and facial expressions in a holistic manner originating from the central-face region. Conversely, participants with ASC seem to avoid the central-face region and show unsystematic gaze behavior, not using the preferred landing position in the central-face region as the Archimedean point of face perception. This study uses a new approach, and it will be important to replicate these preliminary findings in future research.
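The primary outcome measure used above, the percentage of total dwell time per AOI, is straightforward to compute. The following sketch is illustrative only; the AOI labels and the fixation record format are assumptions, not the study's actual data structures.

```python
# Illustrative sketch: percentage of total dwell time per area of interest (AOI).
# Fixations are (aoi_label, duration_ms) pairs; the labels are hypothetical.
from collections import defaultdict

def dwell_time_percentages(fixations):
    """Return each AOI's share of total dwell time, in percent."""
    totals = defaultdict(float)
    for aoi, duration_ms in fixations:
        totals[aoi] += duration_ms
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}
    return {aoi: 100.0 * t / grand_total for aoi, t in totals.items()}

# Hypothetical fixation log for one participant.
fixations = [
    ("central_face", 420), ("eyes", 180), ("mouth", 150),
    ("body", 90), ("surrounding_space", 160), ("central_face", 300),
]
shares = dwell_time_percentages(fixations)
```

Percentages of this kind normalize away differences in total interview length, which is what makes dwell shares comparable across participants.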


2021 ◽  
Author(s):  
Zhong Zhao ◽  
Haiming Tang ◽  
Xiaobin Zhang ◽  
Xingda Qu ◽  
Jianping Lu

BACKGROUND Abnormal gaze behavior is a prominent feature of autism spectrum disorder (ASD). Previous eye-tracking studies had participants watch images (e.g., pictures, videos, and webpages), and applying machine learning (ML) to these data showed promising results in identifying individuals with ASD. Although gaze behavior in face-to-face interaction differs from that in image-viewing tasks, no study has investigated whether natural social gaze behavior could accurately identify ASD. OBJECTIVE The objective of this study was to examine whether, and which, area of interest (AOI)-based features extracted from natural social gaze behavior could identify ASD. METHODS Children with ASD and children with typical development (TD) were eye-tracked while engaged in a face-to-face conversation with an interviewer. Four ML classifiers (support vector machine, SVM; linear discriminant analysis, LDA; decision tree, DT; and random forest, RF) were used to determine the maximum classification accuracy and the corresponding features. RESULTS A maximum classification accuracy of 84.62% was achieved with three classifiers (LDA, DT, and RF). Results showed that the mouth AOI, but not the eyes AOI, was a powerful feature for detecting ASD. CONCLUSIONS Natural gaze behavior could be leveraged to identify ASD, suggesting that ASD might be objectively screened with eye-tracking technology in everyday social interaction. In addition, the comparison between our findings and previous ones suggests that the eye-tracking features that can identify ASD might be culture-dependent and context-sensitive.
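To make the AOI-feature classification setup concrete, here is a minimal, dependency-free sketch. The study used SVM, LDA, DT, and RF classifiers; a simple nearest-centroid rule with leave-one-out evaluation stands in for them here, and the dwell-share features and group assignments are entirely synthetic.

```python
# Illustrative sketch of AOI-feature-based group classification on synthetic data.
# A nearest-centroid rule stands in for the SVM/LDA/DT/RF classifiers used in
# the study, to keep the example free of third-party dependencies.
import math

def nearest_centroid_loo(samples, labels):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    correct = 0
    for i in range(len(samples)):
        # Hold out sample i and fit class centroids on the rest.
        train = [(s, l) for j, (s, l) in enumerate(zip(samples, labels)) if j != i]
        centroids = {}
        for lab in set(l for _, l in train):
            members = [s for s, l in train if l == lab]
            centroids[lab] = [sum(dim) / len(members) for dim in zip(*members)]
        # Predict the class whose centroid is nearest in Euclidean distance.
        pred = min(centroids, key=lambda lab: math.dist(samples[i], centroids[lab]))
        correct += pred == labels[i]
    return correct / len(samples)

# Hypothetical features: dwell-time shares on [eyes, mouth, rest-of-scene].
features = [
    [0.40, 0.35, 0.25], [0.38, 0.40, 0.22], [0.45, 0.30, 0.25],  # TD-like
    [0.15, 0.10, 0.75], [0.10, 0.15, 0.75], [0.20, 0.08, 0.72],  # ASD-like
]
groups = ["TD", "TD", "TD", "ASD", "ASD", "ASD"]
accuracy = nearest_centroid_loo(features, groups)
```

Leave-one-out evaluation is a common choice for small clinical samples like this one, since it uses every participant for both training and testing without leaking the held-out case into the fit.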


2015 ◽  
Vol 68 (1) ◽  
pp. 95-101 ◽  
Author(s):  
Erik Wästlund ◽  
Tobias Otterbring ◽  
Anders Gustafsson ◽  
Poja Shams

2020 ◽  
Vol 41 (8/9) ◽  
pp. 617-629
Author(s):  
Sho Sato ◽  
Yukari Eto ◽  
Kotomi Iwaki ◽  
Tadashi Oyanagi ◽  
Yu Yasuma

Purpose: This study aimed to better understand user gaze behavior on bookshelves using eye-tracking technology. Design/methodology/approach: An eye-tracking experiment was performed in a public library with 11 participants. The findings were used to examine the impact of the vertical shelf location of books on the number of times the books are looked at, the impact of horizontal location, and the relationship between user behavior and location impact. Findings: The results showed that the vertical location of books has a significant impact on the number of times the books are looked at. More than 80% of the time spent looking at bookshelves was spent on books in the top four rows. It was also revealed that the horizontal location of books has little impact, although books located on the left side of shelves were looked at significantly more often than those on the right side. No significant relationships between types of user behavior and location impact were observed. Originality/value: The study explored the impact of the vertical location of books on time spent looking at bookshelves using eye-tracking methodology; few published studies have run such experiments to address user gaze behavior on bookshelves. The study found that the vertical location of books has a great impact, and the horizontal location little impact, on user gaze behavior.


2020 ◽  
pp. 073563312097861
Author(s):  
Marko Pejić ◽  
Goran Savić ◽  
Milan Segedinac

This study proposes a software system for determining gaze patterns in on-screen testing. The system applies machine learning techniques to eye-movement data obtained from an eye-tracking device to categorize students according to their gaze behavior pattern while solving an on-screen test. These patterns are determined by converting eye-movement coordinates into a sequence of regions of interest. The proposed software system extracts features from the sequence and performs clustering that groups students by their gaze pattern. To determine gaze patterns, the system contains components for communicating with an eye-tracking device, collecting and preprocessing students’ gaze data, and visualizing data using different presentation methods. This study presents a methodology to determine gaze patterns and the implementation details of the proposed software. The research was evaluated by determining the gaze patterns of 51 undergraduate students who took a general knowledge test containing 20 questions. This study aims to provide a software infrastructure that can use students’ gaze patterns as an additional indicator of their reading behaviors and their attention or processing difficulty, among other factors.
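The pipeline described above, mapping eye-movement coordinates to a sequence of regions of interest, extracting features from that sequence, and clustering students by gaze pattern, can be sketched as follows. The ROI layout, the feature choice, and the minimal 2-means routine are illustrative assumptions, not the proposed system's actual implementation.

```python
# Illustrative sketch: map gaze coordinates to regions of interest (ROIs),
# build per-student ROI features, and cluster students by gaze pattern.
import math

ROIS = {  # hypothetical on-screen regions: (x0, y0, x1, y1) in pixels
    "question": (0, 0, 800, 300),
    "answers": (0, 300, 800, 600),
}

def roi_sequence(gaze_points):
    """Collapse raw (x, y) samples into a deduplicated ROI visit sequence."""
    seq = []
    for x, y in gaze_points:
        for name, (x0, y0, x1, y1) in ROIS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                if not seq or seq[-1] != name:
                    seq.append(name)
                break
    return seq

def features(seq):
    """Feature vector: share of visits per ROI, in a fixed ROI order."""
    n = max(len(seq), 1)
    return [seq.count(name) / n for name in ROIS]

def two_means(vectors, iters=10):
    """Minimal 2-means clustering; returns one cluster label per vector."""
    c0, c1 = vectors[0], vectors[-1]  # naive seeding with two samples
    labels = [0] * len(vectors)
    for _ in range(iters):
        labels = [0 if math.dist(v, c0) <= math.dist(v, c1) else 1
                  for v in vectors]
        for k in (0, 1):
            members = [v for v, l in zip(vectors, labels) if l == k]
            if members:
                mean = [sum(dim) / len(members) for dim in zip(*members)]
                c0, c1 = (mean, c1) if k == 0 else (c0, mean)
    return labels

# Hypothetical raw gaze samples for four students.
students = [
    [(100, 50), (120, 60), (200, 400), (100, 80)],    # mostly question-focused
    [(90, 40), (110, 70), (130, 90), (100, 60)],      # question-focused
    [(300, 450), (200, 500), (400, 420)],             # answer-focused
    [(250, 400), (350, 480), (300, 350), (320, 500)], # answer-focused
]
labels = two_means([features(roi_sequence(p)) for p in students])
```

Collapsing consecutive samples in the same ROI is what turns raw gaze coordinates into a visit sequence; richer features (e.g., ROI transition counts) could be extracted from the same sequence.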


Author(s):  
Sarah Malone ◽  
Roland Brünken

Objective The aim of the current study was to compare the traditional, verbal, and motoric tasks regarding their contributions to hazard perception measurement. Background Traditional hazard perception tasks require the participants to respond to filmed traffic conflicts in an imprecise way, such as by pressing a button. More sophisticated tasks include either verbal specification or motoric localization of the perceived hazards. The present study investigated the participants’ gaze behavior when they were provided with an identical set of traffic animations but were instructed to perform one of three types of hazard perception tasks. Method In an eye tracking study, 69 drivers were shown animated traffic scenarios and instructed to perform the traditional (press button), verbal, or speeded motoric localization hazard perception task. Eye tracking revealed whether and when the participant had fixated a certain hazard cue. Results The participants in the traditional task group were slower to fixate emerging hazards, but quicker to respond to them than the participants of the verbal and the motoric groups. As a specific benefit, the verbal task differentiated between different types of failures. Conclusion Additional verbal or speeded motoric localization tasks seem to have increased the participants’ alertness when watching the animations. The verbal task provides valuable additional information regarding the participants’ performance. To approximate real-life hazard perception ability, it is recommended that researchers and practitioners use a combination of different hazard perception tasks for assessment and training.


2018 ◽  
Vol 7 (3.22) ◽  
pp. 5
Author(s):  
Norlyiana Samsuri ◽  
Faruque Reza ◽  
Tahamina Begum ◽  
Nasir Yusoff ◽  
Badrisyah Idris ◽  
...  

This study aims to detect the level of attention and gaze behavior on diverse display designs of advertisements through the application of Event-Related Potentials (ERP) and eye tracking. A total of 15 subjects participated in the ERP recording, two of whom also participated in eye tracking. The N200 ERP component was recorded using a 128-sensor net. The ERP and gaze-behavior results consistently showed that subjects were more attentive to the Vertex Reared Grouped (VRG) view than to the Vertex Frontal Grouped (VFG) view, and more attentive to the Right Lateral Grouped (RLG) view than to the Left Lateral Grouped (LLG) view, in Session 1 and Session 3 respectively. Visual interpretation of the scan paths, together with the fixation-duration and saccade-duration gaze data, revealed that the VRG and RLG views attracted more attention than their counterparts. For Session 2, the ERP and gaze-behavior results indicated that subjects were equally attentive to the Right Lateral Singular (RLS) and Left Lateral Singular (LLS) views; the scan paths showed a similar amount of gaze in both display designs. For cost-effective and limited-space advertising, it is advisable that marketers prioritize the VRG view over the VFG view and the RLG view over the LLG view.

