Predicting Readers’ Sarcasm Understandability by Modeling Gaze Behavior

Author(s): Abhijit Mishra, Pushpak Bhattacharyya


2006
Author(s): S. Chartier, P. Renaud, S. Bouchard, J. Proulx, J. L. Rouleau, ...

2013, Vol 9 (2), pp. 173-186
Author(s): Mari Wiklund

Asperger syndrome (AS) is a form of high-functioning autism characterized by qualitative impairment in social interaction. People with AS typically show abnormal nonverbal behaviors, often manifested as avoidance of eye contact. Gaze constitutes an important interactional resource, and an AS person’s tendency to avoid eye contact may affect the fluidity of conversations and cause misunderstandings. For this reason, it is important to know precisely how this avoidance is done and how it affects the interaction. The objective of this article is to describe the gaze behavior of preadolescent AS children in institutional multiparty conversations. Methodologically, the study is based on conversation analysis and a multimodal study of interaction. The findings show that three main patterns are used for avoiding eye contact: 1) fixing one’s gaze straight ahead; 2) letting one’s gaze wander around; and 3) looking at one’s own hands when speaking. The informants of this study do not look at their interlocutors at all at the beginning or in the middle of their turns. However, they sometimes turn to look at the interlocutors at the end of their turn. This shows that these children are able to use gaze as a source of feedback. Looking at the speaker while listening also seems to be easier for them than looking at the listeners while speaking.


2021, Vol 11 (2), pp. 218
Author(s): Seungji Lee, Doyoung Lee, Hyunjae Gil, Ian Oakley, Yang Seok Cho, ...

Searching for familiar faces in a crowd may involve stimulus-driven attention triggered by emotional significance, together with goal-directed attention driven by task-relevant needs. The present study investigated the effect of familiarity on attentional processes by exploring eye fixation-related potentials (EFRPs) and eye gaze while humans searched, among distracting faces, for either an acquaintance’s face or a newly learned face. Task performance and gaze behavior were indistinguishable between the two target types. However, the EFRP analysis showed that, following a P300 component for successful search of target faces, right parietal late positive potentials deflected more strongly in response to newly learned faces than to acquaintances’ faces, indicating greater involvement of goal-directed attention in processing newly learned faces. In addition, acquaintances’ faces elicited greater occipital negativity, reflecting emotional responses to significant stimuli. These results suggest that finding a familiar face in a crowd may involve less goal-directed attention and elicit stronger emotional responses.
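For readers unfamiliar with EFRPs, the defining step is to epoch the EEG time-locked to fixation onsets (rather than stimulus onsets) and average per condition. The sketch below illustrates only that step using MNE-Python; the file names, event codes, and time window are assumptions for illustration, not the study’s actual pipeline.

```python
# Illustrative EFRP epoching sketch (assumed filenames and event codes, not the study's code).
import numpy as np
import mne

# Co-registered EEG recording; fixation onsets already detected from the eye tracker.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)

# One row per fixation: [onset sample, 0, condition code]
# 1 = fixation on the acquaintance's face, 2 = fixation on the newly learned face.
onsets = np.load("fixation_onset_samples.npy").astype(int)
conds = np.load("fixation_conditions.npy").astype(int)
events = np.column_stack([onsets, np.zeros_like(onsets), conds])

# Epoch the EEG time-locked to fixation onset and average per condition.
epochs = mne.Epochs(raw, events, event_id={"acquaintance": 1, "learned": 2},
                    tmin=-0.1, tmax=0.6, baseline=(-0.1, 0.0), preload=True)
efrp_acquaintance = epochs["acquaintance"].average()
efrp_learned = epochs["learned"].average()
efrp_learned.plot_joint()  # inspect fixation-locked components (e.g., late positivities)
```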


2021, Vol 0 (0)
Author(s): Jonas Andersson, Azra Habibovic, Daban Rizgary

Abstract To explore driver behavior in highly automated vehicles (HAVs), independent researchers have mainly conducted short experiments. This limits the ability to explore drivers’ behavioral changes over time, which is crucial when research aims to reveal human behavior beyond first-time use. The current paper shows the methodological importance of repeated testing in studies of experience and behavior with HAVs. The study combined quantitative and qualitative data to capture effects of repeated interaction between drivers and HAVs. Each driver (n = 8) participated in the experiment on two occasions (∼90 minutes each) with a one-week interval. On both occasions, the drivers traveled approximately 40 km on a rural road at the AstaZero proving grounds in Sweden and encountered various traffic situations. The participants could use automated driving (SAE level 4) or choose to drive manually. The data collected include gaze behavior and perceived safety, as well as interviews and questionnaires capturing general impressions, trust, and acceptance. The analysis shows clear habituation over time: the drivers went from being exhilarated on the first occasion to a more neutral behavior on the second occasion. Furthermore, there were smaller variations in drivers’ self-assessed perceived safety on the second occasion, and drivers were faster to engage in non-driving related activities and become relaxed (e.g., they spent more time glancing off the road and could focus more on non-driving related activities such as reading). These findings suggest that exposing drivers to HAVs on two (or more) successive occasions may provide more informative and realistic insights into driver behavior and experience than a single occasion. Repeating an experiment on several occasions is, of course, a balance between cost and added value, and future research should investigate in more detail which studies need to be repeated on several occasions and to what extent.
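As a concrete example of the kind of repeated-measures comparison such a design enables, the sketch below contrasts each driver’s off-road glance proportion on the two occasions with a paired test. The CSV layout and column names are assumptions for illustration, not the study’s analysis code.

```python
# Hypothetical repeated-measures comparison (assumed data layout, not the study's code).
import pandas as pd
from scipy import stats

# Expected columns: driver, occasion (1 or 2), offroad_glance_proportion
glances = pd.read_csv("glance_summary.csv")
wide = glances.pivot(index="driver", columns="occasion",
                     values="offroad_glance_proportion")

# Paired comparison: did off-road glance time change from occasion 1 to occasion 2?
# With n = 8 drivers, a non-parametric alternative (stats.wilcoxon) may be preferable.
t, p = stats.ttest_rel(wide[1], wide[2])
print(f"off-road glance proportion, occasion 1 vs. 2: t = {t:.2f}, p = {p:.3f}")
```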


Entropy, 2021, Vol 23 (2), pp. 167
Author(s): Patricia Wollstadt, Martina Hasenjäger, Christiane B. Wiebel-Herboth

Entropy-based measures are an important tool for studying human gaze behavior under various conditions. In particular, gaze transition entropy (GTE) is a popular method to quantify the predictability of a visual scanpath as the entropy of transitions between fixations, and it has been shown to correlate with changes in task demand or observer state. Measuring scanpath predictability is thus a promising approach to identifying viewers’ cognitive states in behavioral experiments or gaze-based applications. However, GTE does not account for temporal dependencies beyond two consecutive fixations and may thus underestimate the actual predictability of the current fixation given past gaze behavior. Instead, we propose to quantify scanpath predictability by estimating the active information storage (AIS), which can account for dependencies spanning multiple fixations. AIS is calculated as the mutual information between a process’s multivariate past state and its next value. It is thus able to measure how much information a sequence of past fixations provides about the next fixation, hence covering a longer temporal horizon. Applying the proposed approach, we were able to distinguish between induced observer states based on estimated AIS, providing first evidence that AIS may be used to infer user states and improve human–machine interaction.
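To make the two quantities concrete: GTE is the conditional entropy of the next fixation’s AOI given the current one, whereas AIS is the mutual information between the next fixation and its k preceding fixations. The sketch below shows naive plug-in estimates on an AOI-labelled toy scanpath; it is an illustration only, not the estimator used in the paper (short sequences typically require bias-corrected estimators).

```python
# Minimal plug-in sketch: GTE and AIS for a fixation sequence already mapped to AOI labels.
from collections import Counter
import math

def gaze_transition_entropy(aois):
    """Conditional entropy H(X_t+1 | X_t) over AOI-to-AOI transitions, in bits."""
    transitions = Counter(zip(aois, aois[1:]))
    sources = Counter(aois[:-1])
    n = len(aois) - 1
    gte = 0.0
    for (src, dst), c in transitions.items():
        p_joint = c / n            # p(src, dst)
        p_cond = c / sources[src]  # p(dst | src)
        gte -= p_joint * math.log2(p_cond)
    return gte

def active_information_storage(aois, k=2):
    """Plug-in AIS = I(past k fixations; next fixation), in bits."""
    def entropy(counts):
        n = sum(counts.values())
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    past = [tuple(aois[i:i + k]) for i in range(len(aois) - k)]
    nxt = aois[k:]
    # I(past; next) = H(next) + H(past) - H(past, next)
    return (entropy(Counter(nxt)) + entropy(Counter(past))
            - entropy(Counter(zip(past, nxt))))

scanpath = list("ABABABCABCABABAB")  # toy AOI-labelled fixation sequence
print(gaze_transition_entropy(scanpath), active_information_storage(scanpath, k=2))
```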


Sensors, 2021, Vol 21 (15), pp. 5178
Author(s): Sangbong Yoo, Seongmin Jeong, Seokyeon Kim, Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, the eye movement data and the saliency map are shown to analysts either as separate views or as merged views. However, analysts struggle when they need to keep all of the separate views in mind, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret the visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.
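One simple way to use saliency as a visual clue rather than as a separate view, in the spirit of this approach (though not the authors’ actual system), is to colour each fixation by the saliency value sampled underneath it and draw the scanpath over the stimulus. The arrays below are random placeholders standing in for a real stimulus, saliency map, and fixation record.

```python
# Minimal sketch: fixations coloured by the saliency feature sampled at each fixation.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
stimulus = rng.random((480, 640))        # placeholder stimulus image (grey values)
saliency = rng.random((480, 640))        # placeholder saliency map in [0, 1], same resolution
fix_x = rng.integers(0, 640, size=30)    # fixation x positions (px)
fix_y = rng.integers(0, 480, size=30)    # fixation y positions (px)
fix_dur = rng.uniform(80, 400, size=30)  # fixation durations (ms), mapped to marker size

# Sample the saliency feature under each fixation.
fix_saliency = saliency[fix_y, fix_x]

fig, ax = plt.subplots(figsize=(8, 6))
ax.imshow(stimulus, cmap="gray")
sc = ax.scatter(fix_x, fix_y, s=fix_dur, c=fix_saliency, cmap="viridis",
                edgecolors="white", alpha=0.9)
ax.plot(fix_x, fix_y, color="white", linewidth=0.5, alpha=0.6)  # scanpath
fig.colorbar(sc, ax=ax, label="saliency at fixation")
ax.set_axis_off()
plt.show()
```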


2021, Vol 15 (1)
Author(s): Yuko Ishizaki, Takahiro Higuchi, Yoshitoki Yanagimoto, Hodaka Kobayashi, Atsushi Noritake, ...

Abstract Background Children with autism spectrum disorder (ASD) may experience difficulty adapting to daily life in preschool or school settings and are likely to develop psychosomatic symptoms. For a better understanding of the difficulties experienced daily by preschool children and adolescents with ASD, this study investigated differences in eye gaze behavior in the classroom environment between children with ASD and children with typical development (TD). Methods The study evaluated 30 children with ASD and 49 children with TD. Participants were presented with images of a human face and a classroom scene, and eye tracking with an iView X system was used to measure and compare the time each group spent gazing at specific regions of the visual stimuli. Results Compared with preschool children with TD, preschool children with ASD spent less time gazing at the eyes of the human face and at the object the teacher pointed at in the classroom image. Preschool children with TD who had no classroom experience tended to look at the object the teacher pointed at in the classroom image. Conclusion Children with ASD did not look at the eyes in the facial image or at the object pointed at in the classroom image, which may indicate an inability to analyze situations, understand instructions in a classroom, or act appropriately in a group. This suggests that the gaze behavior of children with ASD contributes to social maladaptation and psychosomatic symptoms. A therapeutic approach that focuses on joint attention is desirable for improving the ability of children with ASD to adapt to their social environment.
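A minimal sketch of the kind of comparison reported here: summing each child’s dwell time on an area of interest (e.g., the eye region of the face image) and contrasting the ASD and TD groups. The file name and column names are assumptions; this is not the study’s analysis code.

```python
# Illustrative group comparison of AOI dwell times (assumed data layout, not the study's code).
import pandas as pd
from scipy import stats

# Expected columns: participant, group ("ASD"/"TD"), aoi, dwell_time_ms
fixations = pd.read_csv("fixations.csv")

# Total dwell time per child on the eye-region AOI.
eye_dwell = (fixations[fixations["aoi"] == "eyes"]
             .groupby(["participant", "group"], as_index=False)["dwell_time_ms"]
             .sum())

asd = eye_dwell.loc[eye_dwell["group"] == "ASD", "dwell_time_ms"]
td = eye_dwell.loc[eye_dwell["group"] == "TD", "dwell_time_ms"]

t, p = stats.ttest_ind(asd, td, equal_var=False)  # Welch's t-test
print(f"eye-region dwell time, ASD vs. TD: t = {t:.2f}, p = {p:.3f}")
```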

