Variation in attention at encoding: Insights from pupillometry and eye gaze fixations.

2020 ◽  
Vol 46 (12) ◽  
pp. 2277-2294 ◽  
Author(s):  
Ashley L. Miller ◽  
Nash Unsworth
Keyword(s):  
Eye Gaze


2021 ◽  
Vol 29 (10) ◽  
pp. 1441-1451 ◽  
Author(s):  
Melina Nicole Kyranides ◽  
Kostas A. Fanti ◽  
Maria Petridou ◽  
Eva R. Kimonis

Abstract Individuals with callous-unemotional (CU) traits show deficits in facial emotion recognition. According to preliminary research, this impairment may be due to attentional neglect of people's eyes when evaluating emotionally expressive faces. However, it is unknown whether this atypical processing pattern is unique to established variants of CU traits or modifiable with intervention. This study examined facial affect recognition and gaze patterns among individuals (N = 80; M age = 19.95, SD = 1.01 years; 50% female) with primary vs. secondary CU variants. These groups were identified based on repeated measurements of conduct problems, CU traits, and anxiety assessed in adolescence and adulthood. Accuracy and number of fixations on areas of interest (forehead, eyes, and mouth) while viewing six dynamic emotions were assessed. A visual probe was used to direct attention to various parts of the face. Individuals with primary and secondary CU traits were less accurate than controls in recognizing facial expressions across all emotions. Those identified in the low-anxious primary-CU group showed reduced overall fixations to fearful and painful facial expressions compared to those in the high-anxious secondary-CU group. This difference was not specific to a region of the face (i.e., eyes or mouth). Findings point to the importance of investigating both accuracy and eye gaze fixations, since individuals in the primary and secondary groups were only differentiated in the way they attended to specific facial expressions. These findings have implications for differentiated interventions focused on improving facial emotion recognition with regard to attending to and correctly identifying emotions.
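The fixation measure described above rests on mapping raw gaze coordinates onto areas of interest (AOIs). A minimal sketch of that step, with hypothetical AOI boxes and fixation coordinates (not the study's actual stimuli or data):

```python
# Illustrative sketch: counting fixations per area of interest (AOI).
# AOI boxes and fixation coordinates below are hypothetical examples.

def count_fixations_by_aoi(fixations, aois):
    """Count how many (x, y) fixations fall inside each named AOI box."""
    counts = {name: 0 for name in aois}
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                break  # assign each fixation to at most one AOI
    return counts

# Hypothetical AOIs on a normalized face image, as (x0, y0, x1, y1)
aois = {
    "forehead": (0.2, 0.0, 0.8, 0.25),
    "eyes":     (0.2, 0.25, 0.8, 0.45),
    "mouth":    (0.3, 0.6, 0.7, 0.8),
}
fixations = [(0.5, 0.3), (0.55, 0.35), (0.5, 0.7), (0.5, 0.1), (0.9, 0.9)]
print(count_fixations_by_aoi(fixations, aois))
# → {'forehead': 1, 'eyes': 2, 'mouth': 1}  (last fixation falls outside all AOIs)
```

Per-AOI counts of this kind are what allow the authors to test whether group differences are region-specific or, as reported, general across the face.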


2021 ◽  
Author(s):  
Daniel K Bjornn ◽  
Julie Van ◽  
Brock Kirwan

Pattern separation and pattern completion are generally studied in humans using mnemonic discrimination tasks such as the Mnemonic Similarity Task (MST), where participants identify similar lures and repeated items from a series of images. Failures to correctly discriminate lures are thought to reflect a failure of pattern separation and a propensity toward pattern completion. Recent research has challenged this perspective, suggesting that poor encoding rather than pattern completion accounts for the occurrence of false alarm responses to similar lures. In two experiments, participants completed a continuous recognition task version of the MST while eye movement (Experiments 1 and 2) and fMRI data (Experiment 2) were collected. While we replicated the result that fixation counts at study predicted accuracy on lure trials, we found that target-lure similarity was a much stronger predictor of accuracy on lure trials across both experiments. Lastly, we found that fMRI activation changes in the hippocampus were significantly correlated with the number of fixations at study for correct but not incorrect mnemonic discrimination judgments when controlling for target-lure similarity. Our findings indicate that while eye movements during encoding predict subsequent hippocampal activation changes, mnemonic discrimination performance is better described by pattern separation and pattern completion processes that are influenced by target-lure similarity than by poor encoding alone.
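The key comparison here, which predictor of lure accuracy is stronger, can be illustrated with a toy logistic regression on simulated trials. The data, effect sizes, and fitting procedure below are invented for illustration and are not the authors' analysis; standardizing both predictors makes the coefficient magnitudes directly comparable:

```python
# Toy sketch: is target-lure similarity a stronger predictor of lure
# accuracy than fixation count at study? Data are simulated, not the
# authors'; this only illustrates the modelling idea.
import math
import random

random.seed(0)

def simulate_trial():
    fixations = random.randint(1, 10)   # fixations at study
    similarity = random.random()        # 0 = dissimilar lure, 1 = very similar
    # Simulated ground truth: similarity hurts accuracy far more than
    # a low fixation count does.
    logit = 1.0 + 0.2 * fixations - 4.0 * similarity
    correct = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return fixations, similarity, correct

trials = [simulate_trial() for _ in range(2000)]

def zscores(xs):
    """Standardize so coefficient magnitudes are comparable across predictors."""
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / s for x in xs]

fix_z = zscores([t[0] for t in trials])
sim_z = zscores([t[1] for t in trials])
y = [t[2] for t in trials]

# Fit a logistic regression by plain gradient ascent on the log-likelihood.
b0 = b_fix = b_sim = 0.0
n = len(y)
for _ in range(500):
    g0 = g_fix = g_sim = 0.0
    for f, s, yi in zip(fix_z, sim_z, y):
        p = 1 / (1 + math.exp(-(b0 + b_fix * f + b_sim * s)))
        g0 += yi - p
        g_fix += (yi - p) * f
        g_sim += (yi - p) * s
    b0 += 0.1 * g0 / n
    b_fix += 0.1 * g_fix / n
    b_sim += 0.1 * g_sim / n

print(f"fixations coef: {b_fix:+.2f}, similarity coef: {b_sim:+.2f}")
```

With these simulated effects, the similarity coefficient dominates in magnitude, mirroring the paper's finding that similarity is the stronger predictor even though fixation count also contributes.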


2017 ◽  
Vol 2017 (2) ◽  
pp. 23-37 ◽  
Author(s):  
Yousra Javed ◽  
Mohamed Shehab

Abstract Habituation is a key factor behind the lack of attention towards permission authorization dialogs during third-party application installation. Various solutions have been proposed to draw users' attention to permissions. However, users continue to ignore these dialogs and authorize dangerous permissions, which leads to security and privacy breaches. We leverage eye-tracking to approach this problem, and propose a mechanism for enforcing user attention towards application permissions before users are able to authorize them. We deactivate the dialog's decision buttons initially, and use feedback from the eye-tracker to ensure that the user has looked at the permissions. After determining user attention, the buttons are activated. We implemented a prototype of our approach as a Chrome browser extension, and conducted a user study on Facebook's application authorization dialogs. Using participants' permission identification, eye-gaze fixations, and authorization decisions, we evaluated participants' attention towards permissions. The participants who used our approach on authorization dialogs were able to identify the permissions better than the rest of the participants, even after the habituation period. Their average number of eye-gaze fixations on the permission text was significantly higher than that of the other group's participants. However, the hypothesized increase from the control group to the treatment group in the rate at which participants denied a dangerous and unnecessary permission was not statistically significant.
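The gating mechanism the abstract describes, buttons disabled until the eye tracker confirms a fixation on every permission, reduces to a small state machine. The sketch below is an illustrative reconstruction in Python (the actual prototype was a Chrome extension); the permission names and region labels are hypothetical:

```python
# Minimal sketch (not the authors' extension code) of gaze-gated
# authorization: decision buttons stay disabled until the eye tracker
# reports a fixation on every permission item.

class GazeGatedDialog:
    def __init__(self, permissions):
        self.pending = set(permissions)   # permissions not yet fixated
        self.buttons_enabled = False

    def on_fixation(self, region):
        """Called by the eye-tracker callback with the fixated region's name."""
        self.pending.discard(region)      # non-permission regions are ignored
        if not self.pending:
            self.buttons_enabled = True   # attention confirmed: unlock decision

dialog = GazeGatedDialog(["email", "friend_list", "post_on_timeline"])
dialog.on_fixation("email")
dialog.on_fixation("background")   # not a permission region: no effect
print(dialog.buttons_enabled)      # → False: two permissions still unseen
dialog.on_fixation("friend_list")
dialog.on_fixation("post_on_timeline")
print(dialog.buttons_enabled)      # → True: every permission has been fixated
```

A real implementation would additionally require a minimum fixation duration per permission rather than a single gaze sample, but the enable-on-full-coverage logic is the same.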


2021 ◽  
Vol 5 (12) ◽  
pp. 77 ◽  
Author(s):  
Mirjam de Haas ◽  
Paul Vogt ◽  
Emiel Krahmer

In this paper, we examine to what degree 3- to 4-year-old children engage with a task and with a social robot during a second-language tutoring lesson. We specifically investigated whether children's task engagement and robot engagement were influenced by three different feedback types from the robot: adult-like feedback, peer-like feedback, and no feedback. Additionally, we investigated the relation between children's eye gaze fixations and their task engagement and robot engagement. Fifty-eight Dutch children participated in an English counting task with a social robot and physical blocks. We found that, overall, children in the three conditions showed similar task engagement and robot engagement; however, within each condition, they showed large individual differences. Additionally, regression analyses revealed that there is a relation between children's eye-gaze direction and engagement. Our findings showed that although eye gaze plays a significant role in measuring engagement and can be used to model children's task engagement and robot engagement, it does not capture the full construct; engagement comprises more than eye gaze alone.
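The regression analyses mentioned above, and the conclusion that gaze explains part but not all of engagement, can be illustrated with a toy simple linear regression. The data below are simulated (58 synthetic "children", matching only the sample size), not the study's measurements; the point is that a positive slope can coexist with an R² well below 1:

```python
# Toy illustration (simulated data, not the study's): regress task
# engagement on the proportion of gaze directed at the task. Gaze
# predicts engagement (positive slope) but leaves variance unexplained
# (R^2 < 1), consistent with engagement comprising more than eye gaze.
import random

random.seed(1)
gaze = [random.random() for _ in range(58)]   # 58 children, as in the study
# Engagement depends on gaze plus other, unmeasured factors (noise).
engagement = [0.4 + 0.5 * g + random.gauss(0, 0.15) for g in gaze]

n = len(gaze)
mx = sum(gaze) / n
my = sum(engagement) / n

# Ordinary least squares for a single predictor.
sxy = sum((x - mx) * (y - my) for x, y in zip(gaze, engagement))
sxx = sum((x - mx) ** 2 for x in gaze)
slope = sxy / sxx
intercept = my - slope * mx

# Coefficient of determination: share of variance explained by gaze.
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(gaze, engagement))
ss_tot = sum((y - my) ** 2 for y in engagement)
r2 = 1 - ss_res / ss_tot
print(f"slope = {slope:.2f}, R^2 = {r2:.2f}")
```

In this simulation the slope is reliably positive while R² sits well short of 1, the same qualitative pattern the authors report: gaze is a usable but incomplete proxy for engagement.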


2014 ◽  
Vol 23 (1) ◽  
pp. 42-54 ◽  
Author(s):  
Tanya Rose Curtis

As the field of telepractice grows, perceived barriers to service delivery must be anticipated and addressed in order to provide appropriate service delivery to individuals who will benefit from this model. When applying telepractice to the field of AAC, additional barriers are encountered when clients with complex communication needs are unable to speak, often present with severe quadriplegia and are unable to position themselves or access the computer independently, and/or may have cognitive impairments and limited computer experience. Some access methods, such as eye gaze, can also present technological challenges in the telepractice environment. These barriers can be overcome, and telepractice is not only practical and effective, but often a preferred means of service delivery for persons with complex communication needs.


2014 ◽  
Vol 23 (3) ◽  
pp. 132-139 ◽  
Author(s):  
Lauren Zubow ◽  
Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate, although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper will present results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as “yes” or “no” responses. This paper will also address the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of “yes” and “no” were presented to 5 listeners (mother, father, 1 unfamiliar, and 2 familiar clinicians), and the listeners were asked to rate the vocalizations as either “yes” or “no.” The ratings were compared to the original identification made by the child's mother during the face-to-face interaction from which the samples were drawn. Findings of this study suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either “yes” or “no” without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye gaze and vocalizations to ensure the child's intended choice is accurately understood.


2006 ◽  
Author(s):  
Christopher R. Jones ◽  
Russell H. Fazio ◽  
Michael Olson
