Competing Perceptual Salience in a Visual Word Recognition Task Differentially Affects Children With and Without Autism Spectrum Disorder

2020
Author(s):  
Courtney E. Venker ◽  
Janine Mathée ◽  
Dominik Neumann ◽  
Jan Edwards ◽  
Jenny Saffran ◽  
...  

Author(s):  
Virginia Carter Leno ◽  
Rachael Bedford ◽  
Susie Chandler ◽  
Pippa White ◽  
Isabel Yorke ◽  
...  

Abstract: Research suggests an increased prevalence of callous-unemotional (CU) traits in children with autism spectrum disorder (ASD), and an impairment in fear recognition similar to that reported in non-ASD populations. However, past work has used measures not specifically designed to assess CU traits and has not examined whether the decreased attention to the eyes reported in non-ASD populations is also present in individuals with ASD. The current paper uses a measure specifically designed to assess CU traits to estimate their prevalence in a large community-based ASD sample. Parents of 189 adolescents with ASD completed questionnaires assessing CU traits and emotional and behavioral problems. A subset of participants completed a novel emotion recognition task (n = 46). Accuracy, reaction time, total looking time, and the number of fixations to the eyes and mouth were measured. Twenty-two percent of youth with ASD scored above a cut-off expected to identify the top 6% of CU scores. CU traits were associated with longer reaction times to identify fear and with fewer fixations to the eyes relative to the mouth during the viewing of fearful faces. No associations were found with accuracy or total looking time. Results suggest that the mechanisms underpinning CU traits may be similar between ASD and non-ASD populations.



2008, Vol 19 (10), pp. 998-1006
Author(s):  
Janet Hui-wen Hsiao ◽  
Garrison Cottrell

It is well known that there exist preferred landing positions for eye fixations in visual word recognition. However, the existence of preferred landing positions in face recognition is less well established. It is also unknown how many fixations are required to recognize a face. To investigate these questions, we recorded eye movements during face recognition. During an otherwise standard face-recognition task, subjects were allowed a variable number of fixations before the stimulus was masked. We found that optimal recognition performance is achieved with two fixations; performance does not improve with additional fixations. The distribution of the first fixation is just to the left of the center of the nose, and that of the second fixation is around the center of the nose. Thus, these appear to be the preferred landing positions for face recognition. Furthermore, the fixations made during face learning differ in location from those made during face recognition and are also more variable in duration; this suggests that different strategies are used for face learning and face recognition.



2019, Vol 52, pp. 100858
Author(s):  
Yuzhu Ji ◽  
Jing Liu ◽  
Xiao-Qian Zhu ◽  
Jingjing Zhao ◽  
Jiuju Wang ◽  
...  


2017, Vol 76 (3), pp. 143-150
Author(s):  
Vera E. Golimbet ◽  
Zhanna V. Garakh ◽  
Yuliya Zaytseva ◽  
Margarita V. Alfimova ◽  
Tatyana V. Lezheiko ◽  
...  


Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that impairs the development of social interaction and communication abilities. Diagnosing autism is a challenging task for researchers and clinicians because diagnosis is based on abnormalities in brain function that may not manifest until the disorder is fully established. Facial expression analysis can be an effective solution for diagnosing autism at an early stage by exploiting a child's expressions through automated systems. This research work aims to recognize autism from facial expressions using a deep learning model. The proposed model was implemented as a Convolutional Neural Network (CNN) built on the MobileNet architecture. Transfer learning was also applied in this work to amplify the performance of the model. The experimental results showed that the MobileNet model with a transfer learning approach provided satisfactory results on the recognition task, achieving a highest validation accuracy of 89% and a test accuracy of 87%. The F1-score and precision also supported the reliability of the approach, with both metrics reaching a highest score of 87%.
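The MobileNet transfer-learning setup described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the input size, frozen base, classifier head, and training settings are assumptions chosen to show the standard pattern (pretrained MobileNet features plus a new binary classification head).

```python
# Sketch of MobileNet-based transfer learning for binary facial-image
# classification. All hyperparameters here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3), weights="imagenet"):
    # Load MobileNet pretrained on ImageNet, dropping its original
    # 1000-class classifier head (include_top=False).
    base = tf.keras.applications.MobileNet(
        input_shape=input_shape, include_top=False, weights=weights)
    base.trainable = False  # freeze pretrained features: transfer learning

    # Attach a small new head for the binary decision (autistic vs. not).
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Freezing the base and training only the head is the usual first stage of transfer learning; a second fine-tuning stage (unfreezing the top layers at a low learning rate) is a common follow-up.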



Author(s):  
Tai-Ling Liu ◽  
Peng-Wei Wang ◽  
Yi-Hsin Connie Yang ◽  
Gary Chon-Wen Shyi ◽  
Cheng-Fang Yen

Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impaired social interaction and communication, and by restricted and repetitive behavior. Few studies have focused on the effect of facial emotion recognition on bullying involvement among individuals with ASD. The aim of this study was to examine the association between facial emotion recognition and different types of bullying involvement in adolescents with high-functioning ASD. We recruited 138 adolescents aged 11 to 18 years with high-functioning ASD. The adolescents' experiences of bullying involvement were measured using the Chinese version of the School Bullying Experience Questionnaire. Their facial emotion recognition was measured using the Facial Emotion Recognition Task (which covers six emotional expressions at four degrees of emotional intensity). Logistic regression analysis was used to examine the association between facial emotion recognition and different types of bullying involvement. After controlling for the effects of age, gender, depression, anxiety, inattention, hyperactivity/impulsivity, and opposition, we observed that bullying perpetrators performed significantly better at rating the intensity of emotion in the Facial Emotion Recognition Task, whereas bullying victims performed significantly worse at ranking the intensity of facial emotion. The results of this study support distinct deficits of facial emotion recognition across types of bullying involvement among adolescents with high-functioning ASD. These different directions of association between bullying involvement and facial emotion recognition must be considered when developing prevention and intervention programs.
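The analysis strategy above (logistic regression of a binary bullying-involvement outcome on an emotion-recognition score, adjusted for covariates) can be sketched as below. The data are synthetic and the variable layout is an assumption for illustration; this is not the study's dataset or code.

```python
# Sketch of covariate-adjusted logistic regression, as in the described
# analysis. All data here are synthetic; column meanings are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 138  # sample size reported in the abstract

# Column 0: emotion-recognition score; columns 1-7: covariates
# (age, gender, depression, anxiety, inattention,
#  hyperactivity/impulsivity, opposition).
X = rng.normal(size=(n, 8))
# Synthetic binary outcome (e.g., victim vs. non-victim), driven here
# by the emotion-recognition score plus noise.
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
# The coefficient on column 0 estimates the association between
# emotion recognition and bullying involvement, adjusted for covariates.
print(model.coef_[0][0])
```

In practice one would inspect the coefficient's sign, odds ratio, and p-value (e.g., via statsmodels) rather than just the point estimate.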



2020, Vol 61 (3), pp. 348-360
Author(s):  
Miguel Lázaro ◽  
Elisa Pérez ◽  
Rosario Martínez


SLEEP, 2020
Author(s):  
Eva-Maria Kurz ◽  
Annette Conzelmann ◽  
Gottfried Maria Barth ◽  
Tobias J Renner ◽  
Katharina Zinke ◽  
...  

Abstract: Sleep is assumed to support memory through an active systems consolidation process that not only strengthens newly encoded representations but also facilitates the formation of more abstract gist memories. Studies in humans and rodents indicate a key role for the precise temporal coupling of sleep slow oscillations (SOs) and spindles in this process. The present study aimed to bolster these findings in typically developing (TD) children, and to dissect particularities in SO-spindle coupling underlying the signs of enhanced gist memory formation during sleep found in a foregoing study of children with autism spectrum disorder (ASD) without intellectual impairment. Sleep data from 19 boys with ASD and 20 TD boys (9-12 years) were analyzed. Children performed a picture-recognition task and the Deese-Roediger-McDermott (DRM) task before nocturnal sleep (encoding) and the next morning (retrieval). Sleep-dependent benefits for visual-recognition memory were comparable between groups but were greater for gist abstraction (recall of DRM critical lure words) in ASD than in TD children. Both groups showed closely comparable SO-spindle coupling, with fast spindle activity nesting in SO upstates, suggesting that a key mechanism of memory processing during sleep is already fully functioning in childhood. Picture recognition at retrieval after sleep was positively correlated with frontocortical SO-fast-spindle coupling in TD children, and less so in ASD children. Critical lure recall did not correlate with SO-spindle coupling in TD children but showed a negative correlation (r = -.64, p = .003) with parietal SO-fast-spindle coupling in ASD children, suggesting that other mechanisms specifically convey gist abstraction and may even compete with SO-spindle coupling.



2014, Vol 17
Author(s):  
María Macaya ◽  
Manuel Perea

Abstract: The study of the effects of typographical factors on lexical access has been rather neglected in the literature on visual-word recognition. Indeed, current computational models of visual-word recognition employ an unrefined letter-feature level in their coding schemes. In a letter recognition experiment by Pelli, Burns, Farell, and Moore-Page (2006), letters in Bookman boldface produced greater efficiency (i.e., a higher ratio of thresholds for an ideal observer versus a human observer) than letters in Bookman regular under visual noise. Here we examined whether the effect of bold emphasis generalizes to a common visual-word recognition task (lexical decision: "is the item a word?") under standard viewing conditions. Each stimulus was presented either with or without bold emphasis (e.g., actor vs. actor). To help determine the locus of the effect of bold emphasis, word frequency (low vs. high) was also manipulated. Results revealed that responses to words in boldface were faster than responses to words without emphasis; this advantage was restricted to low-frequency words. Thus, typographical features play a non-negligible role during visual-word recognition, and the letter-feature level of current models of visual-word recognition should accordingly be amended.


