Preferential use of local visual information in individuals with many autistic traits

2018 ◽  
Vol 18 (10) ◽  
pp. 406
Author(s):  
Arjen Alink ◽  
Ian Charest

2020 ◽
Vol 10 (7) ◽  
pp. 418 ◽  
Author(s):  
Valentina Bianco ◽  
Alessandra Finisguerra ◽  
Sonia Betti ◽  
Giulia D’Argenio ◽  
Cosimo Urgesi

Autism is associated with difficulties in making predictions based on contextual cues. Here, we investigated whether the distribution of autistic traits in the general population, as measured with the Autism-Spectrum Quotient (AQ), is associated with alterations in context-based predictions of social and non-social stimuli. Seventy-eight healthy participants performed a social task, requiring the prediction of the unfolding of an action as interpersonal (e.g., to give) or individual (e.g., to eat), and a non-social task, requiring the prediction of the appearance of a moving shape as a short (e.g., square) or a long (e.g., rectangle) figure. Both tasks consisted of (i) a familiarization phase, in which the association between each stimulus type and a contextual cue was manipulated with different probabilities of co-occurrence, and (ii) a testing phase, in which visual information was impoverished by early occlusion of the video display, thus forcing participants to rely on previously learned context-based associations. Findings showed that the prediction of both social and non-social stimuli was facilitated when the stimuli were embedded in high-probability contexts. However, only the contextual modulation of non-social predictions was reduced in individuals with lower ‘Attention switching’ abilities. The results provide evidence for an association between weaker context-based expectations of non-social events and higher autistic traits.
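
To make the probabilistic cue-stimulus manipulation concrete, below is a minimal Python sketch. It is not the authors' code: the cue names and the 80/20 co-occurrence split are assumptions for illustration, since the abstract specifies only that contexts co-occurred with stimulus types at different probabilities.

```python
import random

# A minimal sketch, not the authors' code: the cue names and the 80/20
# co-occurrence split below are assumptions for illustration only.
random.seed(0)

CUE_TO_P_INTERPERSONAL = {
    "context_A": 0.8,  # high-probability pairing with interpersonal actions
    "context_B": 0.2,  # high-probability pairing with individual actions
}

def sample_trial(cue):
    """Draw the stimulus type shown with a given contextual cue."""
    p = CUE_TO_P_INTERPERSONAL[cue]
    return "interpersonal" if random.random() < p else "individual"

# An observer who learns the associations and always predicts the majority
# outcome for each cue will, on occluded test trials, be correct at a rate
# that tracks the cue-stimulus co-occurrence probability.
outcomes = [sample_trial("context_A") for _ in range(10_000)]
accuracy = sum(o == "interpersonal" for o in outcomes) / len(outcomes)
print(f"Accuracy when always predicting the high-probability action: {accuracy:.2f}")
```

On occluded test trials, an observer relying on these learned associations is facilitated exactly in proportion to the co-occurrence probability, which is the facilitation effect the study measures.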


2021 ◽  
pp. 1-17
Author(s):  
Yuta Ujiie ◽  
Kohske Takahashi

While visual information from facial speech modulates auditory speech perception, it is less influential on audiovisual speech perception among autistic individuals than among typically developing individuals. In this study, we investigated the relationship between autistic traits (Autism-Spectrum Quotient; AQ) and the influence of visual speech on the recognition of Rubin’s vase-type speech stimuli with degraded facial speech information. Participants were 31 university students (13 males and 18 females; mean age: 19.2 years, SD: 1.13) who reported normal (or corrected-to-normal) hearing and vision. All participants completed three speech recognition tasks (visual, auditory, and audiovisual stimuli) and the AQ–Japanese version. The results showed that speech recognition accuracy for visual (i.e., lip-reading) and auditory stimuli was not significantly related to participants’ AQ. In contrast, audiovisual speech perception was less influenced by facial speech information among individuals with high, rather than low, autistic traits. This weaker influence of visual information on audiovisual speech perception in autism spectrum disorder (ASD) was robust regardless of the clarity of the visual information, suggesting a difficulty in the process of audiovisual integration rather than in the visual processing of facial speech.
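
The core analysis here is a correlation between a trait score and an index of how much visual speech shifts audiovisual perception. The sketch below shows the shape of such an analysis; it is not the authors' code, all data are synthetic, the AQ range and variable names are assumptions, and the negative slope is built in purely for demonstration.

```python
import numpy as np
from scipy import stats

# Synthetic illustration only: a "visual influence" index (e.g., the drop in
# auditory-based responses when facial speech is added) correlated with AQ.
rng = np.random.default_rng(42)
n = 31                                       # sample size from the abstract
aq = rng.integers(5, 36, size=n)             # plausible AQ score range (assumption)
noise = rng.normal(0.0, 0.08, size=n)
visual_influence = 0.6 - 0.01 * aq + noise   # negative slope built in for the demo

r, p = stats.pearsonr(aq, visual_influence)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # higher AQ -> weaker visual influence
```

Comparing such correlations across clear and degraded visual conditions is what lets the authors argue the effect sits in audiovisual integration rather than in visual processing of the face.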


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment, we first explored the processing of unimodally presented facial expressions. Furthermore, auditory (prosodic and/or lexical-semantic) information was presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship between P200 and N300 amplitude and the number of information channels present was found. The multimodal-channel condition elicited the smallest amplitudes in the P200 and N300 components, followed by larger amplitudes in each component for the bimodal-channel condition; the largest amplitudes were observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.
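
As a rough illustration of how a P200/N300 amplitude comparison works, the sketch below averages voltage in two latency windows per condition. The window bounds, sampling rate, and data are assumptions for demonstration, not the authors' recording parameters or results.

```python
import numpy as np

# Illustrative sketch: mean ERP amplitude in assumed P200 (150-250 ms) and
# N300 (250-350 ms) windows for each channel condition. Data are synthetic.
SFREQ = 500                                   # samples per second (assumption)
TIMES = np.arange(-0.2, 0.8, 1 / SFREQ)       # epoch from -200 ms to 800 ms

def mean_amplitude(epochs, t_min, t_max):
    """Average voltage over trials and over a latency window (epochs: trials x samples)."""
    mask = (TIMES >= t_min) & (TIMES <= t_max)
    return epochs[:, mask].mean()

rng = np.random.default_rng(0)
conditions = {name: rng.normal(0.0, 1.0, size=(40, TIMES.size))   # synthetic epochs
              for name in ("unimodal", "bimodal", "multimodal")}

for name, epochs in conditions.items():
    p200 = mean_amplitude(epochs, 0.15, 0.25)
    n300 = mean_amplitude(epochs, 0.25, 0.35)
    print(f"{name}: P200 = {p200:.2f} µV, N300 = {n300:.2f} µV")
```

The reported pattern would then show up as a monotonic decrease in these window means from the unimodal to the multimodal condition.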


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than their ability to recognize visual information.
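
A standard way to test an inverted-U that appears only under multitasking is a quadratic content-level term interacting with the multitasking factor. The sketch below shows the shape of that analysis; the data are synthetic, and the variable names and effect sizes are assumptions, not the study's design parameters or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic 2 (multitasking) x 3 (sexual-content level) between-subjects data,
# with an inverted-U (peak at the medium level) built in only for multitaskers.
rng = np.random.default_rng(1)
n_per_cell = 30
rows = []
for multitask in (0, 1):
    for level in (1, 2, 3):                   # low, medium, high sexual content
        mean = 0.70 - 0.10 * multitask + (0.08 * multitask if level == 2 else 0.0)
        rows += [{"multitask": multitask, "level": level,
                  "recognition": rng.normal(mean, 0.05)} for _ in range(n_per_cell)]
df = pd.DataFrame(rows)

# The quadratic term captures the inverted-U; its interaction with multitasking
# tests whether the curvature is confined to the multitasking condition.
model = smf.ols("recognition ~ multitask * (level + I(level**2))", data=df).fit()
print(model.summary().tables[1])
```

A significant multitask-by-quadratic interaction is the statistical signature of the finding described above: curvature in recognition across content levels only when participants multitask.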

