EPILEPSY WITH AUDITORY FEATURES: CONTRIBUTION OF KNOWN GENES IN 112 PATIENTS

Seizure ◽  
2020 ◽  
Author(s):  
F. Bisulli ◽  
C. Rinaldi ◽  
T. Pippucci ◽  
R. Minardi ◽  
S. Baldassari ◽  
...  

2019 ◽  
Vol 40 (5) ◽  
pp. 1053-1065 ◽  
Author(s):  
Mathieu Bourguignon ◽  
Martijn Baart ◽  
Efthymia C. Kapnoula ◽  
Nicola Molinaro

1974 ◽  
Vol 2 (4) ◽  
pp. 275-277 ◽  
Author(s):  
Leonard A. Eiserer ◽  
Howard S. Hoffman

2012 ◽  
Vol 25 (0) ◽  
pp. 44
Author(s):  
Valeria Occelli ◽  
Gianluca Esposito ◽  
Paola Venuti ◽  
Peter Walker ◽  
Massimiliano Zampini

The label ‘crossmodal correspondences’ has been used to define the nonarbitrary associations that appear to exist between different basic physical stimulus attributes in different sensory modalities. For instance, it has been consistently shown in the neurotypical population that higher-pitched sounds are more frequently matched with visual patterns that are brighter, smaller, and sharper than those associated with lower-pitched sounds. Some evidence suggests that patients with ASDs tend not to show this crossmodal preferential association pattern (e.g., curvilinear shapes and labial/lingual consonants vs. rectilinear shapes and plosive consonants). In the present study, we compared the performance of children with ASDs (6–15 years) and matched neurotypical controls in a non-verbal crossmodal correspondence task. The participants were asked to indicate which of two bouncing visual patterns was making a centrally located sound. In intermixed trials, the visual patterns varied in size, surface brightness, or shape, whereas the sound varied in pitch. The results showed that, whereas the neurotypical controls reliably matched the higher-pitched sound to the smaller and brighter visual pattern, the performance of participants with ASDs was at chance level. In the condition where the visual patterns differed in shape, no inter-group difference was observed. The children’s matching performance cannot be attributed to intensity matching or to difficulties in understanding the instructions, both of which were controlled for. These data suggest that the tendency to associate congruent visual and auditory features varies as a function of the presence of ASDs, possibly pointing to a poorer capability to integrate auditory and visual inputs in this population.


2019 ◽  
Vol 39 (17) ◽  
pp. 3292-3300 ◽  
Author(s):  
Emily J. Allen ◽  
Philip C. Burton ◽  
Juraj Mesik ◽  
Cheryl A. Olman ◽  
Andrew J. Oxenham

2012 ◽  
Vol 25 (0) ◽  
pp. 158
Author(s):  
Pawel J. Matusz ◽  
Martin Eimer

We investigated whether top-down attentional control settings can specify task-relevant features in different sensory modalities (vision and audition). Two audiovisual search tasks were used in which a spatially uninformative visual singleton cue preceded a target search array. In different blocks, participants searched for a visual target (defined by colour or shape in Experiments 1 and 2, respectively) or for a target defined by a combination of visual and auditory features (e.g., a red target accompanied by a high-pitch tone). Spatial cueing effects indicative of attentional capture by target-matching visual singleton cues in the unimodal visual search task were reduced or completely eliminated when targets were audiovisually defined. The N2pc component (an index of attentional target selection in vision) triggered by these cues was reduced and delayed during search for audiovisual as compared to unimodal visual targets. These results provide novel evidence that the top-down control settings which guide attentional selectivity can include perceptual features from different sensory modalities.

