Hemispheric interactions during the pre-attentive change detection of different acoustic features in speech and non-speech sounds

Author(s): Näätänen R.

NeuroImage, 2019, Vol 188, pp. 208-216
Author(s): Jari L.O. Kurkela, Jarmo A. Hämäläinen, Paavo H.T. Leppänen, Hua Shu, Piia Astikainen

2001, Vol 12 (3), pp. 459-466
Author(s): Maria Jaramillo, Titta Ilvonen, Teija Kujala, Paavo Alku, Mari Tervaniemi, ...

2019, Vol 287, pp. 1-9
Author(s): Derek J. Fisher, Erica D. Rudolph, Emma M.L. Ells, Verner J. Knott, Alain Labelle, ...

eLife, 2016, Vol 5
Author(s): Connie Cheung, Liberty S Hamilton, Keith Johnson, Edward F Chang

In humans, listening to speech evokes neural responses in the motor cortex. This has been controversially interpreted as evidence that speech sounds are processed as articulatory gestures. However, it is unclear what information such neural activity actually encodes. We used high-density direct human cortical recordings while participants spoke and listened to speech sounds. Motor cortex neural patterns during listening were substantially different from those during articulation of the same sounds. During listening, we observed neural activity in the superior and inferior regions of ventral motor cortex. During speaking, responses were distributed throughout the somatotopic representations of speech articulators in motor cortex. The structure of responses in motor cortex during listening was organized along acoustic features, as in auditory cortex, rather than along articulatory features as during speaking. Motor cortex thus does not contain articulatory representations of perceived actions in speech; rather, it represents auditory vocal information.


2016, Vol 2 (3), pp. 123-137
Author(s): Tatiana V. Shuiskaya, Svetlana V. Androsova

2018
Author(s): A. Lipponen, J.L.O. Kurkela, I. Kyläheiko, S. Hölttä, T. Ruusuvirta, ...

The electrophysiological response termed mismatch negativity (MMN) indexes auditory change detection in humans. An analogous response, called the mismatch response (MMR), is also elicited in animals. The MMR has been widely utilized to investigate change detection of human speech sounds in rats and guinea pigs, but not in mice. Since transgenic mouse models, for example, provide important advantages for further studies, we studied the processing of speech sounds in anesthetized mice. Auditory evoked potentials were recorded from the dura above the auditory cortex in response to changes in the duration of the human speech sound /a/. In an oddball stimulus condition, the MMR was elicited at 53-259 ms latency in response to the changes. The MMR was found for the large change in duration (from 200 ms to 110 ms) but not for smaller changes (from 200 ms to 120-180 ms). The results suggest that mice can represent human speech sounds in order to detect changes in their duration. These findings can be utilized in future investigations applying mouse models to speech perception.
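The oddball condition described above (a frequent 200 ms standard /a/ with rare shorter deviants) can be sketched as a stimulus-sequence generator. This is a minimal illustration, not the authors' protocol: the deviant probability of 10% and the minimum of two standards between deviants are assumptions, as the abstract does not specify them.

```python
import random

def oddball_sequence(n_trials, standard_ms=200, deviant_ms=110,
                     deviant_prob=0.1, min_standards_between=2, seed=0):
    """Generate a duration-oddball stimulus sequence.

    Returns a list of stimulus durations (ms): mostly standards,
    with occasional deviants and never two deviants in a row.
    """
    rng = random.Random(seed)
    seq = []
    since_deviant = min_standards_between  # allow a deviant early on
    for _ in range(n_trials):
        if since_deviant >= min_standards_between and rng.random() < deviant_prob:
            seq.append(deviant_ms)
            since_deviant = 0
        else:
            seq.append(standard_ms)
            since_deviant += 1
    return seq

seq = oddball_sequence(500)
```

The refractory rule (standards forced between deviants) is a common design choice in oddball experiments, as it keeps deviants perceptually rare.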


2011, Vol 23 (12), pp. 3874-3887
Author(s): Julie Chobert, Céline Marie, Clément François, Daniele Schön, Mireille Besson

The aim of this study was to examine the influence of musical expertise in 9-year-old children on the passive (as reflected by the MMN) and active (as reflected by discrimination accuracy) processing of speech sounds. Musician and nonmusician children were presented with a sequence of syllables that included standards and deviants in vowel frequency, vowel duration, and voice onset time (VOT). Both the passive and the active processing of duration and VOT deviants were enhanced in musician compared with nonmusician children. Moreover, although no effect was found on the passive processing of frequency, active frequency discrimination was enhanced in musician children. These findings are discussed in terms of the common processing of acoustic features in music and speech, and of positive transfer of training from music to the more abstract phonological representations of speech units (syllables).
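The passive MMN measure used in studies like those above is conventionally computed as a difference wave: the averaged deviant response minus the averaged standard response. The sketch below illustrates this on synthetic data; the sampling rate, epoch window, trial counts, and the Gaussian-shaped deflection standing in for the MMN are all illustrative assumptions, not values from any of the listed studies.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                           # Hz, assumed sampling rate
t = np.arange(-0.1, 0.5, 1 / fs)   # epoch: -100 to 500 ms around stimulus onset

def make_trials(n, mmn_amp=0.0):
    """Synthetic single-trial epochs (trials x samples): noise plus,
    for deviants, a negative deflection around 200 ms (the MMN window)."""
    noise = rng.normal(0.0, 1.0, size=(n, t.size))
    mmn = -mmn_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return noise + mmn

standard = make_trials(400)              # frequent standards
deviant = make_trials(50, mmn_amp=2.0)   # rare deviants carry the MMN

# Average within each condition, then subtract: deviant ERP - standard ERP.
difference = deviant.mean(axis=0) - standard.mean(axis=0)

# Peak (most negative) amplitude within a 100-300 ms analysis window.
win = (t >= 0.1) & (t <= 0.3)
mmn_peak = difference[win].min()
```

Averaging many trials before subtracting is what makes the small MMN deflection visible above the single-trial noise.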

