Robust speech processing in human auditory cortex

2018 ◽ Vol 143 (3) ◽ pp. 1744-1744
Author(s): Nima Mesgarani
2016
Author(s): Liberty S. Hamilton, Erik Edwards, Edward F. Chang

Abstract: To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals, including phonetic and prosodic cues. Equally important is the detection of acoustic cues that give structure and context to the information we hear, such as sentence boundaries. How the brain organizes this information processing is unknown. Here, using data-driven computational methods on an extensive set of high-density intracranial recordings, we reveal a large-scale partitioning of the entire human speech cortex into two spatially distinct regions that detect important cues for parsing natural speech. These caudal (Zone 1) and rostral (Zone 2) regions work in parallel to detect onsets and prosodic information, respectively, within naturally spoken sentences. In contrast, local processing within each region supports phonetic feature encoding. These findings demonstrate a fundamental organizational property of the human auditory cortex that has previously gone unrecognized.
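The abstract above refers to data-driven computational methods applied to high-density intracranial recordings to partition the speech cortex into two zones. As a rough, hypothetical illustration of that style of analysis (not the authors' actual pipeline), the sketch below clusters synthetic electrode response profiles with non-negative matrix factorization; all array shapes, values, and variable names are invented for the example.

```python
# Hypothetical sketch: partitioning electrodes into response "zones" with NMF.
# Not the authors' pipeline; shapes and names are illustrative only.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic stand-in for evoked responses: electrodes x time bins, averaged
# over sentences (real data would come from intracranial recordings).
n_electrodes, n_timebins = 256, 150
responses = rng.random((n_electrodes, n_timebins))  # must be non-negative for NMF

# Factorize into two components, loosely mirroring the two zones in the abstract:
# W holds each electrode's weight on each component, H the component time courses.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(responses)   # shape (n_electrodes, 2)
H = model.components_                # shape (2, n_timebins)

# Assign every electrode to its dominant component ("zone").
zone = W.argmax(axis=1)
print("electrodes per zone:", np.bincount(zone))
```

With real recordings, the spatial layout of the two electrode groups, rather than synthetic cluster counts, would be what reveals a caudal/rostral partition.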


2003 ◽ Vol 18 (2) ◽ pp. 432-440
Author(s): Takako Fujioka, Bernhard Ross, Hidehiko Okamoto, Yasuyuki Takeshima, Ryusuke Kakigi, et al.

2015 ◽ Vol 28 (3) ◽ pp. 160-180
Author(s): Oren Poliva, Patricia E.G. Bestelmeyer, Michelle Hall, Janet H. Bultitude, Kristin Koller, et al.

2021 ◽ Vol 11 (1)
Author(s): Taishi Hosaka, Marino Kimura, Yuko Yotsumoto

Abstract: We have a keen sensitivity when it comes to the perception of our own voices. We can detect not only the differences between ourselves and others, but also slight modifications of our own voices. Here, we examined the neural correlates underlying such sensitive perception of one’s own voice. In the experiments, we modified the subjects’ own voices using five types of filters. The subjects rated the similarity of the presented voices to their own. We compared BOLD (Blood Oxygen Level Dependent) signals between the voices that subjects rated as least similar to their own voice and those they rated as most similar. The contrast revealed that the bilateral superior temporal gyrus showed greater activation while subjects listened to the voices least similar to their own and weaker activation while they listened to the voices most similar to their own. Our results suggest that the superior temporal gyrus is involved in neural sharpening for one’s own voice. The weaker activation evoked by voices similar to the subject’s own indicates that these areas respond not only to the differences between self and others, but also to the finer details of one’s own voice.
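The abstract above contrasts BOLD responses between the voices rated least and most similar to a subject’s own. As a hedged illustration of that kind of two-condition comparison (not the authors’ analysis; the ROI values, subject count, and simple paired t-test below are assumptions), a minimal sketch:

```python
# Hypothetical sketch of a "least-similar minus most-similar" contrast:
# a paired comparison of per-subject mean responses in a region of interest.
# Names, values, and the plain paired t-test are illustrative only.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_subjects = 20

# Stand-ins for mean BOLD signal in a superior-temporal ROI, one value per
# subject and condition (real values would come from fitted condition estimates).
bold_least_similar = rng.normal(loc=0.6, scale=0.2, size=n_subjects)
bold_most_similar = rng.normal(loc=0.4, scale=0.2, size=n_subjects)

# Contrast: greater activation for voices rated least similar to one's own.
contrast = bold_least_similar - bold_most_similar
t_stat, p_val = ttest_rel(bold_least_similar, bold_most_similar)
print(f"mean contrast = {contrast.mean():.3f}, t = {t_stat:.2f}, p = {p_val:.4f}")
```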

