Phonemic representation in human auditory cortex examined via intracranial recordings
2012 ◽ Vol 85 (3) ◽ pp. 340-341
Author(s): M. Steinschneider

2016
Author(s): Liberty S. Hamilton ◽ Erik Edwards ◽ Edward F. Chang

Abstract: To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals, including phonetic and prosodic cues. Equally important is the detection of acoustic cues that give structure and context to the information we hear, such as sentence boundaries. How the brain organizes this information processing is unknown. Here, using data-driven computational methods on an extensive set of high-density intracranial recordings, we reveal a large-scale partitioning of the entire human speech cortex into two spatially distinct regions that detect important cues for parsing natural speech. These caudal (Zone 1) and rostral (Zone 2) regions work in parallel to detect onsets and prosodic information, respectively, within naturally spoken sentences. In contrast, local processing within each region supports phonetic feature encoding. These findings demonstrate a previously unrecognized, fundamental organizational property of the human auditory cortex.
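The abstract describes the partitioning only at a high level ("data-driven computational methods"). As a hedged illustration of that general approach, and not the authors' actual pipeline, the Python sketch below soft-clusters simulated electrode response profiles with non-negative matrix factorization and labels each electrode by its dominant component. All names, data shapes, and the simulated responses are assumptions introduced for illustration.

```python
# Illustrative only: unsupervised partitioning of electrode response
# profiles, loosely in the spirit of the data-driven methods described
# above. Data are simulated; shapes and names are hypothetical.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_electrodes, n_timepoints = 120, 500
t = np.arange(n_timepoints)

# Two synthetic response motifs: a transient "onset" profile and a
# slower sustained profile (stand-ins for two putative zones).
onset = np.exp(-t / 30.0)
sustained = 1.0 - np.exp(-t / 100.0)

# Each simulated electrode mixes the two motifs plus noise, rectified
# so the matrix stays non-negative (required by NMF).
weights = rng.dirichlet([0.5, 0.5], size=n_electrodes)
responses = weights @ np.vstack([onset, sustained])
responses = np.clip(responses + 0.05 * rng.standard_normal(responses.shape), 0, None)

# Factorize responses (electrodes x time) into 2 components; the
# electrode loadings W give a soft clustering into two groups.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(responses)   # (n_electrodes, 2) component loadings
zone = W.argmax(axis=1)              # hard zone label per electrode

print("electrodes per putative zone:", np.bincount(zone))
```

The point of the soft clustering is that every electrode receives a loading on each component, so a hard "zone" label is just the argmax, while the loadings themselves can still express mixed response profiles.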


Author(s): Sam V. Norman-Haignere ◽ Laura K. Long ◽ Orrin Devinsky ◽ Werner Doyle ◽ Ifeoma Irobunda ◽ ...

Abstract: To derive meaning from sound, the brain must integrate information across tens (e.g. phonemes) to hundreds (e.g. words) of milliseconds, but the neural computations that enable multiscale integration remain unclear. Prior evidence suggests that human auditory cortex analyzes sound using both generic acoustic features (e.g. spectrotemporal modulation) and category-specific computations, but how these putatively distinct computations integrate temporal information is unknown. To answer this question, we developed a novel method to estimate neural integration periods and applied it to intracranial recordings from human epilepsy patients. We show that integration periods increase three-fold as one ascends the auditory cortical hierarchy. Moreover, we find that electrodes with short integration periods (~50-150 ms) respond selectively to spectrotemporal modulations, while electrodes with long integration periods (~200-300 ms) show prominent selectivity for sound categories such as speech and music. These findings reveal how multiscale temporal analysis organizes hierarchical computation in human auditory cortex.
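The "novel method" for estimating neural integration periods is not detailed in the abstract, and the sketch below does not reproduce it. Instead it swaps in a much cruder, conventional stand-in: fitting an exponential to the autocorrelation decay of a simulated response to summarize its timescale. Every name, the sampling rate, and the simulated signal are assumptions for illustration only.

```python
# Crude illustration of summarizing a response timescale; NOT the
# integration-period method the abstract refers to. A simulated
# response is built with a known smoothing window, and an exponential
# fit to its autocorrelation gives a rough, model-dependent timescale
# (it will not exactly recover the window length).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
fs = 1000                 # sampling rate in Hz (assumed)
true_window_ms = 150      # smoothing window of the simulated response

# White noise convolved with a boxcar ~ a response that integrates
# over roughly true_window_ms of stimulus history.
noise = rng.standard_normal(60 * fs)
kernel = np.ones(int(true_window_ms * fs / 1000))
response = np.convolve(noise, kernel / kernel.size, mode="valid")
response = response - response.mean()

# Autocorrelation at positive lags up to 500 ms.
max_lag = int(0.5 * fs)
acf = np.array([np.corrcoef(response[:-lag], response[lag:])[0, 1]
                for lag in range(1, max_lag)])
lags_ms = np.arange(1, max_lag) / fs * 1000

# Fit an exponential decay; tau is a rough integration timescale.
(tau,), _ = curve_fit(lambda l, tau: np.exp(-l / tau), lags_ms, acf, p0=[50.0])
print(f"estimated timescale ~{tau:.0f} ms (simulated window {true_window_ms} ms)")
```

Under this kind of summary, the short-integration electrodes (~50-150 ms) and long-integration electrodes (~200-300 ms) reported above would correspond to fast versus slow autocorrelation decay, though the study's own estimator is a distinct, purpose-built method.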


2008 ◽ Vol 28 (52) ◽ pp. 14301-14310
Author(s): J. Besle ◽ C. Fischer ◽ A. Bidet-Caulet ◽ F. Lecaignard ◽ O. Bertrand ◽ ...

2003 ◽ Vol 18 (2) ◽ pp. 432-440
Author(s): Takako Fujioka ◽ Bernhard Ross ◽ Hidehiko Okamoto ◽ Yasuyuki Takeshima ◽ Ryusuke Kakigi ◽ ...

2015 ◽ Vol 28 (3) ◽ pp. 160-180
Author(s): Oren Poliva ◽ Patricia E.G. Bestelmeyer ◽ Michelle Hall ◽ Janet H. Bultitude ◽ Kristin Koller ◽ ...
