Parallel streams define the temporal dynamics of speech processing across human auditory cortex

2016
Author(s):
Liberty S. Hamilton
Erik Edwards
Edward F. Chang

Abstract: To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals, including phonetic and prosodic cues. Equally important is the detection of acoustic cues that give structure and context to the information we hear, such as sentence boundaries. How the brain organizes this information processing is unknown. Here, using data-driven computational methods on an extensive set of high-density intracranial recordings, we reveal a large-scale partitioning of the entire human speech cortex into two spatially distinct regions that detect important cues for parsing natural speech. These caudal (Zone 1) and rostral (Zone 2) regions work in parallel to detect onsets and prosodic information, respectively, within naturally spoken sentences. In contrast, local processing within each region supports phonetic feature encoding. These findings demonstrate a previously unrecognized, fundamental organizational property of the human auditory cortex.
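The data-driven partitioning described above can be illustrated schematically. The sketch below is a generic two-cluster grouping of synthetic "onset-like" versus "sustained" response profiles in plain NumPy; the profiles, the minimal k-means, and all parameters are hypothetical stand-ins, not the authors' actual pipeline or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic response profiles for 40 hypothetical electrodes over 100 time
# points: half transient "onset" responses, half sustained responses.
t = np.linspace(0, 1, 100)
onset = np.exp(-t / 0.05)          # sharp transient after sound onset
sustained = 1 - np.exp(-t / 0.2)   # slow ramp to a plateau
profiles = np.vstack(
    [onset + 0.1 * rng.standard_normal(100) for _ in range(20)]
    + [sustained + 0.1 * rng.standard_normal(100) for _ in range(20)]
)

def kmeans(X, k=2, iters=50):
    """Minimal k-means; returns a cluster label for each row of X."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]  # spread-out seeds
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(1)
        centers = np.stack([X[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(profiles)
# Electrodes sharing a response shape end up in the same cluster.
```

Any unsupervised decomposition with a strong spatial structure in its weights would serve the same illustrative purpose.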

2020
Author(s):
Jean-Pierre R. Falet
Jonathan Côté
Veronica Tarka
Zaida-Escila Martinez-Moreno
Patrice Voss
...

Abstract: We present a novel method to map the functional organization of the human auditory cortex noninvasively using magnetoencephalography (MEG). More specifically, this method estimates via reverse correlation the spectrotemporal receptive fields (STRFs) in response to a dense pure-tone stimulus, from which important spectrotemporal characteristics of neuronal processing can be extracted and mapped back onto the cortical surface. We show that several neuronal populations can be distinguished by examining the spectrotemporal characteristics of their STRFs, and demonstrate how these can be used to generate tonotopic gradient maps. In doing so, we show that the spatial resolution of MEG is sufficient to reliably extract important information about the spatial organization of the auditory cortex, while its excellent temporal resolution enables the analysis of complex temporal dynamics of auditory processing, such as best temporal modulation rate and response latency. Furthermore, because spectrotemporally dense auditory stimuli can be used with MEG, the time required to acquire the data needed to generate tonotopic maps is significantly shorter for MEG than for other neuroimaging tools that acquire BOLD-like signals.
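As a toy illustration of STRF estimation by reverse correlation: with a white (uncorrelated) stimulus, averaging the stimulus history weighted by the response recovers a linear filter. The NumPy sketch below uses a made-up binary tone stimulus, a planted filter, and a simulated linear response; the sizes and scales bear no relation to the MEG data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dense random "pure tone" stimulus: time x frequency indicator matrix.
n_t, n_f, lags = 20000, 16, 8
stim = (rng.random((n_t, n_f)) < 0.05).astype(float)

# Planted ground-truth STRF: responds to frequency bin 5 at lags 2-4 samples.
true_strf = np.zeros((lags, n_f))
true_strf[2:5, 5] = 1.0

# Simulated linear response plus noise.
resp = np.zeros(n_t)
for lag in range(lags):
    resp[lag:] += stim[:n_t - lag] @ true_strf[lag]
resp += 0.5 * rng.standard_normal(n_t)

# Reverse correlation: correlate the response with the stimulus at each lag.
est = np.zeros((lags, n_f))
for lag in range(lags):
    est[lag] = resp[lag:] @ stim[:n_t - lag] / n_t

# The estimated STRF peaks at the planted frequency bin and lag range.
```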


Author(s):  
Panagiota Theodoni
Piotr Majka
David H. Reser
Daniel K. Wójcik
Marcello G.P. Rosa
...

Abstract: The marmoset monkey has become an important primate model in neuroscience. Here we characterize salient statistical properties of inter-areal connections of the marmoset cerebral cortex, using data from retrograde tracer injections. We found that the connectivity weights are highly heterogeneous, spanning five orders of magnitude, and are log-normally distributed. The cortico-cortical network is dense, heterogeneous, and highly specific. The reciprocal connections are the most prominent, and the probability of connection between two areas decays with their functional dissimilarity. The laminar dependence of connections defines a hierarchical network correlated with microstructural properties of each area. The marmoset connectome reveals parallel streams associated with different sensory systems. Finally, the connectome is spatially embedded with a characteristic length that obeys a power law as a function of brain volume across species. These findings provide a connectomic basis for investigations of multiple interacting areas in a complex large-scale cortical system underlying cognitive processes.
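The log-normal weight distribution has a simple signature: the logarithms of the weights are Gaussian, so location and spread can be read off by fitting moments in log space. A NumPy sketch with made-up weights follows; the parameters are illustrative and are not the marmoset values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical connection weights spanning several orders of magnitude,
# drawn log-normally as the abstract describes for the real data.
weights = rng.lognormal(mean=-4.0, sigma=2.0, size=5000)

# Span of the weights in orders of magnitude (decades).
span = np.log10(weights.max() / weights.min())

# A log-normal distribution is Gaussian in log space: fit by moments.
logw = np.log(weights)
mu_hat, sigma_hat = logw.mean(), logw.std()
```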


2016
Vol 45
pp. 10-22
Author(s):
Björn Herrmann
Molly J. Henry
Ingrid S. Johnsrude
Jonas Obleser

2007
Vol 18 (6)
pp. 1350-1360
Author(s):
C. F. Altmann
H. Nakata
Y. Noguchi
K. Inui
M. Hoshiyama
...

Author(s):  
S. Bhattacharya
C. Braun
U. Leopold

Abstract: In this paper, we address the curse of dimensionality and scalability issues in managing vast volumes of multidimensional raster data for renewable energy modeling in an appropriate spatial and temporal context. Tensor representation provides a convenient way to capture inter-dependencies along multiple dimensions. In this direction, we propose a sophisticated way of handling large-scale, multi-layered spatio-temporal data, adapted for raster-based geographic information systems (GIS). We chose TensorFlow, an open-source software library developed by Google using data flow graphs and the tensor data structure. We provide a comprehensive performance evaluation of the proposed model against r.sun in GRASS GIS. Benchmarking shows that the tensor-based approach outperforms r.sun by up to 60% in overall execution time for high-resolution datasets and fine-grained time intervals, for daily sums of solar irradiation [Wh·m⁻²·day⁻¹].
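The core idea, replacing per-cell raster loops with a single reduction over a stacked tensor, can be sketched without the full pipeline. Plain NumPy is used here for portability (the paper's implementation uses TensorFlow), and the raster sizes and values are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly solar-irradiation rasters stacked into one tensor:
# axes = (hour, row, col), values in Wh/m^2 per hour.
hours, rows, cols = 24, 4, 5
hourly = rng.uniform(0, 100, size=(hours, rows, cols))

# Daily sum of solar irradiation [Wh·m^-2·day^-1] as a single vectorized
# reduction along the time axis -- the tensor form of a per-cell loop.
daily = hourly.sum(axis=0)
```

The same reduction expressed as a graph operation (e.g. `tf.reduce_sum`) is what lets a tensor framework fuse and parallelize the work across the whole raster stack.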


Author(s):  
Sam V Norman-Haignere
Laura K. Long
Orrin Devinsky
Werner Doyle
Ifeoma Irobunda
...

Abstract: To derive meaning from sound, the brain must integrate information across tens (e.g. phonemes) to hundreds (e.g. words) of milliseconds, but the neural computations that enable multiscale integration remain unclear. Prior evidence suggests that human auditory cortex analyzes sound using both generic acoustic features (e.g. spectrotemporal modulation) and category-specific computations, but how these putatively distinct computations integrate temporal information is unknown. To answer this question, we developed a novel method to estimate neural integration periods and applied the method to intracranial recordings from human epilepsy patients. We show that integration periods increase three-fold as one ascends the auditory cortical hierarchy. Moreover, we find that electrodes with short integration periods (~50-150 ms) respond selectively to spectrotemporal modulations, while electrodes with long integration periods (~200-300 ms) show prominent selectivity for sound categories such as speech and music. These findings reveal how multiscale temporal analysis organizes hierarchical computation in human auditory cortex.
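The paper introduces its own estimator of integration periods; as a much cruder stand-in, one can read an integration-window proxy off a signal's autocorrelation. The NumPy sketch below is not the authors' method and uses synthetic smoothed-noise "responses" rather than neural data; it merely shows that a slowly integrating signal yields a longer estimated window than a fast one.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000  # sampling rate, Hz

def smoothed_noise(tau_ms, n=20000):
    """White noise filtered with an exponential window of time constant tau_ms."""
    tau = tau_ms * fs / 1000.0
    kernel = np.exp(-np.arange(int(6 * tau)) / tau)
    return np.convolve(rng.standard_normal(n), kernel, mode="same")

def integration_proxy(x):
    """Crude proxy: lag (ms) where the autocorrelation first falls below 1/e."""
    x = x - x.mean()
    # FFT-based autocorrelation (zero-padded to avoid circular wrap-around).
    ac = np.fft.irfft(np.abs(np.fft.rfft(x, 2 * len(x))) ** 2)[: len(x)]
    ac /= ac[0]
    return 1000.0 * np.argmax(ac < 1 / np.e) / fs

short = integration_proxy(smoothed_noise(50))    # fast, "onset-like" signal
long_ = integration_proxy(smoothed_noise(250))   # slow, integrating signal
```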


2018
Author(s):
Pierre Mégevand
Manuel R. Mercier
David M. Groppe
Elana Zion Golumbic
Nima Mesgarani
...

Abstract: Natural conversation is multisensory: when we can see the speaker's face, visual speech cues influence our perception of what is being said. The neuronal basis of this phenomenon remains unclear, though there are indications that phase modulation of neuronal oscillations—ongoing excitability fluctuations of neuronal populations in the brain—provides a mechanistic contribution. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans, we show that neuronal populations in auditory cortex track the temporal dynamics of unisensory visual speech using the phase of their slow oscillations and phase-related modulations in high-frequency activity. Auditory cortex thus builds a representation of the speech stream's envelope based on visual speech alone, at least in part by resetting the phase of its ongoing oscillations. Phase reset could amplify the representation of the speech stream and organize the information contained in neuronal activity patterns.

Significance Statement: Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied those mechanisms by recording the human brain's electrical activity through electrodes implanted surgically inside the skull. We found that some regions of cerebral cortex that process auditory speech also respond to visual speech, even when it is shown as a silent movie without a soundtrack. This response can occur through a reset of the phase of ongoing oscillations, which helps augment the response of auditory cortex to audiovisual speech. Our results help elucidate the mechanisms by which the brain merges auditory and visual speech into a unitary perception.
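Analyses of oscillatory phase like these typically extract instantaneous phase from the analytic signal (e.g. via `scipy.signal.hilbert`). Below is a self-contained NumPy version applied to a synthetic 4 Hz oscillation standing in for slow cortical activity; the frequency and sampling rate are arbitrary choices, not values from the study.

```python
import numpy as np

fs = 200.0                      # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)     # 5 s of samples
x = np.cos(2 * np.pi * 4 * t)   # synthetic 4 Hz "slow oscillation"

def analytic_signal(x):
    """Analytic signal via FFT (the same construction scipy.signal.hilbert uses)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2     # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1   # keep the Nyquist bin as-is
    return np.fft.ifft(X * h)   # negative frequencies are zeroed

phase = np.angle(analytic_signal(x))
# Instantaneous frequency from the unwrapped phase (should sit at 4 Hz).
inst_freq = np.diff(np.unwrap(phase)) * fs / (2 * np.pi)
```

A phase reset, as discussed above, would appear in such a trace as a discontinuity in `phase` time-locked to the visual event.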

