Linear superposition of responses evoked by individual glottal pulses explains over 80% of the frequency following response to human speech in the macaque monkey

2021 ◽  
Author(s):  
Tobias Teichert ◽  
G. Nike Gnanateja ◽  
Srivatsun Sadagopan ◽  
Bharath Chandrasekaran

Abstract The frequency-following response (FFR) is a scalp-recorded electrophysiological potential that closely follows the periodicity of complex sounds such as speech. It has been suggested that FFRs reflect the linear superposition of responses that are triggered by the glottal pulse in each cycle of the fundamental frequency (F0 responses) and sequentially propagate through auditory processing stages in the brainstem, midbrain, and cortex. However, this conceptualization of the FFR is debated, and it remains unclear if and how well a simple linear superposition can capture the spectro-temporal complexity of FFRs that are generated within the highly recurrent and non-linear auditory system. To address this question, we used a deconvolution approach to compute the hypothetical F0 responses that best explain the FFRs of rhesus monkeys to human speech and click trains with time-varying pitch patterns. The linear superposition of F0 responses explained well over 90% of the variance of click-train steady-state FFRs and well over 80% of Mandarin-tone steady-state FFRs. The F0 responses could be measured with a high signal-to-noise ratio and featured several spectro-temporally and topographically distinct components that likely reflect the activation of the brainstem (<5 ms; 200-1000 Hz), midbrain (5-15 ms; 100-250 Hz), and cortex (15-35 ms; ~90 Hz). In summary, our results in the monkey support the notion that FFRs arise as the superposition of F0 responses by showing, for the first time, that F0 responses can capture the bulk of the variance and spectro-temporal complexity of FFRs to human speech with time-varying pitch. These findings identify F0 responses as a potential diagnostic tool that may be useful for reliably linking altered FFRs in speech and language disorders to altered F0 responses, and thus to specific latencies, frequency bands, and ultimately processing stages.
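The deconvolution step described in the abstract can be illustrated with a minimal least-squares sketch, assuming the glottal pulse times are already known; the function name and interface below are hypothetical, not the authors' code:

```python
import numpy as np

def estimate_f0_response(ffr, pulse_samples, kernel_len):
    """Least-squares estimate of the F0-response kernel whose linear
    superposition, triggered once per glottal pulse, best explains the FFR."""
    n = len(ffr)
    pulses = np.zeros(n)
    pulses[np.asarray(pulse_samples, dtype=int)] = 1.0
    # Design matrix: column k is the pulse train delayed by k samples,
    # so X @ kernel is the convolution of the pulse train with the kernel.
    X = np.column_stack([np.concatenate([np.zeros(k), pulses[:n - k]])
                         for k in range(kernel_len)])
    kernel, *_ = np.linalg.lstsq(X, ffr, rcond=None)
    prediction = X @ kernel
    var_explained = 1.0 - np.var(ffr - prediction) / np.var(ffr)
    return kernel, prediction, var_explained
```

The variance-explained figure computed here corresponds to the quantity the abstract reports for click-train and Mandarin-tone FFRs; on real data the fit is of course not exact.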

1988 ◽  
Vol 33 (12) ◽  
pp. 1103-1103
Author(s):  
No authorship indicated

1992 ◽  
Vol 336 (1278) ◽  
pp. 295-306 ◽  

The past 30 years have seen a remarkable development in our understanding of how the auditory system - particularly the peripheral system - processes complex sounds. Perhaps the most significant advance has been in our understanding of the mechanisms underlying auditory frequency selectivity and their importance for normal and impaired auditory processing. Physiologically vulnerable cochlear filtering can account for many aspects of our normal and impaired psychophysical frequency selectivity, with important consequences for the perception of complex sounds. For normal hearing, remarkable mechanisms in the organ of Corti, involving enhancement of mechanical tuning (in mammals probably by feedback of electro-mechanically generated energy from the hair cells), produce exquisite tuning, reflected in the tuning properties of cochlear nerve fibres. Recent comparisons of physiological (cochlear nerve) and psychophysical frequency selectivity in the same species indicate that the ear’s overall frequency selectivity can be accounted for by this cochlear filtering, at least in bandwidth terms. Because this cochlear filtering is physiologically vulnerable, it deteriorates in deleterious conditions of the cochlea - hypoxia, disease, drugs, noise overexposure, mechanical disturbance - and this deterioration is reflected in impaired psychophysical frequency selectivity. This is a fundamental feature of sensorineural hearing loss of cochlear origin, and is of diagnostic value. This cochlear filtering, particularly as reflected in the temporal patterns of cochlear fibres to complex sounds, is remarkably robust over a wide range of stimulus levels. Furthermore, cochlear filtering properties are a prime determinant of the ‘place’ and ‘time’ coding of frequency at the cochlear nerve level, both of which appear to be involved in pitch perception. The problem of how the place and time coding of complex sounds is effected over the ear’s remarkably wide dynamic range is briefly addressed.
In the auditory brainstem, particularly the dorsal cochlear nucleus, inhibitory mechanisms are responsible for enhancing the spectral and temporal contrasts in complex sounds. These mechanisms are now being dissected neuropharmacologically. At the cortical level, mechanisms are evident that are capable of abstracting biologically relevant features of complex sounds. Fundamental studies of how the auditory system encodes and processes complex sounds are vital to promising recent applications in the diagnosis and rehabilitation of the hearing impaired.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Fang Wang ◽  
Blair Kaneshiro ◽  
C. Benjamin Strauber ◽  
Lindsey Hasak ◽  
Quynh Trang H. Nguyen ◽  
...  

Abstract EEG has been central to investigations of the time course of the various neural functions underpinning visual word recognition. Recently, the steady-state visual evoked potential (SSVEP) paradigm has been increasingly adopted for word-recognition studies due to its high signal-to-noise ratio. Such studies, however, have typically been framed around a single source in the left ventral occipitotemporal cortex (vOT). Here, we combine SSVEPs recorded from 16 adult native English speakers with a data-driven spatial filtering approach, Reliable Components Analysis (RCA), to elucidate distinct functional sources with overlapping yet separable time courses and topographies that emerge when contrasting words with pseudofont visual controls. The first component topography was maximal over left vOT regions with a shorter latency (approximately 180 ms). A second component was maximal over more dorsal parietal regions with a longer latency (approximately 260 ms). Both components consistently emerged across a range of parameter manipulations, including changes in the spatial overlap between successive stimuli and changes in both base and deviation frequency. We then contrasted word-in-nonword and word-in-pseudoword conditions to test the hierarchical processing mechanisms underlying visual word recognition. Results suggest that these hierarchical contrasts fail to evoke a unitary component that might reasonably be associated with lexical access.
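The core idea behind RCA, finding spatial filters that maximize covariance shared across trials relative to within-trial covariance via a generalized eigenvalue problem, can be sketched in a few lines of NumPy. This is a simplified illustration with hypothetical names, not the implementation used in the study:

```python
import numpy as np

def reliable_components(data):
    """Sketch of a Reliable Components Analysis-style decomposition.

    data: array of shape (trials, samples, channels).
    Returns spatial filters and eigenvalues, most reliable component first."""
    n_trials, _, n_ch = data.shape
    centered = data - data.mean(axis=1, keepdims=True)
    r_within = np.zeros((n_ch, n_ch))
    r_across = np.zeros((n_ch, n_ch))
    for i in range(n_trials):
        r_within += centered[i].T @ centered[i]
        for j in range(n_trials):
            if i != j:
                r_across += centered[i].T @ centered[j]
    # Solve r_across w = lambda * r_within w by whitening with a Cholesky
    # factor of r_within, then an ordinary symmetric eigendecomposition.
    chol_inv = np.linalg.inv(np.linalg.cholesky(r_within))
    evals, v = np.linalg.eigh(chol_inv @ r_across @ chol_inv.T)
    filters = chol_inv.T @ v
    order = np.argsort(evals)[::-1]
    return filters[:, order], evals[order]
```

Projecting each trial onto the leading filter yields a component time course that is maximally reproducible across trials, which is what makes the approach attractive for high-SNR SSVEP paradigms.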


Author(s):  
Fengxia Wang

This paper discusses the stability of a periodically time-varying spinning blade with cubic geometric nonlinearity. A modal reduction method is adopted to reduce the nonlinear partial differential equations to a set of ordinary differential equations, and the geometric stiffening is approximated by the axial inertia membrane force. The method of multiple time scales is employed to study the steady-state motions and the corresponding stability and bifurcation of such a periodically time-varying rotating blade. The backbone curves for steady-state motions are obtained, and a parameter map for stability and bifurcation is developed. Illustrations of the steady-state motions are presented to aid understanding of the rotational motions of the blade.
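The blade model itself is not reproduced here, but the way a multiple-scales analysis produces a backbone curve can be shown with the classic single-mode Duffing oscillator with cubic stiffness, x'' + omega0^2 x + alpha x^3 = 0, whose first-order result is omega(a) = omega0 + 3 alpha a^2 / (8 omega0). Parameters below are arbitrary illustrations:

```python
import numpy as np

def duffing_backbone(amplitude, omega0=1.0, alpha=0.5):
    """First-order multiple-scales backbone curve for the Duffing oscillator
    x'' + omega0**2 * x + alpha * x**3 = 0: steady-state response frequency
    as a function of response amplitude a."""
    a = np.asarray(amplitude, dtype=float)
    return omega0 + 3.0 * alpha * a**2 / (8.0 * omega0)
```

For alpha > 0 (hardening, as with geometric stiffening) the curve bends toward higher frequency as amplitude grows, which is the qualitative behavior backbone curves capture for rotating-blade models as well.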


2019 ◽  
Vol 30 (08) ◽  
pp. 672-676 ◽  
Author(s):  
Ping Lu ◽  
Yue Huang ◽  
Wen-Xia Chen ◽  
Wen Jiang ◽  
Ni-Yi Hua ◽  
...  

Abstract The detection of precise hearing thresholds in infants and children with auditory neuropathy (AN) is challenging with current objective methods, especially in those younger than six months of age. The aim of this study was to compare thresholds obtained using the auditory steady-state response (ASSR) and cochlear microphonics (CM) in children with AN and children with normal hearing. The thresholds of CM, ASSR, and visual reinforcement audiometry (VRA) tests were recorded; the ASSR and VRA frequencies used were 250, 500, 1000, 2000, and 4000 Hz. The participants were 15 children with AN (27 ears; 1–7.6 years, median age 4.1 years) and ten children with normal hearing (20 ears; 1–8 years, median age 4 years). The thresholds of the three methods were compared, and histograms were used to represent the frequency distributions of threshold differences obtained from the three methods. In children with normal hearing, the average CM thresholds (84.5 dB) were significantly higher than the VRA thresholds (10.0–10.8 dB); in children with AN, both CM and VRA responses were seen only at high signal levels (88.9 dB and 70.6–103.4 dB, respectively). In children with normal hearing, the difference between mean VRA and ASSR thresholds ranged from 17.5 to 30.3 dB, which was significantly smaller than the difference between mean CM and VRA thresholds (71.5–72.3 dB). The correlation between VRA and ASSR in children with normal hearing ranged from 0.38 to 0.48, whereas no such correlation was seen in children with AN at any frequency (0.03–0.19). Our results indicate that ASSR and CM are poor predictors of the conventional behavioral threshold in children with AN.


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Emily B. J. Coffey ◽  
Trent Nicol ◽  
Travis White-Schwoch ◽  
Bharath Chandrasekaran ◽  
Jennifer Krizman ◽  
...  

Abstract The auditory frequency-following response (FFR) is a non-invasive index of the fidelity of sound encoding in the brain, and is used to study the integrity, plasticity, and behavioral relevance of the neural encoding of sound. In this Perspective, we review recent evidence suggesting that, in humans, the FFR arises from multiple cortical and subcortical sources, not just subcortically as previously believed, and we illustrate how the FFR to complex sounds can enhance the wider field of auditory neuroscience. Far from being of use only to study basic auditory processes, the FFR is an uncommonly multifaceted response yielding a wealth of information, with much yet to be tapped.

