Auditory evoked potentials to changes in speech sound duration in anesthetized mice

2018 ◽  
Author(s):  
A. Lipponen ◽  
J.L.O. Kurkela ◽  
I. Kyläheiko ◽  
S. Hölttä ◽  
T. Ruusuvirta ◽  
...  

The electrophysiological response termed mismatch negativity (MMN) indexes auditory change detection in humans. An analogous response, called the mismatch response (MMR), is also elicited in animals. The MMR has been widely utilized in investigations of the detection of changes in human speech sounds in rats and guinea pigs, but not in mice. Since transgenic mouse models, for example, provide important advantages for further studies, we studied the processing of speech sounds in anesthetized mice. Auditory evoked potentials were recorded from the dura above the auditory cortex in response to changes in the duration of the human speech sound /a/. In an oddball stimulus condition, the MMR was elicited at 53-259 ms latency in response to the changes. The MMR was elicited by the large change in duration (from 200 ms to 110 ms) but not by the smaller changes (from 200 ms to 120-180 ms). The results suggest that mice can represent human speech sounds well enough to detect changes in their duration. These findings can be utilized in future investigations applying mouse models to speech perception.
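
To make the oddball design concrete, here is a minimal Python sketch of how such a duration-oddball sequence and the deviant-minus-standard difference wave (the MMR) could be computed. The function names, deviant probability, and simulated epochs are illustrative assumptions, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_oddball_sequence(n_trials=500, p_deviant=0.1,
                          standard_ms=200, deviant_ms=110):
    """Return stimulus durations (ms) for one oddball block.

    Deviants occur with probability p_deviant; the block starts with a
    standard. Illustrative only; the study's constraints may differ.
    """
    seq = [standard_ms]
    for _ in range(n_trials - 1):
        seq.append(deviant_ms if rng.random() < p_deviant else standard_ms)
    return np.array(seq)

def mismatch_response(epochs, durations, standard_ms=200):
    """MMR = average deviant epoch minus average standard epoch.

    epochs: (n_trials, n_samples) array of evoked responses.
    """
    deviant_avg = epochs[durations != standard_ms].mean(axis=0)
    standard_avg = epochs[durations == standard_ms].mean(axis=0)
    return deviant_avg - standard_avg

# Toy usage with simulated 600-sample epochs (e.g., 600 ms at 1 kHz):
durations = make_oddball_sequence()
epochs = rng.standard_normal((durations.size, 600))
mmr = mismatch_response(epochs, durations)
print("MMR peak (arbitrary units):", mmr.max())
```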

2006 ◽  
Vol 17 (08) ◽  
pp. 559-572 ◽  
Author(s):  
Katrina Agung ◽  
Suzanne C. Purdy ◽  
Catherine M. McMahon ◽  
Philip Newall

There has been considerable recent interest in the use of cortical auditory evoked potentials (CAEPs) as an electrophysiological measure of human speech encoding in individuals with normal as well as impaired auditory systems. The development of electrophysiological measures such as CAEPs is important because they can be used to evaluate the benefits of hearing aids and cochlear implants in infants, young children, and adults who cannot cooperate for behavioral speech discrimination testing. The current study determined whether CAEPs produced by seven different speech sounds, which together cover a broad range of frequencies across the speech spectrum, could be differentiated from each other based on response latency and amplitude measures. CAEPs were recorded from ten adults with normal hearing in response to speech stimuli presented at a conversational level (65 dB SPL) via a loudspeaker. Cortical responses were reliably elicited by each of the speech sounds in all participants. CAEPs produced by speech sounds dominated by high-frequency energy were significantly different in amplitude from CAEPs produced by sounds dominated by lower-frequency energy. Significant effects of stimulus duration were also observed, with shorter duration stimuli producing larger amplitudes and earlier latencies than longer duration stimuli. This research demonstrates that CAEPs can be reliably evoked by sounds that encompass the entire speech frequency range. Further, CAEP latencies and amplitudes may provide an objective indication that spectrally different speech sounds are encoded differently at the cortical level.
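
The latency and amplitude measures referred to above are typically peak picks within fixed post-stimulus windows on the averaged waveform. A minimal sketch, assuming a generic averaged epoch and window bounds (all names and values are ours, not the authors'):

```python
import numpy as np

def peak_in_window(avg_wave, fs, t_min_ms, t_max_ms, polarity=+1):
    """Return (latency_ms, amplitude) of the extremum in [t_min_ms, t_max_ms].

    avg_wave: 1-D averaged evoked response; fs: sampling rate in Hz.
    polarity=+1 finds a positive peak (e.g. P1/P2); -1 a negative one (N1).
    """
    i0 = int(t_min_ms * fs / 1000)
    i1 = int(t_max_ms * fs / 1000)
    i_peak = i0 + int(np.argmax(polarity * avg_wave[i0:i1]))
    return i_peak * 1000.0 / fs, avg_wave[i_peak]

# Toy usage: a simulated 500 ms epoch sampled at 1 kHz with a P1-like
# bump at 100 ms (placeholder data, not a recorded CAEP).
fs = 1000
t = np.arange(0, 0.5, 1 / fs)
wave = 2.0 * np.exp(-((t - 0.1) ** 2) / 2e-4)
lat_ms, amp = peak_in_window(wave, fs, 50, 150, polarity=+1)
print(f"P1 latency {lat_ms:.0f} ms, amplitude {amp:.2f} (arbitrary units)")
```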


2013 ◽  
Vol 60 (1) ◽  
Author(s):  
Aseel Almeqbel

Objective: Cortical auditory-evoked potentials (CAEPs), an objective measure of human speech encoding in individuals with normal or impaired auditory systems, can be used to assess the outcomes of hearing aids and cochlear implants in infants, or in young children who cannot co-operate for behavioural speech discrimination testing. The current study aimed to determine whether the naturally produced speech stimuli /m/, /g/ and /t/ evoke distinct CAEP response patterns that can be reliably recorded and differentiated based on their spectral information, and whether the CAEP could serve as an electrophysiological measure to differentiate between these speech sounds. Method: CAEPs were recorded from 18 school-aged children with normal hearing, tested in two groups: younger (5-7 years) and older children (8-12 years). Cortical responses differed in their P1 and N2 latencies and amplitudes in response to the /m/, /g/ and /t/ sounds (from low-, mid- and high-frequency regions, respectively). The largest P1 and N2 amplitudes were for /g/ and the smallest were for /t/. The P1 latency in both age groups did not differ significantly between these speech sounds. The N2 latency showed a significant change in the younger group but not in the older group. The N2 latency for the speech sound /g/ was always earliest in both groups. Conclusion: This study demonstrates that spectrally different speech sounds are encoded differentially at the cortical level and evoke distinct CAEP response patterns. CAEP latencies and amplitudes may provide an objective indication of this differential encoding.
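
The age-group comparison described here reduces to a between-group test on peak latencies. The toy sketch below illustrates the idea with simulated placeholder values (group sizes and latencies are assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated N2 latencies (ms); placeholder values for illustration only.
younger = rng.normal(260, 15, size=9)  # 5-7 year group (n = 9, assumed)
older = rng.normal(240, 15, size=9)    # 8-12 year group (n = 9, assumed)

# Independent-samples t-test on peak latency between age groups.
t_stat, p_val = stats.ttest_ind(younger, older)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```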


2019 ◽  
Vol 287 ◽  
pp. 1-9 ◽  
Author(s):  
Derek J. Fisher ◽  
Erica D. Rudolph ◽  
Emma M.L. Ells ◽  
Verner J. Knott ◽  
Alain Labelle ◽  
...  

2021 ◽  
Author(s):  
Carolin Juechter ◽  
Rainer Beutelmann ◽  
Georg M. Klump

The present study establishes the Mongolian gerbil (Meriones unguiculatus) as a model for investigating the perception of human speech sounds. We report data on the discrimination of logatomes (CVCs - consonant-vowel-consonant combinations with outer consonants /b/, /d/, /s/ and /t/ and central vowels /a/, /aː/, /ɛ/, /eː/, /ɪ/, /iː/, /ɔ/, /oː/, /ʊ/ and /uː/; VCVs - vowel-consonant-vowel combinations with outer vowels /a/, /ɪ/ and /ʊ/ and central consonants /b/, /d/, /f/, /g/, /k/, /l/, /m/, /n/, /p/, /s/, /t/ and /v/) by young gerbils. Four young gerbils were trained to perform an oddball target detection paradigm in which they were required to discriminate a deviant CVC or VCV in a sequence of CVC or VCV standards, respectively. The experiments were performed with an ICRA-1 noise masker with speech-like spectral properties, and logatomes of multiple speakers were presented at various signal-to-noise ratios. Response latencies were measured and used to generate perceptual maps via multidimensional scaling, which visualize the gerbils' internal representations of the sounds. The dimensions of the perceptual maps were correlated with multiple phonetic features of the speech sounds to evaluate which features of vowels and consonants are most important for the discrimination. The perceptual representation of vowels and consonants in gerbils was similar to that of humans, although gerbils needed higher signal-to-noise ratios than humans for the discrimination of speech sounds. The gerbils' discrimination of vowels depended on differences in the frequencies of the first and second formants, which are determined by tongue height and position. Consonants were discriminated based on differences in combinations of their articulatory features. The similarities in the perception of logatomes by gerbils and humans render the gerbil a suitable model for studying human speech sound discrimination.
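
The perceptual-map step can be sketched compactly: pairwise dissimilarities derived from response latencies are embedded in two dimensions with multidimensional scaling. The sketch below uses scikit-learn and a toy dissimilarity matrix; the labels and values are placeholders, not the study's data.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy symmetric dissimilarity matrix for four logatomes. In the study,
# dissimilarities were derived from response latencies (faster responses
# to a deviant indicate greater dissimilarity from the standard).
labels = ["bab", "dad", "sas", "tat"]
dissim = np.array([
    [0.0, 0.6, 0.9, 0.8],
    [0.6, 0.0, 0.8, 0.7],
    [0.9, 0.8, 0.0, 0.4],
    [0.8, 0.7, 0.4, 0.0],
])

# 2-D embedding whose inter-point distances approximate the dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

for lab, (x, y) in zip(labels, coords):
    print(f"{lab}: ({x:+.2f}, {y:+.2f})")
```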


2019 ◽  
Vol 373 ◽  
pp. 103-112 ◽  
Author(s):  
Samantha J. Gustafson ◽  
Curtis J. Billings ◽  
Benjamin W.Y. Hornsby ◽  
Alexandra P. Key

2013 ◽  
Vol 24 (09) ◽  
pp. 807-822 ◽  
Author(s):  
Lyndal Carter ◽  
Harvey Dillon ◽  
John Seymour ◽  
Mark Seeto ◽  
Bram Van Dun

Background: Previous studies have demonstrated that cortical auditory-evoked potentials (CAEPs) can be reliably elicited in response to speech stimuli in listeners wearing hearing aids. It is unclear, however, how close to the aided behavioral threshold (i.e., at what behavioral sensation level) a sound must be before a cortical response can reliably be detected. Purpose: The purpose of this study was to systematically examine the relationship between CAEP detection and the audibility of speech sounds (as measured behaviorally), when the listener is wearing a hearing aid fitted to prescriptive targets. A secondary aim was to investigate whether CAEP detection is affected by varying the frequency emphasis of stimuli, so as to simulate variations to the prescribed gain-frequency response of a hearing aid. The results have direct implications for the evaluation of hearing aid fittings in nonresponsive adult clients, and indirect implications for the evaluation of hearing aid fittings in infants. Research Design: Participants wore hearing aids while listening to speech sounds presented in a sound field. Aided thresholds were measured, and cortical responses evoked, under a range of stimulus conditions. The presence or absence of CAEPs was determined by an automated statistic. Study Sample: Participants were adults (6 females and 4 males). Participants had sensorineural hearing loss ranging from mild to severe-profound in degree. Data Collection and Analysis: Participants' own hearing aids were replaced with a test hearing aid, with linear processing, during assessments. Pure-tone thresholds and hearing aid gain measurements were obtained, and a theoretical prediction of speech stimulus audibility for each participant (similar to those used for audibility predictions in infant hearing aid fittings) was calculated. Three speech stimuli (/m/, /t/, and /g/) were presented aided (monaurally, nontest ear occluded), free field, under three conditions (+4 dB/octave, −4 dB/octave, and without filtering), at levels of 40, 50, and 60 dB SPL (measured for the unfiltered condition). Behavioral thresholds were obtained, and CAEP recordings were made using these stimuli. The interaction of hearing loss, presentation levels, and filtering conditions resulted in a range of CAEP test behavioral sensation levels (SLs), from −25 to +40 dB. Results: Statistically significant CAEPs (p < .05) were obtained for virtually every presentation where the behavioral sensation level was >10 dB, and for only 5% of occasions when the sensation level was negative. In these ("false-positive") cases, the greatest (negative) sensation level at which a CAEP was judged to be present was −6 dB SL. Conclusions: CAEPs are a sensitive tool for directly evaluating the audibility of speech sounds, at least for adult listeners. CAEP evaluation was found to be more accurate than audibility predictions based on threshold and hearing aid response measures.
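
Since the key predictor here is behavioral sensation level (presentation level minus aided behavioral threshold), a short sketch shows how the reported SL range and detection pattern combine; the threshold value is a hypothetical example, not a participant's measurement.

```python
# Sensation level (SL) = presentation level - aided behavioral threshold.
# The threshold below is a hypothetical example for one stimulus/participant.
presentation_levels_db_spl = [40, 50, 60]
aided_threshold_db_spl = 48

for level in presentation_levels_db_spl:
    sl = level - aided_threshold_db_spl
    # Per the reported results: CAEPs were detected for virtually every
    # presentation above +10 dB SL and for only 5% of negative-SL cases.
    if sl > 10:
        expectation = "CAEP detection expected"
    elif sl < 0:
        expectation = "CAEP detection unlikely"
    else:
        expectation = "detection uncertain"
    print(f"{level} dB SPL -> {sl:+d} dB SL: {expectation}")
```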


2019 ◽  
Vol 50 (2) ◽  
pp. 1911-1919 ◽  
Author(s):  
Arto Lipponen ◽  
Jari L. O. Kurkela ◽  
Iiris Kyläheiko ◽  
Sonja Hölttä ◽  
Timo Ruusuvirta ◽  
...  

2019 ◽  
Vol 23 ◽  
pp. 233121651988556
Author(s):  
Michael A. Stone ◽  
Anisa Visram ◽  
James M. Harte ◽  
Kevin J. Munro

Short-duration speech-like stimuli, for example excised from running speech, can be used in the clinical setting to assess the integrity of the human auditory pathway at the level of the cortex. Modeling of the cochlear response to these stimuli demonstrated an imprecision in the location of the spectrotemporal energy, giving rise to uncertainty as to which part of a stimulus, and at what time, caused any evoked electrophysiological response. This article reports the development and assessment of four short-duration, limited-bandwidth stimuli centered at low, mid, mid-high, and high frequencies, suitable for free-field delivery and, in addition, reproduction via hearing aids. The durations were determined by the British Society of Audiology recommended procedure for measuring cortical auditory-evoked potentials. The levels and bandwidths were chosen via a computational model to produce uniform cochlear excitation over a width exceeding that likely in a worst-case hearing-impaired listener. These parameters provide robustness against errors in insertion gains and variation in frequency responses due to transducer imperfections, room modes, and age-related variation in meatal resonances. The parameter choice predicts large spectral separation between adjacent stimuli on the cochlea. Analysis of the signals processed by examples of recent digital hearing aids mostly shows similar levels of gain applied to each stimulus, independent of whether the stimulus was presented in isolation, in bursts, continuously, or embedded in continuous speech. These stimuli appear suitable for measuring hearing-aided cortical auditory-evoked potentials and have the potential to be of benefit in the clinical setting.
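
As a rough illustration of how short, limited-bandwidth bursts of this kind might be synthesized, the sketch below band-pass filters noise and applies raised-cosine gating; the center frequencies, bandwidths, duration, and ramp length are assumptions, not the published stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandlimited_burst(center_hz, bandwidth_hz, dur_ms=70, fs=44100):
    """Synthesize a short noise burst band-limited around center_hz.

    White noise is passed through a Butterworth band-pass filter and
    gated with raised-cosine onset/offset ramps. All parameter values
    here are illustrative assumptions.
    """
    n = int(fs * dur_ms / 1000)
    noise = np.random.default_rng(0).standard_normal(n)
    lo = center_hz - bandwidth_hz / 2
    hi = center_hz + bandwidth_hz / 2
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    burst = sosfiltfilt(sos, noise)
    ramp = int(0.01 * fs)  # 10 ms raised-cosine ramps
    win = np.ones(n)
    win[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    win[-ramp:] = win[:ramp][::-1]
    return burst * win

# Four bursts at assumed low, mid, mid-high, and high centers:
stimuli = [bandlimited_burst(f, f / 3) for f in (500, 1500, 3000, 6000)]
print([s.shape for s in stimuli])
```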


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Marianna Boros ◽  
Anna Gábor ◽  
Dóra Szabó ◽  
Anett Bozsik ◽  
Márta Gácsi ◽  
...  

In the human speech signal, cues to speech sounds and voice identities are conflated, but they are processed separately in the human brain. The processing of speech sounds and voice identities is typically performed by non-primary auditory regions in humans and non-human primates. Additionally, these processes exhibit functional asymmetry in humans, indicating the involvement of distinct mechanisms. Behavioural studies indicate analogous side biases in dogs, but neural evidence for this functional dissociation has been missing. In two experiments using an fMRI adaptation paradigm, we presented awake dogs with natural human speech that varied in either segmental (change in speech sound) or suprasegmental (change in voice identity) content. In auditory regions, we found a repetition enhancement effect for voice identity processing in a secondary auditory region, the caudal ectosylvian gyrus. The same region did not show repetition effects for speech sounds, nor did the primary auditory cortex exhibit sensitivity to changes in either the segmental or the suprasegmental content. Furthermore, we found no evidence for functional asymmetry in the processing of either speech sounds or voice identities. Our results in dogs corroborate earlier human and non-human primate evidence on the role of secondary auditory regions in the processing of suprasegmental cues, suggesting similar neural sensitivity to the identity of the vocalizer across the mammalian order.
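
The adaptation analysis boils down to contrasting responses on repeat versus change trials within a region. A minimal sketch of that contrast, using simulated per-trial response estimates (all names and values are ours, not the study's pipeline):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated per-trial response estimates from one region (e.g., the
# caudal ectosylvian gyrus); values are placeholders for illustration.
repeat_voice = rng.normal(1.2, 0.3, size=20)  # same speaker repeated
change_voice = rng.normal(1.0, 0.3, size=20)  # speaker changes

# Repetition enhancement = larger response to repeats than to changes;
# a paired t-test compares the two trial types within the region.
t_stat, p_val = stats.ttest_rel(repeat_voice, change_voice)
effect = repeat_voice.mean() - change_voice.mean()
print(f"repetition effect: {effect:+.2f}, t = {t_stat:.2f}, p = {p_val:.3f}")
```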

