Meter enhances the subcortical processing of speech sounds at a strong beat

2020, Vol 10 (1)
Author(s): Il Joon Moon, Soojin Kang, Nelli Boichenko, Sung Hwa Hong, Kyung Myun Lee

Abstract: The temporal structure of sound, such as in music and speech, increases the efficiency of auditory processing by providing listeners with a predictable context. Musical meter is a good example of a sound structure that is temporally organized in a hierarchical manner, with recent studies showing that meter optimizes neural processing, particularly for sounds located at a higher metrical position or strong beat. Whereas enhanced cortical auditory processing at times of high metric strength has been studied, there is to date no direct evidence showing metrical modulation of subcortical processing. In this work, we examined the effect of meter on the subcortical encoding of sounds by measuring human auditory frequency-following responses to speech presented at four different metrical positions. Results show that neural encoding of the fundamental frequency of the vowel was enhanced at the strong beat, and also that the neural consistency of the vowel was the highest at the strong beat. When comparing musicians to non-musicians, musicians were found, at the strong beat, to selectively enhance the behaviorally relevant component of the speech sound, namely the formant frequency of the transient part. Our findings indicate that the meter of sound influences subcortical processing, and this metrical modulation differs depending on musical expertise.
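As a rough illustration of the kind of analysis behind such findings, the sketch below (Python; sampling rate, F0, epoch length, and data are hypothetical placeholders, not the authors' pipeline) shows one common way to quantify F0 encoding strength in an averaged FFR and split-half neural consistency across trials.

```python
# A minimal, illustrative sketch (not the authors' pipeline): quantifying how
# strongly an averaged frequency-following response (FFR) encodes a vowel's
# fundamental frequency (F0), plus a split-half estimate of neural consistency.
# Sampling rate, F0, epoch length, and the data itself are hypothetical.
import numpy as np

FS = 16000   # sampling rate in Hz (assumed)
F0 = 100.0   # nominal vowel F0 in Hz (assumed)

def f0_encoding_strength(ffr, fs=FS, f0=F0, bw=5.0):
    """Mean spectral magnitude within +/- bw Hz of F0 for one averaged FFR."""
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(len(ffr))))
    freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
    band = (freqs >= f0 - bw) & (freqs <= f0 + bw)
    return spectrum[band].mean()

def neural_consistency(trials):
    """Correlation between the averages of odd- and even-numbered trials."""
    odd, even = trials[::2].mean(axis=0), trials[1::2].mean(axis=0)
    return np.corrcoef(odd, even)[0, 1]

# trials: (n_trials, n_samples) single-trial FFRs from one metrical position
trials = np.random.randn(200, int(0.2 * FS))  # stand-in data, 200 ms epochs
print(f0_encoding_strength(trials.mean(axis=0)), neural_consistency(trials))
```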

2021, Vol 11 (1), pp. 112-128
Author(s): Caitlin N. Price, Deborah Moncrieff

Communication in noise is a complex process requiring efficient neural encoding throughout the entire auditory pathway as well as contributions from higher-order cognitive processes (i.e., attention) to extract speech cues for perception. Thus, identifying effective clinical interventions for individuals with speech-in-noise deficits relies on the disentanglement of bottom-up (sensory) and top-down (cognitive) factors to appropriately determine the area of deficit; yet, how attention may interact with early encoding of sensory inputs remains unclear. For decades, attentional theorists have attempted to address this question with cleverly designed behavioral studies, but the neural processes and interactions underlying attention’s role in speech perception remain unresolved. While anatomical and electrophysiological studies have investigated the neurological structures contributing to attentional processes and revealed relevant brain–behavior relationships, recent electrophysiological techniques (i.e., simultaneous recording of brainstem and cortical responses) may provide novel insight regarding the relationship between early sensory processing and top-down attentional influences. In this article, we review relevant theories that guide our present understanding of attentional processes, discuss current electrophysiological evidence of attentional involvement in auditory processing across subcortical and cortical levels, and propose areas for future study that will inform the development of more targeted and effective clinical interventions for individuals with speech-in-noise deficits.


2018, Vol 35 (3), pp. 315-331
Author(s): Paula Virtala, Minna Huotilainen, Esa Lilja, Juha Ojala, Mari Tervaniemi

Guitar distortion used in rock music modifies a chord so that new frequencies appear in its harmonic structure. A distorted dyad (power chord) has a special role in heavy metal music due to its harmonics that create a major third interval, making it similar to a major chord. We investigated how distortion affects cortical auditory processing of chords in musicians and nonmusicians. Electric guitar chords with or without distortion and with or without the interval of the major third (i.e., triads or dyads) were presented in an oddball design where one of them served as a repeating standard stimulus and others served as occasional deviants. This enabled the recording of event-related potentials (ERPs) of the electroencephalogram (EEG) related to deviance processing (the mismatch negativity MMN and the attention-related P3a component) in an ignore condition. MMN and P3a responses were elicited in most paradigms. Distorted chords in a nondistorted context only elicited early P3a responses. However, the power chord did not demonstrate a special role at the level of the ERPs. Earlier and larger MMN and P3a responses were elicited when distortion was modified compared to when only harmony (triad vs. dyad) was modified between standards and deviants. The MMN responses were largest when distortion and harmony deviated simultaneously. Musicians demonstrated larger P3a responses than nonmusicians. The results suggest mostly independent cortical auditory processing of distortion and harmony in Western individuals, and facilitated chord change processing in musicians compared to nonmusicians. While distortion has been used in heavy rock music for decades, this study is among the first to shed light on its cortical basis.
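For readers unfamiliar with how MMN is typically quantified in oddball paradigms like this one, the minimal sketch below (Python; sampling rate, analysis window, and data are placeholders, not the study's parameters) computes the deviant-minus-standard difference wave and its peak amplitude and latency.

```python
# A minimal sketch of how an MMN-like difference wave is commonly quantified:
# subtract the standard ERP from the deviant ERP, then take the most negative
# peak in a post-stimulus window. Sampling rate, window, and data are
# placeholders, not the study's actual parameters.
import numpy as np

FS = 1000                           # EEG sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1.0 / FS)  # epoch time axis in seconds

def mmn_peak(standard_erp, deviant_erp, window=(0.10, 0.25)):
    """Return (peak amplitude, peak latency) of the deviant-minus-standard wave."""
    diff = deviant_erp - standard_erp
    mask = (t >= window[0]) & (t <= window[1])
    idx = np.argmin(diff[mask])     # MMN is a negativity, so take the minimum
    return diff[mask][idx], t[mask][idx]

standard = np.random.randn(len(t))  # stand-in averaged ERPs
deviant = np.random.randn(len(t))
print(mmn_peak(standard, deviant))
```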


2021, Vol 64 (10), pp. 4014-4029
Author(s): Kathy R. Vander Werff, Christopher E. Niemczak, Kenneth Morse

Purpose: Background noise has been categorized as energetic masking due to spectrotemporal overlap of the target and masker on the auditory periphery or informational masking due to cognitive-level interference from relevant content such as speech. The effects of masking on cortical and sensory auditory processing can be objectively studied with the cortical auditory evoked potential (CAEP). However, whether effects on neural response morphology are due to energetic spectrotemporal differences or informational content is not fully understood. The current multi-experiment series was designed to assess the effects of speech versus nonspeech maskers on the neural encoding of speech information in the central auditory system, specifically in terms of the effects of speech babble noise maskers varying by talker number. Method: CAEPs were recorded from normal-hearing young adults in response to speech syllables in the presence of energetic maskers (white or speech-shaped noise) and varying amounts of informational maskers (speech babble maskers). The primary manipulation of informational masking was the number of talkers in speech babble, and results on CAEPs were compared to those of nonspeech maskers with different temporal and spectral characteristics. Results: Even when nonspeech noise maskers were spectrally shaped and temporally modulated to match speech babble maskers, notable changes in the typical morphology of the CAEP in response to speech stimuli were identified in the presence of primarily energetic maskers and speech babble maskers with varying numbers of talkers. Conclusions: While differences in CAEP outcomes did not reach significance by number of talkers, neural components were significantly affected by speech babble maskers compared to nonspeech maskers. These results suggest an informational masking influence on neural encoding of speech information at the sensory cortical level of auditory processing, even without active participation on the part of the listener.
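One common recipe for building a "spectrally shaped and temporally modulated" nonspeech masker of the general type described above is sketched here; the babble signal, sampling rate, and smoothing settings are placeholders, and this is not necessarily the procedure used in the study.

```python
# A hedged sketch of one common recipe for a speech-shaped, temporally
# modulated masker: give white noise the long-term spectrum of a babble
# recording, then impose the babble's broadband envelope. The babble signal,
# sampling rate, and smoothing are placeholders; this is not necessarily the
# procedure used in the study above.
import numpy as np
from scipy.signal import hilbert

FS = 22050
babble = np.random.randn(2 * FS)   # placeholder for a real multi-talker babble recording

# 1. Spectrally shape white noise: babble magnitude spectrum, random phase.
noise = np.random.randn(len(babble))
shaped = np.fft.irfft(np.abs(np.fft.rfft(babble)) *
                      np.exp(1j * np.angle(np.fft.rfft(noise))), n=len(babble))

# 2. Impose the babble's temporal envelope (smoothed Hilbert magnitude).
env = np.abs(hilbert(babble))
kernel = np.ones(int(0.01 * FS)) / int(0.01 * FS)   # ~10 ms moving average
env = np.convolve(env, kernel, mode="same")
modulated = shaped / (np.abs(hilbert(shaped)).mean() + 1e-12) * env
```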


2019
Author(s): Jérémy Giroud, Agnès Trébuchon, Daniele Schön, Patrick Marquis, Catherine Liegeois-Chauvel, ...

Abstract: Speech perception is mediated by both left and right auditory cortices, but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex. We presented short acoustic transients to reveal the stereotyped spectro-spatial oscillatory response profile of the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with preferred processing modes in the theta (∼4-8 Hz) and low gamma (∼25-50 Hz) ranges. These results highlight that the human auditory system employs a two-timescale processing mode. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (∼3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (∼6/40 Hz) activity in the left. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.
Author summary: Speech processing is now known to be distributed across the two hemispheres, but the origin and function of lateralization continues to be vigorously debated. The asymmetric sampling in time (AST) hypothesis predicts that (1) the auditory system employs a two-timescale processing mode, (2) present in both hemispheres but with a different ratio of fast and slow timescales, (3) that emerges outside of primary cortical regions. Capitalizing on intracranial data from 96 epilepsy patients, we validated each of these predictions and provide a precise estimate of the processing timescales. In particular, we reveal that asymmetric sampling in associative areas is subtended by distinct two-timescale processing modes. Overall, our results shed light on the neurofunctional architecture of cortical auditory processing.
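The band-specific responsivity described above is typically summarized as power within canonical frequency bands. The sketch below (Python; Welch PSD band averages with placeholder data and assumed band edges drawn loosely from the abstract) illustrates that summary step, without claiming to reproduce the authors' analysis.

```python
# An illustrative sketch (Welch PSD band averages, not the authors' analysis):
# summarizing a cortical response's power in the bands discussed above.
# Sampling rate, band edges, and the signal itself are placeholders.
import numpy as np
from scipy.signal import welch

FS = 1000
BANDS = {"delta": (1, 3), "theta": (4, 8), "beta": (13, 18), "low_gamma": (25, 50)}

def band_power(signal, fs=FS, bands=BANDS):
    """Mean PSD in each frequency band of interest."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)   # 1 s windows -> ~1 Hz resolution
    return {name: psd[(freqs >= lo) & (freqs <= hi)].mean()
            for name, (lo, hi) in bands.items()}

response = np.random.randn(5 * FS)   # stand-in intracranial recording
print(band_power(response))
```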


1998, Vol 21 (2), pp. 280-281
Author(s): Athanassios Protopapas, Paula Tallal

The arguments for the orderly output constraint concern phylogenetic matters and do not address the ontogeny of combination-specific neurons and the corresponding processing mechanisms. Locus equations are too variable to be strongly predetermined and too inconsistent to be easily learned. Findings on the development of speech perception and underlying auditory processing must be taken into account in the formulation of neural encoding theories.


2001, Vol 107 (2), pp. 117-123
Author(s): Seppo Kähkönen, Jyrki Ahveninen, Eero Pekkonen, Seppo Kaakkola, Juha Huttunen, ...

2015, Vol 26 (04), pp. 423-435
Author(s): Vasiliki Vivian Iliadou, Gail D. Chermak, Doris-Eva Bamiou

Background: According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, diagnosis of speech sound disorder (SSD) requires a determination that it is not the result of other congenital or acquired conditions, including hearing loss or neurological conditions that may present with similar symptomatology. Purpose: To examine peripheral and central auditory function for the purpose of determining whether a peripheral or central auditory disorder was an underlying factor or contributed to the child’s SSD. Research Design: Central auditory processing disorder clinic pediatric case reports. Study Sample: Three clinical cases are reviewed of children with diagnosed SSD who were referred for audiological evaluation by their speech–language pathologists as a result of slower than expected progress in therapy. Results: Audiological testing revealed auditory deficits involving peripheral auditory function or the central auditory nervous system. These cases demonstrate the importance of increasing awareness among professionals of the need to fully evaluate the auditory system to identify auditory deficits that could contribute to a patient’s speech sound (phonological) disorder. Conclusions: Audiological assessment in cases of suspected SSD should not be limited to pure-tone audiometry given its limitations in revealing the full range of peripheral and central auditory deficits, deficits which can compromise treatment of SSD.


2009, Vol 82 (2), pp. 176-185
Author(s): Patrizia Silvia Bisiacchi, Giovanni Mento, Agnese Suppiej
