Nonspeech sounds
Recently Published Documents

Total documents: 60 (4 in the last five years)
H-index: 18 (1 in the last five years)
2021
Author(s): Kelsey Mankel, Utsav Shrestha, Aaryani Tipirneni-Sajja, Gavin Bidelman

Categorizing sounds into meaningful groups helps listeners more efficiently process the auditory scene and is a foundational skill for speech perception and language development. Yet, how auditory categories develop in the brain through learning, particularly for nonspeech sounds, is not well understood. Here, we asked musically naïve listeners to complete a brief (~20 min) training session in which they learned to identify sounds from a nonspeech continuum (minor-major 3rd musical intervals). We used multichannel EEG to track behaviorally relevant neuroplastic changes in the auditory event-related potentials (ERPs) from pre- to post-training. To rule out mere exposure-induced changes, neural effects were evaluated against a control group of 14 nonmusicians who did not undergo training. We also compared individual categorization performance with structural volumetrics of bilateral primary auditory cortex (PAC) from MRI to evaluate neuroanatomical substrates of learning. Behavioral performance revealed steeper (i.e., more categorical) identification functions at posttest that correlated with better training accuracy. At the neural level, improvement in learners' behavioral identification was characterized by smaller P2 amplitudes at posttest, particularly over the right hemisphere. Critically, learning-related changes in the ERPs were not observed in control listeners, ruling out mere exposure effects. Learners also showed smaller and thinner PAC bilaterally, indicating that superior categorization was associated with structural differences in primary auditory brain regions. Collectively, our data suggest that successful auditory categorical learning of nonspeech sounds is characterized by short-term functional changes (i.e., greater post-training efficiency) in sensory coding processes superimposed on preexisting structural differences in bilateral auditory cortex.


2021
Vol. 25, pp. 233121652110499
Author(s): Erin M. Picou, Lori Rakita, Gabrielle H. Buono, Travis M. Moore

Adults with hearing loss demonstrate a reduced range of emotional responses to nonspeech sounds compared to their peers with normal hearing. The purpose of this study was to evaluate two possible strategies for addressing the effects of hearing loss on emotional responses: (a) increasing overall level and (b) hearing aid use (with and without nonlinear frequency compression, NFC). Twenty-three adults (mean age = 65.5 years) with mild-to-severe sensorineural hearing loss and 17 adults (mean age = 56.2 years) with normal hearing participated. All adults provided ratings of valence and arousal without hearing aids in response to nonspeech sounds presented at a moderate and at a high level. Adults with hearing loss also provided ratings while using individually fitted study hearing aids with two settings (NFC-OFF or NFC-ON). Hearing loss and hearing aid use impacted ratings of valence but not arousal. Listeners with hearing loss rated pleasant sounds as less pleasant than their peers, confirming findings in the extant literature. For both groups, increasing the overall level resulted in lower ratings of valence. For listeners with hearing loss, the use of hearing aids (NFC-OFF) also resulted in lower ratings of valence but to a lesser extent than increasing the overall level. Activating NFC resulted in ratings that were similar to ratings without hearing aids (at a moderate presentation level) but did not improve ratings to match those from the listeners with normal hearing. These findings suggest that current interventions do not ameliorate the effects of hearing loss on emotional responses to sound.


2020
pp. 1-11
Author(s): Katharine Graf Estes, Dylan M. Antovich, Erica L. Verde

This research investigates selectivity in word learning for bilingual infants. Previous work demonstrated that bilingual infants show greater openness than monolinguals to non-native language sounds in object labels (Hay et al., 2015; Singh, 2018). It remains unclear whether this bilingual openness extends to nonspeech sounds. We presented 14- and 19-month-old bilinguals with object labels consisting of nonspeech tones. In a recent study, monolinguals learned the same labels at 14 months, but not at 19 months (Graf Estes et al., 2018). In contrast, the bilinguals failed to learn the labels. We propose that hearing phonological variation across two languages helps bilinguals reject nonspeech word forms.


Author(s): D. H. Whalen

The Motor Theory of Speech Perception is a proposed explanation of the fundamental relationship between the way speech is produced and the way it is perceived. Associated primarily with the work of Liberman and colleagues, it posited the active participation of the motor system in the perception of speech. Early versions of the theory contained elements that later proved untenable, such as the expectation that the neural commands to the muscles (as seen in electromyography) would be more invariant than the acoustics. Support drawn from categorical perception (in which discrimination is quite poor within linguistic categories but excellent across boundaries) was called into question by studies showing means of improving within-category discrimination and finding similar results for nonspeech sounds and for animals perceiving speech. Evidence for motor involvement in perceptual processes nonetheless continued to accrue, and related motor theories have been proposed. Neurological and neuroimaging results have yielded a great deal of evidence consistent with variants of the theory, but they highlight the issue that there is no single “motor system,” and so different components appear in different contexts. Assigning the appropriate amount of effort to the various systems that interact to result in the perception of speech is an ongoing process, but it is clear that some of the systems will reflect the motor control of speech.


2018
Vol. 1, pp. 205920431773199
Author(s): Rhimmon Simchy-Gross, Elizabeth Hellmuth Margulis

In the speech-to-song illusion, a stimulus that originally sounded like speech comes to sound like song across repetitions. This article examines whether the illusion also generalizes to other kinds of nonspeech sounds. Participants heard each of 20 environmental sound clips repeated in either original or jumbled form. They rated the musicality of the clips on a 5-point scale, where 1 meant the clip sounded exactly like environmental sound and 5 meant it sounded exactly like music. Average ratings increased significantly across repetitions, suggesting that the speech-to-song illusion is one form of a more general sound-to-music illusion produced by repetition. The illusion occurred regardless of whether the clips were repeated in original or jumbled form, marking a difference from speech, for which the illusion occurred only when the repetitions were exact.


2015
Vol. 58 (1), pp. 107-121
Author(s): Corinna A. Christmann, Thomas Lachmann, Claudia Steinbrink

Purpose: It is unknown whether phonological deficits are the primary cause of developmental dyslexia or whether they represent a secondary symptom resulting from impairments in processing basic acoustic parameters of speech. This may be due, in part, to methodological difficulties. Our aim was to overcome two of these difficulties: the comparability of stimulus material and of task in speech versus nonspeech conditions. Method: In this study, the authors (a) assessed auditory processing of German vowel center stimuli, spectrally rotated versions of these stimuli, and bands of formants; (b) used the same task for linguistic and nonlinguistic conditions; and (c) systematically varied temporal and spectral parameters inherent in the German vowel system. Forty-two adolescents and adults with and without reading disabilities participated. Results: Group differences were found for all linguistic and nonlinguistic conditions, for both temporal and spectral parameters. Auditory deficits were identified in most, but not all, participants with dyslexia. These deficits were not restricted to speech stimuli; they were also found for nonspeech stimuli of equal and lower complexity compared with the vowel stimuli. Temporal deficits were not observed in isolation. Conclusion: These results support the existence of a general auditory processing impairment in developmental dyslexia.


2014
Vol. 62 (2), pp. 188-194
Author(s): Eugenia Costa-Giomi, Beatriz Ilari

Caregivers and early childhood teachers all over the world use singing and speech to elicit and maintain infants' attention. Research comparing infants' preferential attention to music and speech is inconclusive regarding their responses to these two types of auditory stimuli, with one study showing a music bias and another indicating no differential attention. The purpose of this investigation was to study 11-month-old infants' preferential attention to spoken and sung renditions of an unfamiliar folk song in a foreign language (n = 24). The results of an infant-controlled preference procedure showed no significant differences in attention to the two types of stimuli. The findings challenge infants' well-documented bias for speech over nonspeech sounds and provide evidence that music, even when performed by an untrained singer, can be as effective as speech in eliciting infants' attention.


Author(s): Bruce N. Walker, Jeffrey Lindsay, Amanda Nance, Yoko Nakano, Dianne K. Palladino, ...

Objective: The goal of this project is to evaluate a new auditory cue, which the authors call spearcons, in comparison to other auditory cues, with the aim of improving auditory menu navigation. Background: With the shrinking displays of mobile devices and increasing technology use by visually impaired users, it becomes important to improve the usability of interfaces without a graphical user interface (GUI), such as auditory menus. Using nonspeech sounds called auditory icons (i.e., representative real sounds of objects or events) or earcons (i.e., brief musical melody patterns) has been proposed to enhance menu navigation. To compensate for the weaknesses of traditional nonspeech auditory cues, the authors developed spearcons by speeding up a spoken phrase, even to the point where it is no longer recognized as speech. Method: The authors conducted five empirical experiments. In Experiments 1 and 2, they measured menu navigation efficiency and accuracy across cue types. In Experiments 3 and 4, they evaluated the learning rate of the cues and of speech itself. In Experiment 5, they assessed spearcon enhancements compared to plain TTS (text-to-speech readout of the written menu items) in a two-dimensional auditory menu. Results: Spearcons outperformed traditional and newer hybrid auditory cues in navigation efficiency, accuracy, and learning rate. Moreover, spearcons showed learnability comparable to that of normal speech and led to better performance than speech-only auditory cues in two-dimensional menu navigation. Conclusion: These results show that spearcons can be more effective than previous auditory cues in menu-based interfaces. Application: Spearcons have broadened the taxonomy of nonspeech auditory cues. Users can benefit from the application of spearcons in real devices.
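The spearcon technique described above amounts to time-compressing a spoken menu label, without shifting its pitch, until it is no longer intelligible as speech. The sketch below is a rough illustration only, not the authors' implementation: it applies a pitch-preserving time stretch to a pre-rendered TTS recording, and both the file name and the speed-up factor are assumptions made for the example.

```python
# Illustrative spearcon-style compression (a sketch, not Walker et al.'s code).
# Assumes a pre-rendered TTS recording "settings_menu_tts.wav" (hypothetical file).
import librosa
import soundfile as sf

y, sr = librosa.load("settings_menu_tts.wav", sr=None)  # keep the native sample rate

# Pitch-preserving time stretch; rate > 1 shortens the clip. The abstract only
# says the phrase is sped up until it is no longer recognized as speech, so the
# exact factor here is an illustrative assumption.
SPEEDUP = 2.5
spearcon = librosa.effects.time_stretch(y, rate=SPEEDUP)

sf.write("settings_menu_spearcon.wav", spearcon, sr)
```

Any pitch-preserving time-scale modification (phase vocoder, WSOLA, and the like) could serve here; the property the abstract emphasizes is that each cue is derived from the spoken menu item itself, so it remains tied to that item while being far shorter than the full phrase.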

