Acoustic Effects of Wearing an M40 Mask with Hood on Speech Perception and Production

Author(s):  
Sehchang Hah

The objective of this experiment was to quantify and localize the effects of wearing the nuclear, biological, and chemical (NBC) M40 protective mask and hood on speech production and perception. A designated speaker's vocalizations of 192 monosyllables while wearing an M40 mask with hood were digitized and used as speech stimuli. Another set of speech stimuli was produced by recording the same individual vocalizing the same monosyllables without the mask and hood. Participants listened to one set of stimuli during two sessions, one session while wearing an M40 mask with hood and another session without the mask and hood. The results showed that wearing the mask with hood had the most detrimental acoustic effects on the sustention dimension for both speech perception and production. The results also showed that wearing it was detrimental to vocalizing and listening to fricatives and unvoiced stops. These results may be due to the muffling effect of the voicemitter in speech production and the filtering effects of the voicemitter and the hood material on high-frequency components during both speech production and perception. This information will be useful for designing better masks and hoods. This methodology can also be used to evaluate other speech communication systems.


Author(s):  
Lisa Verbeek ◽  
Constance Vissers ◽  
Mirjam Blumenthal ◽  
Ludo Verhoeven

Purpose: This study investigated the roles of cross-language transfer of first language (L1) and attentional control in second-language (L2) speech perception and production of sequential bilinguals, taking phonological overlap into account. Method: Twenty-five monolingual Dutch-speaking and 25 sequential bilingual Turkish–Dutch-speaking 3- and 4-year-olds were tested using picture identification tasks for speech perception in L1 Turkish and L2 Dutch, single-word tasks for speech production in L1 and L2, and a visual search task for attentional control. Phonological overlap was manipulated by dividing the speech tasks into subsets of phonemes that were either shared or unshared between languages. Results: In Dutch speech perception and production, monolingual children obtained higher accuracies than bilingual peers. Bilinguals showed equal performance in L1 and L2 perception but scored higher on L1 than on L2 production. For speech perception of shared phonemes, linear regression analyses revealed no direct effects of attention and L1 on L2. For speech production of shared phonemes, attention and L1 directly affected L2. When exploring unshared phonemes, direct effects of attentional control on L2 were demonstrated not only for speech production but also for speech perception. Conclusions: The roles of attentional control and cross-language transfer on L2 speech are different for shared and unshared phonemes. Whereas L2 speech production of shared phonemes is also supported by cross-language transfer of L1, L2 speech perception and production of unshared phonemes benefit from attentional control only. This underscores the clinical importance of considering phonological overlap and supporting attentional control when assisting young sequential bilinguals' L2 development.
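For readers who want to see the shape of such an analysis, below is a minimal Python sketch of a linear regression of L2 accuracy on attentional control and L1 accuracy, of the kind reported above. The data, variable names, and effect sizes are invented for illustration and are not the study's data; in the study's design, a model like this would be fit separately for the shared and unshared phoneme subsets.

```python
# Hedged illustration (not the authors' analysis code): regress L2 accuracy on
# attentional control and L1 accuracy. All values below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_children = 25
attention = rng.normal(size=n_children)       # visual search task score (standardized)
l1_accuracy = rng.normal(size=n_children)     # L1 accuracy (standardized)
l2_accuracy = 0.4 * attention + 0.3 * l1_accuracy + rng.normal(scale=0.5, size=n_children)

predictors = sm.add_constant(np.column_stack([attention, l1_accuracy]))
model = sm.OLS(l2_accuracy, predictors).fit()
print(model.summary())
```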


2011 ◽  
pp. 1554-1565
Author(s):  
Rajinder Koul ◽  
James Dembowski

The purpose of this chapter is to review research conducted over the past two decades on the perception of synthetic speech by individuals with intellectual, language, and hearing impairments. Many individuals with little or no functional speech as a result of intellectual, language, physical, or multiple disabilities rely on non-speech communication systems to augment or replace natural speech. These systems include Speech Generating Devices (SGDs) that produce synthetic speech upon activation. Based on this review, two main conclusions are evident. The first is that persons with intellectual and/or language impairment demonstrate greater difficulties in processing synthetic speech than their typical matched peers. The second is that repeated exposure to synthetic speech allows individuals with intellectual and/or language disabilities to identify synthetic speech with increased accuracy and speed. This finding is of clinical significance, as it indicates that individuals who use SGDs become more proficient at understanding synthetic speech over time.


2018 ◽  
Vol 61 (11) ◽  
pp. 2814-2826 ◽  
Author(s):  
Andrea L. Pittman ◽  
Ayoub Daliri ◽  
Lauren Meadows

Purpose: The purpose of this study was to determine if an objective measure of speech production could serve as a vocal biomarker for the effects of high-frequency hearing loss on speech perception. It was hypothesized that production of voiceless sibilants is governed sufficiently by auditory feedback that high-frequency hearing loss results in subtle but significant shifts in the spectral characteristics of these sibilants. Method: Sibilant production was examined in individuals with mild to moderately severe congenital (22 children; 8–17 years old) and acquired (23 adults; 55–80 years old) hearing losses. Measures of hearing level (pure-tone average thresholds at 4 and 8 kHz), speech perception (detection of nonsense words within sentences), and speech production (spectral center of gravity [COG] for /s/ and /ʃ/) were obtained in unaided and aided conditions. Results: For both children and adults, detection of nonsense words increased significantly as hearing thresholds improved. Spectral COG for /ʃ/ was unaffected by hearing loss in both listening conditions, whereas the spectral COG for /s/ significantly decreased as high-frequency hearing loss increased. The distance in spectral COG between /s/ and /ʃ/ decreased significantly with increasing hearing level. COG distance significantly predicted nonsense-word detection in children but not in adults. Conclusions: At least one aspect of speech production (voiceless sibilants) is measurably affected by high-frequency hearing loss and is related to speech perception in children. Speech production did not predict speech perception in adults, suggesting a more complex relationship between auditory feedback and feedforward mechanisms with age. Even so, these results suggest that this vocal biomarker may be useful for identifying the presence of high-frequency hearing loss in adults and children and for predicting the impact of hearing loss in children.
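As an illustration of the production measure, the following is a minimal Python sketch of an amplitude-weighted spectral center of gravity computed from a sibilant segment. The function, file names, and windowing choice are assumptions for the example, not the authors' procedure.

```python
# Hypothetical sketch of a spectral center-of-gravity (COG) measure for a
# sibilant segment. Names and files are illustrative, not the authors' code.
import numpy as np
from scipy.io import wavfile

def spectral_cog(segment, sample_rate):
    """Amplitude-weighted mean frequency of a mono speech segment."""
    segment = segment.astype(float)
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Example: compare /s/ and /sh/ tokens excised from recordings (paths hypothetical).
rate_s, seg_s = wavfile.read("s_token.wav")
rate_sh, seg_sh = wavfile.read("sh_token.wav")
cog_distance = spectral_cog(seg_s, rate_s) - spectral_cog(seg_sh, rate_sh)
print(f"COG distance between /s/ and /sh/: {cog_distance:.0f} Hz")
```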


Author(s):  
G. Y. Fan ◽  
J. M. Cowley

It is well known that structural information about the specimen is not always faithfully transferred through the electron microscope. First, the spatial frequency spectrum is modulated by the transfer function (TF) at the focal plane. Second, the spectrum suffers a high-frequency cut-off from the aperture (or, effectively, from damping terms such as chromatic aberration). These effects have no essential consequence for imaging crystal periodicity as long as the low-order Bragg spots lie inside the aperture, although the contrast may be reversed; for amorphous materials, however, they can change the appearance of the image completely. Because the spectrum of an amorphous material is continuous, modulating it emphasizes some components while weakening others. In particular, the cut-off of high-frequency components, which contribute to the image of an amorphous material just as strongly as low-frequency components, can have a fundamental effect. This can be illustrated through computer simulation. Imaging of a white-noise object with an electron microscope without TF limitation gives Fig. 1a, which is obtained by Fourier transformation of a constant amplitude combined with computer-generated random phases.
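The simulation described here can be sketched in a few lines of Python: a constant-amplitude spectrum with computer-generated random phases is inverse Fourier transformed to give a white-noise image, and a circular aperture in frequency space stands in for the high-frequency cut-off. The array size and aperture radius below are arbitrary assumptions, not values from the original work.

```python
# Illustrative sketch (not the authors' code): white-noise object with and
# without a crude high-frequency cut-off.
import numpy as np

rng = np.random.default_rng(0)
n = 256
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
spectrum = np.exp(1j * phases)            # constant amplitude, random phases

# Unfiltered white-noise image (analogous to Fig. 1a in the text).
image_full = np.fft.ifft2(spectrum).real

# Circular aperture in frequency space as a stand-in for the objective-aperture
# cut-off; the radius (0.15 cycles/sample) is an arbitrary assumption.
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
aperture = (fx**2 + fy**2) < 0.15**2
image_cut = np.fft.ifft2(spectrum * aperture).real
```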


1965 ◽  
Author(s):  
Carl E. Williams ◽  
Michael H. L. Hecker ◽  
Karl D. Kryter

2019 ◽  
Vol 45 (7) ◽  
pp. 1252-1270
Author(s):  
Wouter P. J. Broos ◽  
Aster Dijkgraaf ◽  
Eva Van Assche ◽  
Heleen Vander Beken ◽  
Nicolas Dirix ◽  
...  

2019 ◽  
Author(s):  
Lílian Rodrigues de Almeida ◽  
Paul A. Pope ◽  
Peter Hansen

In our previous studies we supported the claim that the motor theory is modulated by task load. Motoric participation in phonological processing increases from speech perception to speech production, with the endpoints of the dorsal stream having changing and complementary weightings for processing: the left inferior frontal gyrus (LIFG) becomes increasingly relevant and the left superior temporal gyrus (LSTG) decreasingly relevant. Our previous results for neurostimulation of the LIFG support this model. In this study we investigated whether our claim that the motor theory is modulated by task load holds in (frontal) aphasia. Persons with aphasia (PWA) after stroke typically have damage to brain areas responsible for phonological processing. They may present variable patterns of recovery and, consequently, variable strategies of phonological processing. Here these strategies were investigated in two PWA with simultaneous fMRI and tDCS of the LIFG during speech perception and speech production tasks. Anodal tDCS excitation and cathodal tDCS inhibition should increase with the relevance of the target for the task. Cathodal tDCS over a target of low relevance could also induce compensation by the remaining nodes. Responses of PWA to tDCS would further depend on their pattern of recovery and on the responsiveness of the perilesional area, and could be weaker than in controls due to an overall hypoactivation of the cortex. Results suggest that the analysis of motor codes for articulation during phonological processing is preserved in frontal aphasia and that tDCS is a promising diagnostic tool for investigating individual processing strategies.

