A research on acoustical comfort for hearing-impaired individuals in inclusive education places

2020 ◽  
pp. 1351010X2092358
Author(s):  
Zakariyya Uzeyirli ◽  
Aslı Özçevik Bilen

Inclusive education makes a substantial contribution to the education and socialization of hearing-impaired individuals. However, poor physical environments and acoustic comfort conditions reduce speech intelligibility in such spaces and, therefore, the quality of education. Having determined that very few subjective evaluation studies exist, we investigated the impact of acoustic comfort conditions on speech intelligibility in inclusive education spaces. Within the scope of the study, a classroom was first selected and its current acoustic conditions were evaluated objectively through field acoustic measurements. A calibrated model of the classroom was created in simulation software, two further models with the optimum reverberation time values of 0.4 s and 0.8 s suggested in the literature were derived from it, and auralizations were produced for all models. For the subjective evaluation, a group of hearing-impaired and normal-hearing individuals meeting equivalent criteria completed a speech discrimination test both in real time in the classroom and from the auralization recordings in a laboratory setting. The results showed that the speech intelligibility scores of normal-hearing individuals increased as expected, whereas, contrary to expectations, the scores of hearing-impaired individuals varied from person to person and showed no increase. Following discussions with experts, it was concluded that the different hearing aids used by the hearing-impaired individuals might account for this outcome. Accordingly, it remains unclear whether good speech intelligibility can be achieved for hearing-impaired individuals even when the suggested optimum acoustic values are met in educational spaces.
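
For context on the reverberation-time targets cited above (0.4 s and 0.8 s), the classical Sabine relation (a textbook formula, not one taken from the study) links reverberation time to room volume and total absorption:

T_{60} \approx \frac{0.161\,V}{A}, \qquad A = \sum_i \alpha_i S_i

where V is the room volume in m³ and A is the total absorption, summed over surfaces S_i (m²) with absorption coefficients α_i. As a purely illustrative example with assumed values, a 180 m³ classroom with about 72 m² Sabine of total absorption would yield T60 ≈ 0.161 × 180 / 72 ≈ 0.4 s.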

2021 ◽  
Author(s):  
Fatos Myftari

This thesis is concerned with noise reduction in hearing aids. Hearing-impaired listeners have great difficulty understanding speech against a noisy background. This problem has motivated the development and use of noise reduction algorithms to improve speech intelligibility in hearing aids. In this thesis, two noise reduction algorithms for single-channel hearing instruments are presented and evaluated using objective and subjective tests. The first, conventional Spectral Subtraction, is simulated using MATLAB 6.5 (R13). The second, Spectral Subtraction in the wavelet domain, is implemented offline and compared with conventional Spectral Subtraction. A subjective evaluation demonstrates that the second algorithm offers additional speech intelligibility advantages in poor listening conditions relative to conventional Spectral Subtraction. The subjective testing was performed with normal-hearing listeners at Ryerson University. The objective evaluation shows that Spectral Subtraction in the wavelet domain yields an improved signal-to-noise ratio compared to conventional Spectral Subtraction.
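
The thesis itself was implemented in MATLAB; the following is a minimal Python sketch of conventional magnitude spectral subtraction (estimate a noise spectrum from a noise-only segment, subtract it frame by frame, apply a spectral floor), included only to illustrate the baseline algorithm, not the author's implementation or the wavelet-domain variant.

```python
# Minimal sketch of conventional magnitude spectral subtraction (illustrative only).
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.5, floor=0.02):
    """Subtract an average noise magnitude spectrum, estimated from the first
    `noise_seconds` of the signal (assumed to be noise only), from every frame."""
    f, t, X = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(X), np.angle(X)

    # Noise magnitude estimate from the leading noise-only frames (assumption).
    n_noise_frames = max(1, int(noise_seconds * fs / 256))  # hop = 256 for nperseg=512
    noise_mag = mag[:, :n_noise_frames].mean(axis=1, keepdims=True)

    # Subtract and apply a spectral floor to limit musical noise.
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)

    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return enhanced
```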


2010 ◽  
Vol 21 (08) ◽  
pp. 493-511
Author(s):  
Amanda J. Ortmann ◽  
Catherine V. Palmer ◽  
Sheila R. Pratt

Background: A possible voicing cue used to differentiate voiced and voiceless cognate pairs is envelope onset asynchrony (EOA). EOA is the time between the onsets of two frequency bands of energy (in this study one band was high-pass filtered at 3000 Hz, the other low-pass filtered at 350 Hz). This study assessed the perceptual impact of manipulating EOA on voicing perception of initial stop consonants, and whether normal-hearing and hearing-impaired listeners were sensitive to changes in EOA as a cue for voicing. Purpose: The purpose of this study was to examine the effect of spectrally asynchronous auditory delay on the perception of voicing associated with initial stop consonants by normal-hearing and hearing-impaired listeners. Research Design: Prospective experimental study comparing the perceptual differences of manipulating the EOA cues for two groups of listeners. Study Sample: Thirty adults between the ages of 21 and 60 yr completed the study: 17 listeners with normal hearing and 13 listeners with mild-moderate sensorineural hearing loss. Data Collection and Analysis: The participants listened to voiced and voiceless stop consonants within a consonant-vowel syllable structure. The EOA of each syllable was varied along a continuum, and identification and discrimination tasks were used to determine if the EOA manipulation resulted in categorical shifts in voicing perception. In the identification task the participants identified the consonants as belonging to one of two categories (voiced or voiceless cognate). They also completed a same-different discrimination task with the same set of stimuli. Categorical perception was confirmed with a d-prime sensitivity measure by examining how accurately the results from the identification task predicted the discrimination results. The influence of EOA manipulations on the perception of voicing was determined from shifts in the identification functions and discrimination peaks along the EOA continuum. The two participant groups were compared in order to determine the impact of EOA on voicing perception as a function of syllable and hearing status. Results: Both groups of listeners demonstrated a categorical shift in voicing perception with manipulation of EOA for some of the syllables used in this study. That is, as the temporal onset asynchrony between low- and high-frequency bands of speech was manipulated, the listeners' perception of consonant voicing changed between voiced and voiceless categories. No significant differences were found between listeners with normal hearing and listeners with hearing loss as a result of the EOA manipulation. Conclusions: The results of this study suggested that both normal-hearing and hearing-impaired listeners likely use spectrally asynchronous delays found in natural speech as a cue for voicing distinctions. While delays in modern hearing aids are less than those used in this study, possible implications are that additional asynchronous delays from digital signal processing or open-fitting amplification schemes might cause listeners with hearing loss to misperceive voicing cues.
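
As a rough illustration of the stimulus manipulation described above (not the authors' actual procedure or parameter values), the sketch below splits a syllable into a low-pass band at 350 Hz and a high-pass band at 3000 Hz and then delays the low band relative to the high band, producing one direction of the envelope onset asynchrony continuum.

```python
# Rough sketch of creating an envelope-onset-asynchrony (EOA) stimulus by delaying the
# low-frequency band relative to the high-frequency band (illustrative parameters only).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def make_eoa_stimulus(syllable, fs, eoa_ms=20.0):
    """Delay the <350 Hz band by `eoa_ms` relative to the >3000 Hz band and recombine."""
    low_sos = butter(4, 350, btype="lowpass", fs=fs, output="sos")
    high_sos = butter(4, 3000, btype="highpass", fs=fs, output="sos")
    low_band = sosfiltfilt(low_sos, syllable)
    high_band = sosfiltfilt(high_sos, syllable)

    delay = int(round(eoa_ms * 1e-3 * fs))
    delayed_low = np.concatenate([np.zeros(delay), low_band])
    padded_high = np.concatenate([high_band, np.zeros(delay)])
    return delayed_low + padded_high
```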


2020 ◽  
Vol 63 (4) ◽  
pp. 1299-1311 ◽  
Author(s):  
Timothy Beechey ◽  
Jörg M. Buchholz ◽  
Gitte Keidser

Objectives: This study investigates the hypothesis that hearing aid amplification reduces effort within conversation for both hearing aid wearers and their communication partners. Levels of effort, in the form of speech production modifications, required to maintain successful spoken communication in a range of acoustic environments are compared to earlier reported results measured in unaided conversation conditions. Design: Fifteen young adult normal-hearing participants and 15 older adult hearing-impaired participants were tested in pairs. Each pair consisted of one young normal-hearing participant and one older hearing-impaired participant. Hearing-impaired participants received directional hearing aid amplification, according to their audiogram, via a master hearing aid with gain provided according to the NAL-NL2 fitting formula. Pairs of participants were required to take part in naturalistic conversations through the use of a referential communication task. Each pair took part in five conversations, each of 5-min duration. During each conversation, participants were exposed to one of five different realistic acoustic environments presented through highly open headphones. The ordering of acoustic environments across experimental blocks was pseudorandomized. Resulting recordings of conversational speech were analyzed to determine the magnitude of speech modifications, in terms of vocal level and spectrum, produced by normal-hearing talkers as a function of both acoustic environment and the degree of high-frequency average hearing impairment of their conversation partner. Results: The magnitude of spectral modifications of speech produced by normal-hearing talkers during conversations with aided hearing-impaired interlocutors was smaller than the speech modifications observed during conversations between the same pairs of participants in the absence of hearing aid amplification. Conclusions: The provision of hearing aid amplification reduces the effort required to maintain communication in adverse conditions. This reduction in effort provides benefit to hearing-impaired individuals and also to the conversation partners of hearing-impaired individuals. By considering the impact of amplification on both sides of dyadic conversations, this approach contributes to an increased understanding of the likely impact of hearing impairment on everyday communication.
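
The speech modifications above were quantified in terms of vocal level and spectrum. As a hedged sketch of two generic measures of that kind (not the authors' analysis pipeline), one can compute an overall RMS level in dB and a long-term average spectrum from a recording:

```python
# Illustrative sketch of two simple speech-production measures (overall level and
# long-term average spectrum); not the analysis pipeline used in the study.
import numpy as np
from scipy.signal import welch

def speech_level_db(x):
    """Overall RMS level in dB relative to full scale."""
    return 20.0 * np.log10(np.sqrt(np.mean(x**2)) + 1e-12)

def long_term_average_spectrum(x, fs):
    """Welch estimate of the long-term average spectrum (frequencies in Hz, power in dB)."""
    f, pxx = welch(x, fs=fs, nperseg=4096)
    return f, 10.0 * np.log10(pxx + 1e-20)
```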


2021 ◽  
Vol 25 ◽  
pp. 233121652110144
Author(s):  
Ilja Reinten ◽  
Inge De Ronde-Brons ◽  
Rolph Houben ◽  
Wouter Dreschler

Single microphone noise reduction (NR) in hearing aids can provide a subjective benefit even when there is no objective improvement in speech intelligibility. A possible explanation lies in a reduction of listening effort. Previously, we showed that response times (a proxy for listening effort) to an auditory-only dual-task were reduced by NR in normal-hearing (NH) listeners. In this study, we investigate if the results from NH listeners extend to the hearing-impaired (HI), the target group for hearing aids. In addition, we assess the relevance of the outcome measure for studying and understanding listening effort. Twelve HI subjects were asked to sum two digits of a digit triplet in noise. We measured response times to this task, as well as subjective listening effort and speech intelligibility. Stimuli were presented at three signal-to-noise ratios (SNR; –5, 0, +5 dB) and in quiet. Stimuli were processed with ideal or nonideal NR or left unprocessed. The effect of NR on response times in HI listeners was significant only in conditions where speech intelligibility was also affected (–5 dB SNR). This is in contrast to the previous results with NH listeners. There was a significant effect of SNR on response times for HI listeners. The response time measure was reasonably correlated (R² = 0.54) with subjective listening effort and showed a sufficient test–retest reliability. This study thus presents an objective, valid, and reliable measure for evaluating an aspect of listening effort of HI listeners.
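
The digit triplets were presented at fixed SNRs of –5, 0, and +5 dB. As a minimal sketch (illustrative, not the study's stimulus-generation code), mixing speech and noise at a target SNR amounts to scaling the noise relative to the speech RMS:

```python
# Minimal sketch of mixing speech and noise at a target SNR (illustrative only).
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise RMS ratio equals `snr_db`, then mix.
    Assumes `noise` is at least as long as `speech`."""
    noise = noise[:len(speech)]
    rms = lambda x: np.sqrt(np.mean(x**2))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise
```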


2019 ◽  
Vol 23 ◽  
pp. 233121651988761 ◽  
Author(s):  
Gilles Courtois ◽  
Vincent Grimaldi ◽  
Hervé Lissek ◽  
Philippe Estoppey ◽  
Eleftheria Georganti

The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to render auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was however more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.
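
Level is one of the strongest cues to auditory distance, which is why equalizing the stimuli in level, as described above, isolates the remaining cues such as the direct-to-reverberant ratio. For reference, the level cue that is removed follows the textbook free-field inverse-distance law (a general relation, not a formula from the study):

\Delta L = 20 \log_{10}\!\left(\frac{r_2}{r_1}\right)\ \text{dB}

i.e., roughly 6 dB of attenuation per doubling of distance.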


1976 ◽  
Vol 19 (2) ◽  
pp. 279-289 ◽  
Author(s):  
Randall B. Monsen

Although it is well known that the speech produced by the deaf is generally of low intelligibility, the sources of this low speech intelligibility have generally been ascribed either to aberrant articulation of phonemes or inappropriate prosody. This study was designed to determine to what extent a nonsegmental aspect of speech, formant transitions, may differ in the speech of the deaf and of the normal hearing. The initial second formant transitions of the vowels /i/ and /u/ after labial and alveolar consonants (/b, d, f/) were compared in the speech of six normal-hearing and six hearing-impaired adolescents. In the speech of the hearing-impaired subjects, the second formant transitions may be reduced both in time and in frequency. At its onset, the second formant may be nearer to its eventual target frequency than in the speech of the normal subjects. Since formant transitions are important acoustic cues for the adjacent consonants, reduced F2 transitions may be an important factor in the low intelligibility of the speech of the deaf.
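
Formant transitions such as the F2 movements discussed above are commonly estimated by linear-predictive analysis of short frames. The sketch below illustrates the general technique (not Monsen's 1976 measurement procedure), deriving formant frequency estimates from the roots of an LPC polynomial:

```python
# Illustrative LPC-based formant estimation for one short frame of voiced speech;
# a generic technique sketch, not the measurement procedure used in the 1976 study.
import numpy as np
import librosa

def formants_from_frame(frame, fs, order=12):
    """Return estimated formant frequencies (Hz) from the roots of an LPC polynomial."""
    a = librosa.lpc(frame.astype(float), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]      # keep upper-half-plane roots
    freqs = sorted(np.angle(roots) * fs / (2.0 * np.pi))    # convert root angles to Hz
    return [f for f in freqs if f > 90.0]                   # drop near-DC roots
```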


2020 ◽  
pp. 132-136
Author(s):  
Hiroshi Ikeda ◽  
Shigeyuki Minami

Hearing-impaired persons are required to drive with hearing aids to supplement their hearing ability; however, there has been insufficient discussion of how hearing aid use affects driving. To investigate actual usage and driving conditions when hearing aids are worn while driving, this paper uses a questionnaire to survey (1) how easy it is to drive when wearing hearing aids and (2) how often hearing aids are not worn while driving. Concerning ease of driving with a hearing aid, the responses suggested that people with congenital hearing loss rely more on visual information, whereas those with acquired hearing loss continue to draw on their experience of hearing. Respondents with a high level of disability found it harder to drive while using a hearing aid, and those with a low disability level found it easier. Regarding driving without hearing aids, about 60% of respondents had done so; those who often drive without hearing aids had more frequently experienced headaches caused by noise from the hearing aids than those who wear them at all times. Hearing aids are necessary assistive devices that give hearing-impaired persons access to auditory information and support a safe driving environment. This paper therefore addresses the issues involved in maintaining a comfortable driving environment while wearing a hearing aid.

