How Much Should the Noise Level Be Reduced? – Speech Recognition Threshold in Noise Environments Using a Parametric Speaker –

2021 ◽  
pp. 542-550
Author(s):  
Noko Kuratomo ◽  
Tadashi Ebihara ◽  
Naoto Wakatsuki ◽  
Koichi Mizutani ◽  
Keiichi Zempo


2020 ◽  
Vol 50 (1) ◽  
pp. 9
Author(s):  
Widayat Alviandi ◽  
Jenny Bashiruddin ◽  
Brastho Bramantyo ◽  
Farisa Rizky

Background: Patients with hearing disturbance will generally undergo pure tone audiometry and speech audiometry in a quiet room, but those examinations cannot evaluate the ability to understand speech in a daily environment with a noisy background. The words-in-noise test provides valuable information regarding a patient's hearing problem in noise. Purpose: To evaluate the hearing threshold using the words-in-noise test in adults with normal hearing. Method: This cross-sectional study was conducted in Cipto Mangunkusumo Hospital from January to April 2017. All subjects who fulfilled the inclusion and exclusion criteria underwent pure tone audiometry, speech audiometry, and the words-in-noise test. Results: A total of 71 individuals with normal hearing were recruited for this study. The words-in-noise test showed median values of 67 dB and 100 dB for Speech Recognition Threshold (SRT) 50% and Speech Discrimination Score (SDS) 100%, respectively. The SRT 50% and SDS 100% were significantly higher in the age group 40–60 years compared to the age group 18–39 years. There was also a statistically significant difference between males and females at SRT 50% assessed by words-in-noise audiometry. Conclusion: The words-in-noise test showed a statistically significant difference in SRT 50% and SDS 100% between the two age groups, but no difference was found between genders. The results of this study can be used as a reference for SRT and SDS values of the speech audiometry test in noise.

Keywords: words in noise, speech audiometry, speech recognition threshold, speech discrimination score
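
To make the group comparison above concrete, here is a minimal sketch of one way such a median comparison could be run; the abstract does not name the statistical test used, so the Mann-Whitney U test and the data below are assumptions for illustration only.

```python
# Illustrative comparison of SRT 50% values between two age groups.
# The abstract does not specify the statistical test; a Mann-Whitney U test
# is shown here as one common choice, applied to hypothetical data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
srt_young = rng.normal(loc=65, scale=3, size=40)  # hypothetical 18-39 y group (dB)
srt_older = rng.normal(loc=69, scale=3, size=31)  # hypothetical 40-60 y group (dB)

u_stat, p_value = mannwhitneyu(srt_young, srt_older, alternative="two-sided")
print(f"Median young: {np.median(srt_young):.1f} dB, "
      f"median older: {np.median(srt_older):.1f} dB, "
      f"U = {u_stat:.0f}, p = {p_value:.4f}")
```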


2021 ◽  
Vol 13 ◽  
Author(s):  
Larry E. Humes

Many older adults have difficulty understanding speech in noisy backgrounds. In this study, we examined peripheral auditory, higher-level auditory, and cognitive factors that may contribute to such difficulties. A convenience sample of 137 volunteer older adults (90 women and 47 men), ranging in age from 47 to 94 years (M = 69.2, SD = 10.1 years), completed a large battery of tests. Auditory tests included clinical and psychophysical measures of pure-tone threshold, two measures of gap-detection threshold, and four measures of temporal-order identification; the latter included two monaural and two dichotic listening conditions. In addition, cognition was assessed using the complete Wechsler Adult Intelligence Scale-3rd Edition (WAIS-III). Two monaural measures of speech-recognition threshold (SRT) in noise, the QuickSIN and the WIN, were obtained from each ear at relatively high presentation levels of 93 or 103 dB SPL to minimize audibility concerns. Group data, both aggregate and by age decade, were evaluated initially to allow comparison to data in the literature. Next, following the application of principal-components factor analysis for data reduction, individual differences in speech-recognition-in-noise performance were examined using multiple-linear-regression analyses. Excellent fits were obtained, accounting for 60–77% of the total variance; most of this was accounted for by the audibility of the speech and noise stimuli and the severity of hearing loss, with the balance primarily associated with cognitive function.
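
A minimal sketch of the analysis pipeline described above (principal-components data reduction followed by multiple linear regression), using synthetic data and hypothetical variable names rather than the study's dataset:

```python
# Sketch of PCA-based data reduction followed by multiple linear regression,
# in the spirit of the analysis described above. The synthetic data and the
# variable names are hypothetical, not the study's dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_listeners = 137

# Hypothetical predictor battery: audiometric, temporal-processing, and
# cognitive scores (columns), one row per listener.
predictors = rng.normal(size=(n_listeners, 12))
# Hypothetical outcome: SRT in noise (dB SNR), loosely driven by the predictors.
srt_in_noise = predictors @ rng.normal(size=12) + rng.normal(scale=1.0, size=n_listeners)

# Data reduction: keep enough components to explain ~90% of predictor variance.
pca = PCA(n_components=0.90, svd_solver="full")
components = pca.fit_transform(predictors)

# Regress SRT-in-noise performance on the retained component scores.
model = LinearRegression().fit(components, srt_in_noise)
r_squared = model.score(components, srt_in_noise)
print(f"{components.shape[1]} components retained, R^2 = {r_squared:.2f}")
```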


1994 ◽  
Vol 79 (2) ◽  
pp. 1003-1008 ◽  
Author(s):  
Stanley Coren ◽  
A. Ralph Hakstian

Hearing sensitivity is most commonly still reported in terms of pure tone thresholds. Unfortunately, simple procedures for predicting speech recognition thresholds from pure tone thresholds are not currently available. To remedy this problem, pure tone thresholds were collected from 802 individuals over the range of 250 to 8000 Hz. Five subsets of pure tone thresholds that are commonly used to report hearing status were then considered. An average correlation of 0.878 was found between the various pure tone indexes and the speech recognition threshold. Using regressions between the pure tone indexes and the speech measure, a table was constructed that allows conversion of the various pure tone indexes to a predicted speech recognition threshold with only a very simple computation.
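
The conversion such a table provides amounts to evaluating a fitted regression line at a given pure-tone index. A minimal sketch, with hypothetical example data and coefficients rather than the published regression values:

```python
# Sketch of a pure-tone-index -> predicted-SRT conversion via simple linear
# regression. The paired observations and fitted coefficients below are
# hypothetical; consult the published table for the actual conversion values.
import numpy as np

# Hypothetical paired observations: pure-tone average (dB HL) and measured SRT (dB HL).
pta = np.array([5, 10, 15, 20, 30, 40, 55, 70], dtype=float)
srt = np.array([4, 9, 13, 19, 28, 41, 54, 72], dtype=float)

# Simple linear regression: SRT ~ slope * PTA + intercept.
slope, intercept = np.polyfit(pta, srt, deg=1)

def predicted_srt(pure_tone_index_db: float) -> float:
    """Predict SRT (dB HL) from a pure-tone index using the fitted regression."""
    return slope * pure_tone_index_db + intercept

# A small conversion table, analogous in spirit to the one described above.
for index_db in range(0, 80, 10):
    print(f"PTA {index_db:>2} dB HL -> predicted SRT {predicted_srt(index_db):5.1f} dB HL")
```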


2019 ◽  
Vol 9 (10) ◽  
pp. 2066
Author(s):  
Ayodeji Opeyemi Abioye ◽  
Stephen D. Prior ◽  
Peter Saddington ◽  
Sarvapali D. Ramchurn

This paper investigated the effects of varying noise levels and varying lighting levels on speech and gesture control command interfaces for aerobots. The aim was to determine the practical suitability of the multimodal combination of speech and visual gesture in human aerobotic interaction, by investigating the limits and feasibility of use of the individual components. In order to determine this, a custom multimodal speech and visual gesture interface was developed using the CMU (Carnegie Mellon University) Sphinx and OpenCV (Open Source Computer Vision) libraries, respectively. An experimental study was designed to measure the individual effects of each of the two main components of speech and gesture, and 37 participants were recruited for the experiment. The ambient noise level was varied from 55 dB to 85 dB. The ambient lighting level was varied from 10 lux to 1400 lux, under different lighting colour temperature mixtures of yellow (3500 K) and white (5500 K), and different backgrounds for capturing the finger gestures. The results of the experiment, which consisted of around 3108 speech utterances and 999 gesture-quality observations, were presented and discussed. It was observed that speech recognition accuracy/success rate falls as noise levels rise, with a 75 dB noise level being the aerobot's practical application limit, as the speech control interaction becomes very unreliable due to poor recognition beyond this. It was concluded that multi-word speech commands were more reliable and effective than single-word speech commands. In addition, some speech command words (e.g., land) were more noise resistant than others (e.g., hover) at higher noise levels, due to their articulation. From the results of the gesture-lighting experiment, the effects of both lighting conditions and the environment background on the quality of gesture recognition were almost insignificant, less than 0.5%. The implication of this is that other factors, such as the gesture capture system design and technology (camera and computer hardware), the type of gesture being captured (upper body, whole body, hand, fingers, or facial gestures), and the image processing technique (gesture classification algorithms), are more important in developing a successful gesture recognition system. Further work was suggested based on these findings, including using alternative ASR (Automatic Speech Recognition) speech models and developing a more robust gesture recognition algorithm.
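
A minimal sketch of how per-noise-level success rates might be tabulated from utterance-level observations to locate a practical limit such as the 75 dB level reported above; the records and the reliability cut-off below are hypothetical:

```python
# Tabulate speech-recognition success rate per ambient-noise level from
# utterance-level observations. The records and the cut-off are hypothetical,
# for illustration of the analysis idea only.
from collections import defaultdict

# (noise level in dB, command word, recognized correctly?)
observations = [
    (55, "land", True), (55, "hover", True),
    (65, "land", True), (65, "hover", True),
    (75, "land", True), (75, "hover", False),
    (85, "land", False), (85, "hover", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for noise_db, _word, recognized in observations:
    totals[noise_db] += 1
    correct[noise_db] += int(recognized)

RELIABILITY_THRESHOLD = 0.5  # assumed cut-off for "practically usable"
for noise_db in sorted(totals):
    rate = correct[noise_db] / totals[noise_db]
    usable = "usable" if rate >= RELIABILITY_THRESHOLD else "unreliable"
    print(f"{noise_db} dB: success rate {rate:.0%} ({usable})")
```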


1987 ◽  
Vol 30 (3) ◽  
pp. 377-386 ◽  
Author(s):  
Dianne J. Van Tasell ◽  
Jeffry L. Yanz

Speech recognition threshold (SRT) was measured in quiet and in noise for normal-hearing subjects and subjects with high-frequency sensorineural hearing loss. For the hearing-impaired subjects, SRT in quiet approximated the amount of hearing loss in the frequency region of importance for each of two sets of speech materials—spondees and monosyllables. With changes in frequency response of the stimulus delivery system, SRT shifted differentially for spondees and monosyllables. The speed, reliability, and apparent sensitivity of the SRT in quiet and noise to frequency response characteristics make it a potentially useful tool for hearing aid evaluation if speech materials appropriate to both the hearing loss configuration and the frequency response of amplification are chosen.


2014 ◽  
Vol 23 (2) ◽  
pp. 182-189 ◽  
Author(s):  
Ishara Ramkissoon ◽  
Julie M. Estis ◽  
Ashley Gaal Flagge

2009 ◽  
Vol 20 (07) ◽  
pp. 409-421 ◽  
Author(s):  
Jace Wolfe ◽  
Erin C. Schafer ◽  
Benjamin Heldner ◽  
Hans Mülder ◽  
Emily Ward ◽  
...  

Background: Use of personal frequency-modulated (FM) systems significantly improves speech recognition in noise for users of cochlear implants (CIs). Previous studies have shown that the most appropriate gain setting on the FM receiver may vary based on the listening situation and the manufacturer of the CI system. Unlike traditional FM systems with fixed-gain settings, Dynamic FM automatically varies the gain of the FM receiver with changes in the ambient noise level. There are no published reports describing the benefits of Dynamic FM use for CI recipients or how Dynamic FM performance varies as a function of CI manufacturer. Purpose: To evaluate speech recognition of Advanced Bionics Corporation or Cochlear Corporation CI recipients using Dynamic FM vs. a traditional FM system and to examine the effects of Autosensitivity on the FM performance of Cochlear Corporation recipients. Research Design: A two-group repeated-measures design. Participants were assigned to a group according to their type of CI. Study Sample: Twenty-five subjects, ranging in age from 8 to 82 years, met the inclusion criteria for one or more of the experiments. Thirteen subjects used Advanced Bionics Corporation, and 12 used Cochlear Corporation implants. Intervention: Speech recognition was assessed while subjects used traditional, fixed-gain FM systems and Dynamic FM systems. Data Collection and Analysis: In Experiments 1 and 2, speech recognition was evaluated with a traditional, fixed-gain FM system and a Dynamic FM system using the Hearing in Noise Test sentences in quiet and in classroom noise. A repeated-measures analysis of variance (ANOVA) was used to evaluate effects of CI manufacturer (Advanced Bionics and Cochlear Corporation), type of FM system (traditional and dynamic), noise level, and use of Autosensitivity for users of Cochlear Corporation implants. Experiment 3 determined the effects of Autosensitivity on speech recognition of Cochlear Corporation implant recipients when listening through the speech processor microphone with the FM system muted. A repeated-measures ANOVA was used to examine the effects of signal-to-noise ratio and Autosensitivity. Results: In Experiment 1, use of Dynamic FM resulted in better speech recognition in noise for Advanced Bionics recipients relative to traditional FM at noise levels of 65, 70, and 75 dB SPL. Advanced Bionics recipients obtained better speech recognition in noise with FM use when compared to Cochlear Corporation recipients. When Autosensitivity was enabled in Experiment 2, the performance of Cochlear Corporation recipients was equivalent to that of Advanced Bionics recipients, and Dynamic FM was significantly better than traditional FM. Results of Experiment 3 indicate that use of Autosensitivity improves speech recognition in noise of signals directed to the speech processor relative to no Autosensitivity. Conclusions: Dynamic FM should be considered for use with persons with CIs to improve speech recognition in noise. At default CI settings, FM performance is better for Advanced Bionics recipients when compared to Cochlear Corporation recipients, but use of Autosensitivity by Cochlear Corporation users results in equivalent group performance.
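
The contrast drawn above is between a fixed FM-receiver gain and a gain that rises with the ambient noise level. The sketch below illustrates that idea only; the mapping is hypothetical and is not the actual (proprietary) Dynamic FM algorithm:

```python
# Illustrative contrast between a fixed FM-receiver gain and a gain that
# increases with ambient noise level. The mapping below is hypothetical and
# does not describe the actual Dynamic FM behavior.

FIXED_GAIN_DB = 10.0  # traditional FM receiver: constant gain

def dynamic_fm_gain_db(ambient_noise_db_spl: float) -> float:
    """Hypothetical dynamic gain: fixed below a quiet threshold, then rising
    with ambient noise so the FM signal stays above the noise floor."""
    quiet_threshold_db = 57.0
    max_extra_gain_db = 14.0
    extra = max(0.0, ambient_noise_db_spl - quiet_threshold_db) * 0.5
    return FIXED_GAIN_DB + min(extra, max_extra_gain_db)

for noise in (55, 65, 70, 75):
    print(f"{noise} dB SPL noise: fixed gain {FIXED_GAIN_DB:.0f} dB, "
          f"dynamic gain {dynamic_fm_gain_db(noise):.1f} dB")
```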


Author(s):  
S. B. Rathna Kumar ◽  
Madhu Sudharshan Reddy. B ◽  
Sale Kranthi

Background: The present study aimed to develop word lists in Telugu for assessing the speech recognition threshold, which might serve as equivalent and alternative forms to the existing word lists. Methods: A total of two word lists were developed using compound words, with each list consisting of 25 words. Equivalence analysis and performance-intensity function testing were carried out using the two word lists on a total of 75 native speakers of Telugu, who were equally divided into three groups. Results: The results revealed that there was no statistically significant difference (p>0.05) in speech recognition performance between the three groups for each word list, or between the two word lists for each group. Hence, the two word lists developed were found to be equally difficult for all the groups and can be used interchangeably. The performance-intensity (PI) function curve showed a semi-linear function, and the subjects reached the beginning of the plateau at 3 dB SL, where they exceeded a 90% speech recognition score for both word lists, and reached a 100% speech recognition score at 6 dB SL. The 50% speech recognition score, which corresponds to the SRT, was obtained at less than 1.5 dB SL for both word lists, suggesting good agreement between PTA and SRT. Conclusions: The findings of the study are similar to the findings for existing word lists in Telugu. Thus, the word lists developed in the present study can be considered equivalent and alternative forms to the existing word lists in Telugu.
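
A minimal sketch of how a performance-intensity function can be fitted and the 50% point (SRT) read off; the logistic form and the data points are assumptions for illustration, not the study's results:

```python
# Fit a performance-intensity (PI) function and read off the 50% point (SRT).
# The logistic form and the data points are hypothetical illustrations.
import numpy as np
from scipy.optimize import curve_fit

def pi_function(level_db_sl, srt_db_sl, slope):
    """Logistic PI function: proportion correct vs. presentation level."""
    return 1.0 / (1.0 + np.exp(-slope * (level_db_sl - srt_db_sl)))

# Hypothetical scores (proportion correct) at several sensation levels.
levels = np.array([0.0, 1.5, 3.0, 4.5, 6.0])
scores = np.array([0.35, 0.55, 0.92, 0.97, 1.00])

(srt_estimate, slope_estimate), _ = curve_fit(
    pi_function, levels, scores, p0=[1.5, 1.0]
)
print(f"Estimated SRT (50% point): {srt_estimate:.1f} dB SL, slope {slope_estimate:.2f}")
```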

