Survey of telephone speech-signal statistics

1962 ◽  
Vol 8 (85) ◽  
pp. 39-39

1992 ◽  
Vol 13 (2) ◽  
pp. 70-79 ◽  
Author(s):  
Mark Terry ◽  
Kathryn Bright ◽  
Mike Durian ◽  
Laura Kepler ◽  
Richard Sweetman ◽  
...  

2014 ◽  
Vol 25 (10) ◽  
pp. 952-968 ◽  
Author(s):  
Stephen Julstrom ◽  
Linda Kozma-Spytek

Background: In order to better inform the development and revision of the American National Standards Institute (ANSI) C63.19 and ANSI/Telecommunications Industry Association-1083 hearing aid compatibility standards, a previous study examined the signal strength and signal (speech)-to-noise (interference) ratio needs of hearing aid users when using wireless and cordless phones in the telecoil coupling mode. This study expands that examination to cochlear implant (CI) users, in both telecoil and microphone modes of use.

Purpose: The purpose of this study was to evaluate the magnetic and acoustic signal levels needed by CI users for comfortable telephone communication and the users’ tolerance relative to the speech levels of various interfering wireless communication–related noise types.

Research Design: The design was a descriptive and correlational study. Simulated telephone speech and eight interfering noise types presented as continuous signals were linearly combined and presented together, either acoustically or magnetically, to the participants’ CIs. The participants could adjust the loudness of the telephone speech and the interfering noises according to several assigned criteria.

Study Sample: The 21 test participants ranged in age from 23 to 81 yr. All used wireless phones with their CIs, and 15 also used cordless phones at home. Twelve participants normally used the telecoil mode for telephone communication, whereas nine used the implant’s microphone; all were tested accordingly.

Data Collection and Analysis: A guided-intake questionnaire yielded general background information for each participant. A custom-built test control box fed by prepared speech-and-noise files enabled the tester or test participant, as appropriate, to switch between the various test signals and to control the speech and noise levels precisely and independently. The tester, but not the test participant, could read and record the selected levels. Subsequent analysis revealed the preferred speech levels, the speech (signal)-to-noise ratios, and the effect of possible noise-measurement weighting functions.

Results: The participants’ preferred telephone speech levels subjectively matched, or were somewhat lower than, the level they heard from a 65 dB SPL wideband reference. The mean speech (signal)-to-noise ratio required for them to consider their telephone experience “acceptable for normal use” was 20 dB, very similar to the results for the hearing aid users of the previous study. Significant differences in the participants’ apparent noise tolerance among the noise types when the noise level was determined using A-weighting were eliminated when a CI-specific noise-measurement weighting was applied.

Conclusions: The results for the CI users, in terms of both preferred levels for wireless and cordless phone communication and signal-to-noise requirements, closely paralleled the corresponding results for hearing aid users from the previous study, and showed no significant differences between the microphone and telecoil modes of use. Signal-to-noise requirements were directly related to the participants’ noise audibility threshold and were independent of noise type when appropriate noise-measurement weighting was applied. Extending the investigation to noncontinuous interfering noises and to forms of radiofrequency interference other than additive audiofrequency noise could be areas of future study.
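As context for the weighting comparison above, the conventional A-weighting curve (defined in IEC 61672) that the study tested against a CI-specific weighting can be computed directly. This is background illustration only; the CI-specific weighting developed in the study is not reproduced here.

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz), per IEC 61672.

    A-weighting approximates the ear's reduced sensitivity to low
    and very high frequencies when measuring noise levels.
    """
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00  # normalized so A(1 kHz) ≈ 0 dB

print(round(a_weighting_db(1000.0), 2))  # ≈ 0 dB at 1 kHz by definition
print(round(a_weighting_db(100.0), 1))   # strongly attenuated at low frequency
```

A weighting such as this changes which noise types appear loudest on a meter, which is why the study's choice of weighting function affected the apparent noise-tolerance differences.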


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Cevahir Parlak ◽  
Yusuf Altun

In this article, a novel pitch determination algorithm based on the harmonic differences method (HDM) is proposed. Most algorithms today rely on autocorrelation, cepstrum, or, most recently, convolutional neural networks, and they suffer from various limitations (small datasets, wideband-only or narrowband-only operation, musical sounds, temporal smoothing, etc.) as well as accuracy and speed problems. Very few works exploit the spacing between the harmonics. HDM is designed for both wideband and exclusively narrowband (telephone) speech and tries to find the most frequently repeated difference between the harmonics of the speech signal. We use three vowel databases in our experiments, namely, the Hillenbrand Vowel Database, the Texas Vowel Database, and vowels from the TIMIT corpus. We compare HDM with the autocorrelation, cepstrum, YIN, YAAPT, CREPE, and FCN algorithms. Results show that harmonic differences are a reliable and fast choice for robust pitch detection; moreover, HDM is superior to the others in most cases.
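The core idea, taking the most frequently repeated spacing between spectral peaks as the pitch, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the peak picker, the 10% threshold, and the 2 Hz histogram resolution are all assumptions.

```python
import numpy as np

def hdm_pitch(signal, sr, fmin=50.0, fmax=400.0):
    """Sketch of a harmonic-differences pitch estimate (hypothetical)."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    # crude peak picking: local maxima above 10% of the strongest peak
    thresh = 0.1 * spectrum.max()
    peaks = [freqs[i] for i in range(1, len(spectrum) - 1)
             if spectrum[i] > thresh
             and spectrum[i] > spectrum[i - 1]
             and spectrum[i] > spectrum[i + 1]]
    # spacings between successive peaks, kept only if in the pitch range
    diffs = [b - a for a, b in zip(peaks, peaks[1:]) if fmin <= b - a <= fmax]
    if not diffs:
        return None
    # the most repeated spacing, binned at ~2 Hz resolution
    hist, edges = np.histogram(diffs, bins=np.arange(fmin - 1.0, fmax + 1.0, 2.0))
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])

# synthetic vowel-like tone: 150 Hz fundamental plus four harmonics
sr = 16000
t = np.arange(0, 0.5, 1.0 / sr)
x = sum(np.sin(2 * np.pi * 150 * h * t) / h for h in range(1, 6))
print(hdm_pitch(x, sr))  # ≈ 150 Hz
```

Because the harmonic spacing survives even when the fundamental itself is filtered out, an approach like this is naturally suited to narrowband telephone speech.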


Author(s):  
Martin Chavant ◽  
Alexis Hervais-Adelman ◽  
Olivier Macherey

Purpose: An increasing number of individuals with residual or even normal contralateral hearing are being considered for cochlear implantation. It remains unknown whether the presence of contralateral hearing is beneficial or detrimental to their perceptual learning of cochlear implant (CI)–processed speech. The aim of this experiment was to provide a first insight into this question using acoustic simulations of CI processing.

Method: Sixty normal-hearing listeners took part in an auditory perceptual learning experiment. Each subject was randomly assigned to one of three groups of 20 referred to as NORMAL, LOWPASS, and NOTHING. The experiment consisted of two test phases separated by a training phase. In the test phases, all subjects were tested on recognition of monosyllabic words passed through a six-channel “PSHC” vocoder presented to a single ear. In the training phase, which consisted of listening to a 25-min audio book, all subjects were also presented with the same vocoded speech in one ear, but the signal they received in their other ear differed across groups. The NORMAL group was presented with the unprocessed speech signal, the LOWPASS group with a low-pass filtered version of the speech signal, and the NOTHING group with no sound at all.

Results: The improvement in speech scores following training was significantly smaller for the NORMAL group than for the LOWPASS and NOTHING groups.

Conclusions: This study suggests that the presentation of normal speech in the contralateral ear reduces or slows down perceptual learning of vocoded speech, but that an unintelligible low-pass filtered contralateral signal does not have this effect. Potential implications for the rehabilitation of CI patients with partial or full contralateral hearing are discussed.
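For readers unfamiliar with vocoder simulations of CI processing, a minimal noise-excited channel vocoder can be sketched as below: the speech is split into frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited carriers. Note that the study used a six-channel "PSHC" (pulse-spreading harmonic complex) carrier; this sketch substitutes a generic noise carrier, and the band edges and envelope smoothing are assumptions.

```python
import numpy as np

def noise_vocode(x, sr, n_channels=6, fmin=100.0, fmax=6000.0):
    """Sketch of a generic noise-excited channel vocoder (hypothetical)."""
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    # logarithmically spaced band edges, as is typical for CI simulations
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(n)
    N = np.fft.rfft(noise)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        # band-limit the speech and extract its amplitude envelope
        x_band = np.fft.irfft(np.where(band, X, 0), n)
        env = np.abs(x_band)
        # smooth the envelope with a ~10 ms moving average
        w = max(1, int(0.01 * sr))
        env = np.convolve(env, np.ones(w) / w, mode="same")
        # modulate band-limited noise with the envelope
        out += env * np.fft.irfft(np.where(band, N, 0), n)
    return out

# example: vocode a 500 Hz tone
sr = 16000
t = np.arange(0, 0.25, 1.0 / sr)
y = noise_vocode(np.sin(2 * np.pi * 500 * t), sr)
```

The output preserves the temporal envelope in each band but discards fine spectral structure, which is what makes vocoded speech a useful normal-hearing proxy for CI-processed speech.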


2011 ◽  
Vol 21 (2) ◽  
pp. 44-54
Author(s):  
Kerry Callahan Mandulak

Spectral moment analysis (SMA) is an acoustic analysis tool that shows promise for enhancing our understanding of normal and disordered speech production. It can augment auditory-perceptual analysis used to investigate differences across speakers and groups and can provide unique information regarding specific aspects of the speech signal. The purpose of this paper is to illustrate the utility of SMA as a clinical measure for both clinical speech production assessment and research applications documenting speech outcome measurements. Although acoustic analysis has become more readily available and accessible, clinicians need training with, and exposure to, acoustic analysis methods in order to integrate them into traditional methods used to assess speech production.
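As an illustration of what SMA computes, the first four spectral moments of a signal (centroid, spread, skewness, and kurtosis of the spectrum treated as a distribution over frequency) can be obtained as follows. This is a minimal sketch; the windowing choice and the use of the magnitude rather than the power spectrum are assumptions, not prescriptions from the article.

```python
import numpy as np

def spectral_moments(x, sr):
    """First four spectral moments of a signal's magnitude spectrum."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    p = spec / spec.sum()                               # spectrum as a distribution
    m1 = np.sum(freqs * p)                              # centroid (Hz)
    var = np.sum((freqs - m1) ** 2 * p)
    m2 = np.sqrt(var)                                   # spread (Hz)
    m3 = np.sum((freqs - m1) ** 3 * p) / var ** 1.5     # skewness (dimensionless)
    m4 = np.sum((freqs - m1) ** 4 * p) / var ** 2       # kurtosis (dimensionless)
    return m1, m2, m3, m4

# example: a 1 kHz pure tone has a spectral centroid near 1000 Hz
sr = 16000
t = np.arange(0, 0.5, 1.0 / sr)
centroid, spread, skew, kurt = spectral_moments(np.sin(2 * np.pi * 1000 * t), sr)
```

In clinical use, moments like these are typically computed over the burst or frication portion of a consonant, where the centroid and skewness help distinguish, for example, /s/ from /ʃ/ productions.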

