The Study of Bone Vibrator Positions: Normal Hearing and Conductive and Sensorineural Hearing Loss

2007 ◽  
Vol 3 (2) ◽  
pp. 139-144
Author(s):  
Seung Chul Lee ◽  
Jinsook Kim


Author(s):  
Jawahar Antony P ◽  
Animesh Barman

Background and Aim: Auditory stream segregation is a phenomenon that splits sounds into different streams. The temporal cues that contribute to stream segregation have been previously studied in normal-hearing people. In people with sensorineural hearing loss (SNHL), temporal envelope coding is not usually affected, while temporal fine structure cues are affected. Both of these temporal cues depend on the amplitude modulation frequency. The present study aimed to evaluate the effect of sinusoidal amplitude modulated (SAM) broadband noises on stream segregation in individuals with SNHL. Methods: Thirty normal-hearing subjects and 30 subjects with mild to moderate bilateral SNHL participated in the study. Two experiments were performed: in the first experiment, an AB sequence of broadband SAM stimuli was presented, while in the second experiment, only the B sequence was presented. A low (16 Hz) and a high (256 Hz) standard modulation frequency were used in these experiments. The subjects were asked to detect irregularities in the rhythmic sequence. Results: Both study groups identified the irregularities similarly in both experiments. The minimum cumulative delay was slightly higher in the SNHL group. Conclusion: The findings suggest that the temporal cues provided by the broadband SAM noises at low and high standard modulation frequencies were not used for stream segregation by either normal-hearing subjects or those with SNHL. Keywords: Stream segregation; sinusoidal amplitude modulation; sensorineural hearing loss
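The abstract does not specify the stimulus parameters, but the construction of a SAM broadband noise is standard: Gaussian noise multiplied by a raised sinusoid at the modulation frequency. A minimal sketch follows; the function name, sampling rate, duration, and modulation depth are illustrative assumptions, not values from the study.

```python
import numpy as np

def sam_broadband_noise(duration_s, fm_hz, depth, fs=44100, seed=0):
    """Sinusoidally amplitude-modulated (SAM) broadband noise.

    duration_s: stimulus length in seconds
    fm_hz: modulation frequency (e.g. 16 or 256 Hz, as in the study)
    depth: modulation depth m, between 0 and 1
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)                 # broadband Gaussian noise
    modulator = 1.0 + depth * np.sin(2 * np.pi * fm_hz * t)
    return carrier * modulator

# 500 ms burst modulated at the low (16 Hz) standard rate
stim = sam_broadband_noise(0.5, fm_hz=16, depth=0.9)
```

An AB sequence, as in the first experiment, would alternate bursts generated at two different `fm_hz` values.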


2019 ◽  
Vol 162 (1) ◽  
pp. 129-136 ◽  
Author(s):  
Evette A. Ronner ◽  
Liliya Benchetrit ◽  
Patricia Levesque ◽  
Razan A. Basonbul ◽  
Michael S. Cohen

Objective To assess quality of life (QOL) in pediatric patients with sensorineural hearing loss (SNHL) with the Pediatric Quality of Life Inventory 4.0 (PedsQL 4.0) and the Hearing Environments and Reflection on Quality of Life 26 (HEAR-QL-26) and HEAR-QL-28 surveys. Study Design Prospective longitudinal study. Setting Tertiary care center. Subjects and Methods Surveys were administered to patients with SNHL (ages 2-18 years) from July 2016 to December 2018 at a multidisciplinary hearing loss clinic. Patients aged >7 years completed the HEAR-QL-26, HEAR-QL-28, and PedsQL 4.0 self-report tool, while parents completed the PedsQL 4.0 parent proxy report for children aged ≤7 years. Previously published data from children with normal hearing were used for controls. The independent t test was used for analysis. Results In our cohort of 100 patients, the mean age was 7.7 years (SD, 4.5): 62 participants had bilateral SNHL; 63 had mild to moderate SNHL; and 37 had severe to profound SNHL. Sixty-eight patients used a hearing device. Mean (SD) total survey scores for the PedsQL 4.0 (ages 2-7 and 8-18 years), HEAR-QL-26 (ages 7-12 years), and HEAR-QL-28 (ages 13-18 years) were 83.9 (14.0), 79.2 (11.1), 81.2 (9.8), and 77.5 (11.3), respectively. Mean QOL scores for patients with SNHL were significantly lower than those for controls on the basis of previously published normative data (P < .0001). There was no significant difference in QOL between children with unilateral and bilateral SNHL or between children with SNHL who did and did not require a hearing device. Low statistical power due to small subgroup sizes limited our analysis. Conclusion It is feasible to collect QOL data from children with SNHL in a hearing loss clinic. Children with SNHL had significantly lower scores on validated QOL instruments when compared with peers with normal hearing.
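Because the controls here are previously published normative data rather than raw scores, the independent t test is computed from summary statistics (mean, SD, n) rather than from individual observations. A minimal sketch of that calculation follows; the control mean, SD, and sample sizes below are hypothetical placeholders, not figures from the study.

```python
import math

def independent_t_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Student's independent-samples t statistic with pooled variance,
    computed from summary statistics (means, SDs, group sizes)."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    return (m1 - m2) / se, df

# SNHL PedsQL 4.0 self-report mean from the abstract (79.2, SD 11.1);
# the control mean/SD and both n values are illustrative only.
t, df = independent_t_from_summary(79.2, 11.1, 100, 83.9, 12.0, 100)
```

The resulting t is then compared against the t distribution with `df` degrees of freedom to obtain the p value.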


2018 ◽  
Vol 144 (3) ◽  
pp. 1834-1834
Author(s):  
M. P. Feeney ◽  
Kim Schairer ◽  
Douglas H. Keefe ◽  
Denis Fitzpatrick ◽  
Daniel Putterman ◽  
...  

2019 ◽  
Vol 59 (4) ◽  
pp. 254-262 ◽  
Author(s):  
Maria Huber ◽  
Sebastian Roesch ◽  
Belinda Pletzer ◽  
Julia Lukaschyk ◽  
Anke Lesinski-Schiedat ◽  
...  

1999 ◽  
Vol 42 (4) ◽  
pp. 773-784 ◽  
Author(s):  
Christopher W. Turner ◽  
Siu-Ling Chi ◽  
Sarah Flock

Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss could recognize consonants as well as listeners with normal hearing when speech was processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, both normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli were presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners became a limiting factor, one would predict that the performance of the hearing-impaired listeners would then become poorer than the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than do listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel speech condition were consistent with the previous study in that both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners for all other conditions, including the 2-channel speech condition. These results would appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution. 
One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.
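Stimuli with a controlled number of "channels of spectral information" are typically produced with a noise vocoder: the speech is split into contiguous frequency bands, each band's temporal envelope is extracted, and the envelopes modulate band-limited noise carriers. The sketch below illustrates the idea; the band edges, envelope smoothing, and FFT-based filtering are simplifying assumptions and will not match the original study's processing exactly.

```python
import numpy as np

def noise_vocode(signal, n_channels, fs=16000, env_cut_hz=50):
    """Noise-vocoder sketch: band-split, extract envelopes, remodulate noise."""
    n = signal.size
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    edges = np.linspace(100, fs / 2, n_channels + 1)   # linear band edges (assumption)
    rng = np.random.default_rng(0)
    # moving-average kernel as a crude envelope lowpass
    k = max(1, int(fs / env_cut_hz))
    kernel = np.ones(k) / k
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)                   # band-pass the speech
        env = np.convolve(np.abs(band), kernel, mode="same")  # temporal envelope
        noise_spec = np.fft.rfft(rng.standard_normal(n))
        noise = np.fft.irfft(noise_spec * mask, n)            # band-limited noise carrier
        out += env * noise
    return out

# toy "speech-like" input: a 300 Hz tone with a slow amplitude contour
fs = 16000
tt = np.arange(fs // 2) / fs
speech_like = np.sin(2 * np.pi * 300 * tt) * (1 + 0.5 * np.sin(2 * np.pi * 4 * tt))
vocoded = noise_vocode(speech_like, n_channels=4, fs=fs)
```

With `n_channels=1` the output conveys only the overall temporal envelope, which is the condition where both listener groups performed similarly.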


1978 ◽  
Vol 21 (1) ◽  
pp. 5-36 ◽  
Author(s):  
Marilyn D. Wang ◽  
Charlotte M. Reed ◽  
Robert C. Bilger

It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in a sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave a result comparable to that of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation in terms of the spectrum, the level of speech, and the configuration of the individual listener's audiogram is given.
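The high-pass and low-pass conditions can be sketched with a brick-wall FFT filter. This is purely illustrative: the 1978 study used real filters with finite slopes, and the cutoff frequency below is an assumed value, not one of the study's twelve conditions.

```python
import numpy as np

def fft_filter(signal, cutoff_hz, fs=16000, kind="low"):
    """Ideal (brick-wall) high-pass or low-pass filter via FFT masking.
    Illustrative only; real filtering conditions have finite slopes."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    keep = freqs <= cutoff_hz if kind == "low" else freqs >= cutoff_hz
    return np.fft.irfft(spec * keep, signal.size)

# toy input: a low (200 Hz) and a high (4000 Hz) component
fs = 16000
tt = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 200 * tt) + np.sin(2 * np.pi * 4000 * tt)
lowpassed = fft_filter(mix, cutoff_hz=1000, fs=fs, kind="low")
```

Severe low-pass filtering of this kind removes the high-frequency consonant cues, which is what drove the confusions resembling those of listeners with high-frequency hearing loss.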

