Auditory and Audiovisual Feature Transmission in Hearing-Impaired Adults

1975 ◽  
Vol 18 (2) ◽  
pp. 272-280 ◽  
Author(s):  
Brian E. Walden ◽  
Robert A. Prosek ◽  
Don W. Worthington

Auditory and audiovisual consonant recognition were studied in 98 hearing-impaired adults, who demonstrated a wide range of consonant-recognition abilities. Information transfer analysis was used to describe the performance of the subjects on the auditory and audiovisual tasks in terms of a set of articulatory features. Visual cues substantially enhanced the transmission of duration, place-of-articulation, frication, and nasality features, but had considerably less effect on transmission of the liquid-glide and voicing features. The improvement in transmission resulting from visual cues was relatively constant across a wide range of auditory performance levels.
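Information transfer analysis of the kind used here (in the style of Miller and Nicely) collapses a consonant confusion matrix into the categories of a single articulatory feature (e.g., voicing or nasality) and reports the transmitted information relative to the stimulus entropy. The sketch below is an illustrative reconstruction, not the authors' code; the `relative_info_transfer` name and the `feature` mapping are hypothetical.

```python
import math

def relative_info_transfer(confusions, feature):
    # confusions: dict mapping (stimulus, response) consonant pairs -> counts.
    # feature: dict mapping each consonant -> its feature category (e.g., voicing 0/1).
    # Collapse the consonant confusion matrix into feature categories,
    # then return transmitted information T(x;y) relative to stimulus entropy H(x).
    counts = {}
    n = 0
    for (stim, resp), c in confusions.items():
        key = (feature[stim], feature[resp])
        counts[key] = counts.get(key, 0) + c
        n += c
    px, py = {}, {}
    for (x, y), c in counts.items():
        px[x] = px.get(x, 0) + c
        py[y] = py.get(y, 0) + c
    # T(x;y) = sum over cells of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
    t = 0.0
    for (x, y), c in counts.items():
        p = c / n
        t += p * math.log2(c * n / (px[x] * py[y]))
    # H(x) = stimulus entropy over feature categories
    hx = -sum((c / n) * math.log2(c / n) for c in px.values())
    return t / hx if hx > 0 else 0.0
```

A value of 1.0 means the feature was perfectly transmitted (all confusions stayed within a feature category); 0.0 means responses were independent of the stimulus feature.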

1974 ◽  
Vol 17 (2) ◽  
pp. 270-278 ◽  
Author(s):  
Brian E. Walden ◽  
Robert A. Prosek ◽  
Don W. Worthington

The redundancy between the auditory and visual recognition of consonants was studied in 100 hearing-impaired subjects who demonstrated a wide range of speech-discrimination abilities. Twenty English consonants, recorded in CV combination with the vowel /a/, were presented to the subjects for auditory, visual, and audiovisual identification. There was relatively little variation among subjects in the visual recognition of consonants. A measure of the expected degree of redundancy between an observer’s auditory and visual confusions among consonants was used in an effort to predict audiovisual consonant recognition ability. This redundancy measure was based on an information analysis of an observer’s auditory confusions among consonants and expressed the degree to which his auditory confusions fell within categories of visually homophenous consonants. The measure was found to have moderate predictive value in estimating an observer’s audiovisual consonant recognition score. These results suggest that the degree of redundancy between an observer’s auditory and visual confusions of speech elements is a determinant in the benefit that visual cues offer to that observer.


1978 ◽  
Vol 43 (3) ◽  
pp. 331-347 ◽  
Author(s):  
Elmer Owens

An analysis of consonant errors for hearing-impaired subjects in a multiple-choice format revealed that about 14 consonants caused most of the difficulty in consonant recognition. For a given consonant, error probability was typically lower in the initial position of the stimulus word than in the final position. When errors were made, the substitutions were limited typically to two or three other consonants, with a greater variety occurring for consonants in the final position. Substitutions tended to be the same over a wide range of pure-tone configurations. Place errors were predominant, but manner errors also occurred. In only a few instances did specific relationships occur between particular stimulus consonants and pure-tone configurations. With knowledge of the error consonants and typical substitutions, auditory recognition of consonants can be improved by programmed instruction methods. Shaping can be accomplished by a manipulation of the response foils (choices). Since it has been shown that visual recognition of consonants can also be improved, advantage can be taken of both the visual and auditory modalities in remedial procedures. Frequency of usage in the language should be considered in the ordering of consonants for retraining purposes. Work in consonant recognition should be beneficial to the hearing-impaired patient as part of a total rehabilitation program.


2009 ◽  
Vol 126 (5) ◽  
pp. 2683-2694 ◽  
Author(s):  
Sandeep A. Phatak ◽  
Yang-soo Yoon ◽  
David M. Gooler ◽  
Jont B. Allen

1998 ◽  
Vol 103 (2) ◽  
pp. 1098-1114 ◽  
Author(s):  
Elizabeth Kennedy ◽  
Harry Levitt ◽  
Arlene C. Neuman ◽  
Mark Weiss

2020 ◽  
Vol 31 (01) ◽  
pp. 030-039 ◽  
Author(s):  
Aaron C. Moberly ◽  
Kara J. Vasil ◽  
Christin Ray

Abstract. Adults with cochlear implants (CIs) are believed to rely more heavily on visual cues during speech recognition tasks than their normal-hearing peers. However, the relationship between auditory and visual reliance during audiovisual (AV) speech recognition is unclear and may depend on an individual's auditory proficiency, duration of hearing loss (HL), age, and other factors. The primary purpose of this study was to examine whether visual reliance during AV speech recognition depends on auditory function for adult CI candidates (CICs) and adult experienced CI users (ECIs).

Participants included 44 ECIs and 23 CICs. All participants were postlingually deafened and had met clinical candidacy requirements for cochlear implantation. Participants completed City University of New York sentence recognition testing. Three separate lists of twelve sentences each were presented: the first in the auditory-only (A-only) condition, the second in the visual-only (V-only) condition, and the third in combined AV fashion. Each participant's amount of "visual enhancement" (VE) and "auditory enhancement" (AE) was computed (i.e., the benefit to AV speech recognition of adding visual or auditory information, respectively, relative to what could potentially be gained). The relative reliance of VE versus AE was also computed as a VE/AE ratio.

The VE/AE ratio was predicted inversely by A-only performance. Visual reliance did not differ significantly between ECIs and CICs. Duration of HL and age did not account for additional variance in the VE/AE ratio.

A shift toward visual reliance may be driven by poor auditory performance in ECIs and CICs. The restoration of auditory input through a CI does not necessarily facilitate a shift back toward auditory reliance. Findings suggest that individual listeners with HL may rely on both auditory and visual information during AV speech recognition, to varying degrees based on their own performance and experience, to optimize communication performance in real-world listening situations.
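Enhancement measures of the kind described in this abstract are conventionally normalized by the available headroom (the gain from adding a modality divided by what could still be gained). The formulas below follow that convention but are an assumption consistent with the abstract's description, not taken from the paper itself; all function names are illustrative.

```python
def enhancement(av, unimodal):
    # Normalized benefit of adding the other modality:
    # observed gain divided by the maximum possible gain (1 - unimodal).
    # Scores are proportions correct in [0, 1].
    if unimodal >= 1.0:
        return 0.0  # no headroom left to improve
    return (av - unimodal) / (1.0 - unimodal)

def ve_ae_ratio(a_only, v_only, av):
    # Visual enhancement: benefit of adding vision to audition.
    ve = enhancement(av, a_only)
    # Auditory enhancement: benefit of adding audition to vision.
    ae = enhancement(av, v_only)
    # Relative reliance: values > 1 indicate greater visual reliance.
    return ve / ae if ae != 0 else float('inf')
```

For example, a listener scoring 0.5 A-only, 0.2 V-only, and 0.8 AV has VE = 0.6, AE = 0.75, and a VE/AE ratio of 0.8.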


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Hyun Jae Baek ◽  
Min Hye Chang ◽  
Jeong Heo ◽  
Kwang Suk Park

Brain-computer interfaces (BCIs) aim to enable people to interact with the external world through an alternative, nonmuscular communication channel that uses brain signal responses to complete specific cognitive tasks. BCIs have been growing rapidly during the past few years, with most of the BCI research focusing on system performance, such as improving accuracy or information transfer rate. Despite these advances, BCI research and development is still in its infancy and requires further consideration to significantly affect human experience in most real-world environments. This paper reviews the most recent studies and findings about ergonomic issues in BCIs. We review dry electrodes that can be used to detect brain signals with high enough quality to apply in BCIs and discuss their advantages, disadvantages, and performance. Also, an overview is provided of the wide range of recent efforts to create new interface designs that do not induce fatigue or discomfort during everyday, long-term use. The basic principles of each technique are described, along with examples of current applications in BCI research. Finally, we demonstrate a user-friendly interface paradigm that uses dry capacitive electrodes that do not require any preparation procedure for EEG signal acquisition. We explore the capacitively measured steady-state visual evoked potential (SSVEP) response to an amplitude-modulated visual stimulus and the auditory steady-state response (ASSR) to an auditory stimulus modulated by familiar natural sounds to verify their availability for BCI. We report the first results of an online demonstration that adopted this ergonomic approach to evaluating BCI applications. We expect BCI to become a routine clinical, assistive, and commercial tool through advanced EEG monitoring techniques and innovative interface designs.


2017 ◽  
Vol 12 (4) ◽  
pp. 448-454 ◽  
Author(s):  
Erik Schrödter ◽  
Gert-Peter Brüggemann ◽  
Steffen Willwacher

Purpose: To describe the stretch-shortening behavior of ankle plantar-flexing muscle–tendon units (MTUs) during the push-off in a sprint start.

Methods: Fifty-four male (100-m personal best: 9.58–12.07 s) and 34 female (100-m personal best: 11.05–14.00 s) sprinters were analyzed using an instrumented starting block and 2-dimensional high-speed video imaging. Analysis was performed separately for the front and rear legs, while accounting for block obliquities and performance levels.

Results: The results showed clear signs of dorsiflexion in the upper ankle joint (front block 15.8° ± 7.4°, 95% CI 13.2–18.2°; rear block 8.0° ± 5.7°, 95% CI 6.4–9.7°) preceding plantar flexion. When observed in their natural block settings, the athletes' block obliquity did not significantly affect push-off characteristics. It seems that the stretch-shortening-cycle-like motion of the soleus MTU has an enhancing influence on push-off force generation.

Conclusion: This study provides the first systematic observation of ankle-joint stretch-shortening behavior for sprinters across a wide range of performance levels. The findings highlight the importance of reactive-type training for the improvement of starting performance. Nonetheless, future studies need to resolve the independent contributions of tendinous and muscle-fascicle structures to overall MTU performance.


2012 ◽  
Vol 16 (2) ◽  
pp. 551-562 ◽  
Author(s):  
S. Patil ◽  
M. Stieglitz

Abstract. Prediction of streamflow at ungauged catchments requires transfer of hydrologic information (e.g., model parameters, hydrologic indices, streamflow values) from gauged (donor) to ungauged (receiver) catchments. A common metric used for the selection of ideal donor catchments is the spatial proximity between donor and receiver catchments. However, it is not clear whether information transfer among nearby catchments is suitable across a wide range of climatic and geographic regions. We examine this issue using data from 756 catchments within the continental United States. Each catchment is considered ungauged in turn, and daily streamflow is simulated through distance-based interpolation of streamflows from neighboring catchments. Results show that distinct geographic regions exist in the US where transfer of streamflow values from nearby catchments is useful for retrospective prediction of daily streamflow at ungauged catchments. Specifically, the high-predictability catchments (Nash–Sutcliffe efficiency NS > 0.7) are confined to the Appalachian Mountains in the eastern US, the Rocky Mountains, and the Cascade Mountains in the Pacific Northwest. Low-predictability catchments (NS < 0.3) are located mostly in the drier regions west of the Mississippi River, which demonstrates the limited utility of gauged catchments in those regions for predicting at ungauged basins. The results suggest that high streamflow similarity among nearby catchments (and therefore good predictability at ungauged catchments) is more likely in humid runoff-dominated regions than in dry evapotranspiration-dominated regions. We further find that higher density and/or closer distance of gauged catchments near an ungauged catchment does not necessarily guarantee good predictability there.
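The two quantitative ingredients of this abstract can be sketched briefly: a distance-based interpolation of donor streamflows, and the Nash–Sutcliffe (NS) efficiency used to score the resulting prediction. Inverse-distance weighting is assumed here as a representative distance-based scheme (the paper's exact weighting may differ), and the function names are illustrative.

```python
def nash_sutcliffe(observed, simulated):
    # NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    # NS = 1 is a perfect fit; NS <= 0 means no better than the observed mean.
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

def idw_streamflow(neighbors, power=2.0):
    # neighbors: list of (distance, streamflow) pairs from gauged donor
    # catchments; distances must be positive. Returns the inverse-distance-
    # weighted streamflow estimate at the ungauged catchment.
    weights = [1.0 / (d ** power) for d, _ in neighbors]
    total = sum(weights)
    return sum(w * q for w, (_, q) in zip(weights, neighbors)) / total
```

With two donors at equal distance, the estimate is simply the mean of their streamflows; closer donors dominate as the distance ratio grows.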

