bimodal condition
Recently Published Documents

TOTAL DOCUMENTS: 12 (FIVE YEARS: 1)
H-INDEX: 6 (FIVE YEARS: 0)

Author(s): Adam F. Werner, Jamie C. Gorman, Michael J. Crites

Many tasks require interpersonal coordination and teaming when one team member lacks visual or auditory perceptual information. Dyadic verbal and/or auditory communication typically results in the two people becoming informationally coupled. This experiment examined coupling using a two-person remote navigation task in which one participant blindly drove a remote-controlled car while another participant provided auditory cues, visual cues, or a combination of both (bimodal). Under these conditions, we evaluated performance at easy, moderate, and hard task difficulties. We predicted that the visual condition would yield higher performance overall and that the bimodal condition would yield higher performance as difficulty increased. Results indicated that visual coupling performed better overall than auditory coupling and that bimodal coupling showed increased performance as task difficulty went from moderate to hard. When auditory coupling occurs, the frequency at which teams communicate affects performance: the faster teams spoke, the better they performed, even when visual communication was available.


2016, Vol 59 (1), pp. 99-109
Author(s): Jennifer R. Fowler, Jessica L. Eggleston, Kelly M. Reavis, Garnett P. McMillan, Lina A. J. Reiss

Purpose: The objective was to determine whether speech perception could be improved for bimodal listeners (those using a cochlear implant [CI] in one ear and a hearing aid in the contralateral ear) by removing low-frequency information provided by the CI, thereby reducing acoustic–electric overlap. Method: Subjects were adult CI subjects with at least 1 year of CI experience. Nine subjects were evaluated in the CI-only condition (control condition), and 26 subjects were evaluated in the bimodal condition. CIs were programmed with 4 experimental programs in which the low cutoff frequency (LCF) was progressively raised. Speech perception was evaluated using Consonant-Nucleus-Consonant words in quiet, AzBio sentences in background babble, and spondee words in background babble. Results: The CI-only group showed decreased speech perception in both quiet and noise as the LCF was raised. Bimodal subjects with better hearing in the hearing aid ear (<60 dB HL at 250 and 500 Hz) performed best for words in quiet as the LCF was raised. In contrast, bimodal subjects with worse hearing (>60 dB HL at 250 and 500 Hz) performed similarly to the CI-only group. Conclusions: These findings suggest that reducing low-frequency overlap of the CI and contralateral hearing aid may improve performance in quiet for some bimodal listeners with better hearing.


2015, Vol 24 (4), pp. 462-468
Author(s): Jessica J. Messersmith, Lindsey E. Jorgensen, Jessica A. Hagg

Purpose: The purpose of this study was to determine whether an alternate fitting strategy, specifically adjustment of gains in a hearing aid (HA), would improve performance in patients who experienced poorer performance in the bimodal condition when the HA was fit to traditional targets. Method: This study was a retrospective chart review from a local clinic population seen during a 6-month period. Participants included 6 users of bimodal stimulation. Two performed more poorly in the cochlear implant (CI) + HA condition than in the CI-only condition. One individual performed higher in the bimodal condition, but overall performance was low. Three age range–matched users whose performance increased when the HA was used in conjunction with a CI were also included. The HA gain was reduced above 2000 Hz. Speech perception scores were obtained pre- and postmodification to the HA fitting. Results: All listeners whose HA was programmed using the modified approach demonstrated improved speech perception scores with the modified HA fit in the bimodal condition when compared with the traditional HA fit in the bimodal condition. Conclusion: Modifications to gains above 2000 Hz in the HA may improve performance for bimodal listeners who perform more poorly in the bimodal condition when the HA is fit to traditional targets.


2015, Vol 24 (2), pp. 243-249
Author(s): Hannah W. Siburt, Alice E. Holmes

Purpose: The purpose of this study was to determine current clinical practice in approaches to bimodal programming in the United States: specifically, whether clinicians are recommending bimodal stimulation, who programs the hearing aid in the bimodal condition, and what method is used for programming the hearing aid. Method: An 11-question online survey was created and sent via email to a comprehensive list of cochlear implant programming centers in the United States. The survey was sent to 360 recipients. Results: Respondents in this study represented a diverse group of clinical settings (response rate: 26%). Results indicate little agreement about who programs the hearing aids, when they are programmed, and how they are programmed in the bimodal condition. Analysis of small versus large implant centers indicated that small centers are less likely to add a device to the contralateral ear. Conclusions: Although a growing number of cochlear implant recipients choose to wear a hearing aid on the contralateral ear, there is inconsistency in the current clinical approach to bimodal programming. These survey results provide evidence of large variability in current bimodal programming practices and indicate a need for more structured clinical recommendations and programming approaches.


2012, Vol 25 (0), pp. 161-162
Author(s): Rachel L. Wright, Mark T. Elliott, Laura C. Spurgeon, Alan M. Wing

When information is available in more than one sensory modality, the central nervous system integrates the cues to obtain a statistically optimal estimate of the event or object perceived (Alais and Burr, 2004; Ernst and Banks, 2002). For synchronising movements to a stream of events, this multisensory advantage is observed as reduced temporal variability of the movements compared to unimodal conditions (Elliott et al., 2010, 2011; Wing et al., 2010). To date, this has been demonstrated only for upper limb movements (finger tapping). Here, we investigate synchronisation of lower limb movements (stepping on the spot) to auditory, visual and combined auditory-visual metronome cues. In addition, we compare movement corrections to a phase perturbation in the metronome for the three sensory modality conditions. We hypothesised that, as with upper limb movements, there would be a multisensory advantage, with stepping variability being lowest in the bimodal condition. As such, we further expected correction to the phase perturbation to be quickest in the bimodal condition. Our results show that while we see evidence of multisensory integration taking place, there was no multisensory advantage in the phase correction task: correction under the bimodal condition was almost identical to the auditory-only condition. Both the bimodal and auditory-only conditions showed larger corrections for each step after the perturbation compared to the visual-only condition. We conclude that rapid lower limb corrections are possible when synchronising with salient, regular auditory cues, such that integration of information from other modalities does not improve correction efficiency. However, if the auditory modality were less reliable, it is likely that multisensory cues would become advantageous in such a task.
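The "statistically optimal estimate" referenced here (Ernst and Banks, 2002) is usually modelled as a reliability-weighted average of the unimodal estimates, which predicts lower bimodal variance than either cue alone. A minimal sketch of that prediction, with illustrative values rather than anything from the study:

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each cue is weighted by its inverse variance, so the more reliable
    cue dominates; the combined variance is below either unimodal variance.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    est = w_a * est_a + (1.0 - w_a) * est_v
    var = (var_a * var_v) / (var_a + var_v)
    return est, var

# Hypothetical unimodal estimates of an event time (arbitrary units):
# the auditory cue is noisier (variance 4.0) than the visual cue (variance 1.0).
est, var = mle_combine(est_a=10.0, var_a=4.0, est_v=12.0, var_v=1.0)
# est == 11.6 (pulled toward the reliable visual cue), var == 0.8
```

Because the combined variance is the product over the sum of the unimodal variances, it is always smaller than the better single cue's, which is exactly the multisensory advantage the stepping study tested for.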


2012, Vol 25 (0), pp. 205
Author(s): Karin Petrini, Alicia Remark, Louise Smith, Marko Nardini

To perform everyday tasks, such as crossing a road, we rely greatly on our sight. However, certain situations (e.g., an extremely dark environment) as well as visual impairments can either reduce the reliability of this sensory information or remove it completely. In these cases, the use of other information is vital. Here we examine the development of haptic and auditory integration. Three different groups of adults and 5- to 12-year-old children were asked to judge which of two balls, one of standard size and one of variable size, was the larger. One group performed the task with auditory information only, haptic information only, or both. Auditory information about object size came from the loudness of a naturalistic sound played when observers knocked the ball against a touch-pad. A second group performed the same conditions while wearing a thick glove to reduce the reliability of the haptic information. Finally, a third group performed the task with either congruent or incongruent information. Psychometric functions were fitted to responses in order to measure observers' sensitivities to object size under these different conditions. Integration of haptic and auditory information predicts greater sensitivity in the bimodal condition than in either single-modality condition. Initial results show that young children do not integrate information from the haptic and auditory modalities, with some children aged below 8 years performing worse in the bimodal condition than in the auditory-only condition. Older children and adults seem able to integrate auditory and haptic information, especially when the reliability of the haptic information is reduced.
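Psychometric-function fitting of this kind is commonly modelled as a cumulative Gaussian whose slope parameter gives sensitivity (1/σ). The data and the coarse grid-search fit below are purely illustrative, not the study's actual method or values:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fit_psychometric(sizes, n_larger, n_trials):
    """Fit P("comparison larger") = Phi((x - mu) / sigma) by maximum
    likelihood over a coarse grid; sensitivity is 1 / sigma."""
    best = None
    for mu in [m / 10.0 for m in range(-20, 21)]:
        for sigma in [s / 10.0 for s in range(1, 51)]:
            ll = 0.0
            for x, k, n in zip(sizes, n_larger, n_trials):
                p = min(max(phi((x - mu) / sigma), 1e-9), 1 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if best is None or ll > best[0]:
                best = (ll, mu, sigma)
    _, mu, sigma = best
    return mu, sigma, 1.0 / sigma  # PSE, spread, sensitivity

# Made-up data: comparison-minus-standard size (mm) and "larger" responses out of 20.
sizes = [-2, -1, 0, 1, 2]
n_larger = [1, 5, 10, 15, 19]
mu, sigma, sens = fit_psychometric(sizes, n_larger, [20] * 5)
```

Under the integration account, the bimodal condition should yield a steeper fitted function (larger sensitivity) than either the haptic-only or auditory-only condition.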


2009, Vol 20 (06), pp. 353-373
Author(s): Lisa G. Potts, Margaret W. Skinner, Ruth A. Litovsky, Michael J. Strube, Francis Kuk

Background: The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite, nonimplanted ear (bimodal hearing). Purpose: This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization. Research Design: A repeated-measures correlational study was completed. Study Sample: Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study. Intervention: The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range. Data Collection and Analysis: Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six to eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparisons of means or, in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique in that a speech stimulus presented from a variety of roaming azimuths (140-degree loudspeaker array) was used. Results: Performance in the bimodal condition was significantly better for speech recognition and localization compared to the cochlear implant–only and hearing aid–only conditions.
Performance also differed between these conditions when the location (i.e., the side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, speech-recognition and localization performance was equal regardless of which side of the loudspeaker array presented the word, while performance was significantly poorer in the monaural conditions (hearing aid only and cochlear implant only) when the words were presented on the side with no stimulation. Binaural loudness summation of 1–3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation. Conclusions: These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients with measurable unaided hearing thresholds be fit with a hearing aid.


2008, Vol 79, pp. 21-29
Author(s): Desiree Capel, Elise de Bree, Annemarie Kerkhoff, Frank Wijnen

Phonemes are perceived categorically, and this perception is language-specific for adult listeners. Infants initially are "universal" listeners, capable of discriminating both native and non-native speech contrasts. This ability disappears in the first year of life. Maye et al. (2002, Cognition) propose that statistical learning is responsible for this change to language-specific perception. They were the first to show that 6- and 8-month-old infants use the statistical distribution of phonetic variation in learning to discriminate speech sounds. A replication of this experiment studied 10- to 11-month-old Dutch infants. They were exposed to either a bimodal or a unimodal frequency distribution of an 8-step speech sound continuum based on the Hindi voiced and voiceless retroflex plosives (/da/ and /ta/). The results show that only infants in the bimodal condition could discriminate the contrast, representing the speech sounds in two categories rather than one.
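The bimodal versus unimodal familiarization distributions can be illustrated with hypothetical token counts over the 8-step continuum. The counts below are invented; only the one-peak versus two-peak shape follows the Maye et al. design:

```python
# Hypothetical familiarization counts over an 8-step /da/-/ta/ continuum.
# Total exposure is matched; only the shape of the distribution differs.
bimodal  = [1, 4, 4, 1, 1, 4, 4, 1]   # two peaks -> evidence for two categories
unimodal = [1, 2, 3, 4, 4, 3, 2, 1]   # one central peak -> evidence for one category

def n_peaks(counts):
    """Count local maxima in a frequency distribution (a plateau counts once)."""
    peaks, rising = 0, False
    for prev, cur in zip(counts, counts[1:]):
        if cur > prev:
            rising = True
        elif cur < prev:
            if rising:
                peaks += 1
            rising = False
    return peaks
```

On this account, infants tracking token frequencies would find two clusters in the bimodal condition (supporting a /da/–/ta/ contrast) but only one in the unimodal condition, matching the discrimination results reported above.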


2007, Vol 18 (09), pp. 760-776
Author(s): Erin C. Schafer, Amyn M. Amlani, Andi Seibold, Pamela L. Shattuck

A meta-analytic approach was used to examine sixteen peer-reviewed publications related to speech-recognition performance in noise at fixed signal-to-noise ratios for participants who use bilateral cochlear implants (CIs) or bimodal stimulation. Two hundred eighty-seven analyses were conducted to compare the underlying contributions of binaural summation, binaural squelch, and the head-shadow effect relative to monaural conditions (CI or hearing aid). The analyses revealed an overall significant effect of binaural summation, binaural squelch, and head shadow for the bilateral and bimodal listeners relative to monaural conditions. In addition, all within-condition (bilateral or bimodal) comparisons were significant for the three binaural effects, with the exception of the bimodal condition compared to a monaural CI. No significant differences were detected between the bilateral and bimodal listeners for any of the binaural phenomena. Clinical implications and recommendations are discussed as they relate to the empirical findings.


1998, Vol 58, pp. 185-192
Author(s): Lydius Nienhuis, Heleen de Hondt

In this article we address the question of whether, in word learning, the effects of a bimodal (reading and pronouncing) condition are superior to those obtained in a monomodal (written only) condition. Research in the domains of psychology, psycholinguistics and foreign language learning lends support to the hypothesis that the effects of bimodal presentation and learning of words will be superior. In our experiment, pupils of intermediate classes from three different schools had to learn twelve French words in a bimodal condition: the words were presented in a text in a listening + reading condition; then the pupils learned the words by reading and pronouncing them. Pupils of three other (parallel) classes from the same three schools only read the same text and learned the words in a writing condition. The results of our investigation provide modest evidence for better retention in the bimodal condition: overall scores of the 'bimodal' classes proved superior to scores of the monomodal classes. However, this result is almost exclusively due to the results of only one school; in the other two schools, test results did not differ significantly. This may be due to the small number of words. In subsequent research, the number of words to be learned will have to be larger than the modest number of twelve in order to provide more convincing results.

