listening condition
Recently Published Documents

TOTAL DOCUMENTS: 50 (FIVE YEARS: 12)
H-INDEX: 9 (FIVE YEARS: 0)

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Julia Pauquet ◽  
Christiane M. Thiel ◽  
Christian Mathys ◽  
Stephanie Rosemann

Age-related hearing loss has been associated with increased recruitment of frontal brain areas during speech perception to compensate for the decline in auditory input. This additional recruitment may bind resources otherwise needed for understanding speech. However, it is unknown how increased demands on listening interact with increasing cognitive demands when processing speech in age-related hearing loss. The current study used a full-sentence working memory task manipulating demands on working memory and listening, and studied untreated hard-of-hearing participants with mild to moderate hearing loss (n = 20) and age-matched normal-hearing participants (n = 19) with functional MRI. On the behavioral level, we found a significant interaction of memory load and listening condition; this was, however, similar for both groups. Under low, but not high, memory load, listening condition significantly influenced task performance. Similarly, under easy, but not difficult, listening conditions, memory load had a significant effect on task performance. On the neural level, as measured by the BOLD response, we found increased responses under high compared to low memory load in the left supramarginal gyrus, left middle frontal gyrus, and left supplementary motor cortex, regardless of hearing ability. Furthermore, we found increased responses in the bilateral superior temporal gyri under easy compared to difficult listening conditions. We found no group differences and no interactions of group with memory load or listening condition. This suggests that memory load and listening condition interacted at the behavioral level; however, only increased memory load was reflected in increased BOLD responses in frontal and parietal brain regions. Hence, when evaluating listening abilities in elderly participants, memory load should be considered, as it might interfere with the assessed performance. We could not find any further evidence that BOLD responses for the different memory and listening conditions are affected by mild to moderate age-related hearing loss.


2021 ◽  
pp. 1-9
Author(s):  
Yang-Soo Yoon ◽  
Ivy Mills ◽  
BaileyAnn Toliver ◽  
Christine Park ◽  
George Whitaker ◽  
...  

Purpose We compared frequency difference limens (FDLs) in normal-hearing listeners under two listening conditions: sequential and simultaneous. Method Eighteen adult listeners participated in three experiments. FDL was measured using a method of limits for the comparison frequency. In the sequential listening condition, the tones were presented with a half-second interval between them; in the simultaneous listening condition, the tones were presented at the same time. In the first experiment, one of four reference tones (125, 250, 500, or 750 Hz), presented to the left ear, was paired with one of four starting comparison tones (250, 500, 750, or 1000 Hz), presented to the right ear. The second and third experiments used the same testing conditions as the first, except that the comparison tones were two- and three-tone complexes. The subjects were asked whether the tones sounded the same or different. When a subject chose “different,” the comparison frequency decreased by 10% of the frequency difference between the reference and comparison tones. The FDL was determined when the subject chose “same” three times in a row. Results FDLs were significantly broader (worse) with simultaneous listening than with sequential listening for the two- and three-tone complex conditions but not for the single-tone condition. The FDLs were narrowest (best) with the three-tone complex under both listening conditions. FDLs broadened as the testing frequencies increased for the single tone and the two-tone complex, but did not broaden at frequencies above 250 Hz for the three-tone complex. Conclusion The results suggest that sequential and simultaneous frequency discrimination are mediated by different processes at different stages in the auditory pathway for complex tones, but not for pure tones.
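The descending procedure described above can be sketched in a few lines. This is a minimal reading of the abstract, not the authors' code: `respond` is a hypothetical callback standing in for the listener's same/different judgment, and stepping by 10% of the *current* reference–comparison gap is one plausible interpretation of the stated rule.

```python
def measure_fdl(reference_hz, start_comparison_hz, respond):
    """Method-of-limits sketch of the FDL procedure described above.

    `respond(ref, comp)` is a hypothetical listener callback returning
    "same" or "different". On "different", the comparison frequency moves
    10% of the current reference-comparison gap toward the reference;
    the run ends after three consecutive "same" judgments.
    """
    comparison = start_comparison_hz
    same_in_a_row = 0
    while same_in_a_row < 3:
        if respond(reference_hz, comparison) == "different":
            same_in_a_row = 0  # a "different" response resets the count
            comparison -= 0.10 * (comparison - reference_hz)
        else:
            same_in_a_row += 1
    return abs(comparison - reference_hz)  # the frequency difference limen
```

With a simulated listener who hears any gap above a fixed threshold as "different", the procedure converges to a value just under that threshold.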


2021 ◽  
Vol 25 ◽  
pp. 233121652110141
Author(s):  
Robert T. Dwyer ◽  
Chen Chen ◽  
Phillipp Hehrmann ◽  
Nichole C. Dwyer ◽  
René H. Gifford

Individuals with bilateral cochlear implants (BiCIs) rely mostly on interaural level difference (ILD) cues to localize stationary sounds in the horizontal plane. Independent automatic gain control (AGC) in each device can distort this cue, resulting in poorer localization of stationary sound sources. However, little is known about how BiCI listeners perceive sound in motion. In this study, 12 BiCI listeners’ spatial hearing abilities were assessed for both static and dynamic listening conditions when the sound processors were synchronized by applying the same compression gain to both devices as a means to better preserve the original ILD cues. Stimuli consisted of band-pass filtered (100–8000 Hz) Gaussian noise presented at various locations or panned over an array of loudspeakers. In the static listening condition, the distance between two sequentially presented stimuli was adaptively varied to arrive at the minimum audible angle, the smallest spatial separation at which the listener can correctly determine whether the second sound was to the left or right of the first. In the dynamic listening condition, participants identified if a single stimulus moved to the left or to the right. Velocity was held constant and the distance the stimulus traveled was adjusted using an adaptive procedure to determine the minimum audible movement angle. Median minimum audible angle decreased from 17.1° to 15.3° with the AGC synchronized. Median minimum audible movement angle decreased from 100° to 25.5°. These findings were statistically significant and support the hypothesis that synchronizing the AGC better preserves ILD cues and results in improved spatial hearing abilities. However, restoration of the ILD cue alone was not enough to bridge the large performance gap between BiCI listeners and normal-hearing listeners on these static and dynamic spatial hearing measures.
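The synchronization idea, applying one shared compression gain to both devices so that the input ILD survives compression, can be illustrated with a toy broadband compressor. This is a minimal sketch, not the actual sound-processor algorithm; the threshold and ratio values are arbitrary.

```python
import math

def linked_agc(left, right, threshold_db=-30.0, ratio=3.0):
    """Sample-wise linked compression: the gain is driven by the louder
    of the two ears and applied identically to both channels, so the
    interaural level difference of the input passes through unchanged."""
    out_left, out_right = [], []
    for l, r in zip(left, right):
        # level of the louder ear, in dB re full scale
        level_db = 20.0 * math.log10(max(abs(l), abs(r)) + 1e-12)
        excess_db = max(level_db - threshold_db, 0.0)
        gain = 10.0 ** (-excess_db * (1.0 - 1.0 / ratio) / 20.0)
        out_left.append(l * gain)
        out_right.append(r * gain)
    return out_left, out_right
```

Because both channels are scaled by the same factor at every sample, the left/right amplitude ratio (the ILD) is preserved, whereas two independent AGCs would compress the louder ear more and shrink the ILD.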


Author(s):  
Sadie Schilaty ◽  
Sarah Hargus Ferguson ◽  
Shae D. Morgan ◽  
Caroline Champougny

Abstract Background Older adults with hearing loss often report difficulty understanding British-accented speech, such as on television or in movies, despite having understood such speech in the past. A few studies have examined the intelligibility of various U.S. regional and non-U.S. varieties of English for American listeners, but only for young adults with normal hearing. Purpose This preliminary study sought to determine whether British-accented sentences were less intelligible than American-accented sentences for American younger and older adults with normal hearing and for older adults with hearing loss. Research Design A mixed-effects design, with talker accent and listening condition as within-subjects factors and listener group as a between-subjects factor. Study Sample Three listener groups consisting of 16 young adults with normal hearing, 15 older adults with essentially normal hearing, and 22 older adults with sloping sensorineural hearing loss. Data Collection and Analysis Sentences produced by one General American English speaker and one British English speaker were presented to listeners at 70 dB sound pressure level in quiet and in babble. Signal-to-noise ratios for the latter varied among the listener groups. Responses were typed into a textbox and saved on each trial. Effects of accent, listening condition, and listener group were assessed using linear mixed-effects models. Results American- and British-accented sentences were equally intelligible in quiet, but intelligibility in noise was lower for British-accented sentences than for American-accented sentences. These intelligibility differences were similar for all three groups. Conclusion British-accented sentences were less intelligible than those produced by an American talker, but only in noise.


2020 ◽  
Vol 31 (10) ◽  
pp. 701-707
Author(s):  
Jung-sun Hwang ◽  
Yukyeong Jung ◽  
Jae Hee Lee

Abstract Background Auditory working memory is a crucial factor for complex cognitive tasks such as speech-in-noise understanding, because speech communication in noise engages multiple auditory and cognitive capacities to encode, store, and retrieve information. An immediate free recall task of words has frequently been used as a measure of auditory working memory capacity. Purpose The present study investigated performance on the immediate free recall of words in quiet and noisy conditions for hearing-impaired listeners. Research Design Fifty hearing-impaired listeners (30 younger and 20 older) participated in this study. Lists of 10 phonetically and lexically balanced words were presented at a fixed presentation rate in quiet and noise conditions. Target words were presented at an individually determined most comfortable level (MCL). Participants were required to recall as many of the words as possible, in any order, immediately after the end of the list. Serial position curves were determined from free-recall accuracy as a function of word position in the sequence. Data Collection and Analysis Three-way analyses of variance with repeated measures were conducted on the percent-correct word recall scores, with two within-group factors (serial position and listening condition) and a between-group factor (younger, older). Results A traditional serial position curve was found in hearing-impaired listeners, yet the serial position effects depended on the listening condition. In quiet, the listeners with hearing loss were likely to recall more words from the initial and final positions than from the middle positions. In multi-talker babble noise, more difficulty was observed when recalling the words in the initial position than the words in the final position. Conclusion Without noise, a traditional U-shaped serial position curve consisting of primacy and recency effects was observed in hearing-impaired listeners, in accord with previous findings from normal-hearing listeners. The adverse impact of background noise was more pronounced for the primacy effect than for the recency effect.
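The serial position analysis described above, percent-correct recall as a function of a word's position in the list, reduces to a simple tally. The trial format below is an assumption for illustration, not taken from the study:

```python
def serial_position_curve(trials):
    """Percent-correct free recall per list position.

    `trials` is an assumed format: a list of (presented, recalled) pairs,
    where `presented` holds the words in presentation order and
    `recalled` holds whatever the participant produced (recall order is
    ignored, as in free recall).
    """
    n_positions = len(trials[0][0])
    hits = [0] * n_positions
    for presented, recalled in trials:
        recalled_set = set(recalled)
        for pos, word in enumerate(presented):
            if word in recalled_set:
                hits[pos] += 1
    # percent correct at each serial position, averaged over trials
    return [100.0 * h / len(trials) for h in hits]
```

A U-shaped curve over these percentages (high at the first and last positions, low in the middle) is the primacy-plus-recency pattern the abstract refers to.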


2020 ◽  
Vol 1 (4) ◽  
pp. 452-473
Author(s):  
Chad S. Rogers ◽  
Michael S. Jones ◽  
Sarah McConkey ◽  
Brent Spehar ◽  
Kristin J. Van Engen ◽  
...  

Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19–30 years) and 32 older adults (aged 65–81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.


Author(s):  
Stuart Webb ◽  
Anna C.-S. Chang

Abstract There has been little research investigating how mode of input affects incidental vocabulary learning, and no study examining how it affects the learning of multiword items. The aim of this study was to investigate incidental learning of L2 collocations in three different modes: reading, listening, and reading while listening. One hundred thirty-eight second-year college students learning EFL in Taiwan were randomly assigned to three experimental groups (reading, listening, reading while listening) and a no-treatment control group. The experimental groups encountered 17 target collocations in the same graded reader. Learning was measured using two tests that involved matching the component words and recalling their meanings. The results indicated that the reading while listening condition was the most effective, while the reading and listening conditions produced similarly sized gains. The findings suggest that listening may play a more important role in learning collocations than in learning single-word items.


