listening task
Recently Published Documents

TOTAL DOCUMENTS: 248 (FIVE YEARS: 67)
H-INDEX: 28 (FIVE YEARS: 3)

2021 · Vol 118 (52) · pp. e2113887118
Author(s): Yang Zhang, Yue Ding, Juan Huang, Wenjing Zhou, Zhipei Ling, ...

Humans have an extraordinary ability to recognize and differentiate voices. It remains unclear, however, whether voices are uniquely processed in the human brain. To explore the underlying neural mechanisms of voice processing, we recorded electrocorticographic signals from intracranial electrodes in epilepsy patients while they listened to six different categories of voice and nonvoice sounds. Subregions in the temporal lobe exhibited preferences for distinct voice stimuli; these subregions were defined as “voice patches.” Latency analyses suggested a dual hierarchical organization of the voice patches. We also found that voice patches were functionally connected under both task-engaged and resting states. Furthermore, the left motor areas were coactivated and their activity correlated with the temporal voice patches during the sound-listening task. Taken together, this work reveals hierarchical cortical networks in the human brain for processing human voices.
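The task-engaged and resting-state connectivity analyses described above can be illustrated with a minimal correlation-based sketch. This is a simplification, not the study's pipeline (which used intracranial recordings and dedicated connectivity measures); the signals and the function name below are hypothetical:

```python
import numpy as np

def functional_connectivity(signals):
    """Pairwise Pearson correlation between channel time series.

    signals: array of shape (n_channels, n_samples).
    Returns an (n_channels, n_channels) correlation matrix.
    """
    return np.corrcoef(signals)

# Toy data: two channels driven by a shared source, one independent channel.
rng = np.random.default_rng(0)
shared = rng.standard_normal(1000)
ch_a = shared + 0.5 * rng.standard_normal(1000)
ch_b = shared + 0.5 * rng.standard_normal(1000)
ch_c = rng.standard_normal(1000)  # independent channel
fc = functional_connectivity(np.vstack([ch_a, ch_b, ch_c]))
```

Channels sharing a driving source show high off-diagonal correlation, while the independent channel correlates near zero, which is the intuition behind calling two "voice patches" functionally connected.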


2021 · Vol 15
Author(s): Moïra-Phoebé Huet, Christophe Micheyl, Etienne Parizet, Etienne Gaudrain

During the past decade, several studies have identified electroencephalographic (EEG) correlates of selective auditory attention to speech. In these studies, listeners are typically instructed to focus on one of two concurrent speech streams (the “target”) while ignoring the other (the “masker”). EEG signals are recorded while participants perform this task and are subsequently analyzed to recover the attended stream. An assumption often made in these studies is that the participant’s attention can remain focused on the target throughout the test. To check this assumption, and to assess when a participant’s attention in a concurrent speech listening task was directed toward the target, the masker, or neither, we designed a behavioral listen-then-recall task (the Long-SWoRD test). After listening to two simultaneous short stories, participants had to identify, on a computer screen, keywords from the target story randomly interspersed among words from the masker story and words from neither story. To modulate task difficulty, and hence the likelihood of attentional switches, masker stories were originally uttered by the same talker as the target stories; the masker voice parameters were then manipulated to parametrically control the similarity of the two streams, from clearly dissimilar to almost identical. While participants listened to the stories, EEG signals were measured and subsequently analyzed with a temporal response function (TRF) model to reconstruct the speech stimuli. Responses in the behavioral recall task were used to infer, retrospectively, when attention was directed toward the target, the masker, or neither.
During the model-training phase, the results of these behavioral-data-driven inferences were used as inputs to the model in addition to the EEG signals, to determine whether this additional information would improve stimulus-reconstruction accuracy relative to models trained under the assumption that the listener’s attention was unwaveringly focused on the target. Results from 21 participants show that information regarding the actual, as opposed to assumed, attentional focus can be used advantageously during model training to enhance subsequent (test-phase) accuracy of EEG-based auditory stimulus reconstruction. This is especially the case in challenging listening situations, where the participants’ attention is less likely to remain focused entirely on the target talker. In situations where the two competing voices are clearly distinct and easily separated perceptually, the assumption that listeners are able to stay focused on the target is reasonable. The behavioral recall protocol introduced here provides experimenters with a means to behaviorally track fluctuations in auditory selective attention, including in combined behavioral/neurophysiological studies.
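TRF-based stimulus reconstruction of the kind described above is, at its core, a ridge-regression decoder mapping time-lagged EEG to the speech envelope. The sketch below is a generic illustration under that assumption, not the authors' implementation; the toy data, lag range, and ridge parameter are all hypothetical:

```python
import numpy as np

def lagged_matrix(eeg, lags):
    """Stack time-lagged copies of each EEG channel into a design matrix.

    eeg: (n_samples, n_channels); lags: iterable of non-negative sample lags.
    """
    n, c = eeg.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * c:(i + 1) * c] = eeg[:n - lag]
    return X

def train_decoder(eeg, envelope, lags, ridge=1.0):
    """Fit a backward (stimulus-reconstruction) model with ridge regression."""
    X = lagged_matrix(eeg, lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def reconstruct(eeg, weights, lags):
    return lagged_matrix(eeg, lags) @ weights

# Toy data: each "EEG" channel is a noisy copy of the envelope, delayed so the
# neural response lags the stimulus (hence the decoder uses positive lags).
rng = np.random.default_rng(1)
env = rng.standard_normal(2000)
eeg = np.stack([np.roll(env, -5), np.roll(env, -10)], axis=1)
eeg += 0.3 * rng.standard_normal(eeg.shape)
lags = range(0, 16)
w = train_decoder(eeg, env, lags)
r = np.corrcoef(reconstruct(eeg, w, lags), env)[0, 1]
```

Reconstruction accuracy is typically scored as this Pearson correlation `r` between the decoded and actual envelopes, computed per attended stream.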


2021
Author(s): Grace M Clements, Mate Gyurkovics, Kathy A Low, Diane M Beck, Monica Fabiani, ...

In the face of multiple sensory streams, there may be competition for processing resources in multimodal cortical areas devoted to establishing representations. In such cases, alpha oscillations may serve to maintain the relevant representations and protect them from interference, whereas theta oscillations may facilitate their updating when needed. These oscillations can be hypothesized to differ in response to an auditory stimulus when the eyes are open versus closed, as intermodal resource competition may be more prominent in the former case than in the latter. Across two studies we investigated the role of alpha and theta power in multimodal competition using an auditory task with the eyes open and closed, respectively enabling and disabling visual processing in parallel with the incoming auditory stream. In a passive listening task (Study 1a), we found alpha suppression following a pip tone with both eyes open and closed, but subsequent alpha enhancement only with closed eyes. We replicated this eyes-closed alpha enhancement in an independent sample (Study 1b). In an active auditory oddball task (Study 2), we again observed the eyes-open/eyes-closed alpha pattern found in Study 1 and also demonstrated that the more attentionally demanding oddball trials elicited the largest oscillatory effects. Theta power did not interact with eye status in either study. We propose a hypothesis to account for these findings in which alpha may be endemic to multimodal cortical areas in addition to visual ones.
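The alpha (roughly 8–12 Hz) and theta (roughly 4–7 Hz) power measures central to analyses like these can be estimated with a Welch periodogram. A minimal sketch; the sampling rate, band edges, and synthetic signal are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    """Mean power spectral density within [low, high] Hz (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= low) & (freqs <= high)
    return psd[band].mean()

fs = 250  # hypothetical sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
# Synthetic EEG: strong 10 Hz alpha, weaker 6 Hz theta, plus noise.
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)
eeg += 0.5 * rng.standard_normal(t.size)
alpha = band_power(eeg, fs, 8, 12)
theta = band_power(eeg, fs, 4, 7)
```

Comparing such band-power estimates across conditions (eyes open vs. closed, pre- vs. post-stimulus) is the basis for the suppression and enhancement effects reported above.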


Author(s): Sara Kennedy

Abstract: In this article, the constructs of intelligibility, comprehensibility, and discourse-level understanding in second language (L2) speech are analyzed for their conceptual and methodological characteristics. The analysis is complemented by a case study of listeners’ understanding of two matched L2 English speakers, who completed three speaking tasks over 17 weeks. One listening task focused on word/phrase recognition and one focused on semantic and pragmatic understanding. Results showed two different profiles for the two speakers. When listeners had difficulty understanding, for one speaker it was often due to word/phrase recognition problems, while for the other speaker it was often due to ambiguity in the pragmatic or functional meaning of the speech. Implications are discussed for the ways in which L2 speech is elicited, evaluated, and taught.


2021 · Vol 19 (4) · pp. 0-0

Metacognitive intervention has prevailed in L2 (second language) listening research over the past decade. However, little research has linked metacognitive intervention with online listening. This study examines L2 learners’ development of metacognitive awareness of listening through online metacognitive listening practice. A set of online metacognitive listening exercises was constructed, based on a metacognitive cycle that regularly guides learners through the metacognitive processes of listening. Thirty-nine low-proficiency Chinese university EFL listeners from one intact class participated in the study and completed the online listening practice as individual outside-class homework for 14 weeks. The development of metacognitive awareness was measured with the Metacognitive Awareness Listening Questionnaire (MALQ) and enriched by the learners’ reflective notes. Results reveal an inverted U-shaped pattern in the development of metacognitive awareness and show that the factors of metacognitive awareness develop asynchronously. Some factors appear more susceptible to listening task difficulty and less stable across the development process.


2021
Author(s): James Brown, Alex Chatburn, David Wright, Maarten Immink

Post-training meditation has been shown to promote wakeful motor memory stabilization in experienced meditators. We investigated the effect of single-session mindfulness meditation on wakeful and sleep-dependent forms of implicit motor memory consolidation in meditation-naïve adults. Immediately after implicit sequence training, participants (N = 20, 8 females, Mage = 23.9 ± 3.3 years) completed either a 10-minute focused-attention meditation (N = 10), aiming to direct and sustain attention to breathing, or a control listening task. They were then exposed to interference through novel sequence training. Trained sequence performance was tested following a 5-hour wakeful period and again after a 15-hour period that included sleep. Bayesian inference was applied to group comparisons of mean reaction time (MRT) changes across training, interference, wakeful, and post-sleep time points. Relative to control conditions, post-training meditation reduced novel sequence interference (BF10 = 6.61) and improved wakeful motor memory consolidation (BF10 = 8.34). No group differences in sleep consolidation were evident (BF10 = 0.38). These findings illustrate that post-training mindfulness meditation expedites wakeful offline learning of an implicit motor sequence in meditation-naïve adults. Interleaving mindfulness meditation between acquisition of a target motor sequence and exposure to an interfering motor sequence reduced proactive and retroactive interference. Post-training mindfulness meditation neither enhanced nor inhibited sleep-dependent offline learning of a target implicit motor sequence. Previous meditation training is not required to obtain wakeful consolidation gains from post-training mindfulness meditation.
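Bayes factors such as the BF10 values above quantify the evidence for a group difference relative to the null (BF10 > 3 is conventionally "moderate" evidence for H1, BF10 < 1/3 for H0). One rough, commonly used route is the BIC approximation; this is not the authors' exact method, and the data below are simulated for illustration:

```python
import numpy as np

def bf10_bic(y, group):
    """Approximate Bayes factor for "group means differ" via BIC.

    Uses the large-sample approximation BF10 ~= exp((BIC0 - BIC1) / 2),
    comparing a one-mean null model with a per-group-means model.
    """
    n = y.size
    rss0 = np.sum((y - y.mean()) ** 2)                       # null: grand mean
    rss1 = sum(np.sum((y[group == g] - y[group == g].mean()) ** 2)
               for g in np.unique(group))                    # H1: group means
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
    bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
    return np.exp((bic0 - bic1) / 2)

# Simulated MRT changes (arbitrary units) for two groups of 10.
rng = np.random.default_rng(3)
meditation = 45 + 8 * rng.standard_normal(10)
control = 20 + 8 * rng.standard_normal(10)
y = np.concatenate([meditation, control])
group = np.array([0] * 10 + [1] * 10)
bf = bf10_bic(y, group)
```

With a large simulated group difference, the approximation yields BF10 well above 3, mirroring how the reported BF10 = 6.61 and 8.34 are read as evidence for an effect.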


2021 · Vol 11 (10) · pp. 1271
Author(s): Maria E. Barnes-Davis, Hisako Fujiwara, Georgina Drury, Stephanie L. Merhar, Nehal A. Parikh, ...

Extreme prematurity (EPT, <28 weeks gestation) is associated with language problems. We previously reported hyperconnectivity in EPT children versus term children (TC) using magnetoencephalography (MEG). Here, we aim to ascertain whether functional hyperconnectivity is a marker of language resiliency for EPT children, validating our earlier work with a distinct sample of contemporary well-performing EPT children and preterm children with a history of language delay (EPT-HLD). A total of 58 children (17 EPT, 9 EPT-HLD, and 32 TC) listened to stories during MEG and functional magnetic resonance imaging (fMRI) at 4–6 years of age. We compared connectivity in EPT and EPT-HLD, investigating relationships with language over time. We measured fMRI activation during story listening and parcellated the activation map to obtain “nodes” for the MEG connectivity analysis. There were no significant group differences in age, sex, race, ethnicity, parental education, income, language scores, or language representation on fMRI. MEG functional connectivity (weighted phase lag index) differed significantly between groups. Preterm children had increased connectivity, replicating our earlier work. EPT and EPT-HLD children had hyperconnectivity versus TC at 24–26 Hz, with EPT-HLD exhibiting the greatest connectivity. Network strength correlated with the change in standardized scores from 2 years to 4–6 years of age, suggesting that hyperconnectivity is a marker of advancing language development.
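The weighted phase lag index (wPLI) used here measures consistently lagged phase coupling while discounting zero-lag contributions (which are often volume-conduction artifacts). A simplified single-epoch, time-domain sketch; real MEG analyses compute wPLI per frequency across trials, and the signals below are synthetic:

```python
import numpy as np
from scipy.signal import hilbert

def wpli(x, y):
    """Weighted phase lag index between two signals (single epoch, over time).

    wPLI = |mean(Im(Sxy))| / mean(|Im(Sxy)|), with Sxy the cross-spectrum of
    the analytic signals. 0 = no consistent lagged coupling, 1 = maximal.
    """
    sxy = hilbert(x) * np.conj(hilbert(y))
    im = np.imag(sxy)
    return np.abs(im.mean()) / np.abs(im).mean()

fs = 500
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(4)
phase = 2 * np.pi * 25 * t  # 25 Hz carrier, cf. the 24-26 Hz band above
x = np.sin(phase) + 0.2 * rng.standard_normal(t.size)
y = np.sin(phase - np.pi / 4) + 0.2 * rng.standard_normal(t.size)  # lagged copy
coupled = wpli(x, y)
uncoupled = wpli(x, rng.standard_normal(t.size))
```

A consistently phase-lagged pair yields wPLI near 1, while unrelated signals yield values near 0; "hyperconnectivity" corresponds to elevated wPLI between node pairs.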


2021 · Vol 12
Author(s): Pratik Bhandari, Vera Demberg, Jutta Kray

Previous studies have shown that at moderate levels of spectral degradation, semantic predictability facilitates language comprehension. It is argued that when speech is degraded, listeners have narrowed expectations about the sentence endings; i.e., semantic prediction may be limited to only the most highly predictable sentence completions. The main objectives of this study were to (i) examine whether listeners form narrowed expectations or whether they form predictions across a wide range of probable sentence endings, (ii) assess whether the facilitatory effect of semantic predictability is modulated by perceptual adaptation to degraded speech, and (iii) use and establish a sensitive metric for the measurement of language comprehension. For this, we created 360 German subject-verb-object sentences that varied the semantic predictability of a sentence-final target word in a graded manner (high, medium, and low) and the level of spectral degradation (1, 4, 6, and 8 channels of noise-vocoding). These sentences were presented auditorily to two groups: one group (n = 48) performed a listening task in an unpredictable channel context in which the degraded speech levels were randomized, while the other group (n = 50) performed the task in a predictable channel context in which the degraded speech levels were blocked. The results showed that at 4 channels of noise-vocoding, response accuracy was higher for high-predictability sentences than for medium-predictability sentences, which in turn yielded higher accuracy than low-predictability sentences. This suggests that, in contrast to the narrowed-expectations view, comprehension of moderately degraded speech is facilitated in a graded manner across low-, medium-, and high-predictability sentences; listeners probabilistically preactivate upcoming words from a wide range of the semantic space rather than limiting prediction to highly probable sentence endings.
Additionally, in both channel contexts we did not observe learning effects; i.e., response accuracy did not increase over the course of the experiment, and response accuracy was higher in the predictable than in the unpredictable channel context. We speculate from these observations that when there is no trial-by-trial variation in the level of speech degradation, listeners adapt to speech quality over a long timescale; however, when there is trial-by-trial variation in a high-level semantic feature (e.g., sentence predictability), listeners do not adapt to a low-level perceptual property (e.g., speech quality) over a short timescale.
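Noise-vocoding, the degradation method used in this study, divides speech into frequency channels and replaces each channel's fine structure with envelope-modulated noise; fewer channels means more degradation. The sketch below is a generic channel vocoder, not the authors' stimulus pipeline: the logarithmic band spacing, filter order, and input signal are assumptions (vocoding studies often use Greenwood-spaced bands and low-pass-filtered envelopes):

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(speech, fs, n_channels, f_lo=100.0, f_hi=7000.0):
    """Channel vocoder: replace fine structure with envelope-modulated noise.

    Band edges are spaced logarithmically between f_lo and f_hi; each band's
    Hilbert envelope modulates band-limited white noise.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(speech.size)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, speech)))   # band envelope
        out += env * sosfilt(sos, noise)              # carried by noise
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)  # stand-in for a speech snippet
vocoded = noise_vocode(tone, fs, n_channels=4)
```

With 4 channels (the moderate-degradation condition above), spectral detail is coarse but the temporal envelope survives, which is why semantic predictability can still aid comprehension.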


2021 · Vol 32 (9) · pp. 1416-1425
Author(s): Niels Chr. Hansen, Haley E. Kragness, Peter Vuust, Laurel Trainor, Marcus T. Pearce

Anticipating the future is essential for efficient perception and action planning. Yet the role of anticipation in event segmentation is understudied because empirical research has focused on retrospective cues such as surprise. We address this concern in the context of perception of musical-phrase boundaries. A computational model of cognitive sequence processing was used to control the information-dynamic properties of tone sequences. In an implicit, self-paced listening task (N = 38), undergraduates dwelled longer on tones generating high entropy (i.e., high uncertainty) than on those generating low entropy (i.e., low uncertainty). Similarly, sequences that ended on tones generating high entropy were rated as sounding more complete (N = 31 undergraduates). These entropy effects were independent of both the surprise (i.e., information content) and the phrase position of target tones in the original musical stimuli. Our results indicate that events generating high entropy prospectively contribute to segmentation processes in auditory sequence perception, independently of the properties of the subsequent event.
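The two information-dynamic quantities contrasted here, entropy (prospective uncertainty about the next tone) and information content (retrospective surprise at the tone that occurred), both derive from a model's next-tone probability distribution. A minimal sketch with hypothetical distributions over a 4-tone alphabet:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a next-tone probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def information_content(p, tone):
    """Surprise of the tone that actually occurred: -log2 p(tone)."""
    return float(-np.log2(p[tone]))

# Hypothetical predictive distributions over a 4-tone alphabet.
certain = [0.97, 0.01, 0.01, 0.01]    # low entropy: continuation expected
uncertain = [0.25, 0.25, 0.25, 0.25]  # high entropy: boundary-like context
h_low = entropy(certain)
h_high = entropy(uncertain)
```

Entropy is computed before the next tone arrives, which is what makes it a prospective boundary cue, whereas information content can only be evaluated after the fact.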

