gating paradigm
Recently Published Documents


TOTAL DOCUMENTS: 28 (FIVE YEARS: 4)
H-INDEX: 14 (FIVE YEARS: 1)

Author(s):  
Jiaqiang Zhu ◽  
Xiaoxiang Chen ◽  
Fei Chen ◽  
Seth Wiener

Purpose: Individuals with congenital amusia exhibit degraded speech perception. This study examined whether adult Chinese Mandarin listeners with amusia were still able to extract the statistical regularities of Mandarin speech sounds, despite their degraded speech perception. Method: Using the gating paradigm with monosyllabic syllable–tone words, we tested 19 Mandarin-speaking amusics and 19 musically intact controls. Listeners heard increasingly longer fragments of the acoustic signal across eight duration-blocked gates. The stimuli varied in syllable token frequency and syllable–tone co-occurrence probability. Correct syllable–tone word, correct syllable-only, correct tone-only, and correct syllable–incorrect tone responses were each compared between the two groups using mixed-effects models. Results: Amusics were less accurate than controls in terms of correct word, correct syllable-only, and correct tone-only responses. Amusics, however, showed consistent patterns of top-down processing, as indicated by more accurate responses to high-frequency syllables and high-probability tones, and by tone errors that patterned like those of the control listeners. Conclusions: Amusics are able to learn syllable and tone statistical regularities from the language input. This extends previous work by showing that amusics can track phonological segment and pitch cues despite their degraded speech perception. The observed speech deficits in amusics are therefore not due to an abnormal statistical learning mechanism. These results support rehabilitation programs aimed at improving amusics' sensitivity to pitch.
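
The duration-blocked gating procedure described above can be pictured as cutting each word into cumulative fragments. The sketch below is a minimal illustration with simulated audio; it assumes equal-duration gate increments and illustrative names (make_gates, sr), since the study's actual gate durations are not given in the abstract.

```python
# A minimal sketch of cutting a monosyllabic recording into eight cumulative
# gates. Equal-duration increments and the simulated audio are assumptions;
# the study's exact gate durations are not specified in the abstract.
import numpy as np

def make_gates(signal, n_gates=8):
    """Return cumulative fragments: gate k contains the first k/n_gates of the word."""
    total = len(signal)
    return [signal[: int(round(total * k / n_gates))] for k in range(1, n_gates + 1)]

sr = 16000                                     # sampling rate in Hz (illustrative)
word = np.random.randn(int(0.4 * sr))          # stand-in for a 400 ms syllable-tone word
gates = make_gates(word, n_gates=8)
print([round(len(g) / sr, 3) for g in gates])  # fragment durations in seconds
```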


2021 ◽  
pp. 1-20
Author(s):  
Stella KRÜGER ◽  
Aude NOIRAY

Abstract Anticipatory coarticulation is an indispensable feature of speech dynamics contributing to spoken language fluency. Research has shown that children speak with greater degrees of vowel anticipatory coarticulation than adults – that is, greater vocalic influence on previous segments. The present study examined how developmental differences in anticipatory coarticulation transfer to the perceptual domain. Using a gating paradigm, we tested 29 seven-year-olds and 93 German adult listeners with sequences produced by child and adult speakers, hence corresponding to high versus low degrees of vocalic anticipatory coarticulation. First, children predicted vowel targets less successfully than adults. Second, greater perceptual accuracy was found for speech with a low than with a high degree of coarticulation. We propose that variations in coarticulation degree reflect perceptually important differences in information dynamics and that listeners are more sensitive to fast changes in information than to a large amount of vocalic information spread across long segmental spans.


Author(s):  
François Grosjean

The author and his family left for the United States in July 1974, where he joined the Psychology Department at Northeastern University. He recounts his impressions during their first years there, both at work and in everyday life. The family’s first boy, Cyril, became the conduit to things American. A sojourn that was to last one year became a twelve-year stay. The author headed the new Linguistics Program, taught, and did research. It is at MIT that the author developed the gating paradigm, which he used to study spoken word recognition. During those years, he met and started doing research with James Gee.


2018 ◽  
Author(s):  
Federica Falagiarda ◽  
Olivier Collignon

Humans seamlessly extract and integrate the emotional content delivered by the face and the voice of others. It is, however, poorly understood how perceptual decisions unfold in time when people discriminate the expression of emotions transmitted using dynamic facial and vocal signals, as in natural social contexts. In this study, we relied on a gating paradigm to track how the recognition of emotion expressions across the senses unfolds over exposure time. We first demonstrate that, across all emotions tested, a discriminatory decision is reached earlier with faces than with voices. Importantly, multisensory stimulation consistently reduced the accumulation of perceptual evidence needed to reach correct discrimination (the isolation point). We also observed that expressions with different emotional content provide cumulative evidence at different speeds, with “fear” being the expression with the fastest isolation point across the senses. Finally, the lack of correlation between the confusion patterns in response to facial and vocal signals across time suggests distinct relations between the discriminative features extracted from the two signals. Altogether, these results provide a comprehensive view of how auditory, visual and audiovisual information related to different emotion expressions accumulates in time, highlighting how a multisensory context can speed up the discrimination process when minimal information is available.
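
One common way to operationalize the isolation point mentioned above is the first gate at which the response is correct and remains correct at all later gates. The sketch below illustrates that rule only; the response labels and function name are hypothetical and not taken from the study.

```python
# A minimal sketch of estimating an isolation point from gate-by-gate
# responses: the first gate at which the response is correct and stays
# correct at every later gate. Trial structure and labels are hypothetical.
def isolation_point(responses, target):
    """Return the 1-based gate index of the isolation point, or None if never reached."""
    for gate in range(1, len(responses) + 1):
        if all(r == target for r in responses[gate - 1:]):
            return gate
    return None

# Correct and stable from gate 4 onward -> isolation point = 4.
print(isolation_point(["anger", "fear", "joy", "fear", "fear", "fear"], "fear"))
```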


2018 ◽  
Vol 50 (2) ◽  
pp. 75-87 ◽  
Author(s):  
Nash N. Boutros ◽  
Klevest Gjini ◽  
Frank Wang ◽  
Susan M. Bowyer

Heterogeneity of schizophrenia is a major obstacle toward understanding the disorder. One likely subtype is the deficit syndrome (DS), in which patients suffer from predominantly negative symptoms. This study investigated the evoked responses and the evoked magnetic fields to identify the neurophysiological deviations associated with the DS. Ten subjects were recruited for each group (control, DS, and nondeficit schizophrenia [NDS]). Subjects underwent magnetoencephalography (MEG) and electroencephalography (EEG) testing while listening to an oddball paradigm to generate the P300 as well as a paired-click paradigm to generate the mid-latency auditory-evoked responses (MLAER) in a sensory gating paradigm. MEG coherence source imaging (CSI) during the P300 task revealed significantly higher average gamma-band (30-80 Hz) coherence in DS than in NDS subjects when listening to standard stimuli, whereas only NDS subjects showed higher average gamma-band coherence than controls when listening to the novel sounds. P50, N100, and P3a ERP amplitudes (EEG analysis) were significantly decreased in NDS compared with DS subjects. The data suggest that the deviations in the two patient groups are qualitatively different. Deviations in NDS patients suggest difficulty in both early processes (as in the gating paradigm) and later top-down processes (P300 paradigm). The main deviation in the DS group was an exaggerated responsiveness to ongoing irrelevant stimuli detected by EEG, whereas NDS subjects had an exaggerated response to novelty.
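
As a rough illustration of the band-limited coherence measure referred to above, the sketch below computes coherence between two simulated sensor time series and averages it over the 30-80 Hz gamma band. It is a generic sensor-level example, not the MEG coherence source imaging (CSI) pipeline used in the study; the sampling rate, signals, and window length are made up.

```python
# A generic sketch of band-averaged coherence between two simulated sensor
# signals, restricted to the gamma band (30-80 Hz). This is NOT the study's
# CSI pipeline; sampling rate, signals, and window length are illustrative.
import numpy as np
from scipy.signal import coherence

fs = 600.0                                   # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 40 * t)          # shared 40 Hz (gamma) component
x = shared + 0.5 * np.random.randn(t.size)   # "sensor" 1
y = shared + 0.5 * np.random.randn(t.size)   # "sensor" 2

f, cxy = coherence(x, y, fs=fs, nperseg=512)
gamma = (f >= 30) & (f <= 80)
print(f"mean gamma-band coherence: {cxy[gamma].mean():.2f}")
```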


2017 ◽  
Vol 61 (3) ◽  
pp. 358-383 ◽  
Author(s):  
Marco van de Ven ◽  
Mirjam Ernestus

In natural conversations, words are generally shorter and they often lack segments. It is unclear to what extent such durational and segmental reductions affect word recognition. The present study investigates to what extent reduction in the initial syllable hinders word comprehension, which types of segments listeners mostly rely on, and whether listeners use word duration as a cue in word recognition. We conducted three experiments in Dutch, in which we adapted the gating paradigm to study the comprehension of spontaneously uttered conversational speech by aligning the gates with the edges of consonant clusters or vowels. Participants heard the context and some segmental and/or durational information from reduced target words with unstressed initial syllables. The initial syllable varied in its degree of reduction, and in half of the stimuli the vowel was not clearly present. Participants gave answers that were too short when they were provided with only durational information from the target words, which shows that listeners are unaware of the reductions that can occur in spontaneous speech. More importantly, listeners required fewer segments to recognize target words if the vowel in the initial syllable was absent. This result strongly suggests that this vowel hardly plays a role in word comprehension, and that its presence may even delay this process. More important are the consonants and the stressed vowel.
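
The segment-aligned gating described above differs from fixed-duration gating in that each gate ends at a segment boundary rather than after a fixed amount of time. A minimal sketch, assuming the boundary times would come from a forced alignment and using invented times and names:

```python
# A minimal sketch of segment-aligned gating: each gate ends at a segment
# boundary (e.g., the edge of a consonant cluster or vowel) rather than at
# a fixed duration. Boundary times and the simulated audio are made up.
import numpy as np

def segment_aligned_gates(signal, sr, boundary_times):
    """boundary_times: cumulative end times (s) of segments; returns cumulative gates."""
    return [signal[: int(round(b * sr))] for b in boundary_times]

sr = 16000
utterance = np.random.randn(int(0.9 * sr))            # stand-in for a 900 ms target word
segment_ends = [0.12, 0.21, 0.35, 0.50, 0.68, 0.90]   # hypothetical segment edges
gates = segment_aligned_gates(utterance, sr, segment_ends)
print([round(len(g) / sr, 2) for g in gates])          # gate durations in seconds
```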


2016 ◽  
Vol 45 (5) ◽  
pp. 665-681 ◽  
Author(s):  
Niklas Büdenbender ◽  
Gunter Kreutz

We investigated the effects of familiarity, level of musical expertise, musical tempo, and structural boundaries on the identification of familiar and unfamiliar tunes. Healthy Western listeners (N = 62; age range 14–64 years) judged their level of familiarity with a preselected set of melodies as the number of tones of a given melody was increased from trial to trial according to the so-called gating paradigm. The number of tones served as one dependent measure. The second dependent measure was the physical duration of the stimulus presentation until listeners identified a melody as familiar or unfamiliar. Results corroborate previous work, suggesting that listeners need less information to recognize familiar as compared to unfamiliar melodies. Both decreasing and increasing the original tempo by a factor of two delayed the identification of familiar melodies. Furthermore, listeners had more difficulty identifying unfamiliar melodies when the tempo was increased. Finally, musical expertise significantly influenced the identification of both melodic categories, reducing the required number of tones. Taken together, the findings support theories suggesting that tempo information is coded in melody representations, and that musical expertise is associated with especially efficient strategies for accessing long-term representations of melodic materials.
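
The two dependent measures described above (number of tones and elapsed stimulus duration at identification) follow directly from the cumulative, tone-by-tone structure of melodic gating. A minimal sketch with an invented melody encoded as (MIDI pitch, duration) pairs:

```python
# A minimal sketch of tone-by-tone gating for melodies: gate k contains the
# first k tones, and both dependent measures (tone count and elapsed
# duration) can be read off each gate. The melody itself is invented.
melody = [(64, 0.3), (66, 0.3), (67, 0.6), (71, 0.3), (67, 0.6)]  # (pitch, seconds)

def melody_gates(notes):
    """Return cumulative prefixes of the melody."""
    return [notes[:k] for k in range(1, len(notes) + 1)]

for k, gate in enumerate(melody_gates(melody), start=1):
    duration = sum(d for _, d in gate)
    print(f"gate {k}: {len(gate)} tones, {duration:.1f} s of music")
```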


Cortex ◽  
2014 ◽  
Vol 59 ◽  
pp. 84-94 ◽  
Author(s):  
Barbara Tillmann ◽  
Philippe Albouy ◽  
Anne Caclin ◽  
Emmanuel Bigand
