Functional MRI of auditory verbal working memory: long-term reproducibility analysis

NeuroImage ◽  
2004 ◽  
Vol 21 (3) ◽  
pp. 1000-1008 ◽  
Author(s):  
Xingchang Wei ◽  
Seung-Schik Yoo ◽  
Chandlee C. Dickey ◽  
Kelly H. Zou ◽  
Charles R.G. Guttmann ◽  
...  

Author(s):  
Thomas Jacobsen ◽  
Erich Schröger

Abstract. Working memory uses central sound representations as an informational basis. The central sound representation is the temporally and feature-integrated mental representation that corresponds to phenomenal perception. It is used in (higher-order) mental operations and stored in long-term memory. In the bottom-up processing path, the central sound representation can be probed at the level of auditory sensory memory with the mismatch negativity (MMN) of the event-related potential. The present paper reviews a newly developed MMN paradigm to tap into the processing of speech sound representations. Preattentive vowel categorization based on F1-F2 formant information occurs for speech sounds and complex tones even under conditions of high variability of the auditory input. However, an additional experiment demonstrated the limits of the preattentive categorization of language-relevant information. It tested whether the system categorizes complex tones containing the F1 and F2 formant components of the vowel /a/ differently from six sounds with nonlanguage-like F1-F2 combinations. From the absence of an MMN in this experiment, it is concluded that no adequate vowel representation was constructed, showing the limits of preattentive vowel categorization.
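The F1-F2 based categorization described above can be pictured as distance-based classification in formant space. The sketch below uses a toy nearest-centroid rule; the centroid frequencies and the three-vowel set are illustrative assumptions, not the stimulus parameters of the reviewed MMN experiments.

```python
# Toy nearest-centroid vowel classifier in F1-F2 formant space.
# Centroid frequencies (Hz) are illustrative textbook-style values,
# NOT the stimuli used in the reviewed experiments.
import math

VOWEL_CENTROIDS = {
    "/a/": (730, 1090),
    "/i/": (270, 2290),
    "/u/": (300, 870),
}

def classify_vowel(f1, f2):
    """Return the vowel whose (F1, F2) centroid is nearest in Hz."""
    return min(VOWEL_CENTROIDS,
               key=lambda v: math.dist((f1, f2), VOWEL_CENTROIDS[v]))

print(classify_vowel(700, 1100))  # a token near the /a/ centroid
```

A nonlanguage-like F1-F2 combination would simply fall far from every centroid, which is one way to think about why no adequate vowel representation is formed for such sounds.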


1997 ◽  
Vol 17 (24) ◽  
pp. 9675-9685 ◽  
Author(s):  
John E. Desmond ◽  
John D. E. Gabrieli ◽  
Anthony D. Wagner ◽  
Bruce L. Ginier ◽  
Gary H. Glover

NeuroImage ◽  
2001 ◽  
Vol 13 (6) ◽  
pp. 1106
Author(s):  
Jill M. Thompson ◽  
Paul J. Monks ◽  
Adrian J. Lloyd ◽  
C. Louise Harrison ◽  
Ed T. Bullmore ◽  
...  

2017 ◽  
Vol 60 (8) ◽  
pp. 2321-2336 ◽  
Author(s):  
Cynthia R. Hunter ◽  
William G. Kronenberger ◽  
Irina Castellanos ◽  
David B. Pisoni

Purpose: We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes.

Method: Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes.

Results: Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and with long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors.

Conclusion: Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants.

Supplemental materials: https://doi.org/10.23641/asha.5216200
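The hierarchical-regression logic reported above — enter demographic and hearing factors first, then test whether early skill measures add explained variance — can be sketched with ordinary least squares on synthetic data. The variable names and effect sizes below are illustrative assumptions, not the study's actual measures or results.

```python
# Sketch of hierarchical (blockwise) regression: does a second block of
# predictors raise R-squared beyond a baseline block? Data are synthetic;
# variable names are illustrative, not the study's measures.
import numpy as np

rng = np.random.default_rng(0)
n = 200
age_at_implant = rng.normal(3.4, 1.0, n)     # block 1: demographic factor
early_language = rng.normal(0.0, 1.0, n)     # block 2: early skill measure
outcome = 0.2 * age_at_implant + 0.8 * early_language + rng.normal(0.0, 1.0, n)

def r_squared(predictors, y):
    """R^2 of an OLS fit with intercept, predictors given as 1-D arrays."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_base = r_squared([age_at_implant], outcome)
r2_full = r_squared([age_at_implant, early_language], outcome)
print(f"baseline R^2 = {r2_base:.3f}, full R^2 = {r2_full:.3f}, "
      f"increment = {r2_full - r2_base:.3f}")
```

The increment in R-squared from the second block is the quantity of interest: a substantial increment means the early skill measure explains outcome variance beyond what the baseline factors account for.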


2001 ◽  
Vol 13 (6) ◽  
pp. 766-785 ◽  
Author(s):  
Antonino Raffone ◽  
Gezinus Wolters

Luck and Vogel (1997) showed that the storage capacity of visual working memory is about four objects and that this capacity does not depend on the number of features making up the objects. Thus, visual working memory seems to process integrated objects rather than individual features, just as verbal working memory handles higher-order “chunks” instead of individual features or letters. In this article, we present a model based on synchronization and desynchronization of reverberatory neural assemblies, which can parsimoniously account both for the limited capacity of visual working memory and for the temporary binding of multiple assemblies into a single pattern. A critical capacity of about three to four independent patterns emerged in our simulations, consistent with the results of Luck and Vogel. The same desynchronizing mechanism that optimizes phase segregation between assemblies coding for separate features or multifeature objects poses a limit on the number of oscillatory reverberations. We show how retention of multiple features as visual chunks (feature conjunctions or objects) in terms of synchronized reverberatory assemblies may be achieved with and without long-term memory guidance.
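One way to see why phase segregation caps capacity at three to four assemblies is a back-of-the-envelope slot count: an oscillation cycle can host only as many assemblies as non-overlapping discharge windows fit into it. The cycle and window durations below are illustrative assumptions (a gamma-range cycle and a several-millisecond segregation window), not parameters taken from the Raffone and Wolters simulations.

```python
# Back-of-the-envelope capacity estimate: how many assemblies can fire in
# distinct phase slots within one oscillatory cycle? Durations in ms are
# illustrative assumptions, not the model's actual parameters.
def phase_slot_capacity(cycle_ms, slot_ms):
    """Number of non-overlapping phase windows per oscillation cycle."""
    return int(cycle_ms // slot_ms)

print(phase_slot_capacity(25.0, 7.0))  # -> 3, in the 3-4 range reported
```

Under these assumed durations the count lands in the three-to-four range, matching the intuition that adding a fifth assembly would force two assemblies into overlapping phase windows and destroy their separability.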


2001 ◽  
Vol 11 (1) ◽  
pp. 13-21 ◽  
Author(s):  
Takashi Tsukiura ◽  
Toshikatsu Fujii ◽  
Toshimitsu Takahashi ◽  
Ruiting Xiao ◽  
Masahiko Inase ◽  
...  
