Journal of Cognitive Neuroscience
Latest Publications


TOTAL DOCUMENTS

4164
(FIVE YEARS 491)

H-INDEX

215
(FIVE YEARS 10)

Published by MIT Press

1530-8898, 0898-929X

2022 ◽  
pp. 1-13
Author(s):  
Audrey Siqi-Liu ◽  
Tobias Egner ◽  
Marty G. Woldorff

Abstract To adaptively interact with the uncertainties of daily life, we must match our level of cognitive flexibility to contextual demands—being more flexible when frequent shifting between different tasks is required and more stable when the current task requires a strong focus of attention. Such cognitive flexibility adjustments in response to changing contextual demands have been observed in cued task-switching paradigms, where the performance cost incurred by switching versus repeating tasks (switch cost) scales inversely with the proportion of switches (PS) within a block of trials. However, the neural underpinnings of these adjustments in cognitive flexibility are not well understood. Here, we recorded 64-channel EEG measures of electrical brain activity as participants switched between letter and digit categorization tasks in varying PS contexts, from which we extracted ERPs elicited by the task cue and alpha power differences during the cue-to-target interval and the resting precue period. The temporal resolution of the EEG allowed us to test whether contextual adjustments in cognitive flexibility are mediated by tonic changes in processing mode or by changes in phasic, task cue-triggered processes. We observed reliable modulation of behavioral switch cost by PS context that was mirrored in both cue-evoked ERP and time–frequency effects but not in blockwide precue EEG changes. These results indicate that different levels of cognitive flexibility are instantiated after the presentation of task cues, rather than maintained as a tonic state throughout low- or high-switch contexts.
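The behavioral switch-cost measure described in this abstract (switch-trial RT minus repeat-trial RT, compared across low- and high-switch-proportion blocks) can be sketched with simulated reaction times; all numbers below are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def switch_cost(rts, is_switch):
    """Mean RT on task-switch trials minus mean RT on task-repeat trials (msec)."""
    rts = np.asarray(rts, dtype=float)
    is_switch = np.asarray(is_switch, dtype=bool)
    return rts[is_switch].mean() - rts[~is_switch].mean()

# Low-PS block (25% switches): simulate a large switch cost (~150 msec).
low_rts = np.r_[rng.normal(850, 40, 25), rng.normal(700, 40, 75)]
low_sw = np.r_[np.ones(25, dtype=bool), np.zeros(75, dtype=bool)]

# High-PS block (75% switches): simulate a small switch cost (~50 msec).
high_rts = np.r_[rng.normal(750, 40, 75), rng.normal(700, 40, 25)]
high_sw = np.r_[np.ones(75, dtype=bool), np.zeros(25, dtype=bool)]

print(switch_cost(low_rts, low_sw), switch_cost(high_rts, high_sw))
```

The inverse scaling the abstract reports corresponds to the low-PS cost exceeding the high-PS cost.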


2022 ◽  
pp. 1-16
Author(s):  
Jamal A. Williams ◽  
Elizabeth H. Margulis ◽  
Samuel A. Nastase ◽  
Janice Chen ◽  
Uri Hasson ◽  
...  

Abstract Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial pFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
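The comparison step this abstract describes—matching HMM-derived pattern shifts against human annotations—can be sketched as a simple tolerance-window match. The timestamps and 3-second window below are hypothetical, and the HMM fitting itself is omitted:

```python
import numpy as np

def boundary_match_score(model_bounds, human_bounds, tol=3.0):
    """Fraction of model boundaries within `tol` seconds of any human annotation."""
    model_bounds = np.asarray(model_bounds, dtype=float)
    human_bounds = np.asarray(human_bounds, dtype=float)
    hits = [np.min(np.abs(human_bounds - b)) <= tol for b in model_bounds]
    return float(np.mean(hits))

# Hypothetical boundary times (seconds) for one musical excerpt.
model = [12.1, 30.5, 58.0, 90.2]         # e.g., HMM pattern-shift locations
human = [11.0, 31.0, 60.0, 88.5, 120.0]  # observer-identified boundaries

print(boundary_match_score(model, human))  # 1.0: every model boundary has a close match
```

In practice, such match scores are evaluated against a null distribution of randomly placed boundaries to establish significance.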


2022 ◽  
pp. 1-12
Author(s):  
Simon Kwon ◽  
Franziska R. Richter ◽  
Michael J. Siena ◽  
Jon S. Simons

Abstract The qualities of remembered experiences are often used to inform “reality monitoring” judgments, our ability to distinguish real and imagined events [Johnson, M. K., & Raye, C. L. Reality monitoring. Psychological Review, 88, 67–85, 1981]. Previous experiments have tended to investigate only whether reality monitoring decisions are accurate or not, providing little insight into the extent to which reality monitoring may be affected by qualities of the underlying mnemonic representations. We used a continuous-response memory precision task to measure the quality of remembered experiences that underlie two different types of reality monitoring decisions: self/experimenter decisions, which distinguish actions performed by the participant from those performed by the experimenter, and imagined/perceived decisions, which distinguish imagined from perceived experiences. The data revealed memory precision to be associated with higher accuracy in both self/experimenter and imagined/perceived reality monitoring decisions, with lower precision linked with a tendency to misattribute self-generated experiences to external sources. We then sought to investigate the possible neurocognitive basis of these observed associations by applying brain stimulation to a region that has been implicated in precise recollection of personal events, the left angular gyrus. Stimulation of the angular gyrus selectively reduced the association between memory precision and self-referential reality monitoring decisions, relative to control site stimulation. The angular gyrus may, therefore, be important for the mnemonic processes involved in representing remembered experiences that give rise to a sense of self-agency, a key component of “autonoetic consciousness” that characterizes episodic memory [Tulving, E. Elements of episodic memory. Oxford, United Kingdom: Oxford University Press, 1985].
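Continuous-report precision of the kind this abstract invokes is commonly quantified as the dispersion of response errors on a circular feature space. The observers and parameters in this sketch are invented, not the article's task:

```python
import numpy as np

rng = np.random.default_rng(4)

def circular_error(response, target):
    """Signed angular error in radians, wrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * (response - target)))

# Two invented observers reporting a circular feature (e.g., an angle).
targets = rng.uniform(-np.pi, np.pi, 500)
precise_resp = targets + rng.normal(0, 0.2, 500)    # tight error distribution
imprecise_resp = targets + rng.normal(0, 1.0, 500)  # broad error distribution

prec_sd = np.std(circular_error(precise_resp, targets))
imprec_sd = np.std(circular_error(imprecise_resp, targets))
print(prec_sd, imprec_sd)  # lower error dispersion = higher memory precision
```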


2021 ◽  
pp. 1-14
Author(s):  
Assaf Harel ◽  
Jeffery D. Nador ◽  
Michael F. Bonner ◽  
Russell A. Epstein

Abstract Scene perception and spatial navigation are interdependent cognitive functions, and there is increasing evidence that cortical areas that process perceptual scene properties also carry information about the potential for navigation in the environment (navigational affordances). However, the temporal stages by which visual information is transformed into navigationally relevant information are not yet known. We hypothesized that navigational affordances are encoded during perceptual processing and therefore should modulate early visually evoked ERPs, especially the scene-selective P2 component. To test this idea, we recorded ERPs from participants while they passively viewed computer-generated room scenes matched in visual complexity. By simply changing the number of doors (no doors, 1 door, 2 doors, 3 doors), we were able to systematically vary the number of pathways that afford movement in the local environment, while keeping the overall size and shape of the environment constant. We found that rooms with no doors evoked a higher P2 response than rooms with three doors, consistent with prior research reporting higher P2 amplitude to closed relative to open scenes. Moreover, we found P2 amplitude scaled linearly with the number of doors in the scenes. Navigability effects on the ERP waveform were also observed in a multivariate analysis, which showed significant decoding of the number of doors and their location at earlier time windows. Together, our results suggest that navigational affordances are represented in the early stages of scene perception. This complements research showing that the occipital place area automatically encodes the structure of navigable space and strengthens the link between scene perception and navigation.
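A linear-scaling check like the one reported for P2 amplitude can be sketched as an ordinary least-squares fit of mean amplitude against the number of doors; the amplitudes below are invented for illustration:

```python
import numpy as np

# Invented per-condition mean P2 amplitudes (microvolts): highest for the
# closed room (0 doors), decreasing with each added doorway.
doors = np.array([0, 1, 2, 3])
p2_amp = np.array([6.1, 5.4, 4.8, 4.0])

slope, intercept = np.polyfit(doors, p2_amp, 1)
pred = intercept + slope * doors
r2 = 1 - np.sum((p2_amp - pred) ** 2) / np.sum((p2_amp - p2_amp.mean()) ** 2)
print(slope, r2)  # negative slope; r2 near 1 indicates approximately linear scaling
```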


2021 ◽  
pp. 1-19
Author(s):  
Wim Strijbosch ◽  
Edward A. Vessel ◽  
Dominik Welke ◽  
Ondrej Mitas ◽  
John Gelissen ◽  
...  

Abstract Aesthetic experiences have an influence on many aspects of life. Interest in the neural basis of aesthetic experiences has grown rapidly in the past decade, and fMRI studies have identified several brain systems supporting aesthetic experiences. Work on the rapid neuronal dynamics of aesthetic experience, however, is relatively scarce. This study adds to this field by investigating the experience of being aesthetically moved by means of ERP and time–frequency analysis. Participants' electroencephalography (EEG) was recorded while they viewed a diverse set of artworks and evaluated the extent to which these artworks moved them. Results show that being aesthetically moved is associated with a sustained increase in gamma activity over centroparietal regions. In addition, alpha power over right frontocentral regions was reduced for high- and low-moving artworks compared with artworks given intermediate ratings. We interpret the gamma effect as an indication of sustained savoring processes for aesthetically moving artworks compared to aesthetically less-moving artworks. The alpha effect is interpreted as an indication of increased attention for aesthetically salient images. In contrast to previous work, we observed no significant effects in any of the established ERP components, but we did observe effects at latencies longer than 1 sec. We conclude that EEG time–frequency analysis provides useful information on the neuronal dynamics of aesthetic experience.
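Time–frequency power of the sort reported here (alpha, gamma) is commonly estimated by convolving the EEG with complex Morlet wavelets. This sketch uses a synthetic signal and assumed parameters (250 Hz sampling, 7-cycle wavelets), not the study's data:

```python
import numpy as np

fs = 250  # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
# Toy signal: a 10 Hz (alpha-band) oscillation plus noise.
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def morlet_power(x, fs, freq, n_cycles=7):
    """Instantaneous band power at `freq` via a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    wt = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma_t**2))
    wavelet /= np.abs(wavelet).sum()  # unit-gain at the center frequency
    analytic = np.convolve(x, wavelet, mode="same")
    return np.abs(analytic) ** 2

alpha = morlet_power(signal, fs, 10.0)  # high: the signal contains 10 Hz
gamma = morlet_power(signal, fs, 40.0)  # low: no 40 Hz component present
print(alpha.mean(), gamma.mean())
```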


2021 ◽  
pp. 1-14
Author(s):  
Octave Etard ◽  
Rémy Ben Messaoud ◽  
Gabriel Gaugain ◽  
Tobias Reichenbach

Abstract Speech and music are spectrotemporally complex acoustic signals that are highly relevant for humans. Both contain a temporal fine structure that is encoded in the neural responses of subcortical and cortical processing centers. The subcortical response to the temporal fine structure of speech has recently been shown to be modulated by selective attention to one of two competing voices. Music similarly often consists of several simultaneous melodic lines, and a listener can selectively attend to a particular one at a time. However, the neural mechanisms that enable such selective attention remain largely enigmatic, not least since most investigations to date have focused on short and simplified musical stimuli. Here, we studied the neural encoding of classical musical pieces in human volunteers, using scalp EEG recordings. We presented volunteers with continuous musical pieces composed of one or two instruments. In the latter case, the participants were asked to selectively attend to one of the two competing instruments and to perform a vibrato identification task. We used linear encoding and decoding models to relate the recorded EEG activity to the stimulus waveform. We show that we can measure neural responses to the temporal fine structure of melodic lines played by one single instrument, at the population level as well as for most individual participants. The neural response peaks at a latency of 7.6 msec and is not measurable past 15 msec. When analyzing the neural responses to the temporal fine structure elicited by competing instruments, we found no evidence of attentional modulation. We observed, however, that low-frequency neural activity exhibited a modulation consistent with the behavioral task at latencies from 100 to 160 msec, in a similar manner to the attentional modulation observed in continuous speech (N100). Our results show that, much like speech, the temporal fine structure of music is tracked by neural activity. 
In contrast to speech, however, this response appears unaffected by selective attention in the context of our experiment.
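The linear decoding approach the abstract describes—relating recorded EEG back to the stimulus waveform—can be sketched as ridge regression over time-lagged channels. The synthetic data, lag structure, and regularization value here are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stimulus and a multichannel "EEG" that encodes it at a fixed lag.
n, n_ch, true_lag = 2000, 8, 5
stim = rng.standard_normal(n)
weights = rng.standard_normal(n_ch)
eeg = np.outer(np.roll(stim, true_lag), weights) + 0.5 * rng.standard_normal((n, n_ch))

# Backward (decoding) model: ridge regression from time-lagged EEG channels
# back to the stimulus waveform.
lags = np.arange(10)
X = np.column_stack([np.roll(eeg, -lag, axis=0) for lag in lags])
lam = 1.0  # assumed regularization strength
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ stim)
recon = X @ w

r = np.corrcoef(recon, stim)[0, 1]  # reconstruction accuracy
print(r)
```

In real analyses the correlation is computed on held-out data and compared against a null distribution; this sketch skips both steps.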


2021 ◽  
pp. 1-12
Author(s):  
William Matchin ◽  
Deniz İlkbaşaran ◽  
Marla Hatrak ◽  
Austin Roth ◽  
Agnes Villwock ◽  
...  

Abstract Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether they respond specifically to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to neuroimage deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied at three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally, by facilitating accuracy and response time on the picture-probe recognition task and eliciting a left-lateralized activation response pattern in the anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of aSTS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping and separable neural systems for syntactic and lexical processing.


2021 ◽  
pp. 1-19
Author(s):  
Johanna Kreither ◽  
Orestis Papaioannou ◽  
Steven J. Luck

Abstract Working memory is thought to serve as a buffer for ongoing cognitive operations, even in tasks that have no obvious memory requirements. This conceptualization has been supported by dual-task experiments, in which interference is observed between a primary task involving short-term memory storage and a secondary task that presumably requires the same buffer as the primary task. Little or no interference is typically observed when the secondary task is very simple. Here, we test the hypothesis that even very simple tasks require the working memory buffer, but interference can be minimized by using activity-silent representations to store the information from the primary task. We tested this hypothesis using a dual-task paradigm in which a simple discrimination task was interposed in the retention interval of a change detection task. We used the contralateral delay activity (CDA) to track the active maintenance of information for the change detection task. We found that the CDA was massively disrupted after the interposed task. Despite this disruption of active maintenance, we found that performance in the change detection task was only slightly impaired, suggesting that activity-silent representations were used to retain the information for the change detection task. A second experiment replicated this result and also showed that automated discriminations could be performed without producing a large CDA disruption. Together, these results suggest that simple but non-automated discrimination tasks require the same processes that underlie active maintenance of information in working memory.
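The CDA tracked in this study is a difference wave: activity at posterior electrodes contralateral to the memorized hemifield minus ipsilateral activity, averaged over the retention interval. A sketch with fabricated ERP data (amplitudes, trial counts, and time windows are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated single-subject ERP arrays (trials x time samples) for posterior
# electrodes contralateral vs. ipsilateral to the memorized hemifield.
n_trials, n_times = 200, 300
ipsi = rng.normal(0.0, 2.0, (n_trials, n_times))
contra = rng.normal(-1.0, 2.0, (n_trials, n_times))  # sustained negativity

cda = contra.mean(axis=0) - ipsi.mean(axis=0)  # difference wave (microvolts)
retention_mean = cda[100:250].mean()           # average over an assumed retention window
print(retention_mean)  # a reliably negative value indicates active maintenance
```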


2021 ◽  
pp. 1-20
Author(s):  
Shannon L. M. Heald ◽  
Stephen C. Van Hedger ◽  
John Veillette ◽  
Katherine Reis ◽  
Joel S. Snyder ◽  
...  

Abstract The ability to generalize across specific experiences is vital for the recognition of new patterns, especially in speech perception, given its acoustic–phonetic pattern variability. Indeed, behavioral research has demonstrated that listeners are able, via a process of generalized learning, to leverage their experience of past words said by a difficult-to-understand talker to improve their understanding of new words said by that talker. Here, we examine differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker. Using a pretest–posttest design with EEG, participants were trained using either (1) a large inventory of words in which no word was repeated across the experiment (generalized learning) or (2) a small inventory of words in which words were repeated (rote learning). Analysis of long-latency auditory evoked potentials at pretest and posttest revealed that rote and generalized learning both produced rapid changes in auditory processing, yet the nature of these changes differed. Generalized learning was marked by an amplitude reduction in the N1–P2 complex and by the presence of a late negativity wave in the auditory evoked potential following training; rote learning was marked only by temporally later scalp topography differences. The early N1–P2 change, found only for generalized learning, is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms to selectively modify early auditory processing sensitivity.


2021 ◽  
pp. 1-18
Author(s):  
Samuel D. McDougle ◽  
Sarah A. Wilterson ◽  
Nicholas B. Turk-Browne ◽  
Jordan A. Taylor

Abstract Classic taxonomies of memory distinguish explicit and implicit memory systems, placing motor skills squarely in the latter branch. This assertion is in part a consequence of foundational discoveries showing significant motor learning in amnesics. Those findings suggest that declarative memory processes in the medial temporal lobe (MTL) do not contribute to motor learning. Here, we revisit this issue, testing an individual (L. S. J.) with severe MTL damage on four motor learning tasks and comparing her performance to age-matched controls. Consistent with previous findings in amnesics, we observed that L. S. J. could improve motor performance despite having significantly impaired declarative memory. However, she tended to perform poorly relative to age-matched controls, with deficits apparently related to flexible action selection. Further supporting an action selection deficit, L. S. J. fully failed to learn a task that required the acquisition of arbitrary action–outcome associations. We thus propose a modest revision to the classic taxonomic model: Although MTL-dependent memory processes are not necessary for some motor learning to occur, they play a significant role in the acquisition, implementation, and retrieval of action selection strategies. These findings have implications for our understanding of the neural correlates of motor learning, the psychological mechanisms of skill, and the theory of multiple memory systems.

