High-Order Areas and Auditory Cortex Both Represent the High-Level Event Structure of Music

2022 · pp. 1-16
Author(s): Jamal A. Williams, Elizabeth H. Margulis, Samuel A. Nastase, Janice Chen, Uri Hasson, ...

Abstract Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial pFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
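The abstract's boundary-matching step can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the tolerance window (in TRs) and a null built by re-drawing the same number of boundaries uniformly at random are assumptions of the sketch.

```python
import numpy as np

def match_score(model_bounds, human_bounds, tol=3):
    """Fraction of human-annotated boundaries that fall within
    `tol` timepoints (TRs) of any model-derived boundary."""
    hits = sum(
        any(abs(h - m) <= tol for m in model_bounds)
        for h in human_bounds
    )
    return hits / len(human_bounds)

def permutation_p(model_bounds, human_bounds, n_timepoints,
                  tol=3, n_perm=1000, seed=0):
    """Permutation null: re-draw the same number of boundaries
    uniformly at random and recompute the match score."""
    rng = np.random.default_rng(seed)
    observed = match_score(model_bounds, human_bounds, tol)
    null = [
        match_score(sorted(rng.choice(n_timepoints,
                                      size=len(model_bounds),
                                      replace=False)),
                    human_bounds, tol)
        for _ in range(n_perm)
    ]
    p = (1 + sum(n >= observed for n in null)) / (1 + n_perm)
    return observed, p
```

A region would count as a "significant match" when the observed score beats most of the null draws.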

2021
Author(s): Jamal A. Williams, Elizabeth H. Margulis, Samuel A. Nastase, Janice Chen, Uri Hasson, ...

Abstract Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts, and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, mPFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
Significance Statement Listening to music requires the brain to track dynamics at multiple hierarchical timescales. In our study, we had fMRI participants listen to real-world music (classical and jazz pieces) and then used an unsupervised learning algorithm (a hidden Markov model) to model the high-level event structure of music within participants’ brain data.
This approach revealed that default mode brain regions involved in representing the high-level event structure of narratives are also involved in representing the high-level event structure of music. These findings provide converging support for the hypothesis that these regions play a domain-general role in processing stimuli with long-timescale dependencies.


2016
Author(s): Janice Chen, Yuan Chang Leong, Kenneth A. Norman, Uri Hasson

Our daily lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? In this study, participants viewed a fifty-minute audio-visual movie, then verbally described the events while undergoing functional MRI. These descriptions were completely unguided and highly detailed, lasting for up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated (movie-vs.-recall correlation) in default network, medial temporal, and high-level visual areas; moreover, individual event patterns were highly discriminable and similar between people during recollection (recall-vs.-recall similarity), suggesting the existence of spatially organized memory representations. In posterior medial cortex, medial prefrontal cortex, and angular gyrus, activity patterns during recall were more similar between people than to patterns elicited by the movie, indicating systematic reshaping of percept into memory across individuals. These results reveal striking similarity in how neural activity underlying real-life memories is organized and transformed in the brains of different people as they speak spontaneously about past events.
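The movie-vs.-recall and recall-vs.-recall comparisons rest on spatial correlations between event-level activity patterns. A minimal sketch, assuming each event has already been averaged into a single pattern (the event segmentation itself is omitted):

```python
import numpy as np

def reinstatement_matrix(movie_patterns, recall_patterns):
    """Event-by-event spatial correlation between patterns elicited
    during movie viewing and during verbal recall.
    Each input: (n_events, n_voxels)."""
    mz = movie_patterns - movie_patterns.mean(1, keepdims=True)
    rz = recall_patterns - recall_patterns.mean(1, keepdims=True)
    mz /= np.linalg.norm(mz, axis=1, keepdims=True)
    rz /= np.linalg.norm(rz, axis=1, keepdims=True)
    return mz @ rz.T  # (n_events, n_events) Pearson correlations

def diagonal_advantage(corr):
    """Matching-event minus mismatching-event correlation: positive
    values indicate event-specific pattern reinstatement."""
    n = corr.shape[0]
    off = corr[~np.eye(n, dtype=bool)]
    return corr.diagonal().mean() - off.mean()
```

The same machinery applies to recall-vs.-recall similarity by passing two participants' recall patterns instead.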


Author(s): Maria Tsantani, Nikolaus Kriegeskorte, Katherine Storrs, Adrian Lloyd Williams, Carolyn McGettigan, ...

Abstract Faces of different people elicit distinct functional MRI (fMRI) patterns in several face-selective brain regions. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). We used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. Models included low-level to high-level image-computable properties and complex human-rated properties. We found that the FFA representation reflected perceived face similarity, social traits, and gender, and was well accounted for by the OpenFace model (deep neural network, trained to cluster faces by identity). The OFA encoded low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
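Representational similarity analysis of this kind compares a neural representational dissimilarity matrix (RDM) against model RDMs. A minimal sketch, assuming correlation-distance RDMs and tie-free data (average-rank tie handling is omitted from the Spearman step):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix (1 - Pearson r) from an
    (n_conditions, n_features) response matrix."""
    z = patterns - patterns.mean(1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    return 1.0 - z @ z.T

def spearman(a, b):
    """Spearman correlation between the upper triangles of two RDMs
    (double argsort gives ranks; assumes no tied dissimilarities)."""
    iu = np.triu_indices_from(a, k=1)
    ra = np.argsort(np.argsort(a[iu]))
    rb = np.argsort(np.argsort(b[iu]))
    return np.corrcoef(ra, rb)[0, 1]
```

Each model (social traits, Gabor-jet, OpenFace embeddings, etc.) contributes one model RDM to compare against the neural RDM of each region.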


2019 · Vol 122 (6) · pp. 2568-2575
Author(s): Zixin Yong, Joo Huang Tan, Po-Jang Hsieh

Microsleeps are brief episodes of arousal level decrease manifested through behavioral signs. Brain activity during microsleep in the presence of external stimulus remains poorly understood. In this study, we sought to understand neural responses to auditory stimulation during microsleep. We gave participants the simple task of listening to audio clips of different pitches and amplitude-modulation frequencies during early afternoon functional MRI scans. We found the following: 1) microsleep was associated with cortical activations in broad motor and sensory regions and deactivations in the thalamus, irrespective of auditory stimulation; 2) high- and low-pitch audio clips elicited different activity patterns in the auditory cortex during the awake but not the microsleep state; and 3) during microsleep, spatial activity patterns in broad brain regions were similar regardless of the presence or type of auditory stimulus (i.e., stimulus invariant). These findings show that the brain is highly active during microsleep but that activity patterns across broad regions are unperturbed by auditory inputs. NEW & NOTEWORTHY During deep drowsy states, auditory inputs can induce activations in the auditory cortex, but the activation patterns lose differentiation to high/low pitch stimuli. Rather than being random, activity patterns across the brain during microsleep appear to be structured and may reflect underlying neurophysiological processes that remain unclear.


2020 · Vol 30 (11) · pp. 5915-5929
Author(s): Tanya Wen, Daniel J. Mitchell, John Duncan

Abstract The default mode network (DMN) is engaged in a variety of cognitive settings, including social, semantic, temporal, spatial, and self-related tasks. Andrews-Hanna et al. (2010; Andrews-Hanna 2012) proposed that the DMN consists of three distinct functional–anatomical subsystems—a dorsal medial prefrontal cortex (dMPFC) subsystem that supports social cognition; a medial temporal lobe (MTL) subsystem that contributes to memory-based scene construction; and a set of midline core hubs that are especially involved in processing self-referential information. We examined activity in the DMN subsystems during six different tasks: 1) theory of mind, 2) moral dilemmas, 3) autobiographical memory, 4) spatial navigation, 5) self/other adjective judgment, and 6) a rest condition. At a broad level, we observed similar whole-brain activity maps for the six contrasts, and some response to every contrast in each of the three subsystems. In more detail, both univariate analysis and multivariate activity patterns showed partial functional separation, especially between dMPFC and MTL subsystems, though with less support for common activity across the midline core. Integrating social, spatial, self-related, and other aspects of a cognitive situation or episode, multiple components of the DMN may work closely together to provide the broad context for current mental activity.


2015 · Vol 27 (7) · pp. 1376-1387
Author(s): Jessica Bulthé, Bert De Smedt, Hans P. Op de Beeck

In numerical cognition, there is a well-known but contested hypothesis that proposes an abstract representation of numerical magnitude in human intraparietal sulcus (IPS). On the other hand, researchers of object cognition have suggested another hypothesis for brain activity in IPS during the processing of number, namely that this activity simply correlates with the number of visual objects or units that are perceived. We contrasted these two accounts by analyzing multivoxel activity patterns elicited by dot patterns and Arabic digits of different magnitudes while participants were explicitly processing the represented numerical magnitude. The activity pattern elicited by the digit “8” was more similar to the activity pattern elicited by one dot (with which the digit shares the number of visual units but not the magnitude) compared to the activity pattern elicited by eight dots, with which the digit shares the represented abstract numerical magnitude. A multivoxel pattern classifier trained to differentiate one dot from eight dots classified all Arabic digits in the one-dot pattern category, irrespective of the numerical magnitude symbolized by the digit. These results were consistently obtained for different digits in IPS, its subregions, and many other brain regions. As predicted from object cognition theories, the number of presented visual units forms the link between the parietal activation elicited by symbolic and nonsymbolic numbers. The current study is difficult to reconcile with the hypothesis that parietal activation elicited by numbers would reflect a format-independent representation of number.
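The cross-format classification logic can be illustrated with a toy nearest-centroid classifier standing in for the paper's multivoxel pattern classifier (the real analysis used voxel patterns and a different classifier; the prototypes below are synthetic assumptions):

```python
import numpy as np

def train_centroids(X_one, X_eight):
    """Nearest-centroid stand-in for the multivoxel classifier:
    one prototype pattern per dot-numerosity class.
    Inputs: (n_trials, n_voxels) arrays."""
    return X_one.mean(0), X_eight.mean(0)

def classify(pattern, c_one, c_eight):
    """Assign a new activity pattern to the closer class centroid."""
    d1 = np.linalg.norm(pattern - c_one)
    d8 = np.linalg.norm(pattern - c_eight)
    return "one_dot" if d1 < d8 else "eight_dots"
```

The paper's key result corresponds to a pattern for the digit "8" (a single visual unit) landing in the one-dot category despite sharing its numerical magnitude with eight dots.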


2020
Author(s): Marielle Greber, Carina Klein, Simon Leipold, Silvano Sele, Lutz Jäncke

Abstract The neural basis of absolute pitch (AP), the ability to effortlessly identify a musical tone without an external reference, is poorly understood. One of the key questions is whether perceptual or cognitive processes underlie the phenomenon, as both sensory and higher-order brain regions have been associated with AP. One approach to elucidating the neural underpinnings of a specific expertise is the examination of resting-state networks. Thus, in this paper, we report a comprehensive functional network analysis of intracranial resting-state EEG data in a large sample of AP musicians (n = 54) and non-AP musicians (n = 51). We adopted two analysis approaches: First, we applied an ROI-based analysis to examine the connectivity between the auditory cortex and the dorsolateral prefrontal cortex (DLPFC) using several established functional connectivity measures. This analysis is a replication of a previous study which reported increased connectivity between these two regions in AP musicians. Second, we performed a whole-brain network-based analysis on the same functional connectivity measures to gain a more complete picture of the brain regions involved in a possibly large-scale network supporting AP ability. In our sample, the ROI-based analysis did not provide evidence for an AP-specific connectivity increase between the auditory cortex and the DLPFC. In contrast, the whole-brain analysis revealed three networks with increased connectivity in AP musicians comprising nodes in frontal, temporal, subcortical, and occipital areas. Commonalities of the networks were found in both sensory and higher-order brain regions of the perisylvian area. Further research will be needed to confirm these exploratory results.
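Both the ROI-based and whole-brain analyses reduce to pairwise connectivity estimates between regional time courses. A minimal sketch using Pearson correlation as a stand-in for the several established measures named in the abstract (phase-based measures would follow the same pairwise logic):

```python
import numpy as np

def connectivity_matrix(roi_ts):
    """All pairwise Pearson correlations for an
    (n_rois, n_timepoints) array of regional time courses,
    as used in a whole-brain network analysis."""
    return np.corrcoef(roi_ts)

def roi_pair_connectivity(conn, i, j):
    """ROI-based analysis: read out a single edge, e.g. auditory
    cortex vs. DLPFC, for a between-group comparison."""
    return conn[i, j]
```

A group contrast would then compare these edge values between AP and non-AP musicians.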


2018
Author(s): Juan Linde-Domingo, Matthias S. Treder, Casper Kerren, Maria Wimber

Abstract Remembering is a reconstructive process. Surprisingly little is known about how the reconstruction of a memory unfolds in time in the human brain. We used reaction times and EEG time-series decoding to test the hypothesis that the information flow is reversed when an event is reconstructed from memory, compared to when the same event is initially being perceived. Across three experiments, we found highly consistent evidence supporting such a reversed stream. When seeing an object, low-level perceptual features were discriminated faster behaviourally, and could be decoded from brain activity earlier, than high-level conceptual features. This pattern reversed during associative memory recall, with reaction times and brain activity patterns now indicating that conceptual information was reconstructed more rapidly than perceptual details. Our findings support a neurobiologically plausible model of human memory, suggesting that memory retrieval is a hierarchical, multi-layered process that prioritizes semantically meaningful information over perceptual detail.
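The latency comparison at the heart of this design can be sketched as a simple onset read-out over a decoding-accuracy time course. The threshold and accuracy values below are illustrative assumptions; the study itself used cross-validated classifiers over EEG time series:

```python
def decoding_onset(accuracy, chance=0.5, margin=0.05):
    """First timepoint at which a decoding-accuracy time course
    exceeds chance by `margin`; None if it never does."""
    for t, acc in enumerate(accuracy):
        if acc > chance + margin:
            return t
    return None
```

During perception the perceptual-feature curve reaches threshold first; the reversed-stream hypothesis predicts the conceptual-feature curve wins during recall.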


2018
Author(s): Christopher Baldassano, Uri Hasson, Kenneth A. Norman

Abstract Understanding movies and stories requires maintaining a high-level situation model that abstracts away from perceptual details to describe the location, characters, actions, and causal relationships of the currently unfolding event. These models are built not only from information present in the current narrative, but also from prior knowledge about schematic event scripts, which describe typical event sequences encountered throughout a lifetime. We analyzed fMRI data from 44 human subjects presented with sixteen three-minute stories, consisting of four schematic events drawn from two different scripts (eating at a restaurant or going through the airport). Aside from this shared script structure, the stories varied widely in terms of their characters and storylines, and were presented in two highly dissimilar formats (audiovisual clips or spoken narration). One group was presented with the stories in an intact temporal sequence, while a separate control group was presented with the same events in scrambled order. Regions including the posterior medial cortex, medial prefrontal cortex (mPFC), and superior frontal gyrus exhibited schematic event patterns that generalized across stories, subjects, and modalities. Patterns in mPFC were also sensitive to overall script structure, with temporally scrambled events evoking weaker schematic representations. Using a hidden Markov model, patterns in these regions can predict the script (restaurant vs. airport) of unlabeled data with high accuracy, and can be used to temporally align multiple stories with a shared script. These results extend work on the perception of controlled, artificial schemas in human and animal experiments to naturalistic perception of complex narrative stimuli.
Significance Statement In almost all situations we encounter in our daily lives, we are able to draw on our schematic knowledge about what typically happens in the world to better perceive and mentally represent our ongoing experiences.
In contrast to previous studies that investigated schematic cognition using simple, artificial associations, we measured brain activity from subjects watching movies and listening to stories depicting restaurant or airport experiences. Our results reveal a network of brain regions that is sensitive to the shared temporal structure of these naturalistic situations. These regions abstract away from the particular details of each story, activating a representation of the general type of situation being perceived.
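The script-decoding analysis (restaurant vs. airport) can be sketched as template matching on event-by-voxel patterns. The paper used hidden Markov model fits, so the simple correlation-to-template rule here is a simplifying assumption:

```python
import numpy as np

def script_template(labeled_stories):
    """Average event-by-voxel pattern matrices across labeled stories
    from one script to form a schematic template.
    Input: (n_stories, n_events, n_voxels)."""
    return np.mean(labeled_stories, axis=0)

def classify_script(story, template_restaurant, template_airport):
    """Correlate a held-out story's event patterns with each script
    template and assign the better-matching label."""
    def corr(a, b):
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]
    if corr(story, template_restaurant) > corr(story, template_airport):
        return "restaurant"
    return "airport"
```

Generalization across subjects and modalities corresponds to building the templates from one set of stories or participants and classifying another.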


2021 · Vol 12 (1)
Author(s): Lucy L. W. Owen, Thomas H. Chang, Jeremy R. Manning

Abstract Our thoughts arise from coordinated patterns of interactions between brain structures that change with our ongoing experiences. High-order dynamic correlations in neural activity patterns reflect different subgraphs of the brain’s functional connectome that display homologous lower-level dynamic correlations. Here we test the hypothesis that high-level cognition is reflected in high-order dynamic correlations in brain activity patterns. We develop an approach to estimating high-order dynamic correlations in timeseries data, and we apply the approach to neuroimaging data collected as human participants either listen to a ten-minute story or listen to a temporally scrambled version of the story. We train across-participant pattern classifiers to decode (in held-out data) when in the session each neural activity snapshot was collected. We find that classifiers trained to decode from high-order dynamic correlations yield the best performance on data collected as participants listened to the (unscrambled) story. By contrast, classifiers trained to decode data from scrambled versions of the story yielded the best performance when they were trained using first-order dynamic correlations or non-correlational activity patterns. We suggest that as our thoughts become more complex, they are reflected in higher-order patterns of dynamic network interactions throughout the brain.
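The notion of first- and higher-order dynamic correlations can be sketched with boxcar sliding windows. The paper's actual approach uses kernel-weighted estimates and dimensionality reduction between levels, so this is a structural illustration only:

```python
import numpy as np

def dynamic_correlations(ts, width=20):
    """First-order dynamic correlations: sliding-window correlation
    matrices over an (n_timepoints, n_nodes) time series, with each
    window's matrix vectorized to its upper triangle."""
    T, n = ts.shape
    iu = np.triu_indices(n, k=1)
    out = []
    for t in range(T - width + 1):
        out.append(np.corrcoef(ts[t:t + width].T)[iu])
    return np.array(out)  # (n_windows, n_node_pairs)

def second_order(ts, width=20):
    """Second-order dynamic correlations: apply the same operation
    to the first-order correlation time series itself."""
    return dynamic_correlations(dynamic_correlations(ts, width), width)
```

Each level yields a new multivariate time series, which is what the across-participant timepoint classifiers are trained on.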

