An ALE meta-analytic review of top-down and bottom-up processing of music in the brain

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Victor Pando-Naude ◽  
Agata Patyczek ◽  
Leonardo Bonetti ◽  
Peter Vuust

Abstract: A remarkable feature of the human brain is its ability to integrate information from the environment with internally generated content. The integration of top-down and bottom-up processes during complex multi-modal human activities, however, is yet to be fully understood. Music provides an excellent model for understanding this since music listening leads to the urge to move, and music making entails both playing and listening at the same time (i.e., audio-motor coupling). Here, we conducted activation likelihood estimation (ALE) meta-analyses of 130 neuroimaging studies of music perception, production and imagery, with 2660 foci, 139 experiments, and 2516 participants. We found that music perception and production rely on auditory cortices and sensorimotor cortices, while music imagery recruits distinct parietal regions. This indicates that the brain requires different structures to process similar information which is made available either by an interaction with the environment (i.e., bottom-up) or by internally generated content (i.e., top-down).
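The ALE method used in the meta-analysis above models each reported activation focus as a Gaussian probability distribution and combines the per-voxel modelled activation probabilities as a union. The following is a minimal one-dimensional sketch under simplifying assumptions: real ALE operates on 3-D brain coordinates, uses empirically derived Gaussian widths that depend on sample size, and applies permutation-based significance testing, none of which is shown here.

```python
import math

def modeled_activation(x, mu, sigma=1.0):
    """Unnormalised Gaussian 'modeled activation' for a focus reported at mu.

    In real ALE this is a 3-D Gaussian whose width reflects between-subject
    spatial uncertainty; here it is a 1-D illustrative stand-in.
    """
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def ale_score(x, foci, sigma=1.0):
    """Union of per-focus probabilities: ALE(x) = 1 - prod_i (1 - p_i(x))."""
    survival = 1.0
    for mu in foci:
        survival *= 1.0 - modeled_activation(x, mu, sigma)
    return 1.0 - survival

# Toy example: three experiments report peaks on a 1-D axis.
# Locations where foci from several experiments cluster score high.
foci = [0.0, 0.5, 3.0]
score_near_cluster = ale_score(0.25, foci)
score_far_away = ale_score(10.0, foci)
```

The union formula is what lets convergence across experiments dominate: a single isolated focus contributes little, while overlapping foci from independent studies push the score toward 1.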

2021 ◽  
Author(s):  
Victor Pando-Naude ◽  
Agata Patyczek ◽  
Leonardo Bonetti ◽  
Peter Vuust

Abstract: The most remarkable feature of the human brain is perhaps its ability to constantly integrate information from the environment with internal representations to decide the best action. The integration of top-down and bottom-up processes during complex multi-modal human activities, however, is yet to be understood. Music provides an excellent model for understanding this since music listening leads to the urge to move, and music making entails both playing and listening at the same time (i.e. audio-motor coupling). Here, we conducted activation likelihood estimation (ALE) meta-analyses of 130 neuroimaging studies of music perception, production and imagery, with 2660 foci, 139 experiments, and 2516 participants. We found that music perception relies on auditory cortices, music production involves sensorimotor cortices, and music imagery recruits the cingulum. This indicates that the brain requires distinct structures to process information which is made available either by the environment (i.e. bottom-up, music perception) or by internal representations (i.e. top-down, music production and imagery).


2001 ◽  
Vol 39 (2-3) ◽  
pp. 137-150 ◽  
Author(s):  
S Karakaş ◽  
C Başar-Eroğlu ◽  
Ç Özesmi ◽  
H Kafadar ◽  
Ö.Ü Erzengin
Keyword(s):  
Top Down ◽  

2020 ◽  
Author(s):  
Sotaro Kondoh ◽  
Kazuo Okanoya ◽  
Ryosuke O Tachibana

Meter is one of the core features of music perception. It is the cognitive grouping of regular sound sequences, typically into groups of 2, 3, or 4 beats. Previous studies have suggested that one can not only passively perceive meter from acoustic cues such as the loudness, pitch, and duration of sound elements, but also actively perceive it by paying attention to isochronous sound events without any acoustic cues. Studying the interaction of top-down and bottom-up processing in meter perception leads to understanding the cognitive system’s ability to perceive the entire structure of music. The present study aimed to demonstrate that meter perception requires a top-down process (which maintains and switches attention between cues) as well as a bottom-up process for discriminating acoustic cues. We created a “biphasic” sound stimulus, consisting of successive tone sequences designed to provide cues for both triple and quadruple meters in two different sound attributes (frequency and duration), and measured how participants perceived meter from the stimulus on a five-point scale (ranging from “strongly triple” to “strongly quadruple”). Participants were asked to focus on differences in frequency and duration. We found that well-trained participants perceived different meters by switching their attention to specific cues, while untrained participants did not. This result provides evidence for the idea that meter perception involves an interaction between top-down and bottom-up processes, which training can facilitate.


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0256712
Author(s):  
Sotaro Kondoh ◽  
Kazuo Okanoya ◽  
Ryosuke O. Tachibana

Meter is one of the core features of music perception. It is the cognitive grouping of regular sound sequences, typically into groups of 2, 3, or 4 beats. Previous studies have suggested that one can not only passively perceive meter from acoustic cues such as the loudness, pitch, and duration of sound elements, but also actively perceive it by paying attention to isochronous sound events without any acoustic cues. Studying the interaction of top-down and bottom-up processing in meter perception leads to understanding the cognitive system’s ability to perceive the entire structure of music. The present study aimed to demonstrate that meter perception requires a top-down process (which maintains and switches attention between cues) as well as a bottom-up process for discriminating acoustic cues. We created a “biphasic” sound stimulus, consisting of successive tone sequences designed to provide cues for both triple and quadruple meters in two different sound attributes (frequency and duration). Participants were asked to focus on either the frequency or the duration of the stimulus, and to report how they perceived meter on a five-point scale (ranging from “strongly triple” to “strongly quadruple”). We found that participants perceived different meters by switching their attention to specific cues. This result adds evidence to the idea that meter perception involves an interaction between top-down and bottom-up processes.
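The logic of a "biphasic" stimulus can be sketched as a sequence of tones in which one attribute is accented every 3 tones (a triple-meter cue) and another every 4 tones (a quadruple-meter cue). All parameter values below (frequencies, durations, accent periods) are illustrative assumptions, not the paper's actual stimulus specification.

```python
def biphasic_sequence(n_tones, base_freq=440.0, accent_freq=880.0,
                      base_dur=0.15, accent_dur=0.30):
    """Hypothetical sketch of a biphasic tone sequence.

    Frequency accents every 3 tones cue a triple meter; duration accents
    every 4 tones cue a quadruple meter. Attending to one attribute while
    ignoring the other should flip the perceived meter.
    Returns a list of (frequency_hz, duration_s) tuples.
    """
    seq = []
    for i in range(n_tones):
        freq = accent_freq if i % 3 == 0 else base_freq
        dur = accent_dur if i % 4 == 0 else base_dur
        seq.append((freq, dur))
    return seq

seq = biphasic_sequence(12)
# Frequency accents fall on tones 0, 3, 6, 9; duration accents on tones 0, 4, 8,
# so the two periodic cues coexist in the same sequence.
```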


Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstracted encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least approximately, in the brain. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are very well-suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point even further. Finally, some well-known visual illusions are shown, and the perceptions are explained by means of generative, information-integrating perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available, bottom-up visual information.
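The Bayesian integration described above can be made concrete with the standard Gaussian case: a top-down prior expectation and bottom-up sensory evidence are fused by weighting each in proportion to its precision (inverse variance). This is a minimal sketch of that textbook computation, not the chapter's specific model.

```python
def fuse(mu_prior, var_prior, mu_obs, var_obs):
    """Precision-weighted fusion of a Gaussian prior (top-down expectation)
    with Gaussian sensory evidence (bottom-up input).

    The posterior mean is a convex combination of the two means, weighted
    by their precisions; the posterior variance is always smaller than
    either source alone, reflecting the gain from integration.
    """
    prec_prior = 1.0 / var_prior
    prec_obs = 1.0 / var_obs
    w = prec_prior / (prec_prior + prec_obs)  # weight on the prior
    mu_post = w * mu_prior + (1.0 - w) * mu_obs
    var_post = 1.0 / (prec_prior + prec_obs)
    return mu_post, var_post

# A strong prior (low variance) pulls the percept toward the expectation:
# prior at 0.0 with variance 0.5, noisy observation at 2.0 with variance 2.0.
mu, var = fuse(mu_prior=0.0, var_prior=0.5, mu_obs=2.0, var_obs=2.0)
```

This is the sense in which perception is "generative": the estimate is never the raw input, but the input reweighted by what the system already predicts.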


2013 ◽  
Vol 09 (02) ◽  
pp. 1350010 ◽  
Author(s):  
MATTEO CACCIOLA ◽  
GIANLUIGI OCCHIUTO ◽  
FRANCESCO CARLO MORABITO

Many computer vision problems consist of making a suitable content description of images, usually aiming to extract the relevant information content. In the case of images representing paintings or artworks, the information extracted is rather subject-dependent, thus escaping any universal quantification. However, we proposed a measure of complexity of such oeuvres which is related to brain processing. The artistic complexity measures the brain's inability to categorize complex nonsense forms represented in modern art, in a dynamic process of acquisition that mostly involves top-down mechanisms. Here, we compare the quantitative results of our analysis on a wide set of paintings by various artists to the cues extracted from a standard bottom-up approach based on the visual saliency concept. In every painting inspection, the brain searches for more informative areas at different scales, then connects them in an attempt to capture the full impact of the information content. Artistic complexity is able to quantify information which might have been individually lost in the fruition of a human observer, thus identifying the artistic hand. Visual saliency highlights the most salient areas of the paintings, standing out from their neighbours and grabbing our attention. Nevertheless, we will show that a comparison of the ways the two algorithms act may reveal some interesting links, finally indicating an interplay between bottom-up and top-down modalities.


Author(s):  
Mariana von Mohr ◽  
Aikaterini Fotopoulou

Pain and pleasant touch have been recently classified as interoceptive modalities. This reclassification lies at the heart of long-standing debates questioning whether these modalities should be defined as sensations on the basis of their neurophysiological specificity at the periphery, or as homeostatic emotions on the basis of top-down convergence and modulation at the spinal and brain levels. Here, we outline the literature on the peripheral and central neurophysiology of pain and pleasant touch. We next recast this literature within a recent Bayesian predictive coding framework, namely active inference. This recasting puts forward a unifying model of the bottom-up and top-down determinants of pain and pleasant touch, and of the role of social factors in modulating the salience of peripheral signals reaching the brain.


2021 ◽  
Vol 150 (4) ◽  
pp. A182-A182
Author(s):  
Lalitta Suriya-Arunroj ◽  
Joshua I. Gold ◽  
Yale E. Cohen
Keyword(s):  
Top Down ◽  

2019 ◽  
Author(s):  
Pantelis Leptourgos ◽  
Charles-Edouard Notredame ◽  
Marion Eck ◽  
Renaud Jardri ◽  
Sophie Denève

AbstractWhen facing fully ambiguous images, the brain cannot commit to a single percept and instead switches between mutually exclusive interpretations every few seconds, a phenomenon known as bistable perception. Despite years of research, there is still no consensus on whether bistability, and perception in general, is driven primarily by bottom-up or top-down mechanisms. Here, we adopted a Bayesian approach in an effort to reconcile these two theories. Fifty-five healthy participants were exposed to an adaptation of the Necker cube paradigm, in which we manipulated sensory evidence (by shadowing the cube) and prior knowledge (e.g., by varying instructions about what participants should expect to see). We found that manipulations of both sensory evidence and priors significantly affected the way participants perceived the Necker cube. However, we observed an interaction between the effect of the cue and the effect of the instructions, a finding incompatible with Bayes-optimal integration. In contrast, the data were well predicted by a circular inference model. In this model, ambiguous sensory evidence is systematically biased in the direction of current expectations, ultimately resulting in a bistable percept.
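The contrast drawn in this abstract between Bayes-optimal integration and circular inference can be sketched in log-odds form: optimally, prior and evidence log-odds simply add, whereas in a circular-inference account reverberating messages count prior and evidence more than once, biasing the percept toward current expectations. The loop-gain parameters below are illustrative assumptions, not fitted values from the study.

```python
import math

def sigmoid(x):
    """Map log-odds to a probability of one interpretation of the cube."""
    return 1.0 / (1.0 + math.exp(-x))

def bayes_optimal(prior_logodds, evidence_logodds):
    """Optimal inference for two mutually exclusive percepts:
    log-odds of prior and sensory evidence add exactly once."""
    return sigmoid(prior_logodds + evidence_logodds)

def circular_inference(prior_logodds, evidence_logodds,
                       prior_loops=2.0, evidence_loops=2.0):
    """Circular-inference sketch: loopy message passing re-counts the prior
    and/or the evidence (gains > 1), so ambiguous input gets systematically
    amplified in the direction of current expectations."""
    return sigmoid(prior_loops * prior_logodds
                   + evidence_loops * evidence_logodds)
```

With both gains above 1, weak cues and instructions no longer combine additively on the probability scale, which is one way an interaction like the one reported here can arise.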

