Toward a Computational Understanding of Neuroaesthetics

2021 ◽  
pp. 127-131
Author(s):  
Kiyohito Iigaya ◽  
John P. O’Doherty

Among the most challenging questions in the field of neuroaesthetics is how a piece of art comes to be liked in the first place. That is, how can the brain rapidly process a stimulus to form an aesthetic judgment, even for stimuli never before encountered? In the article under discussion in this chapter, the authors combine computational methods with behavioral and neuroimaging experiments to show that the brain does this by breaking a visual stimulus down into underlying features or attributes. These features are shared across objects, and weighted combinations of them are integrated to produce aesthetic judgments. The process is structured hierarchically: elementary statistical properties of an image are combined to generate higher-level features, which in turn yield aesthetic value. Neuroimaging supports the implementation of this hierarchical integration along a gradient from early to higher-order visual cortex, extending into association cortex and ultimately converging in the anterior medial prefrontal cortex.
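The hierarchical integration described above can be illustrated with a minimal sketch: low-level image statistics are linearly mapped to higher-level features, whose weighted sum yields a scalar aesthetic value. All feature names, dimensions, and weights here are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical sketch of hierarchical feature integration for aesthetic value.
# Low-level statistics -> high-level features -> scalar judgment.
rng = np.random.default_rng(0)

low_level = rng.random(8)             # e.g., contrast, hue, saturation statistics
W_high = rng.random((3, 8))           # assumed mapping to high-level features
w_value = np.array([0.5, -0.2, 0.8])  # assumed per-feature weights, shared across images

high_level = W_high @ low_level        # higher-level features (illustrative)
aesthetic_value = float(w_value @ high_level)

print(round(aesthetic_value, 3))
```

Because the weights are shared across objects, the same small weight vector can score any image once its features are extracted, which is how such a model generalizes to never-before-seen stimuli.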

2020 ◽  
Author(s):  
Yaelan Jung ◽  
Dirk B. Walther

Abstract Natural scenes deliver rich sensory information about the world. Decades of research have shown that the scene-selective network in the visual cortex represents various aspects of scenes. It is, however, unknown how such complex scene information is processed beyond the visual cortex, such as in the prefrontal cortex. It is also unknown how task context impacts the process of scene perception, modulating which scene content is represented in the brain. In this study, we investigate these questions using scene images from four natural scene categories, which also depict two types of global scene properties: temperature (warm or cold) and sound-level (noisy or quiet). A group of healthy human subjects of both sexes participated in the present fMRI study. Participants viewed scene images under two different task conditions: temperature judgment and sound-level judgment. We analyzed how different scene attributes (scene categories, temperature, and sound-level information) are represented across the brain under these task conditions. Our findings show that global scene properties are represented in the brain, especially in the prefrontal cortex, only when they are task-relevant. Scene categories, however, are represented in both the parahippocampal place area and the prefrontal cortex regardless of task context. These findings suggest that the prefrontal cortex selectively represents scene content according to task demands, but this task selectivity depends on the type of scene content; task modulates neural representations of global scene properties but not of scene categories.


2021 ◽  
pp. 372-419
Author(s):  
Richard E. Passingham

This chapter and the next one consider how to account for the astonishing difference in intelligence between humans and our nearest living relatives, the great apes. An integrated system that includes the dorsal prefrontal cortex and the parietal association cortex is activated when subjects attempt tests of non-verbal intelligence. It has been suggested that this system might act as a ‘multiple-demand system’ or ‘global workspace’ that can deal with any problem. However, closer examination suggests that the tasks used to support this claim have in common that they involve abstract sequences. These problems can be solved by visual imagery alone. But humans have the further advantage of access to a propositional code. This means that they can solve problems that involve verbal reasoning, as well as being able to form detailed plans for the future. They can also form explicit judgements about themselves, including their perceptions, actions, and memories, and this means that they can represent themselves as individuals. The representation of the self depends in part on tissue in the medial prefrontal cortex (PF).


1999 ◽  
Author(s):  
Laura Sanchez-Huerta ◽  
Adan Hernandez ◽  
Griselda Ayala ◽  
Javier Marroquin ◽  
Adriana B. Silva ◽  
...  

2021 ◽  
Author(s):  
Mengyao Zheng ◽  
Jinghong Xu ◽  
Les Keniston ◽  
Jing Wu ◽  
Song Chang ◽  
...  

Abstract Cross-modal interaction (CMI) can significantly influence perceptual or decision-making processes in many circumstances. However, it remains poorly understood what integrative strategies the brain employs to deal with different task contexts. To explore this, we examined the neural activity of the medial prefrontal cortex (mPFC) of rats performing cue-guided two-alternative forced-choice tasks. In a task requiring rats to discriminate stimuli based on an auditory cue, the simultaneous presentation of an uninformative visual cue substantially strengthened mPFC neurons' capability for auditory discrimination, mainly by enhancing the response to the preferred cue. It also increased the number of neurons showing a cue preference. If the task was changed slightly so that a visual cue, like the auditory one, denoted a specific behavioral direction, mPFC neurons frequently showed a different CMI pattern, with cross-modal enhancement best evoked in information-congruent multisensory trials. In a free-choice task, however, the majority of neurons failed to show a cross-modal enhancement effect or a cue preference. These results indicate that CMI at the neuronal level is context-dependent, in a way that differs from what has been shown in previous studies.


2019 ◽  
Author(s):  
Marlieke T.R. van Kesteren ◽  
Paul Rignanese ◽  
Pierre G. Gianferrara ◽  
Lydia Krabbendam ◽  
Martijn Meeter

Abstract Building consistent knowledge schemas that organize information and guide future learning is of great importance in everyday life. Such knowledge building is suggested to occur through reinstatement of prior knowledge during new learning in stimulus-specific brain regions. This process is proposed to yield integration of new with old memories, supported by the medial prefrontal cortex (mPFC) and medial temporal lobe (MTL). Possibly as a consequence, congruency of new information with prior knowledge is known to enhance subsequent memory. Yet, it is unknown how reactivation and congruency interact to optimize the memory integration processes that lead to knowledge schemas. To investigate this question, we used an adapted AB-AC inference paradigm in combination with functional magnetic resonance imaging (fMRI). Participants first studied an AB-association followed by an AC-association, so B (a scene) and C (an object) were indirectly linked through their common association with A (an unknown pseudoword). BC-associations were either congruent or incongruent with prior knowledge (e.g., a bath duck or a hammer in a bathroom), and participants were asked to report subjective reactivation strength for B while learning AC. Behaviorally, both the congruency and reactivation measures enhanced memory integration. In the brain, these behavioral effects were related to univariate and multivariate parametric effects of congruency and reactivation on activity patterns in the MTL, mPFC, and parahippocampal place area (PPA). Moreover, the mPFC exhibited greater connectivity with the PPA for more congruent associations. These outcomes provide insights into the neural mechanisms underlying memory integration enhancement, which can be important for educational learning.

Significance statement How does our brain build knowledge through integrating information that is learned at different periods in time? This question is important in everyday learning situations such as educational settings. Using an inference paradigm, we set out to investigate how congruency with, and active reactivation of, previously learned information affects memory integration processes in the brain. Both these factors were found to relate to activity in memory-related regions such as the medial prefrontal cortex (mPFC) and the hippocampus. Moreover, activity in the parahippocampal place area (PPA), assumed to reflect reinstatement of the previously learned associate, was found to predict subjective reactivation strength. These results show how we can moderate memory integration processes to enhance subsequent knowledge building.


2021 ◽  
Author(s):  
John Philippe Paulus ◽  
Carlo Vignali ◽  
Marc N Coutanche

Associative inference, the process of drawing novel links between existing knowledge to rapidly integrate associated information, is supported by the hippocampus and neocortex. Within the neocortex, the medial prefrontal cortex (mPFC) has been implicated in the rapid cortical learning of new information that is congruent with an existing framework of knowledge, or schema. How the brain integrates associations to form inferences, and specifically how inferences are represented, is not well understood. In this study, we investigate how the brain uses schemas to facilitate memory integration in an associative inference paradigm (A-B-C-D). We conducted two event-related fMRI experiments in which participants retrieved previously learned direct (AB, BC, CD) and inferred (AC, AD) associations between word pairs for items that were schema congruent or incongruent. Additionally, we investigated how two factors known to affect memory, a delay with sleep and reward, modulate the neural integration of associations within, and between, schemas. Schema congruency was found to benefit the integration of associates, but only when retrieval immediately followed learning. Representational similarity analysis (RSA) revealed that neural patterns of inferred pairs (AC) in the parahippocampal cortex (PHc), mPFC, and posterior hippocampus (posHPC) were more similar to their constituents (AB and BC) when the items were schema congruent, suggesting that schemas facilitate the assimilation of paired items into a single inferred unit containing all associated elements. Furthermore, a delay with sleep, but not reward, impacted the assimilation of inferred pairs. Our findings reveal that the neural representations of overlapping associations are integrated into novel representations with the support of memory schemas.
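The RSA comparison described above can be sketched in miniature: measure how similar the voxel pattern evoked by an inferred pair (AC) is to the patterns of its constituent pairs (AB, BC), using Pearson correlation. The patterns below are simulated, and the voxel count and mixing weights are arbitrary assumptions for illustration only.

```python
import numpy as np

# Simulated voxel patterns for an RSA-style constituent-similarity analysis.
rng = np.random.default_rng(1)
n_voxels = 100

ab = rng.standard_normal(n_voxels)
bc = rng.standard_normal(n_voxels)
# Simulate an AC pattern that partially reinstates both constituents, as
# expected if the pairs were integrated into one representation.
ac = 0.5 * ab + 0.5 * bc + 0.5 * rng.standard_normal(n_voxels)

def pattern_similarity(x, y):
    """Pearson correlation between two voxel patterns."""
    return float(np.corrcoef(x, y)[0, 1])

print(pattern_similarity(ac, ab), pattern_similarity(ac, bc))
```

Under the integration account, AC–AB and AC–BC similarity should both exceed chance for schema-congruent items; in a real analysis the correlations would be computed within each region of interest and compared across congruency conditions.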


Author(s):  
Richard Johnston ◽  
Adam C. Snyder ◽  
Sanjeev B. Khanna ◽  
Deepa Issar ◽  
Matthew A. Smith

Summary Decades of research have shown that global brain states such as arousal can be indexed by measuring the properties of the eyes. Neural signals from individual neurons, populations of neurons, and field potentials measured throughout much of the brain have been associated with the size of the pupil, small fixational eye movements, and vigor in saccadic eye movements. However, precisely because the eyes have been associated with modulation of neural activity across the brain, and because many different kinds of eye measurements have been made across studies, it has been difficult to clearly isolate how internal states affect the behavior of the eyes, and vice versa. Recent work in our laboratory identified a latent dimension of neural activity in macaque visual cortex on the timescale of minutes to tens of minutes. This ‘slow drift’ was associated with perceptual performance on an orientation-change detection task, as well as with neural activity in visual and prefrontal cortex (PFC), suggesting it might reflect a shift in a global brain state. This motivated us to ask whether the neural signature of this internal state is correlated with the action of the eyes in different behavioral tasks. We recorded from visual cortex (V4) while monkeys performed a change detection task, and from the prefrontal cortex while they performed a memory-guided saccade task. On both tasks, slow drift was associated with a pattern indicative of changes in arousal level over time. When pupil size was large, and the subjects were in a heightened state of arousal, microsaccade rate and reaction time decreased while saccade velocity increased. These results show that the action of the eyes is associated with a dominant mode of neural activity that is pervasive and task-independent, and that can be accessed in the population activity of neurons across the cortex.
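One common way to extract a slow latent dimension like the one described above is to take the first principal component of binned population activity. The sketch below simulates this with NumPy; the neuron count, bin count, noise level, and injected sinusoidal drift are all illustrative assumptions, not the study's parameters or method details.

```python
import numpy as np

# Simulate population activity containing a slow shared signal, then recover
# it as the first principal component (via SVD of the centered data matrix).
rng = np.random.default_rng(2)
n_neurons, n_bins = 50, 600

drift = np.sin(np.linspace(0, 2 * np.pi, n_bins))   # slow shared signal
loadings = rng.random(n_neurons)                     # per-neuron weights
activity = np.outer(loadings, drift) \
    + 0.1 * rng.standard_normal((n_neurons, n_bins))

# Center each neuron's time course, then SVD: rows of vt are PC time courses.
centered = activity - activity.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
slow_drift = vt[0]                                   # time course of PC1

# PC1 should track the injected slow signal (up to an arbitrary sign flip).
corr = abs(np.corrcoef(slow_drift, drift)[0, 1])
print(round(corr, 2))
```

In a real recording the recovered component would then be smoothed on the minutes timescale and compared against pupil size, microsaccade rate, and saccade metrics.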


2021 ◽  
Vol 14 ◽  
Author(s):  
Jun Fan ◽  
Qiu-Ling Zhong ◽  
Ran Mo ◽  
Cheng-Lin Lu ◽  
Jing Ren ◽  
...  

The medial prefrontal cortex (mPFC), a key part of the brain networks closely related to the regulation of behavior, acts as a key regulator of emotion, social cognition, and decision making. Astrocytes, the most abundant type of glial cell, play a significant role in a number of processes and establish a suitable environment for the functioning of neurons, including brain energy metabolism. Astrocyte dysfunction in the mPFC has been implicated in various neuropsychiatric disorders. Glucose is a major energy source in the brain. In glucose metabolism, part of the glucose is converted to UDP-GlcNAc, the donor molecule for O-GlcNAcylation, which is controlled by two enzymes: O-GlcNAc transferase (OGT) and O-GlcNAcase (OGA). However, the role of O-GlcNAcylation in astrocytes is almost completely unknown. Our research showed that astrocytic OGT can influence the expression of proteins in the mPFC. Most of these altered proteins participate in metabolic processes, transferase activity, and biosynthetic processes. GFAP, an astrocyte marker, was increased after OGT deletion. These results provide a framework for further study of the role of astrocytic OGT/O-GlcNAcylation in the mPFC.


NeuroImage ◽  
2012 ◽  
Vol 62 (1) ◽  
pp. 102-112 ◽  
Author(s):  
Claudia Civai ◽  
Cristiano Crescentini ◽  
Aldo Rustichini ◽  
Raffaella Ida Rumiati
