Emotional representations of space vary as a function of peoples’ affect and interoceptive sensibility

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Alejandro Galvez-Pol ◽  
Marcos Nadal ◽  
James M. Kilner

Abstract Most research on people’s representation of space has focused on spatial appraisal and navigation. But there is more to space than navigation and assessment: people have different emotional experiences at different places, which create emotionally tinged representations of space. Little is known about the emotional representation of space and the factors that shape it. The purpose of this study was to develop a graphic methodology for studying the emotional representation of space and some of the environmental features (non-natural vs. natural) and personal features (affective state and interoceptive sensibility) that modulate it. We gave participants blank maps of the region where they lived and asked them to shade the areas where they had happy/sad memories, and where they wanted to go after the Covid-19 lockdown. Participants also completed self-reports on affective state and interoceptive sensibility. By adapting methods for analyzing neuroimaging data, we examined shaded pixels to quantify where and how strongly emotions are represented in space. The results revealed that happy memories were consistently associated with similar spatial locations. Yet this mapping response varied as a function of participants’ affective state and interoceptive sensibility: certain regions were associated with happier memories in participants whose affective state was more positive and whose interoceptive sensibility was higher. The maps of happy memories, of desired locations to visit after lockdown, and of regions where participants recalled happier memories as a function of positive affect and interoceptive sensibility overlapped significantly with natural environments. These results suggest that people’s emotional representations of their environment are shaped by the naturalness of places, and by their affective state and interoceptive sensibility.
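The pixel-wise analysis described in this abstract can be sketched in a few lines. Everything below is invented for illustration (the simulated shade maps, the hotspot, the binomial z-score, the threshold); it is not the study's actual pipeline, only the general idea of treating map pixels like voxels and asking where shading is consistent across participants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: binary "happy memory" shade maps for 20 participants
# over a 50x50 pixel map (1 = shaded). The study used real regional maps;
# here we simulate sparse random shading plus a shared hotspot.
n_subj, h, w = 20, 50, 50
base = rng.random((n_subj, h, w)) < 0.05          # sparse random shading
yy, xx = np.mgrid[0:h, 0:w]
hotspot = (yy - 25) ** 2 + (xx - 25) ** 2 < 36    # shared "happy" region
maps = base | (hotspot & (rng.random((n_subj, 1, 1)) < 0.8))

# Pixel-wise consistency: count of participants shading each pixel, with a
# simple binomial z-score against the map-wide base rate (a stand-in for
# the voxel-wise statistics used in neuroimaging analyses).
p0 = maps.mean()                                   # overall shading rate
count = maps.sum(axis=0)
z = (count - n_subj * p0) / np.sqrt(n_subj * p0 * (1 - p0))
consistent = z > 3.0                               # crude threshold

print(f"base rate {p0:.3f}, max z {z.max():.1f}, "
      f"{consistent.sum()} consistently shaded pixels")
```

The surviving pixels recover the simulated hotspot: locations shaded by far more participants than the base rate predicts.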

2021 ◽  
Author(s):  
Alejandro Galvez-Pol ◽  
Marcos Nadal ◽  
James Kilner

As people interact with extensive environments, their space becomes intertwined with emotions. Yet, beyond the study of spatial appraisal and navigation [1–3], the emotional representation of space remains elusive. Here we developed a method that, even without mobility (during the Covid-19 lockdown), allows examining participants’ emotional representation of space and its psychophysiological correlates. We gave participants blank maps of the region where they lived and asked them to shade the areas where they had happy/sad memories, and where they wanted to go after the lockdown. They also completed self-reports on mental health and interoceptive awareness (appraisal of inner bodily sensations). By adapting neuroimaging methods, we examined shaded pixels instead of brain voxels to quantify where and how strongly emotions are represented in space. The results revealed that happy memories were consistently associated with similar spatial locations. Yet this mapping response varied as a function of participants’ mental health and interoceptive awareness. Interestingly, maps of happy memories and of desired locations after lockdown overlapped significantly with natural (vs. non-natural) environments. These results suggest that our relationship with the environment relates to how we feel and appraise bodily sensations (i.e., allostasis in space). Our method may provide a spatially ecological marker for physical and mental disorders.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Caitlin S. Mallory ◽  
Kiah Hardcastle ◽  
Malcolm G. Campbell ◽  
Alexander Attinger ◽  
Isabel I. C. Low ◽  
...  

Abstract Neural circuits generate representations of the external world from multiple information streams. The navigation system provides an exceptional lens through which we may gain insights about how such computations are implemented. Neural circuits in the medial temporal lobe construct a map-like representation of space that supports navigation. This computation integrates multiple sensory cues, and, in addition, is thought to require cues related to the individual’s movement through the environment. Here, we identify multiple self-motion signals, related to the position and velocity of the head and eyes, encoded by neurons in a key node of the navigation circuitry of mice, the medial entorhinal cortex (MEC). The representation of these signals is highly integrated with other cues in individual neurons. Such information could be used to compute the allocentric location of landmarks from visual cues and to generate internal representations of space.


Author(s):  
Anjali Sankar ◽  
Cynthia H.Y. Fu

Impairments in processing emotions are a hallmark feature of depression. Advances in neuroimaging techniques have rapidly improved our understanding of the pathophysiology underlying major depression. In this chapter, we provide an overview of influential neural models of emotion perception and regulation and discuss the neurocircuitries of emotion processing that are affected. Major depression is characterized by impairments in widespread brain regions that are evident in the first episode. Models have sought to distinguish the neural circuitry associated with recognition of the emotion, integration of somatic responses, and monitoring of the affective state. In particular, there has been a preponderance of research on the neurocircuitries affected during processing of mood-congruent negative emotional stimuli in depression. While neuroimaging correlates have been investigated and models proposed, these findings have had limited clinical applicability to date. Novel methods such as multivariate pattern recognition applied to neuroimaging data might enable identification of reliable, valid, and robust biomarkers with high predictive accuracy that can be applied to an individual. Last, we discuss avenues for extension and future work.


2009 ◽  
Vol 101 (3) ◽  
pp. 1294-1308 ◽  
Author(s):  
Edmund T. Rolls ◽  
Fabian Grabenhorst ◽  
Leonardo Franco

Decoding and information-theoretic techniques were used to analyze the predictions that can be made from functional magnetic resonance neuroimaging data on individual trials. The subjective pleasantness produced by warm and cold applied to the hand could be predicted on single trials, typically with 60–80% accuracy, from the activations of groups of voxels in the orbitofrontal and medial prefrontal cortex and pregenual cingulate cortex, and the information available was typically in the range 0.1–0.2 bits (with a maximum of 0.6). The prediction was typically a little better with multiple voxels than with one, and the information increased sublinearly with the number of voxels up to typically seven voxels. Thus the information from different voxels was not independent, and there was considerable redundancy across voxels. This redundancy was present even when the voxels were from different brain areas. The pairwise stimulus-dependent correlations between voxels, reflecting higher-order interactions, did not encode significant information. For comparison, the activity of a single neuron in the orbitofrontal cortex can predict with 90% accuracy, and encode 0.5 bits of information about, whether an affectively positive or negative visual stimulus has been shown, and the information encoded by small numbers of neurons is typically independent. In contrast, the activation of a 3 × 3 × 3-mm voxel reflects the activity of ∼0.8 million neurons or their synaptic inputs and is not part of the information encoding used by the brain, thus providing a relatively poor readout of information compared with that available from small populations of neurons.
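The correspondence between percent correct and bits of information quoted in this abstract can be reproduced with a simple calculation. Assuming equiprobable binary stimuli and symmetric decoding errors (a simplification of the paper's full information-theoretic analysis), a decoder that is correct with probability p transmits I = 1 − H(p) bits:

```python
import math

def bits_from_accuracy(p):
    """Information in bits transmitted by a binary decoder with accuracy p,
    assuming equiprobable stimuli and symmetric errors: I = 1 - H(p)."""
    h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return 1.0 - h

for acc in (0.6, 0.7, 0.8, 0.9):
    print(f"{acc:.0%} correct -> {bits_from_accuracy(acc):.2f} bits")
```

Under these assumptions, 70–80% correct maps to roughly 0.1–0.3 bits, and 90% correct to about 0.5 bits, in line with the figures given for voxel populations and single neurons respectively.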


2017 ◽  
Author(s):  
Christopher R. Nolan ◽  
J.M.G. Vromen ◽  
Allen Cheung ◽  
Oliver Baumann

Abstract Individual hippocampal neurons selectively increase their firing rates in specific spatial locations. As a population these neurons provide a decodable representation of space that is robust against changes to sensory- and path-related cues. This neural code is sparse and distributed, theoretically rendering it undetectable with population recording methods such as functional magnetic resonance imaging (fMRI). Existing studies nonetheless report decoding spatial codes in the human hippocampus using such techniques. Here we present results from a virtual navigation experiment in humans in which we eliminated visual- and path-related confounds and statistical shortcomings present in existing studies, ensuring that any positive decoding results would be only spatial in nature and would represent a true voxel-place code. Consistent with theoretical arguments derived from electrophysiological data and contrary to existing fMRI studies, our results show that although participants were fully oriented during the navigation task, there was no statistical evidence for a place code.


2018 ◽  
Vol 24 (81) ◽  
pp. 7-22
Author(s):  
Sead Turčalo ◽  
Ado Kulović

Abstract This research is premised on two theoretical constructs: that maps do not objectively depict space and that traditional cartography produces a geopolitical narrative. The research aim is to investigate geopolitical influence in modern, digital representations of space, and vice versa. This paper is divided into three parts: In the first, the digital representation of space is introduced and explained, and two widely acknowledged digital cartographic services are established as the empirical foundation of the research – Google (Google Maps and Google Earth), designed by cartographic and geo-data professionals, and OpenStreetMap, built through crowdsourcing. In the second part, the geopolitical features of traditional cartography are discussed in the context of digital mapping, including ethnocentricity and hierarchical representations of space, similarities to geopolitische karte, and “minor geopolitics.” The final part asks and answers a key question about geopolitical subjectivity: “Who benefits from the geopolitical narratives in digital representations of space?”


2016 ◽  
Author(s):  
Mohammad-Reza A. Dehaqani ◽  
Abdol-Hossein Vahabie ◽  
Mohammadbagher Parsa ◽  
Behrad Noudoost ◽  
Alireza Soltani

Abstract Although individual neurons can be highly selective for particular stimuli and certain upcoming actions, at the population level they can provide a complex representation of stimuli and actions. The ability to dynamically allocate neural resources is crucial for cognitive flexibility. However, it is unclear whether cognitive flexibility emerges from changes in activity at the level of individual neurons, the population, or both. By applying a combination of decoding and encoding methods to simultaneously recorded neural data, we show that while maintaining their stimulus selectivity, neurons in prefrontal cortex alter their correlated activity during various cognitive states, resulting in an enhanced representation of visual space. During a task with various cognitive states, individual prefrontal neurons maintained their limited spatial sensitivity between visual encoding and saccadic target selection, whereas the population selectively improved its encoding of spatial locations far from the neurons' preferred locations. This 'encoding expansion' relied on high-dimensional neural representations and was accompanied by selective reductions in noise correlation for non-preferred locations. Our results demonstrate that through the recruitment of less-informative neurons and reductions of noise correlation in their activity, the representation of space by neuronal ensembles can be dynamically enhanced, and suggest that cognitive flexibility is achieved mainly by changes in neural representation at the level of the population of prefrontal neurons rather than of individual neurons.
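The effect of noise correlations on a population code can be illustrated with a toy simulation. The two-neuron geometry, the correlation value of 0.6, and the nearest-mean decoder below are all assumptions for illustration, not the paper's analysis; they merely show why reducing correlated noise along the signal axis sharpens population decoding:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_accuracy(rho, n_trials=20000):
    """Nearest-mean decoding of two 'locations' from a two-neuron population
    whose trial-to-trial noise has correlation rho. The signal axis is
    aligned with the correlated direction, so lowering rho should help."""
    mu_a, mu_b = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
    cov = np.array([[1.0, rho], [rho, 1.0]])
    xa = rng.multivariate_normal(mu_a, cov, n_trials)
    xb = rng.multivariate_normal(mu_b, cov, n_trials)
    x = np.vstack([xa, xb])
    labels = np.repeat([0, 1], n_trials)
    d_a = ((x - mu_a) ** 2).sum(axis=1)   # squared distance to mean A
    d_b = ((x - mu_b) ** 2).sum(axis=1)   # squared distance to mean B
    pred = (d_a > d_b).astype(int)        # 1 means "closer to B"
    return (pred == labels).mean()

for rho in (0.0, 0.6):
    print(f"noise correlation {rho}: accuracy {simulate_accuracy(rho):.3f}")
```

With the signal along the correlated axis, shared noise cannot be averaged away across neurons, so decoding accuracy drops as the correlation grows; the opposite geometry would show the reverse effect.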


2019 ◽  
Vol 4 (2) ◽  
pp. 289-297
Author(s):  
Chiara Paganini ◽  
Gregory Peterson ◽  
Jacqueline Mills

The research examined the role of affective state and immediate surroundings as possible antecedents of eating, utilising Ecological Momentary Assessment (EMA): repeated assessments of current psychological and situational states in participants’ natural environments. 136 adults [55 with disordered eating (DE) and 81 controls] were recruited from the community and completed event-contingent and random assessments over a seven-day period. Psychological and situational variables relative to eating were investigated to test whether there were significant differences in negative affect, hunger levels, time, and location. To account for the nesting of multiple categorical observations within subjects, data were analysed using generalised estimating equations with autoregressive correlation, a repeated-measures MANOVA, and paired-sample t-tests. Levels of guilt and disgust were higher at eating episodes in DE participants, and feelings of guilt and dissatisfaction with self were higher after eating. Being at home and being alone were both found to act as antecedents of eating in DE, whereas controls were more likely to eat whilst out in social situations. The affective state of an individual and their surrounding context appear to be integral to the eating patterns of individuals with DE.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2136
Author(s):  
Haochun Ou ◽  
Chunmei Qing ◽  
Xiangmin Xu ◽  
Jianxiu Jin

Sharing our feelings through images and short videos is one of the main forms of expression on social networks. Visual content can affect people’s emotions, which has made the analysis of sentiment in visual content an increasingly active research topic. Most current methods focus on improving local emotional representations to obtain better sentiment-analysis performance, and ignore the problem of perceiving objects of different scales and different emotional intensities in complex scenes. In this paper, based on alterable-scale, multi-level local regional emotional affinity analysis under a global perspective, we propose a multi-level context pyramid network (MCPNet) for visual sentiment analysis that combines local and global representations to improve classification performance. Firstly, ResNet101 is employed as the backbone to obtain multi-level emotional representations capturing different degrees of semantic and detailed information. Next, multi-scale adaptive context modules (MACM) are proposed to learn the sentiment correlation degree of different regions at different scales in the image, and to extract multi-scale context features for each level of deep representation. Finally, the context features from different levels are combined to obtain a multi-cue sentiment feature for image sentiment classification. Extensive experimental results on seven commonly used visual sentiment datasets show that our method outperforms state-of-the-art methods; in particular, its accuracy on the FI dataset exceeds 90%.
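The multi-scale context idea can be sketched with a toy pyramid-pooling function in NumPy. This is not the paper's MACM module (which is learned and adaptive); it is a minimal stand-in showing how context pooled at several spatial scales can be stacked onto a feature map:

```python
import numpy as np

def pyramid_context(feat, scales=(1, 2, 4)):
    """Toy multi-scale context: average-pool a (C, H, W) feature map over
    s x s grids for several scales s, upsample each back to (H, W), and
    stack everything along the channel axis. Scales must divide H and W."""
    c, h, w = feat.shape
    outs = [feat]
    for s in scales:
        # block-average into an s x s grid of regions
        pooled = feat.reshape(c, s, h // s, s, w // s).mean(axis=(2, 4))
        # nearest-neighbour upsample back to the original resolution
        up = pooled.repeat(h // s, axis=1).repeat(w // s, axis=2)
        outs.append(up)
    return np.concatenate(outs, axis=0)   # (C * (1 + len(scales)), H, W)

feat = np.random.default_rng(0).random((8, 16, 16))
ctx = pyramid_context(feat)
print(ctx.shape)
```

Each appended channel group summarizes progressively larger neighbourhoods, so a classifier reading `ctx` sees both local detail and scene-level context; the scale-1 channels are simply the global average broadcast everywhere.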

