Neural codes of seeing architectural styles

2016 ◽  
Author(s):  
Heeyoung Choo ◽  
Jack Nasar ◽  
Bardia Nikrahei ◽  
Dirk B. Walther

Abstract
Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people’s visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate level, in our case, architectural styles of buildings. This study shows for the first time how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.
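Decoding of the kind described above is typically implemented as a cross-validated linear classifier over multivoxel response patterns within each region of interest. The following is a minimal sketch with synthetic data; the trial count, voxel count, and signal strength are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for voxel response patterns: 48 trials x 200 voxels,
# with 4 class labels (e.g. four architectural styles) and a weak
# class-dependent signal added to Gaussian noise.
n_trials, n_voxels, n_styles = 48, 200, 4
labels = np.repeat(np.arange(n_styles), n_trials // n_styles)
class_signal = rng.normal(size=(n_styles, n_voxels))
patterns = rng.normal(size=(n_trials, n_voxels)) + 0.5 * class_signal[labels]

# Cross-validated decoding: accuracy above chance (here 0.25) indicates
# that the patterns carry category information.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, labels, cv=6)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = {1 / n_styles:.2f})")
```

In an actual fMRI analysis the rows would be trial- or run-wise beta estimates from a GLM, and cross-validation folds would follow scanner runs rather than the plain 6-fold split used here.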

2021 ◽  
pp. 225-229
Author(s):  
Dirk B. Walther

How do the brains of experts and non-experts represent entry-level and subordinate-level categories of buildings and places? In the study reviewed in this chapter, the authors measured the brain activity of architecture and psychology students while they viewed images of buildings of different architectural styles as well as general scenes. From functional magnetic resonance imaging (fMRI) patterns, they were able to decode which architectural style participants viewed. Despite finding a strong behavioral expertise effect for architectural styles between the two groups of participants, the authors could not find any differences in brain activity. Surprisingly, they found that the fusiform face area, which is typically not involved in scene perception, was tightly linked with scene-selective brain regions for the decoding of architectural styles but not for entry-level scene categories.


Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Katherine Storrs ◽  
Adrian Lloyd Williams ◽  
Carolyn McGettigan ◽  
...  

Abstract
Faces of different people elicit distinct functional MRI (fMRI) patterns in several face-selective brain regions. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). We used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. Models included low-level to high-level image-computable properties and complex human-rated properties. We found that the FFA representation reflected perceived face similarity, social traits, and gender, and was well accounted for by the OpenFace model (deep neural network, trained to cluster faces by identity). The OFA encoded low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
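Representational similarity analysis of this kind compares pairwise dissimilarities between ROI response patterns against model dissimilarities. Below is a minimal sketch with random stand-in data; the identity count, voxel count, and trait ratings are illustrative assumptions rather than the study's measurements.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical data: response patterns in one ROI for 12 face identities
# (12 identities x 100 voxels), plus ratings for a model RDM, e.g.
# human-rated social traits for the same identities.
n_ids, n_voxels = 12, 100
roi_patterns = rng.normal(size=(n_ids, n_voxels))
trait_ratings = rng.normal(size=(n_ids, 3))  # e.g. trustworthiness, dominance, age

# Representational dissimilarity matrices (condensed upper triangles):
# correlation distance between identity patterns, Euclidean distance
# between trait vectors.
brain_rdm = pdist(roi_patterns, metric="correlation")
model_rdm = pdist(trait_ratings, metric="euclidean")

# RSA statistic: rank correlation between the two RDMs.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"brain-model RSA correlation: rho = {rho:.3f}")
```

Rank correlation is the conventional choice here because it assumes only a monotonic, not linear, relationship between model and brain dissimilarities.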


2014 ◽  
Vol 26 (3) ◽  
pp. 490-500 ◽  
Author(s):  
Yaara Erez ◽  
Galit Yovel

Target objects required for goal-directed behavior are typically embedded within multiple irrelevant objects that may interfere with their encoding. Most neuroimaging studies of high-level visual cortex have examined the representation of isolated objects, and therefore, little is known about how surrounding objects influence the neural representation of target objects. To investigate the effect of different types of clutter on the distributed responses to target objects in high-level visual areas, we used fMRI and manipulated the type of clutter. Specifically, target objects (i.e., a face and a house) were presented either in isolation, in the presence of a homogeneous clutter (identical objects from another category; a “pop-out” display), or in the presence of a heterogeneous clutter (different objects), while participants performed a target identification task. Using multivoxel pattern analysis (MVPA), we found that in the posterior fusiform object area a heterogeneous but not homogeneous clutter interfered with decoding of the target objects. Furthermore, multivoxel patterns evoked by isolated objects were more similar to multivoxel patterns evoked by homogeneous compared with heterogeneous clutter in the lateral occipital and posterior fusiform object areas. Interestingly, there was no effect of clutter on the neural representation of the target objects in their category-selective areas, such as the fusiform face area and the parahippocampal place area. Our findings show that the variation among irrelevant surrounding objects influences the neural representation of target objects in the object-general area, but not in object category-selective cortex, where the representation of target objects is invariant to their surroundings.
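The pattern-similarity comparison described above (isolated versus cluttered displays) is commonly quantified as Pearson correlations between condition-mean voxel patterns. A toy sketch follows; the voxel count and noise levels are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mean voxel patterns (150 voxels) for a target object shown
# in isolation, in homogeneous clutter, and in heterogeneous clutter.
# Heterogeneous clutter is simulated as perturbing the pattern more.
base = rng.normal(size=150)
isolated = base + 0.2 * rng.normal(size=150)
homogeneous = base + 0.4 * rng.normal(size=150)
heterogeneous = base + 1.0 * rng.normal(size=150)

def pattern_similarity(a, b):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(a, b)[0, 1]

r_homo = pattern_similarity(isolated, homogeneous)
r_hetero = pattern_similarity(isolated, heterogeneous)
print(f"isolated~homogeneous r = {r_homo:.2f}, "
      f"isolated~heterogeneous r = {r_hetero:.2f}")
```

Under these simulated noise levels the isolated pattern correlates more strongly with the homogeneous-clutter pattern, mirroring the direction of the reported effect.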


2020 ◽  
Author(s):  
Thomas Murray ◽  
Justin O'Brien ◽  
Noam Sagiv ◽  
Lucia Garrido

Face shape and surface texture are two important cues that aid the perception of facial expressions of emotion. This perception is also influenced by high-level emotion concepts. Across two studies, we use representational similarity analysis to investigate the relative roles of shape, surface, and conceptual information in the perception, categorisation, and neural representation of facial expressions. In Study 1, 50 participants completed a perceptual task designed to measure the perceptual similarity of expression pairs, and a categorical task designed to measure the confusability between expression pairs when assigning emotion labels to a face. We used representational similarity analysis and constructed three models of the similarities between emotions, each based on distinct information. Two models were based on stimulus-based cues (face shapes and surface textures) and one model was based on emotion concepts. Using multiple linear regression, we found that behaviour during both tasks was related to the similarity of emotion concepts. The model based on face shapes was more strongly related to behaviour in the perceptual task than in the categorical task, and the model based on surface textures was more strongly related to behaviour in the categorical task than in the perceptual task. In Study 2, 30 participants viewed facial expressions while undergoing fMRI, allowing for the measurement of brain representational geometries of facial expressions of emotion in three core face-responsive regions (the Fusiform Face Area, Occipital Face Area, and Superior Temporal Sulcus), and a region involved in theory of mind (Medial Prefrontal Cortex). Across all four regions, the representational distances between facial expression pairs were related to the similarities of emotion concepts, but not to either of the stimulus-based cues.
Together, these results highlight the important top-down influence of high-level emotion concepts both in behavioural tasks and in the neural representation of facial expressions.
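The multiple linear regression step in Study 1, relating behavioural dissimilarities to the three model RDMs, can be sketched as an ordinary least-squares fit over the vectorized RDM entries. All numbers below are simulated for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical condensed RDMs over 6 emotion categories (15 unique pairs):
# two stimulus-based models (shape, texture) and one conceptual model.
n_pairs = 15
shape_rdm = rng.random(n_pairs)
texture_rdm = rng.random(n_pairs)
concept_rdm = rng.random(n_pairs)

# Simulated behavioural dissimilarities, driven mostly by emotion concepts.
behavior = (0.2 * shape_rdm + 0.1 * texture_rdm + 0.8 * concept_rdm
            + 0.05 * rng.normal(size=n_pairs))

# Multiple linear regression: which model RDMs explain behaviour?
X = np.column_stack([np.ones(n_pairs), shape_rdm, texture_rdm, concept_rdm])
betas, *_ = np.linalg.lstsq(X, behavior, rcond=None)
print("betas (intercept, shape, texture, concepts):", np.round(betas, 2))
```

Because the conceptual model dominates the simulated behaviour, its regression weight comes out largest, which is the pattern of evidence the abstract reports for real data.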


2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Runnan Cao ◽  
Xin Li ◽  
Alexander Todorov ◽  
Shuo Wang

Abstract
An important question in human face perception research is to understand whether the neural representation of faces is dynamically modulated by context. In particular, although there is a plethora of neuroimaging literature that has probed the neural representation of faces, few studies have investigated what low-level structural and textural facial features parametrically drive neural responses to faces and whether the representation of these features is modulated by the task. To answer these questions, we employed two task instructions when participants viewed the same faces. We first identified brain regions that parametrically encoded high-level social traits such as perceived facial trustworthiness and dominance, and we showed that these brain regions were modulated by task instructions. We then employed a data-driven computational face model with parametrically generated faces and identified brain regions that encoded low-level variation in the faces (shape and skin texture) that drove neural responses. We further analyzed the evolution of the neural feature vectors along the visual processing stream and visualized and explained these feature vectors. Together, our results showed a flexible neural representation of faces for both low-level features and high-level social traits in the human brain.


2016 ◽  
Author(s):  
Heeyoung Choo ◽  
Dirk B Walther

Humans efficiently grasp complex visual environments, making highly consistent judgments of entry-level category despite their high variability in visual appearance. How does the human brain arrive at the invariant neural representations underlying categorization of real-world environments? Here we show that the neural representation of visual environments in scene-selective human visual cortex relies on statistics of contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene. We manipulated line drawings of real-world environments such that statistics of contour orientations or junctions were disrupted. Manipulated and intact line drawings were presented to participants in an fMRI experiment. Scene categories were decoded from neural activity patterns in the parahippocampal place area (PPA), the occipital place area (OPA), and other visual brain regions. Disruption of junctions but not orientations led to a drastic decrease in decoding accuracy in the PPA and OPA, indicating the reliance of these areas on intact junction statistics. Accuracy of decoding from early visual cortex, on the other hand, was unaffected by either image manipulation. We further show that the correlation of error patterns between decoding from the scene-selective brain areas and behavioral experiments is contingent on intact contour junctions. Finally, a searchlight analysis exposes the reliance of visually active brain regions on different sets of contour properties. Statistics of contour length and curvature dominate neural representations of scene categories in early visual areas, and contour junctions dominate in high-level scene-selective brain regions.


2021 ◽  
Vol 11 (15) ◽  
pp. 6881
Author(s):  
Calvin Chung Wai Keung ◽  
Jung In Kim ◽  
Qiao Min Ong

Virtual reality (VR) is quickly becoming the medium of choice for various architecture, engineering, and construction applications, such as design visualization, construction planning, and safety training. In particular, this technology offers an immersive experience that enhances the way architects review their designs with team members. Traditionally, VR has used a desktop PC or workstation setup inside a room, which risks two users bumping into each other when using multiuser VR (MUVR) applications. MUVR offers shared experiences that disrupt the conventional single-user VR setup: multiple users can communicate and interact in the same virtual space, providing more realistic scenarios for architects in the design stage. However, this shared virtual environment introduces challenges of limited human locomotion and interaction, due to the physical constraints of normal room spaces. This study therefore presents a system framework that integrates MUVR applications with omnidirectional treadmills. The treadmills give users an immersive walking experience in the simulated environment, without space constraints or risk of collision. A prototype was set up and tested in several scenarios by practitioners and students. The validated MUVR treadmill system aims to promote high-level immersion in architectural design review and collaboration.


2021 ◽  
pp. 1-14
Author(s):  
Debo Dong ◽  
Dezhong Yao ◽  
Yulin Wang ◽  
Seok-Jun Hong ◽  
Sarah Genon ◽  
...  

Abstract
Background: Schizophrenia has been primarily conceptualized as a disorder of high-order cognitive functions, with deficits in executive brain regions. Yet, given increasing reports of early sensory processing deficits, recent models focus more on the developmental effects of impaired sensory processing on high-order functions. The present study examined whether this pathological interaction relates to an overarching system-level imbalance, specifically a disruption in the macroscale hierarchy affecting integration and segregation of unimodal and transmodal networks.
Methods: We applied a novel combination of connectome gradient and stepwise connectivity analysis to resting-state fMRI to characterize the sensorimotor-to-transmodal cortical hierarchy organization (96 patients v. 122 controls).
Results: We demonstrated compression of the cortical hierarchy organization in schizophrenia, with a prominent compression from the sensorimotor region and a less prominent compression from the fronto-parietal region, resulting in a diminished separation between sensory and fronto-parietal cognitive systems. Further analyses suggested that the reduced differentiation relates to an atypical functional connectome transition from unimodal to transmodal brain areas. Specifically, we found hypo-connectivity within unimodal regions and hyper-connectivity between unimodal regions and fronto-parietal and ventral attention regions along the classical sensation-to-cognition continuum (voxel-level corrected, p < 0.05).
Conclusions: The compression of cortical hierarchy organization represents a novel and integrative system-level substrate underlying the pathological interaction of early sensory and cognitive function in schizophrenia. This abnormal cortical hierarchy organization suggests that cascading impairments from disruption of the somatosensory-motor system, together with inefficient integration of bottom-up sensory information with attentional demands and executive control processes, partially account for the high-level cognitive deficits characteristic of schizophrenia.


Healthcare ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 412
Author(s):  
Li Cong ◽  
Hideki Miyaguchi ◽  
Chinami Ishizuki

Evidence shows that second language (L2) learning affects cognitive function. In this work, we compared brain activation in native speakers of Mandarin (L1) who speak Japanese (L2) between and within two groups (high and low L2 ability) to determine the effect of L2 ability in L1 and L2 speaking tasks, and to map brain regions involved in both tasks. Brain activation during task performance was determined using prefrontal cortex blood flow as a proxy, measured by functional near-infrared spectroscopy (fNIRS). People with low L2 ability showed much more brain activation when speaking L2 than when speaking L1. People with high L2 ability showed high-level brain activation when speaking either L2 or L1, and almost the same high level of activation was observed in both ability groups when speaking L2. The high level of activation in people with high L2 ability when speaking either L2 or L1 suggests strong inhibition of the non-spoken language. The wider area of brain activation in people with low compared with high L2 ability when speaking L2 is likely attributable to the cognitive load of code-switching from L1 to L2, which demands strong inhibition of L1, together with the cognitive load of using L2 itself.


Semantic Web ◽  
2020 ◽  
pp. 1-16
Author(s):  
Francesco Beretta

This paper addresses the issue of interoperability of data generated by historical research and heritage institutions in order to make them re-usable for new research agendas according to the FAIR principles. After introducing the symogih.org project’s ontology, it proposes a description of the essential aspects of the process of historical knowledge production. It then develops an epistemological and semantic analysis of conceptual data modelling applied to factual historical information, based on the foundational ontologies Constructive Descriptions and Situations and DOLCE, and discusses the reasons for adopting the CIDOC CRM as a core ontology for the field of historical research, but extending it with some relevant, missing high-level classes. Finally, it shows how collaborative data modelling carried out in the ontology management environment OntoME makes it possible to elaborate a communal fine-grained and adaptive ontology of the domain, provided an active research community engages in this process. With this in mind, the Data for history consortium was founded in 2017 and promotes the adoption of a shared conceptualization in the field of historical research.

