A Rapid Subcortical Amygdala Route for Faces Irrespective of Spatial Frequency and Emotion

2016 ◽  
Author(s):  
Jessica McFadyen ◽  
Martial Mermillod ◽  
Jason B. Mattingley ◽  
Veronika Halász ◽  
Marta I. Garrido

Abstract: There is significant controversy over the anatomical existence and potential function of a direct subcortical visual pathway to the amygdala. It is thought that this pathway rapidly transmits low spatial frequency information to the amygdala independently of the cortex, and yet this function has never been causally determined. In this study, neural activity was measured using magnetoencephalography (MEG) while participants discriminated the gender of neutral and fearful faces filtered for low or high spatial frequencies. Dynamic causal modelling (DCM) revealed that the most likely underlying neural network consisted of a subcortical pulvino-amygdala connection that was not modulated by spatial frequency or emotion and a cortico-amygdala connection that conveyed predominantly high spatial frequencies. Crucially, data-driven neural simulations demonstrated a clear temporal advantage of the subcortical route (70 ms) over the cortical route (155 ms) in influencing amygdala activity. Thus, our findings support the existence of a rapid functional subcortical pathway that is unselective of the spatial frequency or emotional content of faces.
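
For readers unfamiliar with how such stimuli are typically constructed, the sketch below shows one common way to split a grayscale face image into low- and high-spatial-frequency versions, using a Gaussian low-pass and its residual. The cutoff of 8 cycles per image and the stand-in random image are illustrative assumptions, not the parameters or stimuli used in the study.

```python
# Illustrative sketch: splitting an image into low- and high-spatial-frequency parts.
# Cutoff and image are assumptions for demonstration, not the study's values.
import numpy as np
from scipy import ndimage

def split_spatial_frequencies(image, cutoff_cycles=8):
    """Return (low_sf, high_sf) versions of a 2-D grayscale image.

    The LSF image is a Gaussian low-pass; the HSF image is the residual
    (original minus low-pass), i.e. a complementary high-pass.
    """
    # Convert a cutoff in cycles/image into a Gaussian sigma in pixels.
    # sigma ~ image_size / (2 * pi * cutoff) is a common rule of thumb.
    sigma = image.shape[0] / (2 * np.pi * cutoff_cycles)
    low_sf = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    high_sf = image.astype(float) - low_sf
    return low_sf, high_sf

# Example with a random array standing in for a face photograph.
face = np.random.rand(256, 256)
lsf, hsf = split_spatial_frequencies(face, cutoff_cycles=8)
```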

2020 ◽  
Author(s):  
Patricia Soto-Icaza ◽  
Brice Beffara Bret ◽  
Lorena Vargas ◽  
Francisco Aboitiz ◽  
Pablo Billeke

The Broader Autism Phenotype (BAP) refers to heritable features present in unaffected relatives of individuals with autism. BAP affects face perception, an impairment associated with the magnocellular (M) visual pathway, which processes low spatial frequency information, and the parvocellular (P) visual pathway, which processes high spatial frequency information. Here we tested the hypothesis that parents of children with Autism Spectrum Disorder (pASD), who are BAP candidates, show altered integration of the M and P pathways during the processing of facial emotion compared with parents of typically developing children (pTD). To this end, we carried out electroencephalographic recordings in pTD and pASD participants while they recognized the emotions of face images composed of the same or different emotions (happiness or anger) presented at different spatial frequencies. We found no significant difference in accuracy between groups, but the pASD group showed a lower amplitude of a late frontoparietal potential when happiness was displayed in both spatial frequencies. Source analysis located this difference in the right posterior part of the superior temporal region. These results reveal an alteration in the brain processing of facial emotion in BAP that could be a neuronal marker of ASD.


2021 ◽  
Author(s):  
Isabelle Charbonneau ◽  
Joël Guérette ◽  
Stéphanie Cormier ◽  
Caroline Blais ◽  
Guillaume Lalonde-Beaudoin ◽  
...  

Abstract: Studies on the low-level visual information underlying pain categorization have led to inconsistent findings. Some show an advantage for low spatial frequencies (SFs) and others a preponderance of mid SFs. This study aims to clarify this gap in knowledge, since these results have different theoretical and practical implications, such as how far away an observer can be and still categorize pain. We address this question using two complementary methods: a data-driven method with no a priori assumptions about the most useful SFs for pain recognition, and a more ecological method that simulates the viewing distance of the stimuli. We reveal a broad range of SFs important for pain recognition, extending from low to relatively high SFs, and show that performance is optimal at short to medium distances (1.2 to 4.8 meters) but declines significantly when mid SFs are no longer available. This study reconciles previous results that show an advantage of LSFs over HSFs when arbitrary cutoffs are used, but above all reveals the prominent role of mid SFs in pain recognition across two experimental tasks.
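
As a rough illustration of the data-driven logic described above, the sketch below simulates trials in which random subsets of SF bands are retained in a stimulus and then asks which bands co-vary with accuracy. The band layout, sampling density and simulated observer are assumptions chosen for demonstration, not the authors' actual procedure or data.

```python
# Hypothetical sketch of a data-driven SF-sampling analysis: random SF bands are
# present on each trial, and a classification vector relates bands to accuracy.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_bands = 5000, 30            # e.g. 30 log-spaced SF bands (assumption)

# On each trial a random subset of SF bands is left in the stimulus.
sampling = rng.random((n_trials, n_bands)) < 0.3        # True = band present

# Simulated observer: accuracy depends mostly on mid-SF bands (illustration only).
true_weights = np.exp(-0.5 * ((np.arange(n_bands) - 15) / 4.0) ** 2)
p_correct = 0.5 + 0.4 * (sampling.astype(float) @ true_weights) / true_weights.sum()
accuracy = rng.random(n_trials) < p_correct

# Classification vector: bands whose presence is associated with correct responses.
diagnostic_sf = sampling[accuracy].mean(axis=0) - sampling[~accuracy].mean(axis=0)
# diagnostic_sf peaks over the mid-SF bands that drive the simulated observer.
```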


2013 ◽  
Vol 25 (6) ◽  
pp. 862-871 ◽  
Author(s):  
Bradford Z. Mahon ◽  
Nicholas Kumar ◽  
Jorge Almeida

It is widely argued that the ability to recognize and identify manipulable objects depends on the retrieval and simulation of action-based information associated with using those objects. Evidence for that view comes from fMRI studies that have reported differential BOLD contrast in dorsal visual stream regions when participants view manipulable objects compared with a range of baseline categories. An alternative interpretation is that processes internal to the ventral visual pathway are sufficient to support the visual identification of manipulable objects and that the retrieval of object-associated use information is contingent on analysis of the visual input by the ventral stream. Here, we sought to distinguish these two perspectives by exploiting the fact that the dorsal stream is largely driven by magnocellular input, which is biased toward low spatial frequency visual information. Thus, any tool-selective responses in parietal cortex that are driven by high spatial frequencies would be indicative of inputs from the ventral visual pathway. Participants viewed images of tools and animals containing only low, or only high, spatial frequencies during fMRI. We find an internal parcellation of left parietal “tool-preferring” voxels: Inferior aspects of left parietal cortex are driven by high spatial frequency information and have privileged connectivity with ventral stream regions that show similar category preferences, whereas superior regions are driven by low spatial frequency information. Our findings suggest that the automatic activation of complex object-associated manipulation knowledge is contingent on analysis of the visual input by the ventral visual pathway.


2019 ◽  
Vol 5 (1) ◽  
pp. 451-477 ◽  
Author(s):  
Daniel A. Butts

With modern neurophysiological methods able to record neural activity throughout the visual pathway in the context of arbitrarily complex visual stimulation, our understanding of visual system function is becoming limited by the available models of visual neurons that can be directly related to such data. Different forms of statistical models are now being used to probe the cellular and circuit mechanisms shaping neural activity, understand how neural selectivity to complex visual features is computed, and derive the ways in which neurons contribute to systems-level visual processing. However, models that are able to more accurately reproduce observed neural activity often defy simple interpretations. As a result, rather than being used solely to connect with existing theories of visual processing, statistical modeling will increasingly drive the evolution of more sophisticated theories.
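
As one concrete example from the family of statistical models discussed in this review, the sketch below estimates a simple linear-nonlinear (LN) description of a simulated neuron using a spike-triggered average. The white-noise stimulus, rectifying nonlinearity and Poisson spiking are simplifying assumptions chosen for brevity, not a specific model from the review.

```python
# Minimal sketch of an LN model of a visual neuron, estimated with the
# spike-triggered average (STA) under a white-noise stimulus (assumptions).
import numpy as np

rng = np.random.default_rng(1)

n_frames, n_pixels = 20000, 64                  # 1-D "stimulus" for simplicity
stimulus = rng.standard_normal((n_frames, n_pixels))

# Ground-truth linear filter (Gabor-like) followed by a rectifying nonlinearity.
x = np.linspace(-2, 2, n_pixels)
true_filter = np.exp(-x**2) * np.cos(6 * x)
drive = stimulus @ true_filter
rate = np.maximum(drive, 0)                     # rectified-linear nonlinearity
spikes = rng.poisson(0.5 * rate)                # Poisson spike counts per frame

# The STA recovers the linear filter up to a scale factor for white-noise input.
sta = (spikes @ stimulus) / spikes.sum()
```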


2014 ◽  
Vol 26 (11) ◽  
pp. 2564-2577 ◽  
Author(s):  
Roberto Cecere ◽  
Caterina Bertini ◽  
Martin E. Maier ◽  
Elisabetta Làdavas

Visual threat-related signals are processed not only via a cortical geniculo-striate pathway to the amygdala but also via a subcortical colliculo-pulvinar-amygdala pathway, which presumably mediates implicit processing of fearful stimuli. Indeed, hemianopic patients with unilateral damage to the geniculo-striate pathway have been shown to respond faster to seen happy faces in their intact visual field when unseen fearful faces were concurrently presented in their blind field [Bertini, C., Cecere, R., & Làdavas, E. I am blind, but I “see” fear. Cortex, 49, 985–993, 2013]. This behavioral facilitation in the presence of unseen fear might reflect enhanced processing of consciously perceived faces because of early activation of the subcortical pathway for implicit fear perception, which possibly leads to a modulation of cortical activity. To test this hypothesis, we examined ERPs elicited by fearful and happy faces presented to the intact visual field of right and left hemianopic patients, while fearful, happy, or neutral faces were concurrently presented in their blind field. Results showed that the amplitude of the N170 elicited by seen happy faces was selectively increased when an unseen fearful face was concurrently presented in the blind field of right hemianopic patients. These results suggest that when the geniculo-striate visual pathway is lesioned, the rapid and implicit processing of threat signals can enhance facial encoding. Notably, the N170 modulation was observed only in left-lesioned patients, favoring the hypothesis that implicit subcortical processing of fearful signals can influence face encoding only when the right hemisphere is intact.


2017 ◽  
Author(s):  
Laura Cabral ◽  
Bobby Stojanoski ◽  
Rhodri Cusack

Humans have structures dedicated to the processing of faces, which include cortical components (e.g. areas in the occipital and temporal lobes) and subcortical components (e.g. the superior colliculus and amygdala). Although faces are processed more quickly than stimuli from other categories, there is a lack of consensus regarding whether cortical or subcortical structures are responsible for rapid face processing. To probe this, we exploited the asymmetry in the strength of projections to subcortical structures between the nasal and temporal hemiretinae. Participants discriminated faces from unrecognizable control stimuli and performed the same task for houses. In Experiments 1 and 3, at the fastest reaction times, participants detected faces more accurately than houses. However, there was no benefit of presenting stimuli preferentially to the subcortical pathway. In Experiment 2, we probed the coarseness of the rapid pathway by making the foil stimuli more similar to faces and houses. This eliminated the rapid detection advantage, suggesting that rapid face processing is limited to coarse representations. In Experiment 4, we sought to determine whether the natural difference between the spatial frequencies of faces and houses was driving the effects seen in Experiments 1 and 3. We spatially filtered the faces and houses so that they were matched. Better rapid detection was again found for faces relative to houses, but there was no benefit of presenting stimuli preferentially to the subcortical pathway. Taken together, the results of our experiments suggest a coarse, cortical rapid detection mechanism that was not dependent on spatial frequency.


2021 ◽  
Author(s):  
Simon Faghel-Soubeyrand ◽  
Juliane A. Kloess ◽  
Frédéric Gosselin ◽  
Ian Charest ◽  
Jessica Woodhams

Knowing how humans differentiate children from adults has useful implications in many areas of both forensic and cognitive psychology. Yet how we extract age from faces has been surprisingly underexplored in both disciplines. Here, we used a novel data-driven experimental technique to objectively measure the facial features human observers use to categorise child and adult faces. Relying on more than 35,000 trials, we used a reverse correlation technique that enabled us to reveal how specific features known to be important in face perception (position, spatial frequency or granularity, and orientation) are associated with accurate child and adult discrimination. This showed that human observers relied on evidence in the nasal bone and eyebrow area for accurate adult categorisation, while they relied on the eye and jawline area to accurately categorise child faces. For orientation structure, only facial information at the vertical orientation was linked to adult-face categorisation, while horizontal and, to a lesser extent, oblique orientations were more diagnostic of a child face. Finally, we found that spatial frequency (SF) diagnosticity showed a U-shaped pattern for face-age categorisation, with facial information in low and high spatial frequencies being diagnostic of child faces, and mid spatial frequencies being diagnostic of adult faces. Through this first characterisation of the facial features used in face-age categorisation, we show that face information known to be important in psychophysical studies of face perception in general (i.e. the eye area, the horizontals, and mid-level SFs) is also crucial in the practical context of face-age categorisation, and we present data-driven procedures through which face-age classification training could be implemented for real-world challenges.
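
To make the reverse-correlation logic concrete, the sketch below computes a classification image from simulated noise fields: the average noise on correct trials minus the average on incorrect trials highlights image regions that drive the (simulated) observer's decisions. The image size, trial count and simulated observer are assumptions for illustration, not the study's stimuli or parameters.

```python
# Illustrative sketch of a reverse-correlation (classification-image) analysis.
import numpy as np

rng = np.random.default_rng(2)

n_trials, size = 5000, 32                       # reduced for illustration
noise = rng.standard_normal((n_trials, size, size))

# Simulated observer whose accuracy depends on noise in an arbitrary "eye region"
# patch (chosen purely for demonstration).
eye_region = np.zeros((size, size))
eye_region[10:14, 8:24] = 1.0
signal = (noise * eye_region).sum(axis=(1, 2))
correct = rng.random(n_trials) < 1 / (1 + np.exp(-0.2 * signal))

# Classification image: mean noise on correct trials minus incorrect trials.
# Positive values cluster over the region the simulated observer relied on.
classification_image = noise[correct].mean(axis=0) - noise[~correct].mean(axis=0)
```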


2011 ◽  
Vol 1371 ◽  
pp. 87-99 ◽  
Author(s):  
Carmen Morawetz ◽  
Juergen Baudewig ◽  
Stefan Treue ◽  
Peter Dechent

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Isabelle Charbonneau ◽  
Joël Guérette ◽  
Stéphanie Cormier ◽  
Caroline Blais ◽  
Guillaume Lalonde-Beaudoin ◽  
...  

Abstract: Studies on the low-level visual information underlying pain categorization have led to inconsistent findings. Some show an advantage for low spatial frequencies (SFs) and others a preponderance of mid SFs. This study aims to clarify this gap in knowledge, since these results have different theoretical and practical implications, such as how far away an observer can be and still categorize pain. We address this question using two complementary methods: a data-driven method with no a priori expectations about the most useful SFs for pain recognition, and a more ecological method that simulates the distance of stimulus presentation. We reveal a broad range of SFs important for pain recognition, extending from low to relatively high SFs, and show that performance is optimal at a short to medium distance (1.2–4.8 m) but declines significantly when mid SFs are no longer available. This study reconciles previous results that show an advantage of LSFs over HSFs when arbitrary cutoffs are used, but above all reveals the prominent role of mid SFs in pain recognition across two complementary experimental tasks.
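
The distance-simulation logic can be illustrated with a low-pass filter whose cutoff scales with the visual angle subtended by the face at a given distance: the farther the observer, the fewer cycles per face remain visible. The acuity limit (~30 cycles per degree) and face width (~14 cm) below are textbook approximations used only for this sketch, not the study's values.

```python
# Rough sketch: simulating viewing distance by low-pass filtering an image so
# that only the SFs visible at that distance remain (parameters are assumptions).
import numpy as np
from scipy import ndimage

def simulate_distance(image, distance_m, face_width_m=0.14, acuity_cpd=30.0):
    """Low-pass a face image to keep only the SFs visible at distance_m."""
    # Visual angle subtended by the face, in degrees.
    angle_deg = np.degrees(2 * np.arctan(face_width_m / (2 * distance_m)))
    # Highest visible SF expressed in cycles per image.
    cutoff_cycles = acuity_cpd * angle_deg
    sigma = image.shape[0] / (2 * np.pi * max(cutoff_cycles, 1.0))
    return ndimage.gaussian_filter(image.astype(float), sigma=sigma)

face = np.random.rand(256, 256)                 # stand-in for a face photograph
near = simulate_distance(face, 1.2)             # almost unfiltered
far = simulate_distance(face, 9.6)              # mid and high SFs attenuated
```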


2021 ◽  
Vol 11 (4) ◽  
pp. 1829
Author(s):  
Davide Grande ◽  
Catherine A. Harris ◽  
Giles Thomas ◽  
Enrico Anderlini

Recurrent Neural Networks (RNNs) are increasingly being used for model identification, forecasting and control. When identifying physical systems whose mathematical model is unknown, Nonlinear AutoRegressive models with eXogenous inputs (NARX) or Nonlinear AutoRegressive Moving-Average models with eXogenous inputs (NARMAX) are typically used. In the context of data-driven control, machine learning algorithms have been shown to perform comparably to advanced control techniques, but they lack the guarantees of traditional stability theory. This paper illustrates a method to prove a posteriori the stability of a generic neural network, showing its application to a state-of-the-art RNN architecture. The presented method relies on identifying the poles associated with the network, which is designed starting from input/output data. Providing a framework that guarantees the stability of any neural network architecture, combined with generalisability and applicability to different fields, can significantly broaden the use of such networks in dynamic systems modelling and control.
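
As a minimal illustration of the pole-based idea, and not the paper's exact procedure, the sketch below linearises a simple tanh RNN around an equilibrium and checks that all eigenvalues of the Jacobian, i.e. the discrete-time poles, lie strictly inside the unit circle. The single-layer autonomous RNN and the zero equilibrium are simplifying assumptions.

```python
# Hedged sketch of an a-posteriori stability check for a discrete-time RNN:
# compute the poles (eigenvalues) of the linearised recurrent dynamics and
# verify they lie inside the unit circle.
import numpy as np

rng = np.random.default_rng(3)

n_states = 16
W_raw = rng.standard_normal((n_states, n_states))
W_rec = 0.9 * W_raw / np.abs(np.linalg.eigvals(W_raw)).max()   # spectral radius 0.9

def rnn_step(h, W):
    """One step of a simple autonomous tanh RNN: h_{k+1} = tanh(W h_k)."""
    return np.tanh(W @ h)

# Jacobian at the equilibrium h* = 0: since tanh(0) = 0 and tanh'(0) = 1,
# d tanh(W h)/dh evaluated at h* = 0 is simply W.
poles = np.linalg.eigvals(W_rec)
stable = np.all(np.abs(poles) < 1.0)
print(f"spectral radius = {np.abs(poles).max():.3f}, locally stable: {stable}")

# Optional empirical check: when the poles are inside the unit circle,
# trajectories from random initial states decay towards the equilibrium.
h = rng.standard_normal(n_states)
for _ in range(200):
    h = rnn_step(h, W_rec)
print(f"state norm after 200 steps: {np.linalg.norm(h):.2e}")
```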

