A Robust Neural Index of High Face Familiarity

2018 ◽  
Vol 30 (2) ◽  
pp. 261-272 ◽  
Author(s):  
Holger Wiese ◽  
Simone C. Tüttenberg ◽  
Brandon T. Ingram ◽  
Chelsea Y. X. Chan ◽  
Zehra Gurbuz ◽  
...  

Humans are remarkably accurate at recognizing familiar faces, whereas their ability to recognize, or even match, unfamiliar faces is much poorer. However, previous research has failed to identify neural correlates of this striking behavioral difference. Here, we found a clear difference in brain potentials elicited by highly familiar faces versus unfamiliar faces. This effect starts 200 ms after stimulus onset and reaches its maximum at 400 to 600 ms. This sustained-familiarity effect was substantially larger than previous candidates for a neural familiarity marker and was detected in almost all participants, representing a reliable index of high familiarity. Whereas its scalp distribution was consistent with a generator in the ventral visual pathway, its modulation by repetition and degree of familiarity suggests an integration of affective and visual information.
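A sustained ERP effect like the one described above is typically quantified by averaging trials per condition, subtracting the unfamiliar-face ERP from the familiar-face ERP, and measuring the difference wave in the window of interest. The sketch below illustrates this with simulated data; all names, the sampling rate, and the simulated effect are illustrative, not the authors' pipeline.

```python
import numpy as np

def difference_wave(familiar_trials, unfamiliar_trials):
    """Mean familiar ERP minus mean unfamiliar ERP (trials x samples)."""
    return np.mean(familiar_trials, axis=0) - np.mean(unfamiliar_trials, axis=0)

def mean_amplitude(wave, times, t_start, t_end):
    """Mean of the wave within [t_start, t_end] (times in seconds)."""
    mask = (times >= t_start) & (times <= t_end)
    return float(wave[mask].mean())

fs = 250                                  # illustrative sampling rate, Hz
times = np.arange(-0.1, 0.8, 1 / fs)     # epoch from -100 to 800 ms
rng = np.random.default_rng(0)
# Simulated data: familiar trials carry an extra sustained deflection
# after 200 ms, mimicking the effect's time course described above.
fam = rng.standard_normal((40, times.size)) + (times > 0.2) * 2.0
unf = rng.standard_normal((40, times.size))
effect = mean_amplitude(difference_wave(fam, unf), times, 0.4, 0.6)
```

With this simulated effect, `effect` recovers a value close to the injected 2.0 amplitude difference in the 400-600 ms window.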

2019 ◽  
Vol 31 (6) ◽  
pp. 821-836 ◽  
Author(s):  
Elliot Collins ◽  
Erez Freud ◽  
Jana M. Kainerstorfer ◽  
Jiaming Cao ◽  
Marlene Behrmann

Although shape perception is primarily considered a function of the ventral visual pathway, previous research has shown that both dorsal and ventral pathways represent shape information. Here, we examine whether the shape-selective electrophysiological signals observed in dorsal cortex are a product of the connectivity to ventral cortex or are independently computed. We conducted multiple EEG studies in which we manipulated the input parameters of the stimuli so as to bias processing to either the dorsal or ventral visual pathway. Participants viewed displays of common objects with shape information parametrically degraded across five levels. We measured shape sensitivity by regressing the amplitude of the evoked signal against the degree of stimulus scrambling. Experiment 1, which included grayscale versions of the stimuli, served as a benchmark establishing the temporal pattern of shape processing during typical object perception. These stimuli evoked broad and sustained patterns of shape sensitivity beginning as early as 50 msec after stimulus onset. In Experiments 2 and 3, we calibrated the stimuli such that visual information was delivered primarily through parvocellular inputs, which mainly project to the ventral pathway, or through koniocellular inputs, which mainly project to the dorsal pathway. In both experiments, shape sensitivity was observed, but in spatio-temporal configurations distinct from each other and from that elicited by grayscale inputs. Of particular interest, in the koniocellular condition, shape selectivity emerged earlier than in the parvocellular condition. These findings support the conclusion of distinct dorsal pathway computations of object shape, independent of the ventral pathway.
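The shape-sensitivity measure described above — regressing evoked amplitude against the degree of stimulus scrambling — can be sketched as a simple linear fit. The function name and example data below are illustrative, not taken from the study.

```python
import numpy as np

def shape_sensitivity(amplitudes, scrambling_levels):
    """Slope of evoked amplitude regressed on scrambling level.

    A slope reliably different from zero indicates that the evoked
    signal tracks shape information across the degradation levels.
    """
    x = np.asarray(scrambling_levels, dtype=float)
    y = np.asarray(amplitudes, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)  # least-squares linear fit
    return slope

# Example: amplitude falls as scrambling increases, giving a negative slope
levels = [0, 1, 2, 3, 4]                  # five degradation levels
amps = [5.0, 4.1, 3.2, 2.1, 1.0]          # hypothetical evoked amplitudes
print(shape_sensitivity(amps, levels))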


1999 ◽  
Vol 11 (3) ◽  
pp. 300-311 ◽  
Author(s):  
Edmund T. Rolls ◽  
Martin J. Tovée ◽  
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. To investigate the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have previously shown that the response of the neurons is interrupted by the mask. Under conditions when humans can just identify the stimulus, with stimulus onset asynchronies (SOAs) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
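The information values quoted above are Shannon mutual information between stimulus identity and the neuronal response, in bits. The toy computation below illustrates the definition from a joint probability table; it is not the authors' estimator, which worked from recorded spike trains and corrected for sampling limitations.

```python
import numpy as np

def mutual_information_bits(joint):
    """I(S;R) = sum over (s,r) of p(s,r) * log2( p(s,r) / (p(s)p(r)) )."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()              # normalize to a probability table
    ps = joint.sum(axis=1, keepdims=True)    # stimulus marginal p(s)
    pr = joint.sum(axis=0, keepdims=True)    # response marginal p(r)
    nz = joint > 0                           # skip zero cells (0*log 0 = 0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

# Two stimuli with perfectly discriminable responses carry 1 bit
print(mutual_information_bits([[0.5, 0.0], [0.0, 0.5]]))  # → 1.0
# Responses independent of the stimulus carry 0 bits
print(mutual_information_bits([[0.25, 0.25], [0.25, 0.25]]))  # → 0.0
```

On this scale, the reported drop from 0.4-0.5 bits (500-msec stimulus) to 0.1 bits (20-msec SOA) quantifies how much stimulus-discriminating signal the mask removes.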


2015 ◽  
Vol 114 (5) ◽  
pp. 2672-2681 ◽  
Author(s):  
Emanuel N. van den Broeke ◽  
André Mouraux ◽  
Antonia H. Groneberg ◽  
Doreen B. Pfau ◽  
Rolf-Detlef Treede ◽  
...  

Secondary hyperalgesia is believed to be a key feature of “central sensitization” and is characterized by enhanced pain to mechanical nociceptive stimuli. The aim of the present study was to characterize, using EEG, the effects of pinprick stimulation intensity on the magnitude of pinprick-elicited brain potentials [event-related potentials (ERPs)] before and after secondary hyperalgesia induced by intradermal capsaicin in humans. Pinprick-elicited ERPs and pinprick-evoked pain ratings were recorded in 19 healthy volunteers, with mechanical pinprick stimuli of varying intensities (0.25-mm probe applied with a force ranging between 16 and 512 mN). The recordings were performed before (T0) and 30 min after (T1) intradermal capsaicin injection. The contralateral noninjected arm served as control. ERPs elicited by stimulation of untreated skin were characterized by 1) an early-latency negative-positive complex peaking between 120 and 250 ms after stimulus onset (N120-P240) and maximal at the vertex and 2) a long-lasting positive wave peaking 400–600 ms after stimulus onset and maximal at more posterior electrodes (P500), which correlated with perceived pinprick pain. After capsaicin injection, pinprick stimuli were perceived as more intense in the area of secondary hyperalgesia and this effect was stronger for lower compared with higher stimulus intensities. In addition, there was an enhancement of the P500 elicited by stimuli of intermediate intensity, which was significant for 64 mN. The other components of the ERPs were unaffected by capsaicin. Our results suggest that the increase in P500 magnitude after capsaicin is mediated by facilitated mechanical nociceptive pathways.


2020 ◽  
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later dynamic of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by presenting a new insight into the algorithmic function and the information carried by the feedback processes in the ventral visual pathway.
Author summary: It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g., edges and corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical contents, e.g., faces and objects). Conversely, the feedback connections modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas.
In recent years, deep neural network models which are trained on object recognition tasks achieved human-level performance and showed similar activation patterns to the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with the human brain temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns, we found that the encoder processes resemble the brain feedforward processing dynamics and the decoder shares similarity with the brain feedback processing dynamics. These results provide an algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
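The model-to-brain comparison described above rests on representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) over stimulus conditions for each system, then correlate the RDMs. The sketch below shows the core computation with illustrative names and random data; the study's actual analysis used its own stimulus set and noise-ceiling procedures.

```python
import numpy as np

def rdm(patterns):
    """Condition-by-condition dissimilarity (1 - Pearson r), upper triangle.

    `patterns` is a conditions x features activity matrix.
    """
    c = np.corrcoef(np.asarray(patterns, dtype=float))
    iu = np.triu_indices_from(c, k=1)        # off-diagonal condition pairs
    return 1.0 - c[iu]

def rsa_similarity(model_patterns, brain_patterns):
    """Correlation between the two systems' RDM entries."""
    return float(np.corrcoef(rdm(model_patterns), rdm(brain_patterns))[0, 1])

rng = np.random.default_rng(0)
patterns = rng.standard_normal((10, 50))     # 10 conditions x 50 features
print(rsa_similarity(patterns, patterns))    # identical systems: maximal similarity
```

Computing `rsa_similarity` between a model layer and each MEG time point (or fMRI region) yields the kind of spatiotemporal similarity profile the study reports.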


2021 ◽  
pp. 1-12
Author(s):  
Arnau Puig-Davi ◽  
Saul Martinez-Horta ◽  
Frederic Sampedro ◽  
Andrea Horta-Barba ◽  
Jesus Perez-Perez ◽  
...  

Background: Empathy is a multidimensional construct and a key component of social cognition. In Huntington’s disease (HD), little is known regarding the phenomenology and the neural correlates of cognitive and affective empathy, and regarding how empathic deficits interact with other behavioral and cognitive manifestations. Objective: To explore the cognitive and affective empathy disturbances and related behavioral and neural correlates in HD. Methods: Clinical and sociodemographic data were obtained from 36 healthy controls (HC) and 54 gene-mutation carriers (17 premanifest and 37 early-manifest HD). The Test of Cognitive and Affective Empathy (TECA) was used to characterize cognitive (CE) and affective empathy (AE), and to explore their associations with grey matter volume (GMV) and cortical thickness (Cth). Results: Compared to HC, premanifest participants performed significantly worse in perspective taking (CE) and empathic distress (AE). In symptomatic participants, scores were significantly lower in almost all the TECA subscales. Several empathy subscales were associated with the severity of apathy, irritability, and cognitive deficits. CE was associated with GMV in thalamic, temporal, and occipital regions, and with Cth in parietal and temporal areas. AE was associated with GMV in the basal ganglia, limbic, occipital, and medial orbitofrontal regions, and with Cth in parieto-occipital areas. Conclusion: Cognitive and affective empathy deficits are detectable early, are more severe in symptomatic participants, and involve the disruption of several fronto-temporal, parieto-occipital, basal ganglia, and limbic regions. These deficits are associated with disease severity and contribute to several behavioral symptoms, facilitating the presentation of maladaptive patterns of social interaction.


2018 ◽  
Vol 30 (11) ◽  
pp. 1590-1605 ◽  
Author(s):  
Alex Clarke ◽  
Barry J. Devereux ◽  
Lorraine K. Tyler

Object recognition requires dynamic transformations of low-level visual inputs to complex semantic representations. Although this process depends on the ventral visual pathway, we lack an incremental account from low-level inputs to semantic representations and the mechanistic details of these dynamics. Here we combine computational models of vision with semantics and test the output of the incremental model against patterns of neural oscillations recorded with magnetoencephalography in humans. Representational similarity analysis showed visual information was represented in low-frequency activity throughout the ventral visual pathway, and semantic information was represented in theta activity. Furthermore, directed connectivity showed visual information travels through feedforward connections, whereas visual information is transformed into semantic representations through feedforward and feedback activity, centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics resulting in object-specific semantics.


2018 ◽  
Vol 71 (6) ◽  
pp. 1396-1404 ◽  
Author(s):  
Catherine Bortolon ◽  
Siméon Lorieux ◽  
Stéphane Raffard

Self-face recognition has been widely explored in the past few years. Nevertheless, the current literature relies on the use of standardized photographs which do not represent daily-life face recognition. Therefore, we aim for the first time to evaluate self-face processing in healthy individuals using natural/ambient images which contain variations in the environment and in the face itself. In total, 40 undergraduate and graduate students performed a forced delayed-matching task, including images of one’s own face and of a friend’s, a famous person’s, and an unknown individual’s face. For both reaction time and accuracy, results showed that participants were faster and more accurate when matching different images of their own face compared to both famous and unfamiliar faces. Nevertheless, no significant differences were found between self-face and friend-face or between friend-face and famous-face. They were also faster and more accurate when matching friend and famous faces compared to unfamiliar faces. Our results suggest that faster and more accurate responses to self-face might be better explained by a familiarity effect – that is, (1) the result of frequent exposure to one’s own image through mirrors and photos, (2) a more robust mental representation of one’s own face, and (3) strong face recognition units, as for other familiar faces.


Author(s):  
Shun Otsubo ◽  
Yasutake Takahashi ◽  
Masaki Haruna ◽  

This paper proposes an automatic driving system based on a combination of modular neural networks processing human driving data. Research on automatic driving vehicles has been actively conducted in recent years. Machine learning techniques are often utilized to realize an automatic driving system capable of imitating human driving operations. Almost all of them adopt a large monolithic learning module, as typified by deep learning. However, it is inefficient to use a monolithic deep learning module to learn human driving operations (accelerating, braking, and steering) using the visual information obtained from a human driving a vehicle. We propose combining a series of modular neural networks that independently learn visual feature quantities, routes, and driving maneuvers from human driving data, thereby imitating human driving operations and efficiently learning multiple routes. This paper demonstrates the effectiveness of the proposed method through experiments using a small vehicle.
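The modular idea above — separate networks for visual features, routes, and maneuvers, chained together — can be sketched schematically. The sketch below is illustrative only: each `Module` is a tiny random linear map standing in for an independently trained sub-network, and the layer sizes are invented, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

class Module:
    """Stand-in for one independently trained sub-network."""
    def __init__(self, n_in, n_out):
        # Random weights here; in the proposed system each module is
        # trained separately on human driving data.
        self.W = rng.standard_normal((n_out, n_in)) * 0.1
    def __call__(self, x):
        return np.tanh(self.W @ x)

feature_net = Module(64, 16)   # camera input -> visual feature quantities
route_net = Module(16, 8)      # features -> route representation
maneuver_net = Module(8, 3)    # route -> [accelerate, brake, steer]

image = rng.standard_normal(64)            # illustrative camera input
controls = maneuver_net(route_net(feature_net(image)))
print(controls.shape)  # (3,)
```

Because each stage is a separate module, a new route can in principle be added by training only the route module, rather than retraining one monolithic network end to end.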


2017 ◽  
Vol 29 (9) ◽  
pp. 1621-1631 ◽  
Author(s):  
Mika Koivisto ◽  
Simone Grassini ◽  
Niina Salminen-Vaparanta ◽  
Antti Revonsuo

Detecting the presence of an object is a different process from identifying the object as a particular object. This difference has not been taken into account in designing experiments on the neural correlates of consciousness. We compared the electrophysiological correlates of conscious detection and identification directly by measuring ERPs while participants performed either a task only requiring the conscious detection of the stimulus or a higher-level task requiring its conscious identification. Behavioral results showed that, even if the stimulus was consciously detected, it was not necessarily identified. A posterior electrophysiological signature 200–300 msec after stimulus onset was sensitive to conscious detection but not to conscious identification, which correlated with a later widespread activity. Thus, we found behavioral and neural evidence for elementary visual experiences, which are not yet enriched with higher-level knowledge. The search for the mechanisms of consciousness should focus on the early elementary phenomenal experiences to avoid the confounding effects of higher-level processes.

