Identification of White Matter Networks Engaged in Object (Face) Recognition Showing Differential Responses to Modulated Stimulus Strength

2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Muwei Li ◽  
Zhaohua Ding ◽  
John C Gore

Abstract Blood-oxygenation-level-dependent (BOLD) signals in magnetic resonance imaging indirectly reflect neural activity in cortex, but they are also detectable in white matter (WM). BOLD signals in WM exhibit strong correlations with those in gray matter (GM) in a resting state, but their interpretation and relationship to GM activity in a task are unclear. We performed a parametric visual object recognition task designed to modulate the BOLD signal response in GM regions engaged in higher-order visual processing, and measured corresponding changes in specific WM tracts. Human faces embedded in different levels of random noise have previously been shown to produce graded changes in BOLD activation in, for example, the fusiform gyrus, as well as in electrophysiological (N170) evoked potentials. The magnitudes of BOLD responses in both GM regions and selected WM tracts varied monotonically with the stimulus strength (noise level). In addition, the magnitudes and temporal profiles of signals in GM and WM regions involved in the task coupled strongly across different task parameters. These findings reveal the network of WM tracts engaged in object (face) recognition and confirm that WM BOLD signals may be directly affected by neural activity in GM regions to which they connect.

2016 ◽  
Author(s):  
Anya Chakraborty ◽  
Bhismadev Chakrabarti

Abstract We live in an age of ‘selfies’. Yet, how we look at our own faces has seldom been systematically investigated. In this study we test whether visual processing of self-faces differs from that of other faces, using psychophysics and eye-tracking. Specifically, we tested the association between the psychophysical properties of self-face representation and the visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task on a series of self-other face morphs with simultaneous eye-tracking. Participants looked at the lower part of the face for a longer duration for self-faces compared with other faces. Participants with a reduced overlap between self- and other-face representations, as indexed by a steeper slope of the psychometric response curve for self-face recognition, spent a greater proportion of time looking at the upper regions of faces identified as self. Additionally, we tested the association of autism-related traits with self-face processing metrics, since autism has previously been associated with atypical self-processing, particularly in the psychological domain. Autistic traits were associated with reduced looking time to both self and other faces. However, no self-face-specific association was noted with autistic traits, suggesting that autism-related features may be related to self-processing in a domain-specific manner.
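The slope of the psychometric response curve mentioned in the abstract can be estimated by fitting a logistic function to the proportion of 'self' responses across morph levels. The sketch below is not taken from the study; it uses synthetic data and hypothetical parameter values purely to illustrate the fit with `scipy.optimize.curve_fit`.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x0, k):
    """Logistic psychometric function: P('self') as a function of
    morph level x (0 = other, 1 = self). k is the slope; a steeper
    slope indexes less overlap between self and other representations."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical morph levels and a synthetic, noiseless observer
morph = np.linspace(0.0, 1.0, 11)
p_self = psychometric(morph, 0.5, 12.0)

# Recover threshold (x0) and slope (k) from the responses
(x0_hat, k_hat), _ = curve_fit(psychometric, morph, p_self, p0=[0.5, 5.0])
print(f"threshold={x0_hat:.2f}, slope={k_hat:.1f}")
```

With real data, `p_self` would be the observed proportion of 'self' responses per morph level, and the fitted `k_hat` would be compared across participants.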


Author(s):  
Tejas Rana

Various methods can be used for face recognition and detection; here, two main experiments are considered: the first evaluates the impact of facial landmark localization on face recognition performance, and the second evaluates the impact of extracting HOG features from a regular grid and at multiple scales. We study the question of feature sets for robust visual object recognition. Histograms of Oriented Gradients (HOG) outperform existing edge- and gradient-based descriptors. We examine the influence of each stage of the computation on performance, concluding that fine-scale gradients, relatively coarse spatial binning, fine orientation binning, and high-quality local contrast normalization in overlapping descriptor patches are all important for good results. Comparative experiments show that, though HOG is a simple feature descriptor, the proposed HOG feature achieves good results with much lower computational time.
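As a concrete illustration of the descriptor parameters the abstract highlights (fine orientation binning, relatively coarse spatial binning, and contrast normalization over overlapping blocks), the following sketch computes a HOG descriptor with scikit-image. The window size and image content are hypothetical, not taken from the experiments described above.

```python
import numpy as np
from skimage.feature import hog

# Synthetic 128x64 grayscale window (a stand-in for a face crop)
rng = np.random.default_rng(0)
image = rng.random((128, 64))

# Fine orientation binning (9 bins), coarse spatial binning (8x8-px
# cells), and local contrast normalization over overlapping 2x2-cell
# blocks -- the ingredients the abstract identifies as important.
features = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
)
print(features.shape)  # one flat descriptor per window
```

For a 128x64 window with these parameters the descriptor has 15 x 7 overlapping blocks of 2 x 2 cells x 9 orientations, i.e. 3780 values, which would then feed a classifier.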


2020 ◽  
Author(s):  
Song Zhao ◽  
Chengzhi Feng ◽  
Xinyin Huang ◽  
Yijun Wang ◽  
Wenfeng Feng

Abstract The present study recorded event-related potentials (ERPs) in a visual object-recognition task under the attentional blink paradigm to explore the temporal dynamics of the cross-modal boost on attentional blink and whether this auditory benefit would be modulated by semantic congruency between T2 and the simultaneous sound. Behaviorally, the present study showed that not only a semantically congruent but also a semantically incongruent sound improved T2 discrimination during the attentional blink interval, whereas the enhancement was larger for the congruent sound. The ERP results revealed that the behavioral improvements induced by both the semantically congruent and incongruent sounds were closely associated with an early cross-modal interaction on the occipital N195 (192–228 ms). In contrast, the lower T2 accuracy for the incongruent than congruent condition was accompanied by a larger late-occurring centro-parietal N440 (424–448 ms). These findings suggest that the cross-modal boost on attentional blink is hierarchical: the task-irrelevant but simultaneous sound, irrespective of its semantic relevance, first enables T2 to escape the attentional blink via cross-modally strengthening the early stage of visual object-recognition processing, whereas the semantic conflict of the sound begins to interfere with visual awareness only at a later stage, when the representation of the visual object is extracted.


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Ella Podvalny ◽  
Matthew W. Flounders ◽  
Leana E. King ◽  
Tom Holroyd ◽  
Biyu J. He

1993 ◽  
Vol 5 (3) ◽  
pp. 419-429 ◽  
Author(s):  
Gale L. Martin

Visual object recognition is often conceived of as a final step in a visual processing system: first, physical information in the raw image is used to isolate and enhance to-be-recognized clumps, and then each of the resulting preprocessed representations is fed into the recognizer. This general conception fails when there are no reliable physical cues for isolating the objects, such as when objects overlap. This paper describes an approach, called centered object integrated segmentation and recognition (COISR), for integrating object segmentation and recognition within a single neural network. The application is handprinted character recognition. The approach uses a backpropagation network that scans a field of characters and is trained to recognize whether it is centered over a single character or between characters. When it is centered over a character, the net classifies the character. The approach is tested on a dataset of handprinted digits and high accuracy rates are reported.
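The COISR scanning scheme described above can be sketched as a sliding window with an extra "between characters" output class. The sketch below is not the paper's network: a randomly initialized linear softmax layer stands in for the trained backpropagation net, and all sizes and the stride are hypothetical.

```python
import numpy as np

N_CLASSES = 10 + 1  # ten digits plus a 'between characters' class
WINDOW = 16         # window width in pixels (hypothetical)

def classify_window(window, W, b):
    """Placeholder for the trained net: a linear softmax layer."""
    logits = window.ravel() @ W + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

def scan_field(field, W, b, stride=1):
    """Slide the window across a field of characters; emit a digit
    only where the net judges itself centered over one."""
    out = []
    for x in range(0, field.shape[1] - WINDOW + 1, stride):
        probs = classify_window(field[:, x:x + WINDOW], W, b)
        label = int(np.argmax(probs))
        if label < 10:  # class 10 means 'between characters': skip
            out.append((x, label))
    return out

rng = np.random.default_rng(0)
field = rng.random((16, 64))  # synthetic 16x64 field of 'ink'
W = rng.standard_normal((16 * WINDOW, N_CLASSES)) * 0.01
b = np.zeros(N_CLASSES)
hits = scan_field(field, W, b, stride=4)
print(hits)
```

The key design point is that segmentation is never performed explicitly: the "between characters" class lets the recognizer itself decide where characters are.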


Perception ◽  
2020 ◽  
Vol 49 (4) ◽  
pp. 373-404 ◽  
Author(s):  
Marlene Behrmann ◽  
David C. Plaut

Despite the similarity in structure, the hemispheres of the human brain have somewhat different functions. A traditional view of hemispheric organization asserts that there are independent and largely lateralized domain-specific regions in ventral occipitotemporal (VOTC), specialized for the recognition of distinct classes of objects. Here, we offer an alternative account of the organization of the hemispheres, with a specific focus on face and word recognition. This alternative account relies on three computational principles: distributed representations and knowledge, cooperation and competition between representations, and topography and proximity. The crux is that visual recognition results from a network of regions with graded functional specialization that is distributed across both hemispheres. Specifically, the claim is that face recognition, which is acquired relatively early in life, is processed by VOTC regions in both hemispheres. Once literacy is acquired, word recognition, which is co-lateralized with language areas, primarily engages the left VOTC and, consequently, face recognition is primarily, albeit not exclusively, mediated by the right VOTC. We review psychological and neural evidence from a range of studies conducted with normal and brain-damaged adults and children and consider findings which challenge this account. Last, we offer suggestions for future investigations whose findings may further refine this account.


2020 ◽  
Author(s):  
Franziska Geiger ◽  
Martin Schrimpf ◽  
Tiago Marques ◽  
James J. DiCarlo

Abstract After training on large datasets, certain deep neural networks are surprisingly good models of the neural mechanisms of adult primate visual object recognition. Nevertheless, these models are poor models of the development of the visual system because they posit millions of sequential, precisely coordinated synaptic updates, each based on a labeled image. While ongoing research is pursuing the use of unsupervised proxies for labels, we here explore a complementary strategy of reducing the required number of supervised synaptic updates to produce an adult-like ventral visual stream (as judged by the match to V1, V2, V4, IT, and behavior). Such models might require less precise machinery and energy expenditure to coordinate these updates and would thus move us closer to viable neuroscientific hypotheses about how the visual system wires itself up. Relative to the current leading model of the adult ventral stream, we here demonstrate that the total number of supervised weight updates can be substantially reduced using three complementary strategies: First, we find that only 2% of supervised updates (epochs and images) are needed to achieve ~80% of the match to the adult ventral stream. Second, by improving the random distribution of synaptic connectivity, we find that 54% of the brain match can already be achieved “at birth” (i.e. no training at all). Third, we find that, by training only ~5% of model synapses, we can still achieve nearly 80% of the match to the ventral stream. When these three strategies are applied in combination, we find that these new models achieve ~80% of a fully trained model’s match to the brain, while using two orders of magnitude fewer supervised synaptic updates.
These results reflect first steps in modeling not just primate adult visual processing during inference, but also how the ventral visual stream might be “wired up” by evolution (a model’s “birth” state) and by developmental learning (a model’s updates based on visual experience).
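The third strategy, training only ~5% of model synapses, amounts to masking gradient updates so that most weights keep their "birth" (random initialization) values. A minimal NumPy sketch with a hypothetical weight matrix and mask fraction, not the authors' actual selection criterion or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)) * 0.05  # hypothetical weight matrix

# Fix a random binary mask once ("at birth"): only ~5% of synapses
# ever receive supervised updates; the rest stay frozen.
trainable = rng.random(W.shape) < 0.05

def masked_update(W, grad, lr=0.1):
    """SGD step that touches only the trainable subset of weights."""
    return W - lr * grad * trainable

grad = rng.standard_normal(W.shape)
W_new = masked_update(W, grad)
frozen_unchanged = np.allclose(W_new[~trainable], W[~trainable])
print(frozen_unchanged, round(trainable.mean(), 3))
```

In a deep-learning framework the same effect could be had by disabling gradients on the frozen parameters; the mask here makes the "which synapses ever update" bookkeeping explicit.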


Author(s):  
Ion Juvina ◽  
Priya Ganapathy ◽  
Matt Sherwood ◽  
Mohd Saif Usmani ◽  
Gautam Kunapuli ◽  
...  
