BIOLOGICALLY INSPIRED FACE RECOGNITION: TOWARD POSE-INVARIANCE

2012 ◽  
Vol 22 (06) ◽  
pp. 1250029 ◽  
Author(s):  
NOEL TAY NUO WI ◽  
CHU KIONG LOO ◽  
LETCHUMANAN CHOCKALINGAM

A small change in an image can cause a dramatic change in the signals it evokes. The visual system must be able to ignore such changes, yet remain specific enough to perform recognition. This work aims to provide biologically grounded insight into 2D translation and scaling invariance and 3D pose invariance without imposing strain on memory. The model can be divided into lower and higher visual stages. The lower visual stage models the visual pathway from the retina to the striate cortex (V1), whereas the modeling of the higher visual stage is based mainly on current psychophysical evidence.
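As a rough illustration of the kind of lower visual stage described above, the sketch below uses V1-like Gabor simple cells followed by local max pooling to obtain a degree of 2D translation tolerance. This is a generic, hypothetical approximation in the HMAX spirit, not the authors' actual model; all filter parameters and the random test image are placeholders.

```python
# Minimal sketch (assumed parameters): Gabor filtering as V1 simple cells,
# then local max pooling as complex cells, giving translation tolerance.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=11, wavelength=4.0, theta=0.0, sigma=3.0):
    """Odd-phase Gabor filter, a standard model of a V1 simple cell."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / wavelength)

def v1_layer(image, n_orientations=4):
    """Rectified simple-cell responses at several orientations."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return [np.abs(convolve2d(image, gabor_kernel(theta=t), mode="same")) for t in thetas]

def complex_pool(response, pool=8):
    """Local max pooling: the classic route to translation tolerance."""
    h, w = response.shape
    h, w = h - h % pool, w - w % pool
    r = response[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return r.max(axis=(1, 3))

image = np.random.rand(64, 64)            # stand-in for a face image
features = [complex_pool(r) for r in v1_layer(image)]
print([f.shape for f in features])        # pooled, orientation-tuned maps
```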

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yunjun Nam ◽  
Takayuki Sato ◽  
Go Uchida ◽  
Ekaterina Malakhova ◽  
Shimon Ullman ◽  
...  

Humans recognize individual faces regardless of variation in facial view. The view-tuned face neurons in the inferior temporal (IT) cortex are regarded as the neural substrate for view-invariant face recognition. This study approximated the visual features encoded by these neurons as combinations of local orientations and colors derived from natural image fragments. The resulting features reproduced the preference of these neurons for particular facial views. We also found that faces of one identity were separable from faces of other identities in a space where each axis represented one of these features. These results suggested that view-invariant face representation is established by combining view-sensitive visual features. The face representation with these features suggests that, with respect to view-invariant face representation, the seemingly complex and deeply layered ventral visual pathway can be approximated by a shallow network comprising layers of low-level processing for local orientations and colors (V1/V2-level) and layers that detect particular sets of low-level elements derived from natural image fragments (IT-level).
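A toy sketch of the fragment-detection (IT-level) stage of this shallow account is given below: each face view is scored against a fixed set of fragment templates, and those scores form the axes of the feature space in which identities could then be separated. Raw pixel patches stand in for the orientation/color features, and both fragments and images are random placeholders rather than the study's learned fragments.

```python
# Hypothetical sketch: score an image against fixed "fragments" via the best
# normalized match anywhere in the image, yielding a point in fragment space.
import numpy as np

rng = np.random.default_rng(0)

def fragment_responses(image, fragments):
    """Best normalized match of each fragment over a coarse sliding window."""
    H, W = image.shape
    scores = []
    for frag in fragments:
        h, w = frag.shape
        best = -np.inf
        for i in range(0, H - h + 1, 4):
            for j in range(0, W - w + 1, 4):
                patch = image[i:i + h, j:j + w]
                sim = float(np.dot(patch.ravel(), frag.ravel()) /
                            (np.linalg.norm(patch) * np.linalg.norm(frag) + 1e-9))
                best = max(best, sim)
        scores.append(best)
    return np.array(scores)

fragments = [rng.random((8, 8)) for _ in range(10)]   # stand-ins for natural image fragments
face_view = rng.random((32, 32))                      # stand-in for one facial view
print(fragment_responses(face_view, fragments))       # coordinates in "fragment space"
```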


2013 ◽  
Vol 31 (2) ◽  
pp. 189-195 ◽  
Author(s):  
Youping Xiao

The short-wavelength-sensitive (S) cones play an important role in the color vision of primates, and may also contribute to the coding of other visual features, such as luminance and motion. The color signals carried by the S cones and other cone types are largely separated in the subcortical visual pathway. Studies on nonhuman primates and humans have suggested that these signals are combined in the striate cortex (V1) following a substantial amplification of the S-cone signals in the same area. In addition to reviewing these studies, this review describes the circuitry in V1 that may underlie the processing of the S-cone signals and the dynamics of this processing. It also relates the interaction between various cone signals in V1 to the results of psychophysical and physiological studies on color perception, which leads to a discussion of a previous model in which color perception is produced by multistage processing of the cone signals. Finally, I discuss the processing of the S-cone signals in the extrastriate area V2.
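For readers unfamiliar with how cone signals are combined, the snippet below shows a generic cone-opponent recombination of the kind discussed above, with a gain factor loosely standing in for cortical amplification of the S-cone signal. The specific weights and the gain value are illustrative assumptions, not values from the review.

```python
# Illustrative (assumed) cone-opponent combination: luminance, red-green,
# and blue-yellow channels built from L, M, and S cone signals.
import numpy as np

def opponent_channels(L, M, S, s_gain=2.0):
    """Combine cone signals into three opponent channels (weights are placeholders)."""
    luminance = L + M                      # S cones contribute little to luminance
    red_green = L - M
    blue_yellow = s_gain * S - 0.5 * (L + M)   # s_gain mimics S-signal amplification
    return luminance, red_green, blue_yellow

L, M, S = np.array([0.6]), np.array([0.5]), np.array([0.2])
print(opponent_channels(L, M, S))
```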


Perception ◽  
1979 ◽  
Vol 8 (2) ◽  
pp. 143-152 ◽  
Author(s):  
Randolph Blake ◽  
Randall Overton

Two experiments were performed to localize the site of binocular rivalry suppression in relation to the locus of grating adaptation. In one experiment it was found that phenomenal suppression of a high-contrast adaptation grating presented to one eye had no influence on the strength of the threshold-elevation aftereffect measured interocularly. Evidently information about the adaptation grating arrives at the site of the aftereffect (presumably binocular neurons) even during suppression. In a second experiment 60 s of grating adaptation was found to produce a short-term reduction in the predominance of the adapted eye during binocular rivalry. These findings provide converging lines of evidence that suppression occurs at a site in the human visual system after the locus of grating adaptation and, hence, after the striate cortex.


2019 ◽  
Vol 5 (5) ◽  
pp. eaav7903 ◽  
Author(s):  
Khaled Nasr ◽  
Pooja Viswanathan ◽  
Andreas Nieder

Humans and animals have a “number sense,” an innate capability to intuitively assess the number of visual items in a set, its numerosity. This capability implies that mechanisms to extract numerosity indwell the brain’s visual system, which is primarily concerned with visual object recognition. Here, we show that network units tuned to abstract numerosity, and therefore reminiscent of real number neurons, spontaneously emerge in a biologically inspired deep neural network that was merely trained on visual object recognition. These numerosity-tuned units underlay the network’s number discrimination performance that showed all the characteristics of human and animal number discriminations as predicted by the Weber-Fechner law. These findings explain the spontaneous emergence of the number sense based on mechanisms inherent to the visual system.
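The Weber-Fechner signature mentioned above can be made concrete with a small illustration (not the authors' network): numerosity tuning curves that are Gaussian on a logarithmic number axis, so that discrimination depends on the ratio of two numerosities rather than their difference. The tuning width parameter below is an arbitrary assumption.

```python
# Sketch of log-Gaussian numerosity tuning: tuning width in linear units
# grows with the preferred number, as the Weber-Fechner law predicts.
import numpy as np

def log_gaussian_tuning(n, preferred, sigma=0.4):
    """Response of a unit tuned to `preferred` items, Gaussian in log space."""
    return np.exp(-(np.log(n) - np.log(preferred))**2 / (2 * sigma**2))

numerosities = np.arange(1, 31)
for preferred in (2, 4, 8, 16):
    curve = log_gaussian_tuning(numerosities, preferred)
    above = numerosities[curve > 0.5]
    print(f"preferred={preferred:2d}  half-height width ~ {above.max() - above.min()}")
```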


2017 ◽  
Vol 26 (3) ◽  
pp. 218-224 ◽  
Author(s):  
Gillian Rhodes

Face adaptation generates striking face aftereffects, but is this adaptation useful? The answer appears to be yes, with several lines of evidence suggesting that it contributes to our face-recognition ability. Adaptation to face identity is reduced in a variety of clinical populations with impaired face recognition. In addition, individual differences in face adaptation are linked to face-recognition ability in typical adults. People who adapt more readily to new faces are better at recognizing faces. This link between adaptation and recognition holds for both identity and expression recognition. Adaptation updates face norms, which represent the typical or average properties of the faces we experience. By using these norms to code how faces differ from average, the visual system can make explicit the distinctive information that we need to recognize faces. Thus, adaptive norm-based coding may help us to discriminate and recognize faces despite their similarity as visual patterns.
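The norm-based coding idea above lends itself to a simple sketch: represent each face by its deviation from a running average ("norm"), and let adaptation shift that norm toward recently experienced faces. The class below is a hypothetical illustration with placeholder feature vectors, not a model from the article.

```python
# Minimal sketch of adaptive norm-based coding (assumed feature space).
import numpy as np

class NormBasedCoder:
    def __init__(self, n_features, learning_rate=0.05):
        self.norm = np.zeros(n_features)       # the current "average face"
        self.learning_rate = learning_rate

    def adapt(self, face):
        """Adaptation updates the norm toward recently seen faces."""
        self.norm += self.learning_rate * (face - self.norm)

    def encode(self, face):
        """Code a face by how it differs from the norm (its distinctiveness)."""
        return face - self.norm

rng = np.random.default_rng(1)
coder = NormBasedCoder(n_features=10)
for _ in range(100):                           # experience a population of faces
    coder.adapt(rng.normal(size=10))
probe = rng.normal(size=10)
print(np.linalg.norm(coder.encode(probe)))     # distinctiveness relative to the norm
```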


2008 ◽  
Vol 2008 ◽  
pp. 1-9 ◽  
Author(s):  
Guillermo Botella ◽  
Manuel Rodríguez ◽  
Antonio García ◽  
Eduardo Ros

The robustness with which the human visual system recovers motion estimates in almost any visual situation is enviable, performing enormous computational tasks continuously, robustly, efficiently, and effortlessly. There is obviously a great deal we can learn from our own visual system. Many optical flow algorithms exist, although none of them deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that borrows templates from nature in the design of its architecture and makes use of a specific model of human visual motion perception: the Multichannel Gradient Model (McGM). This novel, customizable architecture for neuromorphic, robust optical flow can be constructed on an FPGA or ASIC device using properties of the cortical motion pathway, constituting a useful framework for building future complex bioinspired systems running in real time with high computational complexity. This work includes resource usage and performance data, and a comparison with current systems. The hardware has many application fields, such as object recognition, navigation, and tracking in difficult environments, owing to its bioinspired design and robustness.
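To make the "gradient model" family concrete, the sketch below estimates flow from spatial and temporal image derivatives in the simplest (Lucas-Kanade-style) way. It is only a software illustration of the underlying principle under assumed parameters; the McGM itself uses a much richer multichannel spatiotemporal filter bank, and the paper's contribution is its hardware implementation.

```python
# Minimal gradient-based optical flow: solve Ix*u + Iy*v = -It in local windows.
import numpy as np

def gradient_flow(frame0, frame1, window=5):
    """Estimate per-pixel flow from spatial and temporal image gradients."""
    Iy, Ix = np.gradient(frame0)               # axis 0 is y (rows), axis 1 is x
    It = frame1 - frame0
    half = window // 2
    flow = np.zeros(frame0.shape + (2,))
    for i in range(half, frame0.shape[0] - half):
        for j in range(half, frame0.shape[1] - half):
            sl = (slice(i - half, i + half + 1), slice(j - half, j + half + 1))
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            b = -It[sl].ravel()
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            flow[i, j] = sol                    # (u, v) for this pixel
    return flow

x = np.tile(np.arange(32), (32, 1))
frame0 = np.sin(2 * np.pi * x / 16.0)           # smooth vertical grating
frame1 = np.roll(frame0, 1, axis=1)             # shifted right by one pixel
print(gradient_flow(frame0, frame1)[16, 16])    # should be roughly (1, 0)
```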


1989 ◽  
Vol 237 (1289) ◽  
pp. 445-469 ◽  

There has long been a problem concerning the presence in the visual cortex of binocularly activated cells that are selective for vertical stimulus disparities, because it is generally believed that only horizontal disparities contribute to stereoscopic depth perception. The accepted view is that stereoscopic depth estimates are only relative to the fixation point and that independent information from an extraretinal source is needed to scale for absolute or egocentric distance. Recently, however, theoretical computations have shown that egocentric distance can be estimated directly from vertical disparities without recourse to extraretinal sources. There has been little impetus to follow up these computations with experimental observations, because the vertical disparities that normally occur between the images in the two eyes have always been regarded as too small to be of significance for visual perception, and because experiments have consistently shown that our conscious appreciation of egocentric distance is rather crude and unreliable. Nevertheless, the veridicality of stereoscopic depth constancy indicates that accurate distance information is available to the visual system and that information about egocentric distance and horizontal disparity is processed together so as to continually recalibrate the horizontal disparity values for different absolute distances. Computations show that the recalibration can be based directly on vertical disparities without the need for any intervening estimates of absolute distance. This may partly explain the relative crudity of our conscious appreciation of egocentric distance. From published data it has been possible to calculate the magnitude of the vertical disparities that the human visual system must be able to discriminate in order for depth constancy to have the observed level of veridicality. From published data on the induced effect it has also been possible to calculate the threshold values for the detection of vertical disparities by the visual system. These threshold values are smaller than those needed to provide for the recalibration of the horizontal disparities in the interests of veridical depth constancy. An outline is given of the known properties of the binocularly activated cells in the striate cortex that are able to discriminate and assess the vertical disparities. Experiments are proposed that should validate, or otherwise, the concepts put forward in this paper.
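A purely geometric illustration (not taken from the paper) of why vertical disparity carries distance information is sketched below: for an eccentric point, the two eyes are at slightly different distances from it, so its vertical visual angle differs between the eyes, and that difference falls off roughly with viewing distance. The interocular separation and viewing geometry are assumed values.

```python
# Project an eccentric point into two eyes and compare its vertical angles;
# the difference (a vertical disparity) shrinks as viewing distance grows.
import numpy as np

INTEROCULAR = 0.065  # metres, assumed

def vertical_disparity(azimuth_deg, elevation_deg, distance):
    """Difference in vertical visual angle of a point seen by right vs left eye."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    # Point in head-centred coordinates (x right, y up, z forward).
    p = distance * np.array([np.sin(az), np.sin(el), np.cos(az) * np.cos(el)])
    angles = []
    for eye_x in (-INTEROCULAR / 2, INTEROCULAR / 2):
        rel = p - np.array([eye_x, 0.0, 0.0])
        angles.append(np.arctan2(rel[1], np.hypot(rel[0], rel[2])))  # vertical angle
    return np.degrees(angles[1] - angles[0])

for d in (0.3, 0.6, 1.2):
    print(f"distance {d:.1f} m: vertical disparity "
          f"{vertical_disparity(20, 10, d) * 60:.2f} arcmin")
```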


2019 ◽  
Author(s):  
Gwangsu Kim ◽  
Jaeson Jang ◽  
Seungdae Baek ◽  
Min Song ◽  
Se-Bum Paik

Number-selective neurons are observed in numerically naïve animals, but it was not understood how this innate function emerges in the brain. Here, we show that neurons tuned to numbers can arise in random feedforward networks, even in the complete absence of learning. Using a biologically inspired deep neural network, we found that number tuning arises in three cases: networks trained on non-numerical natural images, networks randomized after training, and networks never trained at all. Number-tuned neurons showed characteristics observed in the brain, following the Weber-Fechner law. These neurons suddenly vanished when the feedforward weight variation decreased to a certain level. These results suggest that number tuning can develop from the statistical variation of bottom-up projections in the visual pathway, initializing an innate number sense.
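The core claim above can be probed with a toy sketch (not the authors' model): feed dot images of different numerosities through a completely untrained random feedforward network and tabulate each output unit's preferred numerosity. The network sizes, the ReLU nonlinearity, and the weight scale `w_std` (a stand-in for the abstract's "feedforward weight variation") are all assumptions for illustration.

```python
# Toy test of numerosity preferences in a random, untrained two-layer network.
import numpy as np

rng = np.random.default_rng(0)

def dot_image(n_dots, size=32):
    """Binary image with n_dots randomly placed single-pixel dots."""
    img = np.zeros((size, size))
    idx = rng.choice(size * size, n_dots, replace=False)
    img.flat[idx] = 1.0
    return img

def random_network_response(image, w1, w2):
    h = np.maximum(w1 @ image.ravel(), 0.0)        # random hidden layer, ReLU
    return np.maximum(w2 @ h, 0.0)                  # random output units

w_std = 1.0                                         # try lowering this value
w1 = rng.normal(0, w_std, size=(256, 32 * 32))
w2 = rng.normal(0, w_std, size=(64, 256))

numerosities = [1, 2, 4, 8, 16, 32]
# Average response of every output unit to each numerosity (20 samples each).
tuning = np.array([[random_network_response(dot_image(n), w1, w2)
                    for _ in range(20)] for n in numerosities]).mean(axis=1)
preferred = np.array(numerosities)[tuning.argmax(axis=0)]
print(np.unique(preferred, return_counts=True))     # spread of preferred numerosities
```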

