Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration

2016, Vol. 28(4), pp. 558-574
Author(s): Panqu Wang, Isabel Gauthier, Garrison Cottrell

Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test (CFMT) and the Vanderbilt Expertise Test (VET), they showed that the shared variance between CFMT and VET performance increases monotonically as experience increases. Here, we address a puzzle: why does a shared resource across different visual domains not lead to competition and an inverse correlation in abilities? We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM; Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (the number of hidden units) in the mapping from input to label, and experience as the frequency with which individual exemplars of an object category appear during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that generalizes to objects that must be individuated. Interestingly, when the task of the network is basic-level categorization, no increase in the correlation between domains is observed. Hence, our model predicts that it is the type of experience that matters and that the source of the correlation is in the fusiform face area (FFA), rather than in cortical areas that subserve basic-level categorization. This result is consistent with our previous modeling elucidating why the FFA is recruited for novel domains of expertise [Tong, M. H., Joyce, C. A., & Cottrell, G. W. Why is the fusiform face area recruited for novel categories of expertise? A neurocomputational investigation. Brain Research, 1202, 14-24, 2008].
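
A minimal sketch of the modeling logic described in this abstract, assuming a generic feedforward classifier (scikit-learn's MLPClassifier) as a stand-in for TM and purely synthetic features: the hidden-layer size plays the role of the shared ability v, the number of object exemplars seen during training plays the role of experience, and the face/object accuracy correlation is computed across simulated "subjects". All data, sizes, and parameters below are arbitrary choices, not the authors' settings.

```python
# Toy illustration: shared ability v = hidden-layer size; experience = number
# of exemplars per object identity during training. Synthetic data only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_FEATURES = 50          # stand-in for the model's visual feature vector
N_IDENTITIES = 10        # subordinate-level labels per domain

def make_domain(n_identities, n_per_identity, spread=1.0):
    """Synthetic exemplars: each identity is a Gaussian cloud in feature space."""
    centers = rng.normal(0, spread, (n_identities, N_FEATURES))
    X = np.vstack([c + 0.5 * rng.normal(size=(n_per_identity, N_FEATURES))
                   for c in centers])
    y = np.repeat(np.arange(n_identities), n_per_identity)
    return X, y

def simulate_subject(hidden_units, experience):
    """One 'subject': a single network trained on faces and objects together."""
    Xf, yf = make_domain(N_IDENTITIES, 20)             # faces: fixed exposure
    Xo, yo = make_domain(N_IDENTITIES, experience)     # objects: varied exposure
    X = np.vstack([Xf, Xo])
    y = np.concatenate([yf, yo + N_IDENTITIES])        # disjoint identity labels
    idx = rng.permutation(len(y))
    n_test = len(y) // 5
    tr, te = idx[n_test:], idx[:n_test]
    net = MLPClassifier(hidden_layer_sizes=(hidden_units,), max_iter=800)
    net.fit(X[tr], y[tr])
    pred = net.predict(X[te])
    faces = y[te] < N_IDENTITIES
    face_acc = (pred[faces] == y[te][faces]).mean()
    obj_acc = (pred[~faces] == y[te][~faces]).mean()
    return face_acc, obj_acc

for experience in [5, 20, 50]:
    # 20 simulated "subjects" whose resource v (hidden units) varies.
    results = np.array([simulate_subject(int(h), experience)
                        for h in rng.integers(4, 64, size=20)])
    r = np.corrcoef(results[:, 0], results[:, 1])[0, 1]
    print(f"experience={experience:3d}  corr(face acc, object acc) = {r:+.2f}")
```

Whether this toy reproduces the reported rise in the correlation depends on these arbitrary choices; it is meant only to make the variables v and experience concrete, not to replicate the simulations in the paper.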

2016, Vol. 28(2), pp. 282-294
Author(s): Rankin W. McGugin, Ana E. Van Gulick, Isabel Gauthier

The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of the FFA to nonface objects can predict behavioral performance for those objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here, we show an effect of expertise with nonface objects in the FFA that cannot be explained by differential attention to objects of expertise. We explored the relationship between the cortical thickness of the FFA and face and object recognition, using the Cambridge Face Memory Test (CFMT) and the Vanderbilt Expertise Test (VET), respectively. We measured cortical thickness in functionally defined regions in a group of men who evidenced functional expertise effects for cars in the FFA. Performance with faces and objects together accounted for approximately 40% of the variance in the cortical thickness of several FFA patches. Whereas participants with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of the FFA in object perception and reveal an interesting double dissociation that contrasts not faces and objects but rather living and nonliving objects.
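
An illustrative sketch, using synthetic data, of the kind of individual-differences analysis implied by the "approximately 40% of the variance" statement: regress the cortical thickness of an FFA patch on CFMT and VET scores and read the jointly explained variance off the model's R². The sample size, scores, and thickness values below are hypothetical.

```python
# Regress FFA cortical thickness on face (CFMT) and object (VET) scores;
# R^2 gives the variance jointly accounted for. Synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 27                                   # hypothetical number of participants
cfmt = rng.normal(0, 1, n)               # z-scored face-recognition score
vet = rng.normal(0, 1, n)                # z-scored object-recognition score
# Synthetic thickness with opposite-signed contributions, echoing the reported
# dissociation (thicker cortex -> better with vehicles, thinner -> better with faces).
thickness = 2.6 - 0.08 * cfmt + 0.10 * vet + rng.normal(0, 0.12, n)

X = np.column_stack([cfmt, vet])
model = LinearRegression().fit(X, thickness)
print("coefficients (CFMT, VET):", model.coef_)
print("R^2 =", model.score(X, thickness))
```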


2015, Vol. 15(12), p. 428
Author(s): Rankin McGugin, Ana Van Gulick, Isabel Gauthier

2002, Vol. 18(1), pp. 78-84
Author(s): Eva Ullstadius, Jan-Eric Gustafsson, Berit Carlstedt

Summary: Vocabulary tests, part of most test batteries of general intellectual ability, measure both verbal and general ability. Newly developed techniques for confirmatory factor analysis of dichotomous variables make it possible to analyze the influence of different abilities on performance on each item. In the testing procedure of the Computerized Swedish Enlistment test battery, eight different subtests of a new vocabulary test were given randomly to subsamples of a representative sample of 18-year-old male conscripts (N = 9001). Three central dimensions of a hierarchical model of intellectual abilities, general (G), verbal (Gc'), and spatial ability (Gv'), were estimated under different assumptions about the nature of the data. In addition to an ordinary analysis of covariance matrices, which assumes linearity of relations, the item variables were treated as categorical variables in the Mplus program. All eight subtests fit the hierarchical model, and the items were found to load about equally on G and Gc'. The results also indicate that if nonlinearity is not taken into account, the G loadings of the easy items are underestimated. These items, moreover, appear to be better measures of G than the difficult ones. The practical utility of the outcome for item selection and the theoretical implications for the question of the origin of verbal ability are discussed.
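
A small simulation, with synthetic data, of the attenuation effect behind the claim that G loadings of easy items are underestimated when dichotomous responses are analyzed linearly: dichotomizing an item at an extreme threshold (a very easy or very hard item) shrinks the Pearson (point-biserial) correlation with the latent factor even when the true latent loading is identical.

```python
# When 0/1 item scores are analysed as if continuous, items with extreme pass
# rates show attenuated loadings on the latent factor. Synthetic data only.
import numpy as np

rng = np.random.default_rng(42)
n = 9001                     # sample size on the order of the study's
g = rng.normal(0, 1, n)      # latent general ability G
true_loading = 0.6

for label, threshold in [("easy item (90% pass)", -1.28),
                         ("medium item (50% pass)", 0.0),
                         ("hard item (10% pass)", 1.28)]:
    # Continuous response propensity with the same latent loading for every item.
    propensity = true_loading * g + np.sqrt(1 - true_loading**2) * rng.normal(0, 1, n)
    item = (propensity > threshold).astype(float)    # observed dichotomous response
    r = np.corrcoef(g, item)[0, 1]                   # linear (point-biserial) estimate
    print(f"{label:24s} observed loading ~ {r:.2f}  (true latent loading = {true_loading})")
```

Threshold models for categorical indicators, as in the Mplus analysis described above, estimate the latent loading directly and thereby avoid this attenuation.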


2010, Vol. 50(15), pp. e1-e3
Author(s): Xiaokun Xu, Xiaomin Yue, Mark D. Lescroart, Irving Biederman, Jiye G. Kim

2018, Vol. 129(8), pp. e80-e81
Author(s): A. Haeger, C. Pouzat, V. Luecken, K. N’Diaye, C.E. Elger, ...

2004, Vol. 16(9), pp. 1669-1679
Author(s): Emily D. Grossman, Randolph Blake, Chai-Youn Kim

Individuals improve with practice on a variety of perceptual tasks, presumably reflecting plasticity in underlying neural mechanisms. We trained observers to discriminate biological motion from scrambled (nonbiological) motion and examined whether the resulting improvement in perceptual performance was accompanied by changes in activation within the posterior superior temporal sulcus and the fusiform “face area,” brain areas involved in perception of biological events. With daily practice, initially naive observers became more proficient at discriminating biological from scrambled animations embedded in an array of dynamic “noise” dots, with the extent of improvement varying among observers. Learning generalized to animations never seen before, indicating that observers had not simply memorized specific exemplars. In the same observers, neural activity prior to and following training was measured using functional magnetic resonance imaging. Neural activity within the posterior superior temporal sulcus and the fusiform “face area” reflected the participants' learning: BOLD signals were significantly larger after training in response both to animations experienced during training and to novel animations. The degree of learning was positively correlated with the amplitude changes in BOLD signals.
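
A brief sketch, with synthetic numbers, of the final analysis reported above: relate each observer's behavioral improvement to the change in BOLD response amplitude from the pre-training to the post-training scan. The sample size and values are hypothetical.

```python
# Correlate per-observer learning with pre-to-post change in BOLD amplitude.
# Synthetic data only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_observers = 12                                    # hypothetical sample size
improvement = rng.uniform(0.05, 0.35, n_observers)  # gain in discrimination accuracy
bold_pre = rng.normal(0.4, 0.05, n_observers)       # % signal change before training
bold_post = bold_pre + 0.8 * improvement + rng.normal(0, 0.03, n_observers)

r, p = pearsonr(improvement, bold_post - bold_pre)
print(f"learning vs. BOLD change: r = {r:.2f}, p = {p:.3f}")
```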

