Critical information thresholds underlying concurrent face recognition functions
Abstract

Humans rapidly and automatically recognise faces on multiple different levels, yet little is known about how the brain achieves these manifold categorisations concurrently. We bring a new perspective to this emerging issue by probing the relative informational dependencies of two of the most important aspects of human face processing: categorisation of the stimulus as a face (generic face recognition) and categorisation of its familiarity (familiar face recognition). Recording electrophysiological responses to a large set of natural images progressively increasing in image duration (Expt. 1) or spatial frequency content (Expt. 2), we contrasted critical sensory thresholds for these recognition functions as driven by the same face encounters. Across both manipulations, individual observer thresholds were consistently lower for distinguishing faces from other objects than for distinguishing familiar from unfamiliar faces. Moreover, familiar face recognition displayed marked inter-individual variability compared to generic face recognition, with no systematic relationship evident between the two thresholds. Scalp activation was also more strongly right-lateralised at the generic face recognition threshold than at the familiar face recognition threshold. These results suggest that high-level recognition of a face as a face arises based on minimal sensory input (i.e., very brief exposures/coarse resolutions), predominantly in right hemisphere regions. In contrast, the amount of additional sensory evidence required to access face familiarity is highly idiosyncratic and recruits wider neural networks. These findings underscore the neurofunctional distinctions between these two recognition functions, and constitute an important step forward in understanding how the human brain recognises various dimensions of a face in parallel.

Significance Statement

The relational dynamics between different aspects of face recognition are not yet well understood.
We report relative informational dependencies for two concurrent, ecologically relevant face recognition functions: distinguishing faces from objects, and recognising people we know. Our electrophysiological data show that for a given face encounter, the human brain requires less sensory input to categorise that stimulus as a face than to recognise whether the face is familiar. Moreover, whereas sensory thresholds for distinguishing faces from objects are remarkably consistent across observers, they vary widely for familiar face recognition. These findings shed new light on the multifaceted nature of human face recognition by painting a more comprehensive picture of the concurrent evidence accumulation processes initiated by seeing a face.