Face Pareidolia Recruits Mechanisms for Detecting Human Social Attention

2020 · Vol 31 (8) · pp. 1001-1012
Author(s): Colin J. Palmer, Colin W. G. Clifford

Face pareidolia is the phenomenon of seeing facelike structures in everyday objects. Here, we tested the hypothesis that face pareidolia, rather than being limited to a cognitive or mnemonic association, reflects the activation of visual mechanisms that typically process human faces. We focused on sensory cues to social attention, which engage cell populations in temporal cortex that are susceptible to habituation effects. Repeated exposure to “pareidolia faces” that appear to have a specific direction of attention causes a systematic bias in the perception of where human faces are looking, indicating that overlapping sensory mechanisms are recruited when we view human faces and when we experience face pareidolia. These cross-adaptation effects are significantly reduced when pareidolia is abolished by removing facelike features from the objects. These results indicate that face pareidolia is essentially a perceptual phenomenon, occurring when sensory input is processed by visual mechanisms that have evolved to extract specific social content from human faces.

2015
Author(s): Daniel D Dilks, Peter Cook, Samuel K Weiller, Helen P Berns, Mark H Spivak, et al.

Recent behavioral evidence suggests that dogs, like humans and monkeys, are capable of visual face recognition. But do dogs also exhibit specialized cortical face regions similar to humans and monkeys? Using functional magnetic resonance imaging (fMRI) in six dogs trained to remain motionless during scanning without restraint or sedation, we found a region in the canine temporal lobe that responded significantly more to movies of human faces than to movies of everyday objects. Next, using a new stimulus set to investigate face selectivity in this predefined candidate dog face area, we found that this region responded similarly to images of human faces and dog faces, yet significantly more to both human and dog faces than to images of objects. Such face selectivity was not found in dog primary visual cortex. Taken together, these findings: 1) provide the first evidence for a face-selective region in the temporal cortex of dogs, which cannot be explained by simple low-level visual feature extraction; 2) reveal that neural machinery dedicated to face processing is not unique to primates; and 3) may help explain dogs’ exquisite sensitivity to human social cues.
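
As context for the faces-versus-objects comparison described above, here is a minimal sketch of a per-voxel GLM t-contrast of the kind commonly used to localize face-selective regions. The design, data, and all parameter values below are simulated placeholders, not the study's actual pipeline.

```python
# Hypothetical per-voxel GLM with a faces > objects t-contrast.
# Simulated block design; HRF convolution omitted for brevity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_vols, n_voxels = 200, 500

# Block sequence: faces, rest, objects, rest (10 volumes each), repeated 5x.
blocks = np.concatenate([np.repeat([1, 0, 2, 0], 10)] * 5)
faces = (blocks == 1).astype(float)
objects = (blocks == 2).astype(float)
X = np.column_stack([faces, objects, np.ones(n_vols)])

# Simulated BOLD data: the first 50 voxels are "face selective",
# responding more strongly to faces than to objects.
Y = rng.normal(0.0, 1.0, (n_vols, n_voxels))
Y[:, :50] += 1.2 * faces[:, None] + 0.4 * objects[:, None]

# Ordinary least squares fit, then a faces - objects contrast per voxel.
beta, res_ss, _, _ = np.linalg.lstsq(X, Y, rcond=None)
c = np.array([1.0, -1.0, 0.0])
dof = n_vols - X.shape[1]
t = (c @ beta) / np.sqrt((c @ np.linalg.inv(X.T @ X) @ c) * res_ss / dof)
p = stats.t.sf(t, dof)  # one-sided test: faces > objects
print(f"voxels passing faces > objects at p < .001: {(p < 1e-3).sum()}")
```

In a study like the one above, such a contrast would be computed on preprocessed fMRI data, and the resulting cluster would serve as the predefined region of interest for the follow-up face-selectivity tests.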


1994 · Vol 6 (2) · pp. 99-116
Author(s): M. W. Oram, D. I. Perrett

Cells have been found in the superior temporal polysensory area (STPa) of the macaque temporal cortex that are selectively responsive to the sight of particular whole body movements (e.g., walking) under normal lighting. These cells typically discriminate the direction of walking and the view of the body (e.g., left profile walking left). We investigated the extent to which these cells are responsive under “biological motion” conditions where the form of the body is defined only by the movement of light patches attached to the points of limb articulation. One-third of the cells (25/72) selective for the form and motion of walking bodies showed sensitivity to the moving light displays. Seven of these cells showed only partial sensitivity to form from motion, in so far as the cells responded more to moving light displays than to moving controls but failed to discriminate body view. These seven cells exhibited directional selectivity. Eighteen cells showed statistical discrimination for both direction of movement and body view under biological motion conditions. Most of these cells showed reduced responses to the impoverished moving light stimuli compared to full light conditions. The 18 cells were thus sensitive to detailed form information (body view) from the pattern of articulating motion. Cellular processing of the global pattern of articulation was indicated by the observations that none of these cells were found sensitive to movement of individual limbs and that jumbling the pattern of moving limbs reduced response magnitude. A further 10 cells were tested for sensitivity to moving light displays of whole body actions other than walking. Of these cells 5/10 showed selectivity for form displayed by biological motion stimuli that paralleled the selectivity under normal lighting conditions. The cell responses thus provide direct evidence for neural mechanisms computing form from nonrigid motion. The selectivity of the cells was for body view, specific direction, and specific type of body motion presented by moving light displays and is not predicted by many current computational approaches to the extraction of form from motion.


PLoS ONE · 2016 · Vol 11 (3) · pp. e0149431
Author(s): Laura V. Cuaya, Raúl Hernández-Pérez, Luis Concha

2017
Author(s): Raúl Hernández-Pérez, Luis Concha, Laura V. Cuaya

Dogs can interpret emotional human faces (especially those expressing happiness), yet the cerebral correlates of this process are unknown. Using functional magnetic resonance imaging (fMRI), we studied eight awake and unrestrained dogs. In Experiment 1, dogs observed happy and neutral human faces, and we found increased brain activity in temporal cortex and caudate when they viewed happy human faces. In Experiment 2, the dogs were presented with human faces expressing happiness, anger, fear, or sadness. Using the cluster resulting from Experiment 1, we trained a linear support vector machine classifier to discriminate between pairs of emotions and found that it could discriminate only between happiness and the other emotions. Finally, evaluating the whole-brain fMRI time courses with a similar classifier allowed us to predict the emotion being observed by the dogs. Our results show that human emotions are specifically represented in dogs' brains, highlighting their importance for inter-species communication.
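
As an illustrative sketch of the pairwise decoding approach described above, a linear support vector machine can be trained on voxel patterns from two emotion conditions and scored with cross-validation. The data and variable names below are simulated placeholders; the study's actual feature extraction and validation scheme are not specified here.

```python
# Hypothetical pairwise emotion decoding from fMRI voxel patterns
# with a linear SVM, scored by stratified cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 150

# Simulated trial-wise patterns for two emotions; the small mean
# offset stands in for emotion-specific activity differences.
happy = rng.normal(+0.2, 1.0, size=(n_trials, n_voxels))
anger = rng.normal(-0.2, 1.0, size=(n_trials, n_voxels))
X = np.vstack([happy, anger])
y = np.repeat([1, 0], n_trials)

# Accuracy reliably above chance (0.5) indicates that the two
# emotions evoke discriminable activity patterns in the region.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"happiness vs. anger decoding accuracy: {scores.mean():.2f}")
```

Repeating this for every pair of emotions yields the pattern of pairwise discrimination reported above, where only the pairs involving happiness are decoded above chance.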


Author(s): Akio Nakamura

Using multi-channel near-infrared spectroscopy, the authors monitored cortical activity during the sensory evaluation period to assess how flavorings affect taste through the central integration of olfactory and gustatory modalities. They found that the neocortical response to a test solution showed adaptation to a conditioning sugar solution administered 60 seconds before the test solution. Self-adaptation (sugar followed by sugar) was greater than cross-adaptation (sugar followed by artificial sweetener) at specific regions of the frontal and temporal cortex, and the magnitude of cross-adaptation between sugar and a flavored artificial sweetener tended to approach that of sugar-sugar self-adaptation. The similarity of adaptation in cortical responses might therefore be a useful indicator for screening effective flavorings to improve taste.


2020 · Vol 10 (21) · pp. 7590
Author(s): Kazushige Oshita, Sumio Yano

This study investigated the effects of haptic sensory input from different types of clothing on gait performance. Twelve healthy men performed normal and tandem gait tests while blindfolded under three clothing conditions: (1) wearing only half tights (HT); (2) wearing a skirt-like draped outfit, such as a cotton cloth wrapped around the waist and extending to the lower leg (DC); and (3) wearing a trouser-like outfit, such as tracksuit bottoms (TS). Gait speed was significantly higher in DC than in HT, whereas no such increase was observed in TS. Missteps during tandem gait were significantly reduced in DC. In addition, participants reported that walking was easier in DC than in TS. These findings suggest that wearing a skirt-like outfit, such as the kilt in Scotland or the hakama in Japan, may provide haptic sensory cues that enhance individuals' perceptions of their body orientation, as compared with trouser-like clothing that is in continuous contact with the legs.


2015 · Vol 113 (6) · pp. 1896-1906
Author(s): William K. Page, Nobuya Sato, Michael T. Froehler, William Vaughn, Charles J. Duffy

Navigation relies on the neural processing of sensory cues about observer self-movement and spatial location. Neurons in macaque dorsal medial superior temporal cortex (MSTd) respond to visual and vestibular self-movement cues, potentially contributing to navigation and orientation. We moved monkeys on circular paths around a room while recording the activity of MSTd neurons. MSTd neurons show a variety of sensitivities to the monkey's heading direction, circular path through the room, and place in the room. Changing visual cues alters the relative prevalence of those response properties. Disrupting the continuity of self-movement paths through the environment disrupts path selectivity in a manner linked to the time course of single neuron responses. We hypothesize that sensory cues interact with the spatial and temporal integrative properties of MSTd neurons to derive path selectivity for navigational path integration supporting spatial orientation.


2021
Author(s): Diane Rekow, Jean-Yves Baudouin, Karine Durand, Arnaud Leleu

Visual categorization is the brain's ability to respond rapidly and automatically to widely variable visual inputs in a category-selective manner (i.e., distinct responses between categories and similar responses within categories). Whether category-selective neural responses are purely visual or can be influenced by other sensory modalities remains unclear. Here, we test whether odors modulate visual categorization, expecting that odors facilitate the neural categorization of congruent visual objects, especially when the visual category is ambiguous. Scalp electroencephalogram (EEG) was recorded while natural images depicting various objects were displayed in rapid 12-Hz streams (i.e., 12 images per second) and variable exemplars of a target category (human faces, cars, or facelike objects in dedicated sequences) were interleaved as every 9th stimulus to tag category-selective responses at 12/9 = 1.33 Hz in the EEG frequency spectrum. During visual stimulation, participants (N = 26) were implicitly exposed to odor contexts (body, gasoline, or baseline odors) and performed an orthogonal cross-detection task. We identify clear category-selective responses to every category over the occipito-temporal cortex, with the largest response for human faces and the smallest for facelike objects. Critically, body odor boosts the response to the ambiguous facelike objects (i.e., perceived either as nonface objects or as faces) over the right hemisphere, especially for participants who report perceiving faces in these objects post-stimulation. By contrast, odors do not significantly modulate the other category-selective responses, nor the general visual response recorded at 12 Hz, revealing a specific influence on the categorization of congruent ambiguous stimuli. Overall, these findings support the view that the brain actively uses cues from the different senses to readily categorize visual inputs, and that olfaction, generally considered poorly functional in humans, is well placed to disambiguate visual information.
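
A minimal sketch of the frequency-tagging logic described above, on simulated data: with images at a 12-Hz base rate and the target category as every 9th image, category-selective activity is expected at 12/9 ≈ 1.33 Hz in the EEG amplitude spectrum. All parameters below are hypothetical, not the study's pipeline.

```python
# Hypothetical EEG frequency-tagging analysis: amplitude at the tagged
# category frequency (12/9 Hz) and base frequency (12 Hz), each compared
# against neighboring-bin noise. Simulated single-channel signal.
import numpy as np

fs = 512.0                         # sampling rate (Hz), illustrative
t = np.arange(0, 60.0, 1.0 / fs)   # 60 s of stimulation
base_f, cat_f = 12.0, 12.0 / 9.0

rng = np.random.default_rng(1)
eeg = (1.0 * np.sin(2 * np.pi * base_f * t)    # general visual response
       + 0.3 * np.sin(2 * np.pi * cat_f * t)   # category-selective response
       + rng.normal(0.0, 1.0, t.size))         # background noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

for f in (cat_f, base_f):
    i = int(np.argmin(np.abs(freqs - f)))
    # Noise estimate: mean amplitude of surrounding bins, skipping the
    # bins immediately adjacent to the tagged frequency.
    noise = np.r_[spectrum[i - 12:i - 2], spectrum[i + 3:i + 13]].mean()
    print(f"{f:5.2f} Hz: amplitude {spectrum[i]:.3f}, "
          f"baseline-corrected {spectrum[i] - noise:.3f}")
```

Because the target category appears as every 9th stimulus, any response that distinguishes it from the other images necessarily projects onto 1.33 Hz and its harmonics, cleanly separating category-selective activity from the general 12-Hz visual response.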


2013 · Vol 25 (5) · pp. 777-789
Author(s): Dzmitry A. Kaliukhovich, Wouter De Baene, Rufin Vogels

Stimulus repetition produces a decrease in response in many cortical areas and across modalities. This adaptation is highly prominent in macaque inferior temporal (IT) neurons. Here we ask how these repetition-induced changes in IT responses affect the accuracy with which IT neurons encode objects. This question bears on the functional consequences of adaptation, which are still unclear. We recorded the responses of single IT neurons to sequences of familiar shapes, each shown for 300 msec with an ISI of the same duration. The difference in shape between the two successively presented stimuli, that is, adapter and test, varied parametrically. The discriminability of the test stimuli was reduced for repeated compared with nonrepeated stimuli. In some conditions in which adapter and test shapes differed, cross-adaptation resulted in enhanced discriminability. These single-cell results were confirmed in a second experiment in which we recorded multiunit spiking activity using a laminar microelectrode in macaque IT. Two familiar stimuli were presented successively for 500 msec each, separated by an ISI of the same duration. Trials consisted either of a repetition of the same stimulus or of an alternation between the two. Small neuronal populations showed decreased classification accuracy for repeated compared with nonrepeated test stimuli, but classification was enhanced for test compared with adapter stimuli when the test stimulus differed from recently seen stimuli. These findings suggest that short-term, stimulus-specific adaptation in IT supports efficient coding of stimuli that differ from recently seen ones while impairing the coding of repeated stimuli.
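
The link between adaptation and decoding accuracy can be illustrated with a toy simulation: scaling down Poisson firing rates (mimicking repetition-induced response reduction) lowers the signal-to-noise ratio of the population code and, with it, classification accuracy. Everything below is a hypothetical sketch, not the recorded data or the study's analysis.

```python
# Toy model: linear classification of two shapes from simulated
# Poisson spike counts, with and without adaptation (response gain).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_neurons, n_trials = 30, 100

# Hypothetical tuning: mean rates to shape A, and slightly different
# mean rates to shape B.
rate_a = rng.uniform(5.0, 20.0, n_neurons)
rate_b = rate_a * rng.uniform(0.7, 1.3, n_neurons)

def decode(gain):
    """A-vs-B accuracy at a given response gain (gain < 1 mimics the
    reduced responses to repeated stimuli)."""
    X = np.vstack([rng.poisson(gain * rate_a, (n_trials, n_neurons)),
                   rng.poisson(gain * rate_b, (n_trials, n_neurons))])
    y = np.repeat([0, 1], n_trials)
    return cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()

print(f"nonrepeated (gain 1.0): accuracy {decode(1.0):.2f}")
print(f"repeated    (gain 0.6): accuracy {decode(0.6):.2f}")
```

This captures only the impaired coding of repeated stimuli; the enhancement reported for test stimuli that differ from the adapter would require a stimulus-specific adaptation mechanism rather than the uniform gain change modeled here.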

