Electrophysiological Studies of Face Perception in Humans

1996 ◽  
Vol 8 (6) ◽  
pp. 551-565 ◽  
Author(s):  
Shlomo Bentin ◽  
Truett Allison ◽  
Aina Puce ◽  
Erik Perez ◽  
Gregory McCarthy

Event-related potentials (ERPs) associated with face perception were recorded with scalp electrodes from normal volunteers. Subjects performed a visual target detection task in which they mentally counted the number of occurrences of pictorial stimuli from a designated category such as butterflies. In separate experiments, target stimuli were embedded within a series of other stimuli including unfamiliar human faces and isolated face components, inverted faces, distorted faces, animal faces, and other nonface stimuli. Human faces evoked a negative potential at 172 msec (N170), which was absent from the ERPs elicited by other animate and inanimate nonface stimuli. N170 was largest over the posterior temporal scalp and was larger over the right than the left hemisphere. N170 was delayed when faces were presented upside-down, but its amplitude did not change. When presented in isolation, eyes elicited an N170 that was significantly larger than that elicited by whole faces, while noses and lips elicited small negative ERPs about 50 msec later than N170. Distorted human faces, in which the locations of inner face components were altered, elicited an N170 similar in amplitude to that elicited by normal faces. However, faces of animals, human hands, cars, and items of furniture did not evoke N170. N170 may reflect the operation of a neural mechanism tuned to detect (as opposed to identify) human faces, similar to the “structural encoder” suggested by Bruce and Young (1986). A similar function has been proposed for the face-selective N200 ERP recorded from the middle fusiform and posterior inferior temporal gyri using subdural electrodes in humans (Allison, McCarthy, Nobre, Puce, & Belger, 1994c). However, the differential sensitivity of N170 to eyes in isolation suggests that N170 may reflect the activation of an eye-sensitive region of cortex. 
The voltage distribution of N170 over the scalp is consistent with a neural generator located in the occipitotemporal sulcus lateral to the fusiform/inferior temporal region that generates N200.
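The averaging and peak-measurement pipeline behind an N170 result like this can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' analysis code; the sampling rate, trial count, and the synthetic negative dip injected at 172 ms are all assumptions for demonstration.

```python
import numpy as np

def average_erp(epochs):
    """Average single-trial epochs (trials x samples) into one ERP waveform."""
    return epochs.mean(axis=0)

def n170_peak(erp, times, window=(0.13, 0.20)):
    """Return (latency_s, amplitude) of the most negative point in the window."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.where(mask)[0][np.argmin(erp[mask])]
    return times[idx], erp[idx]

# Synthetic demo: 40 trials at 512 Hz with an N170-like dip at 172 ms
fs = 512.0
times = np.arange(-0.1, 0.5, 1.0 / fs)
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 2.0, size=(40, times.size))  # single-trial noise (uV)
epochs += -8.0 * np.exp(-((times - 0.172) ** 2) / (2 * 0.01 ** 2))

erp = average_erp(epochs)
latency, amplitude = n170_peak(erp, times)
```

Averaging over trials suppresses the background EEG (noise shrinks with the square root of the trial count), which is what makes the small face-evoked deflection measurable at the scalp.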

Author(s):  
Shozo Tobimatsu

There are two major parallel visual pathways in humans: the parvocellular (P) and magnocellular (M) pathways. The former has excellent spatial resolution with color selectivity, while the latter shows excellent temporal resolution with high contrast sensitivity. Visual stimuli should therefore be tailored to the specific clinical and/or research question. This chapter examines the neural mechanisms of face perception using event-related potentials (ERPs). Face stimuli of different spatial frequencies were used to investigate how low-spatial-frequency (LSF) and high-spatial-frequency (HSF) components of the face contribute to the identification and recognition of faces and facial expressions. The P100 component in the occipital area (Oz), the N170 in the posterior temporal region (T5/T6), and late components peaking at 270–390 ms (T5/T6) were analyzed. LSF enhanced P100, while N170 was augmented by HSF irrespective of facial expression. This suggests that LSF is important for global processing of facial expressions, whereas HSF supports featural processing. There were significant amplitude differences between positive and negative LSF facial expressions in the early time window of 270–310 ms. Subsequently, the amplitudes of negative HSF facial expressions differed significantly in the later time window of 330–390 ms. Thus, discrimination between positive and negative facial expressions precedes discrimination among different negative expressions, proceeding sequentially on the basis of parallel visual channels. Interestingly, patients with schizophrenia showed decreased spatial-frequency sensitivity for face processing. Taken together, spatially filtered face images are useful for exploring face perception and recognition.
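Spatially filtered face stimuli of the kind described here are typically produced by splitting an image into low- and high-spatial-frequency bands. Below is a minimal FFT-based sketch; the 8 cycles/image cutoff and the synthetic gradient-plus-texture image are illustrative assumptions, not the chapter's actual stimuli or parameters.

```python
import numpy as np

def split_spatial_frequencies(image, cutoff_cycles=8):
    """Split an image into LSF and HSF components with an ideal low-pass.

    Frequencies at or below `cutoff_cycles` (cycles/image) form the LSF part;
    HSF is the residual, so LSF + HSF reconstructs the original exactly.
    """
    f = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0]) * image.shape[0]
    fx = np.fft.fftfreq(image.shape[1]) * image.shape[1]
    radius = np.sqrt(np.add.outer(fy ** 2, fx ** 2))  # cycles/image per bin
    lsf = np.real(np.fft.ifft2(f * (radius <= cutoff_cycles)))
    hsf = image - lsf
    return lsf, hsf

# Synthetic "face-like" image: coarse luminance gradient plus fine texture
rng = np.random.default_rng(1)
coarse = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
image = coarse + 0.1 * rng.standard_normal((64, 64))

lsf, hsf = split_spatial_frequencies(image)
```

The LSF output preserves the global luminance structure (the configural cue linked here to P100), while the HSF residual carries the fine edges and texture linked to featural N170 processing.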


2021 ◽  
pp. 095679762199666
Author(s):  
Sebastian Schindler ◽  
Maximilian Bruchmann ◽  
Claudia Krasowski ◽  
Robert Moeck ◽  
Thomas Straube

Our brains respond rapidly to human faces and can differentiate between many identities, retrieving rich semantic knowledge, including emotional information. Studies provide a mixed picture of how such information affects event-related potentials (ERPs). We systematically examined the effect of feature-based attention on ERP modulations to briefly presented faces of individuals associated with a crime. The tasks required participants (N = 40 adults) to discriminate the orientation of lines overlaid onto the face, the age of the face, or emotional information associated with the face. Negative faces amplified the N170 ERP component during all tasks, whereas the early posterior negativity (EPN) and late positive potential (LPP) components were increased only when the emotional information was attended to. These findings suggest that during early configural analysis (N170), evaluative information potentiates face processing regardless of feature-based attention. During intermediate, only partially resource-dependent processing stages (EPN) and late stages of elaborate stimulus processing (LPP), attention to the acquired emotional information is necessary for amplified processing of negatively evaluated faces.


2021 ◽  
Vol 15 ◽  
Author(s):  
Teresa Sollfrank ◽  
Oona Kohnen ◽  
Peter Hilfiker ◽  
Lorena C. Kegel ◽  
Hennric Jokeit ◽  
...  

This study aimed to examine whether the cortical processing of emotional faces is modulated by the computerization of face stimuli ("avatars") in a group of 25 healthy participants. Subjects passively viewed 128 static and dynamic facial expressions of female and male actors and their respective avatars in neutral or fearful conditions. Event-related potentials (ERPs), as well as alpha and theta event-related desynchronization and synchronization (ERD/ERS), were derived from the EEG recorded during the task. All ERP features, except for the very early N100, differed in their response to avatar and actor faces. Whereas the N170 showed differences only in the neutral avatar condition, later potentials (N300 and LPP) differed across both emotional conditions (neutral and fear) and the presented agents (actor and avatar). In addition, the avatar faces elicited significantly stronger theta and alpha oscillatory responses than the actor faces. Theta frequencies in particular responded specifically to visual emotional stimulation and were sensitive to the emotional content of the face, whereas the alpha frequency was modulated by all stimulus types. We conclude that computerized avatar faces affect both ERP components and ERD/ERS, evoking neural effects that differ from those elicited by real faces. This was true even though the avatars were replicas of the human faces and contained similar characteristics in their expressions.
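The ERD/ERS measures reported here follow a standard definition: band power in a post-stimulus window expressed as a percentage change from a pre-stimulus baseline (negative values indicate desynchronization). A minimal NumPy sketch under that assumption; the signals, sampling rate, and alpha band below are synthetic illustrations, not the study's data.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean spectral power of `signal` within the frequency `band` (Hz)."""
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def erd_ers_percent(baseline, activity, fs, band):
    """Classic ERD/ERS index: negative = desynchronization, positive = synchronization."""
    r = band_power(baseline, fs, band)
    a = band_power(activity, fs, band)
    return (a - r) / r * 100.0

# Synthetic demo: alpha (8-13 Hz) power drops after stimulus onset
fs = 256.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(2)
baseline = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
activity = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

alpha_erd = erd_ers_percent(baseline, activity, fs, band=(8.0, 13.0))
```

Because the index is normalized to each participant's own baseline, it can be compared across conditions (here, avatar vs. actor faces) despite individual differences in absolute EEG power.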


2011 ◽  
Vol 25 (4) ◽  
pp. 174-179 ◽  
Author(s):  
Patrick D. Gajewski ◽  
Petra Stoerig

The N170 ERP component is larger for human faces than objects and sensitive to their orientation and race. To learn how it reflects the processing of faces of different species, we recorded event-related potentials in response to upright or inverted unfamiliar faces of human beings, monkeys, and dogs of different races as well as objects. Upright and inverted faces were presented in a between-subject design and elicited a reliable N170. It decreased from human to monkey to dog faces, and inversion enhanced and delayed it for all categories. We suggest that the results favor categorical over prototypical processing.


Author(s):  
Karen Emmorey

Recent neuroimaging and electrophysiological studies reveal how the reading system successfully adapts when phonological codes are relatively coarse-grained due to reduced auditory input during development. New evidence suggests that the optimal end-state for the reading system may differ for deaf versus hearing adults and indicates that certain neural patterns that are maladaptive for hearing readers may be beneficial for deaf readers. This chapter focuses on deaf adults who are signers and have achieved reading success. Although the left-hemisphere-dominant reading circuit is largely similar in both deaf and hearing individuals, skilled deaf readers exhibit a more bilateral neural response to written words and sentences than their hearing peers, as measured by event-related potentials and functional magnetic resonance imaging. Skilled deaf readers may also rely more on neural regions involved in semantic processing than hearing readers do. Overall, emerging evidence indicates that the neural markers for reading skill may differ for deaf and hearing adults.


2021 ◽  
Vol 11 (1) ◽  
pp. 48
Author(s):  
John Stein

(1) Background—The magnocellular hypothesis proposes that impaired development of the visual timing systems in the brain that are mediated by magnocellular (M-) neurons is a major cause of dyslexia. Their function can now be assessed quite easily by analysing averaged visually evoked event-related potentials (VERPs) in the electroencephalogram (EEG). Such analysis might provide a useful, objective biomarker for diagnosing developmental dyslexia. (2) Methods—In adult dyslexics and normally reading controls, we recorded steady-state VERPs, and their frequency content was computed using the fast Fourier transform. The visual stimulus was a black-and-white checkerboard whose checks reversed contrast every 100 ms. M-cells respond to this stimulus mainly at 10 Hz, whereas parvocellular (P-) cells do so at 5 Hz. Left and right visual hemifields were stimulated separately in some subjects to test for latency differences between the M-inputs to the right vs. left hemispheres, and these were compared with the subjects' handedness. (3) Results—Controls demonstrated a larger 10 Hz than 5 Hz fundamental peak in the spectra, whereas the dyslexics showed the reverse pattern. The ratio of subjects' 10/5 Hz amplitudes predicted their reading ability. The latency of the 10 Hz peak was shorter during left than during right hemifield stimulation, and shorter in controls than in dyslexics. The latter difference correlated weakly with handedness. (4) Conclusion—Steady-state visual ERPs may conveniently be used to identify developmental dyslexia. However, due to the limited number of subjects in each sub-study, these results need confirmation.
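The 10/5 Hz spectral comparison described in the Methods can be sketched as follows. This is an illustrative NumPy reconstruction of the general approach (FFT of the steady-state response, amplitude read off at the two fundamentals), not the authors' code; the simulated "control-like" waveform and its parameters are assumptions.

```python
import numpy as np

def fundamental_amplitudes(verp, fs, freqs=(10.0, 5.0)):
    """Amplitude spectrum of a steady-state VERP at the given frequencies (Hz)."""
    spectrum = np.abs(np.fft.rfft(verp)) / verp.size
    fft_freqs = np.fft.rfftfreq(verp.size, 1.0 / fs)
    return {f: spectrum[np.argmin(np.abs(fft_freqs - f))] for f in freqs}

def m_to_p_ratio(verp, fs):
    """10 Hz / 5 Hz amplitude ratio; >1 matches the control-like, M-dominant pattern."""
    amps = fundamental_amplitudes(verp, fs)
    return amps[10.0] / amps[5.0]

# Synthetic control-like response: strong 10 Hz, weaker 5 Hz, plus noise
fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(3)
verp = (3.0 * np.sin(2 * np.pi * 10 * t)
        + 1.0 * np.sin(2 * np.pi * 5 * t)
        + 0.5 * rng.standard_normal(t.size))

ratio = m_to_p_ratio(verp, fs)
```

Under the hypothesis described above, a ratio above 1 would be expected for typical readers and a ratio below 1 for dyslexic readers, which is what makes the single scalar attractive as a candidate biomarker.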


2002 ◽  
Vol 13 (01) ◽  
pp. 001-013 ◽  
Author(s):  
James Jerger ◽  
Rebecca Estes

We studied auditory evoked responses to the apparent movement of a burst of noise in the horizontal plane. Event-related potentials (ERPs) were measured in three groups of participants: children in the age range from 9 to 12 years, young adults in the age range from 18 to 34 years, and seniors in the age range from 65 to 80 years. The topographic distribution of grand-averaged ERP activity was substantially greater over the right hemisphere in children and seniors but slightly greater over the left hemisphere in young adults. This finding may be related to age-related differences in the extent to which judgments of sound movement are based on displacement versus velocity information.


2005 ◽  
Vol 19 (3) ◽  
pp. 204-215 ◽  
Author(s):  
Thierry Baccino ◽  
Yves Manunta

This paper presents a new methodology for studying cognition, which combines eye movements (EM) and event-related potentials (ERPs) to track the cognitive processes that occur during a single eye fixation. This technique, called eye-fixation-related potentials (EFRP), has the advantage of coupling the accurate time measures of ERPs with the location of the eye on the stimulus, so it can be used to disentangle the perceptual, attentional, and cognitive factors affecting reading. We tested this new technique on the controversial parafoveal-on-foveal effects in reading, which concern the question of whether two consecutive words are processed in parallel or sequentially. The experiment directly addressed this question by examining whether the semantic relatedness of a target word in a reading-like situation might affect the processing of a prime word. Three word-pair conditions were tested: a semantically associated target word (horse-mare), a semantically nonassociated target word (horse-table), and a nonword (horse-twsui); EFRPs were compared across conditions. The results revealed that early ERP components differentiated word and nonword processing within 119 ms postfixation (N1 component). Moreover, the amplitude of the right centrofrontal P140 varied as a function of word type, being larger in response to nonassociated words than to nonwords. This component might index a spatial attention shift to the target word and its visual categorization, being highly sensitive to the orthographic regularity and “ill-formedness” of words. The subsequent P2 component (peaking at 215 ms) differentiated associated from nonassociated words, which can account for the semantic parafoveal effect. The EFRP technique therefore appears fruitful for establishing a time-line of early cognitive processes during reading.
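The core of the EFRP method is re-epoching the EEG around fixation onsets from the eye tracker rather than around stimulus onsets. A minimal single-channel sketch; the sampling rate, epoch window, and fixation times are hypothetical, and real co-registration additionally requires synchronizing the two recording clocks.

```python
import numpy as np

def fixation_locked_epochs(eeg, fs, fixation_onsets_s, tmin=-0.1, tmax=0.3):
    """Cut a 1-D EEG channel into epochs time-locked to fixation onsets.

    Fixations whose window would run past the recording edges are skipped.
    """
    pre = int(round(-tmin * fs))   # samples before fixation onset
    post = int(round(tmax * fs))   # samples after fixation onset
    epochs = []
    for onset in fixation_onsets_s:
        i = int(round(onset * fs))
        if i - pre >= 0 and i + post <= eeg.size:
            epochs.append(eeg[i - pre:i + post])
    return np.array(epochs)

# Synthetic demo: 10 s of EEG at 250 Hz, fixation onsets from an eye tracker
fs = 250.0
rng = np.random.default_rng(4)
eeg = rng.standard_normal(int(10 * fs))
onsets = [0.05, 1.2, 3.7, 6.4, 9.95]  # first and last fall too near the edges

epochs = fixation_locked_epochs(eeg, fs, onsets)
```

Averaging the resulting epochs per condition (associated, nonassociated, nonword) yields the fixation-locked waveforms in which components such as the N1 and P140 described above are measured.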


1999 ◽  
Vol 11 (6) ◽  
pp. 598-609 ◽  
Author(s):  
Charan Ranganath ◽  
Ken A. Paller

Previous neuropsychological and neuroimaging results have implicated the prefrontal cortex in memory retrieval, although its precise role is unclear. In the present study, we examined patterns of brain electrical activity during retrieval of episodic and semantic memories. In the episodic retrieval task, participants retrieved autobiographical memories in response to event cues. In the semantic retrieval task, participants generated exemplars in response to category cues. Novel sounds presented intermittently during memory retrieval elicited a series of brain potentials including one identifiable as the P3a potential. Based on prior research linking P3a with novelty detection and with the frontal lobes, we predicted that P3a would be reduced to the extent that novelty detection and memory retrieval interfere with each other. Results during episodic and semantic retrieval tasks were compared to results during a task in which subjects attended to the auditory stimuli. P3a amplitudes were reduced during episodic retrieval, particularly at right lateral frontal scalp locations. A similar but less lateralized pattern of frontal P3a reduction was observed during semantic retrieval. These findings support the notion that the right prefrontal cortex is engaged in the service of memory retrieval, particularly for episodic memories.

