featural processing
Recently Published Documents

Total documents: 33 (five years: 6)
H-index: 9 (five years: 0)

2022 · pp. 174702182210768
Author(s): Amy Berger, Regan Fry, Anna Bobak, Angela Juliano, Joseph DeGutis

Previous face matching studies provide evidence that matching same-identity faces (match trials) and discriminating different face identities (non-match trials) rely on distinct processes. For example, instructional studies geared towards improving face matching in applied settings have often found selective improvements in match or non-match trials only. Additionally, a small study found that developmental prosopagnosics (DPs) have specific deficits in making match but not non-match judgments. In the current study, we sought to replicate this finding in DPs and examine how individual differences across DPs and controls in match vs. non-match performance relate to featural vs. holistic processing abilities. Forty-three DPs and 27 controls matched face images shown from similar front views or with varied lighting or viewpoint. Participants also performed tasks measuring featural (eyes/mouth) and holistic processing (part-whole task). We found that DPs showed worse overall matching performance than controls and that their relative match vs. non-match deficit depended on image variation condition, indicating that DPs do not consistently show match- or non-match-specific deficits. When examining the association between holistic and featural processing abilities and match vs. non-match trials in the entire group of DPs and controls, we found a very clear dissociation: match trials significantly correlated with eye processing ability (r = .48) but not holistic processing (r = .11), whereas non-match trials significantly correlated with holistic processing (r = .32) but not eye processing (r = .03). This suggests that matching same-identity faces relies more on eye processing, while discriminating different faces relies more on holistic processing.


Author(s): Devin Heinze Kehoe, Jennifer Lewis, Mazyar Fallah

Oculomotor target selection often requires discriminating visual features, but it remains unclear how oculomotor substrates encoding saccade vectors functionally contribute to this process. One possibility is that oculomotor vector representations (observed directly as physiological activation or inferred from behavioral interference) of potential targets are continuously re-weighted by task-relevance computed elsewhere in specialized visual modules, while an alternative possibility is that oculomotor modules utilize local featural analyses to actively discriminate potential targets. Strengthening the former account, oculomotor vector representations have longer onset latencies for ventral- (i.e., color) than dorsal-stream features (i.e., luminance), suggesting that oculomotor vector representations originate from featurally-relevant specialized visual modules. Here, we extended this reasoning by behaviorally examining whether the onset latency of saccadic interference elicited by visually complex stimuli is greater than is commonly observed for simple stimuli. We measured human saccade metrics (saccade curvature, endpoint deviations, saccade frequency, error proportion) as a function of time after abrupt distractor onset. Distractors were novel, visually complex, and had to be discriminated from targets to guide saccades. The earliest saccadic interference latency was ~110 ms, considerably longer than previous experiments, suggesting that sensory representations projected into the oculomotor system are gated to allow for sufficient featural processing to satisfy task demands. Surprisingly, initial oculomotor vector representations encoded features, as we manipulated the visual similarity between targets and distractors and observed increased vector modulation response magnitude and duration when the distractor was highly similar to the target. Oculomotor vector modulation was gradually extinguished over the time course of the experiment.


2021
Author(s): Daniel Skorich, Kenneth I. Mavor, S. Alex Haslam, Joel L. Larwood

The human face is a key source of social information. In particular, it communicates a target’s personal identity and some of their main group memberships. Different models of social perception posit distinct stages at which this group-level and person-level information is extracted from the face, with divergent downstream consequences for cognition and behavior. This paper presents four experiments that explore the time-course of extracting group and person information from faces. In Experiments 1 and 2, we explore the effect of chunked versus unchunked processing on the speed of extracting group versus person information, as well as the impact of familiarity in Experiment 2. In Experiment 3, we examine the effect of the availability of a diagnostic cue on these same judgments. In Experiment 4, we explore the effect of both group-level and person-level prototypicality of face exemplars. Across all four experiments, we find no evidence for the perceptual primacy of either group or person information. Instead, we find that chunked processing, featural processing based on a single diagnostic cue, familiarity, and the prototypicality of face exemplars all result in a processing speed advantage for both group-level and person-level judgments equivalently. These results have important implications for influential models of face processing and impression formation, and can inform — and be integrated with — an understanding of the process of social categorization more broadly.


Neuroscience · 2020 · Vol 446 · pp. 157-170
Author(s): Hailing Wang, Ruiyi Qiu, Wenyu Li, Shouxin Li, Shimin Fu

2019
Author(s): Jeremy Tree

A series of experiments investigated the extent to which a developmental prosopagnosic, A.A., was able to use configural and featural processing to recognise faces and objects (flowers). A.A. was presented with tasks in which either configural or featural processing was selectively disrupted, by scrambling or blurring. A.A.'s face-matching performance was impaired if faces were blurred to remove their featural details, but remained comparable to that of normal controls if the faces' configural properties were disrupted by stretching. This suggests he is unable to use configural processing to recognise faces, but remains able to recognise them from featural information, consistent with Collishaw and Hole's (2000) suggestion that there are two independent "routes" to face recognition. In contrast, A.A.'s performance with blurred flowers is comparable to that of normal flower experts. A.A. appears to have a face-specific impairment in using configural processing that does not extend to flower recognition.


2017 · Vol 43 (4) · pp. 379-390
Author(s): Haoyue Qian, Zhiguo Wang, Lingyue Yan, Xiangping Gao
Keyword(s): The Self

2017 · Vol 26 (3) · pp. 288-293
Author(s): Katrina M. Fincher, Philip E. Tetlock, Michael W. Morris

This article links the visual perception of faces and social behavior. We argue that the ways in which people visually encode others’ faces—a rapid-fire perceptual categorization—can result in either humanizing or dehumanizing modes of perception. Our model suggests that these perceptual pathways channel subsequent social inferences and behavior. We focus on the construct of perceptual dehumanization, which involves a shift from configural to featural processing of human faces and, in turn, enables the infliction of harm, such as harsh punishments. We discuss visual attention as an antecedent of perceptual modes and consequent modes of social behavior and speculate about the functions of humanization and dehumanization in sustaining macro-level social structures.


2017 · Vol 23 (3) · pp. 287-291
Author(s): Tamsyn E. Van Rheenen, Nicole Joshua, David J. Castle, Susan L. Rossell

Abstract

Objectives: Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts.

Methods: Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies: part-based and whole-face emotion recognition.

Results: Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only.

Conclusions: Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287–291)

