The Scanpaths of Subjects with Developmental Prosopagnosia during a Face Memory Task

2019 ◽  
Vol 9 (8) ◽  
pp. 188 ◽  
Author(s):  
Dong-Ho Lee ◽  
Sherryse L Corrow ◽  
Raika Pancaroglu ◽  
Jason J S Barton

The scanpaths of healthy subjects show biases towards the upper face, the eyes and the center of the face, which suggests that their fixations are guided by a feature hierarchy towards the regions most informative for face identification. However, subjects with developmental prosopagnosia have a lifelong impairment in face processing. Whether this is reflected in the loss of normal face-scanning strategies is not known. The goal of this study was to determine if subjects with developmental prosopagnosia showed anomalous scanning biases as they processed the identity of faces. We recorded the fixations of 10 subjects with developmental prosopagnosia as they performed a face memorization and recognition task, for comparison with 8 subjects with acquired prosopagnosia (four with anterior temporal lesions and four with occipitotemporal lesions) and 20 control subjects. The scanning of healthy subjects confirmed a bias to fixate the upper over the lower face, the eyes over the mouth, and the central over the peripheral face. Subjects with acquired prosopagnosia from occipitotemporal lesions had more dispersed fixations and a trend to fixate less informative facial regions. Subjects with developmental prosopagnosia did not differ from the controls. At a single-subject level, some developmental subjects performed abnormally, but none consistently across all metrics. Scanning distributions were not related to scores on perceptual or memory tests for faces. We conclude that despite lifelong difficulty with faces, subjects with developmental prosopagnosia still have an internal facial schema that guides their scanning behavior.
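The upper-face bias reported above reduces to a simple region statistic over fixation coordinates. The following is a minimal illustrative sketch, not the study's analysis code; the coordinates and the midline parameter are hypothetical:

```python
# Classify fixations into upper vs. lower face and compute the
# proportion landing in the upper half. Image coordinates are assumed,
# with y increasing downward; midline_y is a hypothetical parameter
# marking the boundary between upper and lower face.

def upper_face_bias(fixations, midline_y):
    """Fraction of fixations landing above the face midline."""
    if not fixations:
        return 0.0
    upper = sum(1 for _x, y in fixations if y < midline_y)
    return upper / len(fixations)

# Toy data: three of four fixations fall above the midline.
fixations = [(100, 80), (120, 90), (110, 70), (105, 160)]
bias = upper_face_bias(fixations, midline_y=120)  # 0.75
```

The same counting approach extends to the eyes-over-mouth and central-over-peripheral biases by swapping in the appropriate region test.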

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1199
Author(s):  
Seho Park ◽  
Kunyoung Lee ◽  
Jae-A Lim ◽  
Hyunwoong Ko ◽  
Taehoon Kim ◽  
...  

Research on emotion recognition from facial expressions has found evidence of different muscle movements between genuine and posed smiles. To further confirm the discrete movement intensities of each facial segment, we explored differences in facial expressions between spontaneous and posed smiles with three-dimensional facial landmarks. Advanced machine analysis was adopted to measure changes in the dynamics of 68 segmented facial regions. A total of 57 normal adults (19 men, 38 women) who displayed adequate posed and spontaneous facial expressions for happiness were included in the analyses. The results indicate that spontaneous smiles have higher intensities for the upper face than the lower face. On the other hand, posed smiles showed higher intensities in the lower part of the face. Furthermore, the 3D facial landmark technique revealed that the left eyebrow displayed stronger intensity during spontaneous smiles than the right eyebrow. These findings suggest a potential application of landmark-based emotion recognition: spontaneous smiles can be distinguished from posed smiles by measuring the relative intensities of the upper and lower face, with a focus on left-sided asymmetry in the upper region.
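The per-region intensity comparison described above can be sketched as follows, assuming the common 68-point landmark layout (indices 0-16: jawline; 17-47: brows, nose, and eyes; 48-67: mouth). The region split and the toy displacements are illustrative, not the authors' implementation:

```python
import math

# Compare mean landmark displacement ("intensity") between upper and
# lower facial regions, given neutral and apex frames as lists of 68
# (x, y) landmark coordinates.

UPPER = range(17, 48)  # brows, nose, eyes
LOWER = range(48, 68)  # mouth region

def region_intensity(neutral, apex, region):
    """Mean Euclidean displacement of the landmarks in `region`."""
    disps = [math.dist(neutral[i], apex[i]) for i in region]
    return sum(disps) / len(disps)

# Toy frames: lower-face landmarks move farther than upper-face ones,
# mimicking the lower-face dominance reported for posed smiles.
neutral = [(0.0, 0.0)] * 68
apex = list(neutral)
for i in UPPER:
    apex[i] = (0.0, 2.0)
for i in LOWER:
    apex[i] = (0.0, 5.0)
posed_like = region_intensity(neutral, apex, LOWER) > region_intensity(neutral, apex, UPPER)
```

Left/right asymmetry can be measured the same way by splitting each region at the face midline instead of by height.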


2021 ◽  
pp. 1379-1398
Author(s):  
Norman Waterhouse ◽  
Naresh Noshi ◽  
Niall Kirkpatrick ◽  
Lisa Brendling

Facial ageing occurs as a consequence of multifactorial changes in both the external skin and the underlying tissues. The ageing process may vary dramatically between individual patients and is thus influenced by genetic factors. When assessing the ageing face it is important to consider the skeletal architecture, the soft tissue layers including the anterior fat pads, the osseocutaneous ligament anchors, and finally the overlying skin. Assessment of the external skin incorporates factors such as dermal thinning, solar damage, lifestyle effects such as smoking, and Fitzpatrick skin type. Surgical correction of facial ageing attempts both to reverse gravitational change of the soft tissues and to restore volume loss. There are a variety of methods used to divide the face into regions, but for the purpose of this chapter, the surgical management of facial ageing will be separated into three anatomical areas: (1) upper face, including the upper eyelids, eyebrows, and forehead; (2) midface, including the lower eyelid/anterior cheek continuum; and (3) lower and lateral cheek, neck, and perioral region.


2021 ◽  
Vol 32 (4) ◽  
pp. 609-639
Author(s):  
Sara Siyavoshi ◽  
Sherman Wilcox

Signed languages employ finely articulated facial and head displays to express grammatical meanings such as mood and modality, complex propositions (conditionals, causal relations, complementation), information structure (topic, focus), assertions, content and yes/no questions, imperatives, and miratives. In this paper we examine two facial displays: an upper face display in which the eyebrows are pulled together, called brow furrow, and a lower face display in which the corners of the mouth are turned down into a distinctive configuration resembling a frown or upside-down U shape. Our analysis employs Cognitive Grammar, specifically the control cycle and its manifestation in effective control and epistemic control. Our claim is that effective and epistemic control are associated with embodied actions. Prototypical physical effective control requires effortful activity and the forceful exertion of energy and is commonly correlated with upper face activity, often called the “face of effort.” The lower face display has been shown to be associated with epistemic indetermination, uncertainty, doubt, obviousness, and skepticism. We demonstrate that the control cycle unifies the diverse grammatical functions expressed by each facial display within a language, and that these displays express similar functions across a wide range of signed languages.


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Martin Schiavenato ◽  
Carl L. von Baeyer

Many pain assessment tools for preschool and school-aged children are based on facial expressions of pain. Despite broad use, their metrics are not rooted in the anatomic display of the facial pain expression. We aim to describe quantitatively the patterns of initiation and maintenance of the infant pain expression across an expressive cycle. We evaluated the trajectory of the pain expression of the three newborns with the most intense facial display among 63 infants receiving a painful stimulus. A modified “point-pair” system was used to measure movement in key areas across the face by analyzing still pictures from video recordings of the procedure. Point-pairs were combined into “upper face” and “lower face” variables; duration and intensity of expression were standardized. Intensity and duration of expression varied among infants. Upper and lower face movement rose and overlapped in intensity about 30% into the expression. The expression reached a plateau without major change for the duration of the expressive cycle. We conclude that there appears to be a shared pattern in the dynamic trajectory of the pain display among infants expressing extreme intensity. We speculate that these patterns are important in the communication of pain, and their incorporation into facial pain scales may improve current metrics.
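Standardizing duration and intensity, as described above, can be done by rescaling each expressive cycle to unit length and unit peak. This is a minimal sketch under that assumption, not the authors' point-pair software, and the sample values are invented:

```python
# Rescale a raw intensity trajectory so time runs 0-1 over the
# expressive cycle and intensity runs 0-1 relative to its peak, then
# locate when movement first reaches a given fraction of the peak.

def standardize(samples):
    """samples: raw intensity values over one expressive cycle.
    Returns (time_fraction, intensity_fraction) pairs."""
    peak = max(samples)
    n = len(samples) - 1
    return [(i / n, v / peak) for i, v in enumerate(samples)]

def first_reaching(standardized, level):
    """Time fraction at which intensity first reaches `level`."""
    for t, v in standardized:
        if v >= level:
            return t
    return None

# Toy trajectory: a rapid rise to a plateau, reaching 90% of peak
# about 30% into the cycle.
samples = [0, 1, 3, 10, 10, 10, 10, 10, 10, 10, 10]
onset = first_reaching(standardize(samples), 0.9)  # 0.3
```

Rescaling both axes this way is what makes trajectories from infants with different absolute intensities and cycle lengths directly comparable.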


2018 ◽  
Author(s):  
Tanja Krumpe ◽  
Peter Gerjets ◽  
Wolfgang Rosenstiel ◽  
Martin Spüler

Decision making is an essential part of daily life, in which balancing reasons and calculating risks to reach a certain confidence are important for making reasonable choices. To investigate the EEG correlates of confidence during decision making, a study involving a forced-choice recognition memory task was implemented. Subjects were asked to distinguish old from new pictures and rate their decision with either high or low confidence. Event-related potential (ERP) analysis was performed in four different phases covering all stages of decision making, including information encoding, retrieval, decision formation, and feedback processing during the recognition task. Additionally, a single-trial support-vector machine (SVM) classification was performed on the ERPs of each phase to obtain a measure of the differentiability of the two levels of confidence at the single-subject level. It could be shown that the level of decision confidence is significantly reflected in all stages of decision making, but most prominently during feedback presentation. The main differences between high and low confidence can be found in the ERPs during feedback presentation after a correct answer, whereas almost no differences can be found in ERPs following feedback to wrong answers. In the feedback phase, the two levels of confidence can be separated with a classification accuracy of up to 70% on average over all subjects, showing potential as a control state in a brain-computer interface (BCI) application.
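The single-trial classification step can be illustrated without EEG tooling. The study used an SVM; as a dependency-free stand-in, the sketch below uses a nearest-class-mean classifier on fabricated two-dimensional "ERP feature" vectors:

```python
import math

# Fit per-class mean feature vectors, then assign each trial to the
# class whose mean is nearest -- a minimal stand-in for the SVM used
# in the study. All data below are fabricated for illustration.

def nearest_mean_fit(trials, labels):
    """Return a dict mapping each label to its mean feature vector."""
    means = {}
    for lab in set(labels):
        rows = [t for t, l in zip(trials, labels) if l == lab]
        means[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    return means

def nearest_mean_predict(means, trial):
    """Label of the class mean closest to `trial`."""
    return min(means, key=lambda lab: math.dist(means[lab], trial))

# Toy trials: "high confidence" features cluster near +1, "low" near -1.
trials = [[1.1, 0.9], [0.8, 1.2], [-1.0, -0.9], [-1.2, -1.1]]
labels = ["high", "high", "low", "low"]
means = nearest_mean_fit(trials, labels)
accuracy = sum(nearest_mean_predict(means, t) == l
               for t, l in zip(trials, labels)) / len(trials)
```

In practice the accuracy would be estimated with cross-validation on held-out trials rather than on the training data as here.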


2018 ◽  
pp. 825-829 ◽  
Author(s):  
Z. TÜDÖS ◽  
P. HOK ◽  
P. HLUŠTÍK ◽  
A. GRAMBAL

Neuroimaging methods have been used to study differences in brain function between males and females. Differences in working memory have also been investigated, but the results of such studies are mixed with respect to behavioral data, reaction times, and activated brain areas. We analyzed functional MRI data acquired during a working memory task to search for differences in brain activation between genders. Twenty healthy right-handed volunteers (10 males and 10 females) participated in the study. All of them were university students or recent graduates. Subjects underwent a block-designed verbal working memory task (Item Recognition Task) inside the MRI scanner. Standard single-subject pre-processing and group fMRI analyses were performed using the FEAT software from the FSL library. In the behavioral data, there was no statistically significant difference in the number of correct responses during the task. The task activated similar bilateral regions of the frontal, parietal, temporal, and occipital lobes, the basal ganglia, the brainstem, and the cerebellum, which corresponds to previous verbal working memory neuroimaging research. In direct comparison, there was no statistically significant difference in brain activation between the small samples of male and female young healthy volunteers.


2010 ◽  
Vol 69 (3) ◽  
pp. 161-167 ◽  
Author(s):  
Jisien Yang ◽  
Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have only obtained the FIE with one specific ratio of configural alteration. It remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces was entirely identical or different. The paired faces to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has been frequently adopted as an index to examine the underlying mechanisms of face processing, its emergence is not robust across arbitrary configural alterations but depends on the ratio of configural alteration.


2021 ◽  
Vol 14 ◽  
pp. 117954762199457
Author(s):  
Daniele Emedoli ◽  
Maddalena Arosio ◽  
Andrea Tettamanti ◽  
Sandro Iannaccone

Background: Buccofacial apraxia is defined as the inability to perform voluntary movements of the larynx, pharynx, mandible, tongue, lips, and cheeks, while automatic or reflexive control of these structures is preserved. Buccofacial apraxia frequently co-occurs with aphasia and apraxia of speech, and it has been reported as almost exclusively resulting from a lesion of the left hemisphere. Recent studies have demonstrated the benefit of treating apraxia using motor training principles such as Augmented Feedback or Action Observation Therapy. In light of this, the study describes a treatment based on immersive Action Observation Therapy and Virtual Reality Augmented Feedback in a case of buccofacial apraxia. Participant and Methods: The participant is a right-handed 58-year-old male. He underwent a neurosurgical intervention of craniotomy and exeresis of an infra-axial expansive lesion in the frontoparietal convexity, compatible with an atypical meningioma. Buccofacial apraxia was diagnosed by a neurologist and evaluated with the Upper and Lower Face Apraxia Test. Buccofacial apraxia was also quantified with a dedicated camera and purpose-built software able to detect the range of motion of automatic face movements and the range of the same movements on voluntary request. In order to improve voluntary movements, the participant completed fifteen 1-hour rehabilitation sessions, each composed of 20 minutes of immersive Action Observation Therapy followed by a 40-minute Virtual Reality Augmented Feedback session, 5 days a week, for 3 consecutive weeks. Results: After treatment, the participant achieved marked improvements in the quality and range of facial movements, performing most facial expressions (e.g., kiss, smile, lateral displacement of the angle of the mouth) without unsolicited movement. Furthermore, the Upper and Lower Face Apraxia Test showed an improvement of 118% for the upper face movements and of 200% for the lower face movements.
Conclusion: Performing voluntary movements in a Virtual Reality environment with Augmented Feedback, in addition to Action Observation Therapy, improved the performance of facial gestures and consolidated central nervous system activations based on principles of experience-dependent neural plasticity.
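The percentage gains quoted above follow the usual relative-improvement formula. The pre/post scores below are hypothetical, since the abstract reports only the percentages:

```python
# Relative improvement of a post-treatment score over a
# pre-treatment score, expressed in percent.

def percent_improvement(pre, post):
    return (post - pre) / pre * 100.0

# Hypothetical scores: 10 -> 30 corresponds to the 200% lower-face
# improvement reported above.
gain = percent_improvement(10, 30)  # 200.0
```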


2019 ◽  
Vol 35 (05) ◽  
pp. 525-533
Author(s):  
Evrim Gülbetekin ◽  
Seda Bayraktar ◽  
Özlenen Özkan ◽  
Hilmi Uysal ◽  
Ömer Özkan

The authors tested face discrimination, face recognition, object discrimination, and object recognition in two face transplantation patients (FTPs) who had had facial injuries since infancy, a patient who had undergone facial surgery for a recent wound, and two control subjects. In Experiment 1, the authors showed them original faces and morphed forms of those faces and asked them to rate the similarity between the two. In Experiment 2, they showed old, new, and implicit faces and asked whether the participants recognized them or not. In Experiment 3, they showed them original objects and morphed forms of those objects and asked them to rate the similarity between the two. In Experiment 4, they showed old, new, and implicit objects and asked whether the participants recognized them or not. Object discrimination and object recognition performance did not differ between the FTPs and the controls. However, the face discrimination performance of FTP2 and the face recognition performance of FTP1 were poorer than those of the controls. Therefore, the authors concluded that the structure of the face might affect face processing.

