Platform Biometrics

2019 ◽  
Vol 17 (1/2) ◽  
pp. 54-62 ◽  
Author(s):  
Jeremy W Crampton

This article identifies and analyses the emergence of platform biometrics. Biometrics are measurements of behavioral and physical characteristics, such as facial expressions, gait, galvanic skin response, and palm or iris patterns. Platform biometrics promise not only to connect geographically distant actors but also to curate new forms of value. In this piece I describe Microsoft Face, one of the major facial biometric systems currently on the market; this software promises to analyze which of seven “universal” emotions a subject is experiencing. I then offer a critique of the assumptions behind the system. First, theories of emotion are divided on whether emotions can be reliably and measurably expressed by the face. Second, emotions may not be universal, nor are there likely only seven basic emotions. Third, I draw on the work of Rouvroy and Berns (2013) to identify emotion-recognition technologies as a classic example of algorithmic governance, whose outcome is to remove the subject from the creation and governance of surveillance. Platform biometrics will therefore provide a key component of surveillance capitalism’s appropriation of human experience (neuro-liberalism).
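As an illustration of the kind of platform interface critiqued above, the sketch below shows how an emotion-scoring request to a face-analysis REST service is typically made. The endpoint, key, and response shape are assumptions modelled on the (since retired) emotion attribute of Microsoft's Face API, not a documented current interface.

```python
import requests

# Illustrative endpoint and key; the emotion attribute discussed in the
# article has since been retired by Microsoft, so treat this as a sketch.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/face/v1.0/detect"
SUBSCRIPTION_KEY = "<your-key>"

def detect_emotions(image_url: str) -> dict:
    """Ask the service to score the pictured face on each basic-emotion label."""
    response = requests.post(
        ENDPOINT,
        params={"returnFaceAttributes": "emotion"},
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                 "Content-Type": "application/json"},
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    faces = response.json()
    # Each detected face carries a confidence score per emotion label.
    return faces[0]["faceAttributes"]["emotion"] if faces else {}

# The label with the highest score is what the platform reports the subject
# to be "experiencing" -- the assumption the article goes on to critique.
scores = detect_emotions("https://example.com/portrait.jpg")
if scores:
    print(max(scores, key=scores.get))
```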


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly in the face, and recent studies have shown the importance of the mouth and the eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and studies which used facial expressions in profile view relied on a between-subjects design or on children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and in profile views using a within-subjects experimental design.

Method: The sample comprised 132 Italian university students (88 female, M age = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and in profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RTs) were registered.

Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions which rely mostly on the eye regions.
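A minimal sketch of how emotion-specific recognition accuracy scores of this kind can be computed from trial-level data; the column names and toy values below are hypothetical, not the authors' data.

```python
import pandas as pd

# Hypothetical trial-level data: one row per stimulus presentation.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "emotion":     ["fear", "fear", "anger", "anger"] * 2,
    "view":        ["frontal", "profile"] * 4,
    "correct":     [1, 0, 1, 1, 1, 1, 0, 1],
})

# Emotion-specific recognition accuracy per view: the mean of correct
# responses for each participant, emotion, and presentation view.
accuracy = (trials
            .groupby(["participant", "emotion", "view"])["correct"]
            .mean()
            .unstack("view"))
print(accuracy)

# The within-subjects contrast of interest: frontal minus profile accuracy,
# averaged per emotion across participants.
print((accuracy["frontal"] - accuracy["profile"]).groupby("emotion").mean())
```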



Author(s):  
Priya Seshadri ◽  
Youyi Bi ◽  
Jaykishan Bhatia ◽  
Ross Simons ◽  
Jeffrey Hartley ◽  
...  

This study is the first stage of a research program aimed at understanding differences in how people process 2D and 3D automotive stimuli, using psychophysiological tools such as galvanic skin response (GSR), eye tracking, electroencephalography (EEG), and facial expression coding, along with respondent ratings. The current study uses just one measure, eye tracking, and one stimulus format, 2D realistic renderings of vehicles, to reveal where people expect to find information about brand and other industry-relevant attributes, such as sportiness. The eye-gaze data showed differences in the percentage of fixation time that people spent on different views of the cars while evaluating “Brand” and the degree to which the cars looked “Sporty/Conservative”, “Calm/Exciting”, and “Basic/Luxurious”. The results of this work can give designers insight into where to invest their design efforts when considering brand and styling cues.
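The percentage-of-fixation-time measure can be illustrated with a short sketch; the fixation log, column names, and vehicle views below are hypothetical.

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation, with the vehicle view it
# landed on, the evaluation task, and the fixation duration in milliseconds.
fixations = pd.DataFrame({
    "respondent":  [1, 1, 1, 2, 2, 2],
    "task":        ["Brand", "Brand", "Sporty/Conservative",
                    "Brand", "Sporty/Conservative", "Sporty/Conservative"],
    "view":        ["front", "side", "front", "side", "rear", "front"],
    "duration_ms": [220, 180, 300, 150, 260, 240],
})

# Percentage of total fixation time spent on each view within each task:
# summed duration per (task, view), normalised by the task total.
per_view = fixations.groupby(["task", "view"])["duration_ms"].sum()
per_task = fixations.groupby("task")["duration_ms"].sum()
pct_fixation = per_view.div(per_task, level="task").mul(100).round(1)
print(pct_fixation)
```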



2019 ◽  
Vol 29 (10) ◽  
pp. 1441-1451 ◽  
Author(s):  
Melina Nicole Kyranides ◽  
Kostas A. Fanti ◽  
Maria Petridou ◽  
Eva R. Kimonis

Individuals with callous-unemotional (CU) traits show deficits in facial emotion recognition. According to preliminary research, this impairment may be due to attentional neglect of people’s eyes when evaluating emotionally expressive faces. However, it is unknown whether this atypical processing pattern is unique to established variants of CU traits or modifiable with intervention. This study examined facial affect recognition and gaze patterns among individuals (N = 80; M age = 19.95, SD = 1.01 years; 50% female) with primary vs. secondary CU variants. These groups were identified based on repeated measurements of conduct problems, CU traits, and anxiety assessed in adolescence and adulthood. Accuracy and number of fixations on areas of interest (forehead, eyes, and mouth) while viewing six dynamic emotions were assessed. A visual probe was used to direct attention to various parts of the face. Individuals with primary and secondary CU traits were less accurate than controls in recognizing facial expressions across all emotions. Those identified in the low-anxious primary-CU group showed fewer overall fixations to fearful and painful facial expressions compared to those in the high-anxious secondary-CU group. This difference was not specific to a region of the face (i.e., eyes or mouth). The findings point to the importance of investigating both accuracy and eye-gaze fixations, since individuals in the primary and secondary groups were differentiated only in the way they attended to specific facial expressions. These findings have implications for differentiated interventions focused on improving facial emotion recognition with regard to attending to and correctly identifying emotions.
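A brief sketch of the kind of group comparison reported above, i.e. mean fixations per area of interest by CU group; the data frame and values are hypothetical, and the Welch t-test shown is a generic stand-in rather than the authors' actual analysis.

```python
import pandas as pd
from scipy import stats

# Hypothetical fixation counts on each area of interest (AOI) while a
# dynamic fearful expression plays, with CU-variant group membership.
data = pd.DataFrame({
    "group":     ["primary", "primary", "secondary", "secondary", "control", "control"],
    "aoi":       ["eyes", "mouth"] * 3,
    "fixations": [3, 5, 6, 5, 7, 6],
})

# Mean number of fixations per AOI for each group.
print(data.groupby(["group", "aoi"])["fixations"].mean().unstack("aoi"))

# A simple overall contrast between the primary and secondary groups
# (the study reports fewer fixations to fear/pain in the primary group).
primary = data.loc[data.group == "primary", "fixations"]
secondary = data.loc[data.group == "secondary", "fixations"]
print(stats.ttest_ind(primary, secondary, equal_var=False))
```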





Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1199
Author(s):  
Seho Park ◽  
Kunyoung Lee ◽  
Jae-A Lim ◽  
Hyunwoong Ko ◽  
Taehoon Kim ◽  
...  

Research on emotion recognition from facial expressions has found evidence of different muscle movements between genuine and posed smiles. To further confirm the discrete movement intensities of each facial segment, we explored differences in facial expressions between spontaneous and posed smiles with three-dimensional facial landmarks. Advanced machine analysis was adopted to measure changes in the dynamics of 68 segmented facial regions. A total of 57 normal adults (19 men, 38 women) who displayed adequate posed and spontaneous facial expressions of happiness were included in the analyses. The results indicate that spontaneous smiles have higher intensities in the upper face than in the lower face, whereas posed smiles showed higher intensities in the lower part of the face. Furthermore, the 3D facial landmark technique revealed that the left eyebrow displayed stronger intensity during spontaneous smiles than the right eyebrow. These findings suggest a potential application of landmark-based emotion recognition: spontaneous smiles can be distinguished from posed smiles by measuring the relative intensities of the upper and lower face, with attention to left-sided asymmetry in the upper region.
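The relative-intensity comparison can be sketched as follows; the landmark index ranges follow one common 68-point convention and the frames are simulated, so this illustrates the idea rather than the authors' pipeline.

```python
import numpy as np

# Given 3D landmark coordinates for a neutral frame and a smile-apex frame
# (68 points each), compare mean displacement of upper- vs lower-face regions.
UPPER = np.r_[17:48]                      # eyebrows and eyes (illustrative ranges)
LOWER = np.r_[48:68]                      # mouth region
LEFT_BROW, RIGHT_BROW = np.r_[22:27], np.r_[17:22]

def region_intensity(neutral: np.ndarray, apex: np.ndarray, idx: np.ndarray) -> float:
    """Mean Euclidean displacement of the selected landmarks between frames."""
    return float(np.linalg.norm(apex[idx] - neutral[idx], axis=1).mean())

rng = np.random.default_rng(0)
neutral = rng.normal(size=(68, 3))        # stand-in for tracked landmarks
apex = neutral + rng.normal(scale=0.05, size=(68, 3))

upper, lower = region_intensity(neutral, apex, UPPER), region_intensity(neutral, apex, LOWER)
left, right = region_intensity(neutral, apex, LEFT_BROW), region_intensity(neutral, apex, RIGHT_BROW)

# Per the paper's finding, a spontaneous smile would be expected to show
# upper > lower intensity and a stronger left eyebrow; posed smiles the reverse.
print(f"upper={upper:.3f} lower={lower:.3f} left_brow={left:.3f} right_brow={right:.3f}")
```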



Author(s):  
Stephen Karungaru ◽  
Takuya Akashi ◽  
Minoru Fukumi ◽  
Norio Akamatsu

In this paper, we propose a fully automatic, real-time, single-image face gesture simulation using image morphing. Given a single image of a subject, we create several facial expressions of the face by morphing the image based on prior information stored in a data bank. The process involves the automatic detection of control points on both the target image and the source data. The source data is a sequence of frames containing the desired facial expressions. A face detection neural network and a lip contour detector using edges and the SNAKES algorithm are employed to detect the face position and features. Five control points and the lip contour, for both the source and target images, are then extracted based on the facial features. A triangulation method is then used to match and warp the source image to the target image using the control points. In this experiment, using one expressionless face portrait, we create an animation that makes it appear as if the subject is pronouncing the five Japanese vowels. The final results show the effectiveness of our method.
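The core triangle-warp step can be sketched with OpenCV as below; the file names, control-point coordinates, and single-triangle scope are illustrative assumptions, not the paper's full morphing pipeline.

```python
import cv2
import numpy as np

# One triangle of the source face is affinely warped onto the matching
# triangle of the target configuration; repeating this over all triangles
# of the control-point mesh produces the morphed expression.
src = cv2.imread("neutral_face.jpg")
h, w = src.shape[:2]

src_tri = np.float32([[120, 200], [180, 200], [150, 260]])   # e.g. around the lips
dst_tri = np.float32([[120, 205], [180, 205], [150, 275]])   # lips opened for a vowel

# Affine map taking the source triangle onto the target triangle.
M = cv2.getAffineTransform(src_tri, dst_tri)
warped = cv2.warpAffine(src, M, (w, h), flags=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_REFLECT)

# Composite only inside the target triangle, so neighbouring triangles
# can be warped independently and tiled into the final frame.
mask = np.zeros((h, w), dtype=np.uint8)
cv2.fillConvexPoly(mask, dst_tri.astype(np.int32), 255)
out = src.copy()
out[mask == 255] = warped[mask == 255]
cv2.imwrite("morphed_frame.jpg", out)
```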



2002 ◽  
Vol 14 (8) ◽  
pp. 1264-1274 ◽  
Author(s):  
Ralph Adolphs ◽  
Simon Baron-Cohen ◽  
Daniel Tranel

Lesion, functional imaging, and single-unit studies in human and nonhuman animals have demonstrated a role for the amygdala in processing stimuli with emotional and social significance. We investigated the recognition of a wide variety of facial expressions, including basic emotions (e.g., happiness, anger) and social emotions (e.g., guilt, admiration, flirtatiousness). Prior findings with a standardized set of stimuli indicated that recognition of social emotions can be signaled by the eye region of the face and is disproportionately impaired in autism (Baron-Cohen, Wheelwright, & Jolliffe, 1997). To test the hypothesis that the recognition of social emotions depends on the amygdala, we administered the same stimuli to 30 subjects with unilateral amygdala damage (16 left, 14 right), 2 with bilateral amygdala damage, 47 brain-damaged controls, and 19 normal controls. Compared with controls, subjects with unilateral or bilateral amygdala damage were impaired when recognizing social emotions; moreover, they were more impaired in recognition of social emotions than in recognition of basic emotions, and, like previously described patients with autism, they were impaired also when asked to recognize social emotions from the eye region of the face alone. The findings suggest that the human amygdala is relatively specialized to process stimuli with complex social significance. The results also provide further support for the idea that some of the impairments in social cognition seen in patients with autism may result from dysfunction of the amygdala.



Emotion recognition is of growing significance. Among the many ways to perform it, facial expression detection is attractive because expressions are a spontaneous reflection of mental state rather than a conscious effort. Emotions often shape our choices, actions, and perceptions. Happiness, sadness, fear, disgust, anger, neutrality, and surprise are the seven basic emotions humans express most frequently. In this era of automation and human-computer interaction, making machines detect emotions reliably is a difficult task: colour, orientation, lighting, and posture all strongly affect how well a facial expression can be detected. The movements associated with the eyes, nose, lips, and other features therefore play a major role in differentiating facial features, which are then classified against the trained data. In this paper, we construct a Convolutional Neural Network (CNN) model and recognise the different emotions in a particular dataset. We evaluate the accuracy of the model, with the main aim of minimising the loss. We use the Adam optimizer, sparse categorical cross-entropy as the loss function, and softmax as the output activation. The results are quite accurate and can serve as a basis for further research in this field.
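A minimal sketch of the kind of CNN classifier described above, compiled with the Adam optimizer, sparse categorical cross-entropy, and a softmax output; the architecture, input size, and dataset handling are assumptions.

```python
import tensorflow as tf

# Seven basic emotions: happiness, sadness, fear, disgust, anger, neutral, surprise.
NUM_EMOTIONS = 7

# Assumed input: 48x48 grayscale face crops with integer labels 0-6.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would look like (arrays of images and integer labels assumed):
# history = model.fit(train_images, train_labels,
#                     validation_data=(val_images, val_labels),
#                     epochs=30, batch_size=64)
model.summary()
```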


