facial information
Recently Published Documents


TOTAL DOCUMENTS: 154 (FIVE YEARS: 46)

H-INDEX: 18 (FIVE YEARS: 3)

2022 ◽ Vol 16 (1) ◽ pp. 0-0

Facial recognition systems (FRS) use advanced computing to capture facial information and compare it against proprietary databases for validation. The emergence of data-capturing intermediaries and open-access image repositories has compounded the need for a holistic perspective on the privacy and security challenges associated with FRS. The study presents the results of a bibliometric analysis of the privacy, ethical, and security aspects of FRS, using bibliometric performance analysis to gauge the level of academic discussion on the topic. The results of a bibliographic coupling analysis conducted to identify research hotspots are also presented, together with a systematic literature review of 148 publications distributed across seven themes. Both the bibliometric and the systematic analysis showed that privacy and security in FRS require a holistic perspective that cuts across privacy, ethical, security, legal, policy, and technological aspects.
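The abstract does not reproduce the analysis itself; for readers unfamiliar with bibliographic coupling, the core computation is simply counting shared references between pairs of publications. A minimal Python sketch under that reading (all publication and reference IDs are hypothetical):

```python
from itertools import combinations

# Hypothetical reference lists keyed by publication ID; in a real study
# these would come from Scopus/Web of Science exports.
references = {
    "pub_A": {"ref1", "ref2", "ref3"},
    "pub_B": {"ref2", "ref3", "ref4"},
    "pub_C": {"ref5"},
}

# Bibliographic coupling strength = number of references two papers share;
# clusters of strongly coupled papers indicate research hotspots.
coupling = {
    (a, b): len(references[a] & references[b])
    for a, b in combinations(references, 2)
}

for pair, strength in sorted(coupling.items(), key=lambda kv: -kv[1]):
    if strength:
        print(pair, strength)  # ('pub_A', 'pub_B') 2
```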


2021 ◽ Vol 2021 ◽ pp. 1-11
Author(s): Fang Sun, Niuniu Zhang, Pan Xu, Zengren Song

In recent years, despite its wide use in various fields, deepfake technology has been abused to generate hazardous content such as fake videos, rumors, and fake news by manipulating or replacing the facial information of the original sources, and it thus poses serious security threats to society. Facing the continuous evolution of deepfakes, research on active detection and prevention technology has become particularly important. In this paper, we propose a new deepfake detection method based on cross-domain fusion which, building on traditional spatial-domain features, fuses in edge geometric features from the frequency domain and thereby achieves considerable improvements in classification accuracy. Further evaluations of this method were performed on publicly available deepfake datasets, and the results show that our method is effective, particularly on the Meso-4 DeepFake Database.
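The paper's exact architecture is not given in the abstract; the following is a rough sketch of the cross-domain idea it describes, concatenating spatial-domain features with frequency-domain edge features obtained by high-pass filtering the image's Fourier spectrum. The cutoff, image size, and downstream classifier are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def frequency_edge_features(gray: np.ndarray, cutoff: int = 8) -> np.ndarray:
    """High-pass filter in the frequency domain; the surviving high
    frequencies carry edge/geometry information."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    f[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0  # suppress low freqs
    edges = np.abs(np.fft.ifft2(np.fft.ifftshift(f)))
    return edges.ravel()

def fused_features(gray: np.ndarray) -> np.ndarray:
    """Cross-domain fusion: spatial features concatenated with
    frequency-domain edge features before classification."""
    spatial = gray.ravel().astype(float)
    return np.concatenate([spatial, frequency_edge_features(gray)])

# A classifier (e.g., a CNN or SVM) would then be trained on these fused
# vectors to separate real frames from deepfakes.
face = np.random.rand(64, 64)  # stand-in for a preprocessed face crop
print(fused_features(face).shape)  # (8192,)
```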


2021 ◽ Vol 12
Author(s): Simon Faghel-Soubeyrand, Juliane A. Kloess, Frédéric Gosselin, Ian Charest, Jessica Woodhams

Knowing how humans differentiate children from adults has useful implications in many areas of both forensic and cognitive psychology. Yet how we extract age from faces has been surprisingly underexplored in both disciplines. Here, we used a novel data-driven experimental technique to objectively measure the facial features human observers use to categorise child and adult faces. Relying on more than 35,000 trials, we used a reverse correlation technique that enabled us to reveal how specific features known to be important in face perception – position, spatial frequency (SF), and orientation – are associated with accurate child and adult discrimination. This showed that human observers relied on evidence in the nasal bone and eyebrow area for accurate adult categorisation, while they relied on the eye and jawline area to accurately categorise child faces. For orientation structure, only facial information of vertical orientation was linked to adult-face categorisation, while features of horizontal and, to a lesser extent, oblique orientations were more diagnostic of a child face. Finally, we found that SF diagnosticity showed a U-shaped pattern for face-age categorisation, with information in low and high SFs being diagnostic of child faces, and mid SFs being diagnostic of adult faces. Through this first characterisation of the facial features used in face-age categorisation, we show that information found to be important in psychophysical studies of face perception in general (i.e., the eye area, horizontals, and mid-level SFs) is crucial in the practical context of face-age categorisation, and we present data-driven procedures through which face-age classification training could be implemented for real-world challenges.
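The abstract does not detail the reverse-correlation computation; in a typical implementation, the per-trial noise fields are correlated with response accuracy so that diagnostic pixels emerge as a classification image. A minimal sketch under that assumption, with trial counts and image size scaled down from the study's ~35,000 trials for the demo:

```python
import numpy as np

def classification_image(noise_fields: np.ndarray,
                         accuracies: np.ndarray) -> np.ndarray:
    """Reverse correlation: covary each noise pixel with behavioural
    accuracy so that diagnostic face regions emerge.

    noise_fields : (n_trials, h, w) noise added to the base face per trial
    accuracies   : (n_trials,) 1 = correct categorisation, 0 = incorrect
    """
    z = (accuracies - accuracies.mean()) / accuracies.std()
    # Sum the noise fields weighted by z-scored accuracy over trials.
    return np.tensordot(z, noise_fields, axes=1) / len(z)

rng = np.random.default_rng(0)
noise = rng.standard_normal((1000, 64, 64))   # far fewer trials than the study
acc = rng.integers(0, 2, 1000).astype(float)  # stand-in behavioural responses
print(classification_image(noise, acc).shape)  # (64, 64)
```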


Sensors ◽ 2021 ◽ Vol 21 (22) ◽ pp. 7498
Author(s): Taejae Jeon, Han Byeol Bae, Yongju Lee, Sungjun Jang, Sangyoun Lee

In recent times, as interest in stress control has increased, many studies on stress recognition have been conducted. Several studies have been based on physiological signals, but the disadvantage of this strategy is that it requires physiological-signal-acquisition devices. Another strategy employs facial-image-based stress-recognition methods, which do not require such devices but have predominantly used handcrafted features with low discriminating power. We propose a deep-learning-based stress-recognition method using facial images to address these challenges. Given that deep-learning methods require extensive data, we constructed a large-capacity image database for stress recognition. Furthermore, we used temporal attention, which assigns a high weight to frames that are highly related to stress, as well as spatial attention, which assigns a high weight to regions that are highly related to stress. We also supplemented the image-only network with a second network that takes as input facial landmark information closely related to stress. Experimental results on our newly constructed database indicated that the proposed method outperforms contemporary deep-learning-based recognition methods.
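The network details are not given in the abstract; as a rough illustration, temporal attention amounts to weighting per-frame features by learned stress-relevance scores before pooling over time. A minimal NumPy sketch, where the dimensions and scores are placeholders (in the paper the scores would come from a trained subnetwork):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(frame_features: np.ndarray,
                       scores: np.ndarray) -> np.ndarray:
    """Weight each frame's feature vector by its stress-relevance score,
    then pool over time to get one embedding per clip.

    frame_features : (n_frames, d) per-frame embeddings
    scores         : (n_frames,) unnormalised relevance scores
    """
    w = softmax(scores)  # high weight = stress-related frame
    return (w[:, None] * frame_features).sum(axis=0)

feats = np.random.rand(30, 256)   # 30 frames, 256-d embeddings (illustrative)
scores = np.random.rand(30)       # stand-in for subnetwork outputs
print(temporal_attention(feats, scores).shape)  # (256,)
```

Spatial attention follows the same weighting idea, applied over image regions within a frame rather than over frames.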


2021 ◽ pp. 174702182110416
Author(s): Luc Boutsen, Nathan A Pearson, Martin Jüttner

Facial disfigurements can influence how observers attend to and interact with the person, leading to disease-avoidance behaviour and emotions (disgust, threat, fear of contagion). However, it is unclear whether this behaviour is reflected in the effect of the facial stigma on attention and on the perceptual encoding of facial information. We addressed this question by measuring, in a mixed antisaccade task, observers' speed and accuracy in orienting visual attention towards or away from peripherally presented upright and inverted unfamiliar faces that had either a realistic-looking disease-signalling feature (a skin discoloration), a non-disease-signalling control feature, or no added feature. The presence of a disfiguring or control feature did not influence the orienting of attention (in terms of saccadic latency) towards upright faces, suggesting that avoidance responses towards facial stigma do not occur during covert attention. However, disfiguring and control features significantly reduced the effect of stimulus inversion on saccadic latency, suggesting an impact on the holistic processing of facial information. The implications of these findings for the encoding and appraisal of facial disfigurements are discussed.
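For readers unfamiliar with the measure, the inversion effect the authors refer to is the increase in saccadic latency for inverted relative to upright faces, computed per feature condition. A toy illustration of the computation (the latencies below are invented solely to show the arithmetic, not the study's data):

```python
import pandas as pd

# Fabricated trial-level data: feature condition, face orientation,
# and saccadic latency in milliseconds.
trials = pd.DataFrame({
    "feature": ["disfiguring", "control", "none"] * 4,
    "orientation": ["upright"] * 6 + ["inverted"] * 6,
    "latency_ms": [182, 185, 180, 179, 184, 181,
                   205, 204, 215, 207, 206, 214],
})

# Inversion effect = mean inverted latency minus mean upright latency,
# computed separately for each added-feature condition.
means = trials.groupby(["feature", "orientation"])["latency_ms"].mean().unstack()
print(means["inverted"] - means["upright"])  # smaller effect = less holistic
```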


2021
Author(s): Simon Faghel-Soubeyrand, Juliane A. Kloess, Frédéric Gosselin, Ian Charest, Jessica Woodhams

Knowing how humans differentiate children from adults has useful implications in many areas of both forensic and cognitive psychology. Yet how we extract age from faces has been surprisingly underexplored in both disciplines. Here, we used a novel data-driven experimental technique to objectively measure the facial features human observers use to categorise child and adult faces. Relying on more than 35,000 trials, we used a reverse correlation technique that enabled us to reveal how specific features known to be important in face perception – position, spatial frequency (granularity), and orientation – are associated with accurate child and adult discrimination. This showed that human observers relied on evidence in the nasal bone and eyebrow area for accurate adult categorisation, while they relied on the eye and jawline area to accurately categorise child faces. For orientation structure, only facial information of vertical orientation was linked to adult-face categorisation, while features of horizontal and, to a lesser extent, oblique orientations were more diagnostic of a child face. Finally, we found that spatial-frequency (SF) diagnosticity showed a U-shaped pattern for face-age categorisation, with facial information in low and high spatial frequencies being diagnostic of child faces, and mid spatial frequencies being diagnostic of adult faces. Through this first characterisation of the facial features used in face-age categorisation, we show that face information found to be important in psychophysical studies of face perception in general (i.e., the eye area, the horizontals, and mid-level SFs) is crucial in the practical context of face-age categorisation, and we present data-driven procedures through which face-age classification training could be implemented for real-world challenges.


2021 ◽ Vol 1 ◽ pp. 190-197
Author(s): Wilma Latuny

Most studies of facial attractiveness have relied on attractiveness judged from photographs rather than video clips; only a few have combined images and video sequences as stimuli. In order to determine static and dynamic cues to male attractiveness, we performed behavioural and computational analyses of the Mr. World 2014 contestants. We asked 365 participants to assess the attractiveness of images or video sequences (thin slices) taken from the contestants' profile videos. Each participant rated attractiveness on a 7-point scale, ranging from very unattractive to very attractive. In addition, we performed computational analyses of the landmark representations of faces in images and videos to determine which types of static and dynamic facial information predict the attractiveness ratings. The behavioural study revealed that (1) the attractiveness assessments of images and video sequences are highly correlated, and (2) the attractiveness assessment of videos was on average 0.25 points above that of images. The computational study showed (i) that for both images and video sequences, three established measures of attractiveness correlate with the attractiveness ratings, and (ii) that mouth movements correlate negatively with the ratings. The conclusion of the study is that thin slices of dynamic facial expression contribute to male attractiveness in two ways: presenting a male face dynamically leads to a slight increase in attractiveness ratings, while mouth movements correlate negatively with those ratings.
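The computational pipeline is not spelled out in the abstract; one plausible reading is that per-clip mouth movement is quantified from landmark trajectories and then correlated with the ratings. A hypothetical sketch along those lines (the 68-point landmark scheme with mouth indices 48-67 is an assumption, and all data below are randomly generated):

```python
import numpy as np

def mouth_movement(landmarks: np.ndarray, mouth_idx: slice) -> float:
    """Mean frame-to-frame displacement of mouth landmarks in a clip.

    landmarks : (n_frames, n_points, 2) tracked facial landmark coordinates
    """
    mouth = landmarks[:, mouth_idx, :]
    return float(np.linalg.norm(np.diff(mouth, axis=0), axis=-1).mean())

rng = np.random.default_rng(1)
# One clip per contestant: 50 frames, 68 landmarks per frame.
clips = [rng.random((50, 68, 2)) for _ in range(20)]
ratings = rng.uniform(1, 7, 20)  # 7-point attractiveness ratings
movement = [mouth_movement(c, slice(48, 68)) for c in clips]

# In the study this correlation came out negative: more mouth movement,
# lower rated attractiveness.
r = np.corrcoef(movement, ratings)[0, 1]
print(f"r = {r:.2f}")
```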


2021
Author(s): Victoria Root, Dollyane Muret, Maite Arribas, Elena Amoruso, John Thornton, ...

Cortical remapping after hand loss in the primary somatosensory cortex (S1) is thought to be predominantly dictated by cortical proximity, with adjacent body parts remapping into the deprived area. Traditionally, this remapping has been characterised by changes in the lip representation, which is assumed to be the immediate neighbour of the hand based on electrophysiological research in non-human primates. However, the orientation of facial somatotopy in humans is debated, with contrasting work reporting both an inverted and an upright topography. We aimed to fill this gap in the S1 homunculus by investigating the topographic organisation of the face. Using both univariate and multivariate approaches, we examined the extent of face-to-hand remapping in individuals with a congenital or acquired missing hand (hereafter one-handers and amputees, respectively), relative to two-handed controls. Participants were asked to move different facial parts (forehead, nose, lips, tongue) during fMRI scanning. We first report evidence for an upright facial organisation in all three groups, with the upper face, and not the lips, bordering the hand area. We further found little evidence for remapping of any of the tested facial parts in amputees, with no significant relationship to the chronicity of their phantom limb pain (PLP). In contrast, we found converging evidence for a complex pattern of face remapping in congenital one-handers across all facial parts, where the cortical neighbour, the forehead, shifts away from the deprived hand area, which is instead activated by the lips and the tongue. Together, our findings demonstrate that the face representation in humans is highly plastic, but that this plasticity is restricted by the developmental stage of input deprivation rather than by cortical proximity.


2021
Author(s): Haiyang Jin, Matt Oxner, Paul Michael Corballis, William Hayward

Holistic face processing has been widely implicated in conscious face perception. Yet little is known about whether holistic face processing occurs when faces are processed unconsciously. The present study used the composite face task and continuous flash suppression (CFS) to examine whether the processing of target facial information (the top half of a face) is influenced by irrelevant information (the bottom half) presented unconsciously. Results of multiple experiments showed that the composite effect was observed in both the monocular and CFS conditions, providing the first evidence that the processing of top facial halves is influenced by aligned bottom halves whether those halves are presented consciously or unconsciously. However, much of the composite effect for faces without masking was disrupted when the bottom facial parts were suppressed with CFS. These results suggest that holistic face processing can occur unconsciously, but they also highlight the significance of holistic processing of consciously presented faces.
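For context, the composite effect is conventionally quantified as an alignment-by-congruency interaction: the congruency cost on "same" judgements should be larger for aligned than for misaligned faces. A toy computation with made-up accuracies (not the study's data):

```python
import pandas as pd

# Fabricated per-condition accuracy for "same" top-half judgements.
data = pd.DataFrame({
    "alignment":  ["aligned", "aligned", "misaligned", "misaligned"],
    "congruency": ["congruent", "incongruent", "congruent", "incongruent"],
    "accuracy":   [0.90, 0.72, 0.88, 0.85],
})

acc = data.set_index(["alignment", "congruency"])["accuracy"]

# Composite effect: the congruency cost is larger for aligned faces,
# indicating the irrelevant bottom half was processed holistically.
aligned_cost = acc["aligned", "congruent"] - acc["aligned", "incongruent"]
misaligned_cost = acc["misaligned", "congruent"] - acc["misaligned", "incongruent"]
print(aligned_cost - misaligned_cost)  # > 0 means a composite effect is present
```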


2021 ◽ Vol 17 (2) ◽ pp. 176-192
Author(s): Ronja Mueller, Sandra Utz, Claus-Christian Carbon, Tilo Strobach

Inspecting new visual information in a face can affect the perception of subsequently seen faces. In experimental settings, for example, previously seen manipulated versions of a face can clearly bias a participant's perception of subsequent images: original images are then perceived as manipulated in the direction opposite to the adaptor, while images more similar to the adaptor are perceived as normal or natural. These so-called face adaptation effects can be a useful tool for determining which facial information is processed and stored in facial memory. Most experiments so far have used variants of second-order relational (configural) information (e.g., spatial relations between facial features) when investigating these effects. However, non-configural face information (e.g., color) has largely been neglected in face adaptation research, although this type of information plays an important role in face processing. We therefore investigated adaptation effects of non-configural face information by employing brightness alterations. Our results provide clear evidence for brightness adaptation effects (Experiment 1). These effects are face-specific to some extent (Experiments 2 and 3) and robust over time (Experiments 4 and 5). They support the assumption that non-configural face information is relevant not only to face perception but also to face retention: brightness information appears to be stored in memory and is thus involved even in face recognition.
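The abstract does not specify how the aftereffect was scored; a common approach is to compare judgements of the unaltered face before and after adaptation, with a perceptual shift away from the adaptor indicating adaptation. A toy sketch of that computation with fabricated ratings:

```python
import numpy as np

# Fabricated "perceived brightness of the veridical face" ratings
# (-3 = much too dark ... +3 = much too bright), collected before and
# after adapting to a brightened version of the same face.
baseline = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2])
post_adaptation = np.array([-0.9, -1.1, -0.7, -0.5, -1.0, -0.8])

# Adaptation aftereffect: after exposure to a brighter adaptor, the
# original image is judged as too dark, i.e., perception shifts away
# from the adaptor.
aftereffect = post_adaptation.mean() - baseline.mean()
print(f"aftereffect = {aftereffect:.2f}")  # negative = shift opposite the adaptor
```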

