face analysis
Recently Published Documents


TOTAL DOCUMENTS: 197 (FIVE YEARS: 58)
H-INDEX: 20 (FIVE YEARS: 4)

2022 ◽ Vol 31 (1) ◽ pp. 555-580
Author(s): Rawan Sulaiman Howyan ◽ Emad Sami Jaha

2021 ◽ Vol 2085 (1) ◽ pp. 012013
Author(s): Zhiheng Nie

Abstract: Vibrating screens play an important role in industrial production, and fracture of the cross beam is the main cause of screen shutdowns. To address the fatigue fracture of the beam of a large-scale vibrating screen, a 3.6 × 7.3 m banana vibrating screen was taken as the research object, and joint surface analysis was introduced at the contact surfaces between the parts of the beam to model the stiffness and stress concentration of each joint. This transforms the nonlinear contact problem into a linear one, so that the stress and deformation of the connection area are simulated correctly, providing an accurate basis for further structural optimization.
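
The abstract gives no model details, but the core idea it names, replacing a nonlinear contact joint with an equivalent linear stiffness, can be illustrated with a minimal Python sketch. All stiffness and load values below are hypothetical placeholders, not numbers from the paper; the point is only that once the joint is represented as a linear spring, the assembly reduces to a linear system K u = f.

    import numpy as np

    # Beam segment | joint interface | beam segment, modeled as linear
    # springs in series. k_joint is an assumed linearization of the
    # nonlinear joint-surface contact (placeholder value).
    k_beam = 2.0e8   # N/m, beam segment stiffness (hypothetical)
    k_joint = 5.0e7  # N/m, equivalent linear joint stiffness (hypothetical)
    springs = [k_beam, k_joint, k_beam]

    n = len(springs) + 1             # number of nodes in the chain
    K = np.zeros((n, n))             # global stiffness matrix
    for e, k in enumerate(springs):  # standard 2-node spring assembly
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

    f = np.zeros(n)
    f[-1] = 1.0e4                    # 10 kN axial load at the free end

    u = np.zeros(n)                  # node 0 is fixed; solve K u = f
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    print("nodal displacements [m]:", u)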


2021 ◽ Vol 7 (10) ◽ pp. 204
Author(s): Vatsa S. Patel ◽ Zhongliang Nie ◽ Trung-Nghia Le ◽ Tam V. Nguyen

Face recognition with wearable items is a challenging task in computer vision, and it includes the problem of identifying people who are wearing face masks. Masked face analysis via multi-task learning can effectively improve performance in many fields of face analysis. In this paper, we propose a unified framework for predicting the age, gender, and emotion of people wearing face masks. We first construct FGNET-MASK, a masked face dataset for this problem. We then propose a multi-task deep learning model that takes the masked face data as input and shares its weights across tasks to yield predictions of age, expression, and gender. Extensive experiments show that the proposed framework performs better than existing methods.
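
The abstract describes the model only at a high level, so the following is a minimal PyTorch-style sketch of the general pattern it names: a shared backbone whose weights serve three task-specific heads. The layer sizes, class counts, and equal loss weighting are illustrative assumptions, not the paper's actual architecture.

    import torch
    import torch.nn as nn

    class MultiTaskMaskedFaceNet(nn.Module):
        """Shared backbone with one head per task (illustrative sketch)."""
        def __init__(self, n_expressions=7):
            super().__init__()
            # Shared feature extractor: its weights are reused by all tasks.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.age_head = nn.Linear(64, 1)               # regression
            self.gender_head = nn.Linear(64, 2)            # classification
            self.expression_head = nn.Linear(64, n_expressions)

        def forward(self, x):
            feats = self.backbone(x)
            return (self.age_head(feats), self.gender_head(feats),
                    self.expression_head(feats))

    model = MultiTaskMaskedFaceNet()
    age, gender, expr = model(torch.randn(4, 3, 128, 128))
    # Joint loss: sum of per-task losses (equal weighting is an assumption).
    loss = (nn.functional.mse_loss(age, torch.rand(4, 1))
            + nn.functional.cross_entropy(gender, torch.randint(0, 2, (4,)))
            + nn.functional.cross_entropy(expr, torch.randint(0, 7, (4,))))
    loss.backward()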


Author(s): Diana Kayser ◽ Hauke Egermann ◽ Nick E. Barraclough

Abstract: An abundance of studies on emotional experiences in response to music has been published over the past decades; however, most were carried out in controlled laboratory settings and rely on subjective reports. Facial expressions have occasionally been assessed, but using intrusive methods such as facial electromyography (fEMG). The present study investigated the emotional experiences of fifty participants in a live concert. Our aims were to explore whether automated face analysis could detect facial expressions of emotion in a group of people in an ecologically valid listening context, to determine whether emotions expressed by the music predicted specific facial expressions, and to examine whether facial expressions of emotion could be used to predict subjective ratings of pleasantness and activation. During the concert, participants were filmed, and their facial expressions were subsequently analyzed with automated face analysis software. Self-reports of participants' subjective experience of pleasantness and activation were collected after the concert for all pieces (two happy, two sad). Our results show that the pieces expressing sadness elicited more facial expressions of sadness (compared to happiness), whereas the pieces expressing happiness elicited more facial expressions of happiness (compared to sadness). No differences were found for the other facial expression categories (anger, fear, surprise, disgust, and neutral). Independent of the musical piece or the emotion expressed in the music, facial expressions of happiness predicted ratings of subjectively felt pleasantness, whilst facial expressions of sadness and disgust predicted low and high ratings of subjectively felt activation, respectively. Together, our results show that non-invasive measurements of audience facial expressions in a naturalistic concert setting are indicative of the emotions expressed by the music and of the subjective experiences of the audience members themselves.
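
The abstract does not detail the analysis pipeline; as a rough illustration of its final step, predicting a subjective rating from aggregated facial-expression scores can be framed as a simple regression. Everything below, including the simulated relation between happiness scores and pleasantness, is synthetic placeholder data, not the study's data.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # Stand-in predictors: mean per-piece detection scores for happiness,
    # sadness, and disgust (50 participants x 4 pieces = 200 rows).
    X = rng.random((200, 3))
    # Simulated target mirroring the reported pattern: more happiness in
    # the face -> higher felt pleasantness (placeholder relation).
    pleasantness = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, 200)

    model = LinearRegression().fit(X, pleasantness)
    print("coefficients (happiness, sadness, disgust):", model.coef_)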


Author(s): Petar Jokic ◽ Erfan Azarkhish ◽ Regis Cattenoz ◽ Engin Turetken ◽ Luca Benini ◽ ...

2021 ◽ Vol 40 (1)
Author(s): David Müller ◽ Andreas Ehlen ◽ Bernd Valeske

Abstract: Convolutional neural networks were used for multiclass segmentation in thermal infrared face analysis. The principle is based on existing image-to-image translation approaches, in which each pixel of an image is assigned a class label. We show that established network architectures can be trained for the task of multiclass face analysis in thermal infrared. The class annotations consist of pixel-accurate locations of the different face classes. The trained network can then segment an unknown acquired infrared face image into the defined classes. Furthermore, face classification during live image acquisition is demonstrated, so that the relative temperature of the learned regions can be displayed in real time. This allows pixel-accurate facial temperature analysis, e.g., for detecting infections such as Covid-19. At the same time, our approach offers the advantage of concentrating on the relevant areas of the face: regions irrelevant to the relative temperature calculation, as well as accessories such as glasses, masks, and jewelry, are not considered. A custom database was created to train the network, and the results were quantitatively evaluated with the intersection over union (IoU) metric. The methodology can be transferred to similar quantitative thermography tasks, such as materials characterization or quality control in production.
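
The abstract names intersection over union (IoU) as the evaluation metric; for reference, a per-class IoU between a predicted and a ground-truth label map can be computed as in the sketch below. The four face classes and the random label maps are placeholders for illustration.

    import numpy as np

    def per_class_iou(pred, target, n_classes):
        """IoU for each class between two integer label maps of equal shape."""
        ious = []
        for c in range(n_classes):
            p, t = (pred == c), (target == c)
            union = np.logical_or(p, t).sum()
            inter = np.logical_and(p, t).sum()
            ious.append(inter / union if union else float("nan"))
        return ious

    # Toy example: 4 hypothetical classes (background, skin, glasses, mask).
    rng = np.random.default_rng(0)
    pred = rng.integers(0, 4, size=(64, 64))
    target = rng.integers(0, 4, size=(64, 64))
    print(per_class_iou(pred, target, n_classes=4))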

