THE EFFECT OF LIGHTING ENVIRONMENT ON FACIAL EXPRESSION PERCEPTION IN VIDEO TELECONFERENCING

2021 ◽  
Author(s):  
K. Suzuki ◽  
Y. Takeuchi ◽  
J. Heo

In this study, we investigated whether manipulating the lighting environment in videoconferencing changes the readability of facial expressions. In the experiment, participants evaluated their impressions of a video that simulated a videoconference. A total of 12 lighting conditions were used, combining three colour temperatures with four lighting directions. Factor analysis identified four factors: "Clarity," "Dynamism," "Naturalness," and "Healthiness." ANOVA showed that placing the lighting in front of the subject was effective for all four factors. In addition, while a lower colour temperature decreased clarity, it improved naturalness and healthiness, and was particularly effective when the lighting was placed in front of the subject.
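For a rough illustration of this kind of analysis (not the study's actual data or code), the sketch below runs a two-way ANOVA over a 3 × 4 design of colour temperature by lighting direction on one factor score. The column names, level labels, and data are synthetic stand-ins.

```python
# A minimal sketch, assuming a long-format table with one row per rating:
# "temp" (3 colour temperatures), "direction" (4 placements), "clarity"
# (a factor score). All values here are synthetic stand-ins.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
temps = ["3000K", "4500K", "6200K"]
directions = ["front", "left", "right", "above"]
rows = [
    {"temp": t, "direction": d,
     "clarity": rng.normal(loc=(1.0 if d == "front" else 0.0))}
    for t in temps for d in directions for _ in range(20)
]
ratings = pd.DataFrame(rows)

# Two-way ANOVA: colour temperature x lighting direction on the score.
model = ols("clarity ~ C(temp) * C(direction)", data=ratings).fit()
print(anova_lm(model, typ=2))  # main effects and interaction
```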

Author(s):  
Ralph Reilly ◽  
Andrew Nyaboga ◽  
Carl Guynes

Facial Information Science is becoming a discipline in its own right, attracting not only computer scientists but also graphic animators and psychologists, all of whom require knowledge of how people make and interpret facial expressions (Zeng, 2009). Computer advancements enhance the ability of researchers to study facial expression. Digitized, computer-displayed faces can now be used in studies. Current advancements facilitate not only the researcher's ability to accurately display information but also the automatic recording of the subject's reactions. With increasing interest in Artificial Intelligence and man-machine communication, what role does the gender of the user play in the design of today's multi-million dollar applications? Does research suggest that men and women respond differently to the "gender" of computer-displayed images? Can this knowledge be used effectively to design applications specifically for use by men or women? This research is an attempt to understand these questions while studying whether automatic, or pre-attentive, processing plays a part in the identification of facial expressions.


2002 ◽  
Vol 14 (8) ◽  
pp. 1158-1173 ◽  
Author(s):  
Matthew N. Dailey ◽  
Garrison W. Cottrell ◽  
Curtis Padgett ◽  
Ralph Adolphs

There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of “categorical perception.” In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, “surprise” expressions lie between “happiness” and “fear” expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.
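For a flavour of this modelling approach (a generic sketch, not the authors' exact architecture or training data), the code below trains a small feed-forward classifier to map face feature vectors to the six basic emotion labels; the softmax-style probability outputs give the graded category responses that both theories make predictions about. The features and labels are random stand-ins.

```python
# A minimal sketch, not the authors' exact model: a small feed-forward
# network mapping face feature vectors to six basic-emotion categories.
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))    # stand-in for extracted face features
y = rng.integers(0, 6, size=600)  # stand-in emotion labels

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Graded outputs: one probability per emotion category, so an ambiguous
# face yields intermediate responses rather than a hard categorical label.
probs = clf.predict_proba(X[:1])[0]
print(dict(zip(EMOTIONS, probs.round(3))))
```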


Author(s):  
Laszlo A. Jeni ◽  
Hideki Hashimoto ◽  
Takashi Kubota

In human-human communication we use verbal, vocal, and non-verbal signals to communicate with others. Facial expressions are a form of non-verbal communication, and recognizing them helps to improve human-machine interaction. This paper proposes a system for pose- and illumination-invariant recognition of facial expressions using near-infrared camera images and precise 3D shape registration. Precise 3D shape information of the human face can be computed by means of Constrained Local Models (CLM), which fit a dense model to an unseen image in an iterative manner. We used a multi-class SVM to classify the acquired 3D shape into different emotion categories. Results surpassed human performance and show pose-invariant performance. Varying lighting conditions can influence the fitting process and reduce recognition precision. We built a near-infrared and visible-light camera array to test the method under different illuminations. Results show that the near-infrared camera configuration is suitable for robust and reliable facial expression recognition under changing lighting conditions.
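As a loose illustration of the classification stage only (the CLM fitting itself is out of scope here), the sketch below trains a multi-class SVM on flattened vectors of 3D landmark coordinates. The landmark count, feature shapes, and labels are assumptions made up for the example.

```python
# A minimal sketch of the classification stage, assuming CLM fitting has
# already produced one flattened vector of 3D landmark coordinates per face.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 68 * 3))  # 68 hypothetical landmarks x (x, y, z)
y = rng.integers(0, 6, size=300)    # stand-in emotion labels

# Multi-class SVM (one-vs-one by default in scikit-learn) with an RBF kernel.
svm = SVC(kernel="rbf", C=1.0, gamma="scale")
svm.fit(X, y)
print(EMOTIONS[svm.predict(X[:1])[0]])
```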


2021 ◽  
pp. 027623742199469
Author(s):  
John W. Mullennix ◽  
Amber Hedzik ◽  
Amanda Wolfe ◽  
Lauren Amann ◽  
Bethany Breshears ◽  
...  

The present study examined the effects of affective context on evaluation of facial expression of emotion in portrait paintings. Pleasant, unpleasant, and neutral context photographs were presented prior to target portrait paintings. The participants’ task was to view the portrait painting and choose an emotion label that fit the subject of the painting. The results from Experiment 1 indicated that when preceded by pleasant context, the faces in the portraits were labeled as happier. When preceded by unpleasant context, they were labeled as less happy, sadder, and more fearful. In Experiment 2, the labeling effects disappeared when context photographs were presented at a subthreshold 20 ms SOA. In both experiments, context affected processing times, with times slower for pleasant context and faster for unpleasant context. The results suggest that the context effects depend on both automatic and controlled processing of affective content contained in context photographs.


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruent) or responding opposite to (incongruent) the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
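For intuition about the drift diffusion component (a generic simulation, not the authors' fitted model), the sketch below accumulates noisy evidence toward one of two response boundaries; a higher drift rate, as might be expected on congruent trials, yields faster and more accurate choices. All parameter values are illustrative.

```python
# A minimal, generic drift-diffusion simulation (not the authors' fitted
# model): noisy evidence accumulates until it hits one of two boundaries.
import numpy as np

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t, evidence > 0  # reaction time, and whether upper bound was hit

rng = np.random.default_rng(2)
# Illustrative drift rates: higher on congruent, lower on incongruent trials.
for label, drift in [("congruent", 1.5), ("incongruent", 0.5)]:
    trials = [simulate_ddm(drift, rng=rng) for _ in range(500)]
    rts = [t for t, _ in trials]
    acc = np.mean([correct for _, correct in trials])
    print(f"{label}: mean RT = {np.mean(rts):.2f} s, accuracy = {acc:.2f}")
```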


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (which were used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.
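To make the BCE logic concrete, the sketch below summarises it the way such dual-task data are typically scored (a generic illustration, not the authors' analysis): if the Task 2 face is processed automatically, Task 1 responses are faster when that face corresponds to the Task 1 response, so the BCE is the mean RT difference between non-corresponding and corresponding trials. The column names and values are hypothetical.

```python
# A generic illustration of computing a backward correspondence effect (BCE):
# Task 1 reaction times, split by whether the Task 2 stimulus corresponds.
import pandas as pd

# Hypothetical trial-level data: Task 1 RT (ms) and correspondence flag.
trials = pd.DataFrame({
    "task1_rt": [612, 598, 641, 575, 630, 589, 655, 601],
    "corresponding": [True, True, False, True, False, True, False, False],
})

means = trials.groupby("corresponding")["task1_rt"].mean()
bce = means[False] - means[True]  # positive BCE: corresponding trials faster
print(f"BCE = {bce:.1f} ms")
```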


Children ◽  
2021 ◽  
Vol 8 (8) ◽  
pp. 666
Author(s):  
Javier Cachón-Zagalaz ◽  
Déborah Sanabrias-Moreno ◽  
María Sánchez-Zafra ◽  
Amador Jesús Lara-Sánchez ◽  
María Luisa Zagalaz-Sánchez

Physical Education is one of the subjects that arouses the most interest in children. The aim of this study is to find out the opinion that primary school students have of their Physical Education classes. Drawings from a sample of 62 students, aged between six and eight years, from an educational centre in the city of Jaén were analysed. The results show that the larger elements of the drawings correspond to the aspects the children wish to emphasise. The subject is taught regularly in the centre's sports pavilion, with frequent use of materials such as sticks, hoops, and balls. Cheerful colours are used, reflecting the children's enthusiasm for the subject. The smiling facial expressions represent the schoolchildren's interest in the subject. The most popular games or sports are basketball and pichi, both of which are collective games.


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words "happy," "sad," and "city." Future thinking was video recorded and analysed with facial analysis software to classify participants' facial expressions (happy, sad, angry, surprised, scared, disgusted, or neutral) as neutral or emotional. Analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word "happy" than by "sad" or "city." In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word "sad" than by "happy" or "city." Higher levels of neutral facial expressions were observed during future thinking cued by the word "city" than by "happy" or "sad." In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Taken together, emotional future thinking, at least for future scenarios cued by "happy" and "sad," seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.
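As a rough sketch of this kind of scoring (generic, not the specific software used in the study), the code below aggregates per-frame expression labels into the proportion of each expression per cue condition. The frame labels are invented for the example.

```python
# A generic sketch of scoring video-coded expressions (not the specific
# software used in the study): per-frame labels -> proportions per cue.
import pandas as pd

# Hypothetical per-frame output: one expression label per analysed frame.
frames = pd.DataFrame({
    "cue":   ["happy"] * 4 + ["sad"] * 4 + ["city"] * 4,
    "label": ["happy", "neutral", "happy", "neutral",
              "sad", "sad", "neutral", "neutral",
              "neutral", "neutral", "neutral", "happy"],
})

# Proportion of each facial expression within each cue condition.
proportions = (frames.groupby("cue")["label"]
                     .value_counts(normalize=True)
                     .unstack(fill_value=0))
print(proportions)
```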


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2942
Author(s):  
Alessandro Leone ◽  
Andrea Caroppo ◽  
Andrea Manni ◽  
Pietro Siciliano

Drivers’ road rage is among the main causes of road accidents, contributing to deaths and injuries worldwide each year. In this context, it is important to implement systems that can supervise drivers by monitoring their level of concentration during the entire driving process. In this paper, a module for an Advanced Driver Assistance System is used to minimise accidents caused by road rage by alerting the driver when a predetermined level of rage is reached, thus increasing transportation safety. To create a system that is independent of both the orientation of the driver’s face and the lighting conditions of the cabin, the proposed algorithmic pipeline integrates face detection and facial expression classification algorithms capable of handling such non-ideal situations. Moreover, the driver’s road rage is estimated through a decision-making strategy based on the temporal consistency of facial expressions classified as “anger” and “disgust”. Several experiments were executed to assess performance in a real context and on three standard benchmark datasets, two of which contain non-frontal-view facial expressions and one of which includes facial expressions recorded from participants during driving. The results show that the proposed module is suitable for road rage estimation through facial expression recognition under multi-pose and changing lighting conditions, with recognition rates that achieve state-of-the-art results on the selected datasets.
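A minimal sketch of such a temporal-consistency rule follows (an assumption about the general idea, not the paper's exact decision logic): raise an alert only when "anger" or "disgust" classifications persist across enough consecutive frames, which filters out one-off misclassifications. The window length is a hypothetical parameter.

```python
# A minimal sketch of a temporal-consistency rule (not the paper's exact
# logic): alert only if "anger"/"disgust" persist over N consecutive frames.
RAGE_LABELS = {"anger", "disgust"}
WINDOW = 15  # hypothetical: ~0.5 s of consecutive frames at 30 fps

def monitor(frame_labels, window=WINDOW):
    """Yield True for each frame at which a rage alert should fire."""
    streak = 0
    for label in frame_labels:
        streak = streak + 1 if label in RAGE_LABELS else 0
        yield streak >= window

# Example: a brief flicker of "anger" does not trigger the alert,
# but a sustained run of rage-related classifications does.
labels = (["neutral"] * 10 + ["anger"] * 3 + ["neutral"] * 5
          + ["anger", "disgust"] * 10)
alerts = list(monitor(labels))
print("alert fired:", any(alerts),
      "at frame", alerts.index(True) if any(alerts) else None)
```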

