The impact of weak ground truth and facial expressiveness on affect detection accuracy from time-continuous videos of facial expressions

2013 ◽  
Vol 249 ◽  
pp. 13-23 ◽  
Author(s):  
Marko Tkalčič ◽  
Ante Odić ◽  
Andrej Košir


2020 ◽
Vol 51 (5) ◽  
pp. 685-711
Author(s):  
Alexandra Sierra Rativa ◽  
Marie Postma ◽  
Menno Van Zaanen

Background. Empathic interactions with animated game characters can help improve user experience, increase immersion, and achieve better affective outcomes related to the use of the game. Method. We used a 2x2 between-participants design plus a control condition to analyze the impact of the visual appearance of a virtual game character on empathy and immersion. The four experimental conditions of game character appearance were: natural (virtual animal) with expressiveness (emotional facial expressions), natural (virtual animal) without expressiveness (no emotional facial expressions), artificial (virtual robotic animal) with expressiveness (emotional facial expressions), and artificial (virtual robotic animal) without expressiveness (no emotional facial expressions). The control condition contained a baseline amorphous game character. 100 participants between 18 and 29 years old (M=22.47) were randomly assigned to one of the five experimental groups. Participants originated from several countries: Aruba (1), China (1), Colombia (3), Finland (1), France (1), Germany (1), Greece (2), Iceland (1), India (1), Iran (1), Ireland (1), Italy (3), Jamaica (1), Latvia (1), Morocco (3), Netherlands (70), Poland (1), Romania (2), Spain (1), Thailand (1), Turkey (1), United States (1), and Vietnam (1). Results. We found that congruence between the appearance and facial expressions of virtual animals (artificial + non-expressive and natural + expressive) leads to higher levels of self-reported situational empathy and immersion of players in a simulated environment compared to incongruent appearance and facial expressions. Conclusions. The results of this investigation showed an interaction effect between the artificial/natural body appearance and the facial expressiveness of a virtual character. The evidence from this study suggests that the appearance of the virtual animal has an important influence on user experience.


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3419
Author(s):  
Shan Zhang ◽  
Zihan Yan ◽  
Shardul Sapkota ◽  
Shengdong Zhao ◽  
Wei Tsang Ooi

While numerous studies have explored using various sensing techniques to measure attention states, moment-to-moment measurement of attention fluctuation has remained unavailable. To bridge this gap, we applied a novel paradigm in psychology, the gradual-onset continuous performance task (gradCPT), to collect ground truth for attention states. GradCPT allows for the precise labeling of attention fluctuation on an 800 ms time scale. We then developed a new technique for measuring continuous attention fluctuation, based on a machine learning approach that uses the spectral properties of EEG signals as the main features. We demonstrated that, even using a consumer-grade EEG device, the detection accuracy of moment-to-moment attention fluctuations was 73.49%. Next, we empirically validated our technique in a video learning scenario and found that our technique matched the classification obtained through thought probes, with an average F1 score of 0.77. Our results suggest the effectiveness of using gradCPT as a ground truth labeling method and the feasibility of using consumer-grade EEG devices for continuous attention fluctuation detection.
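The abstract does not detail the feature extraction, but band power computed from a Welch periodogram is a common way to obtain the "spectral properties of EEG signals" it mentions. The sketch below illustrates that step only; the band boundaries, sampling rate, and window length are assumptions for the example, not the authors' settings.

```python
# Hypothetical sketch of spectral band-power feature extraction for one EEG
# channel window; the bands, fs, and window length are illustrative only.
import numpy as np
from scipy.signal import welch

def band_powers(eeg_window, fs=256.0):
    """Mean spectral power in a few standard EEG bands for one window."""
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 256))
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in bands.items()}

rng = np.random.default_rng(0)
window = rng.standard_normal(2048)  # ~8 s of synthetic "EEG" at 256 Hz
print(band_powers(window))
```

Features of this kind, computed per labeled window, would then be fed to a classifier trained against the gradCPT labels.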


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
An Zheng ◽  
Michael Lamkin ◽  
Yutong Qiu ◽  
Kevin Ren ◽  
Alon Goren ◽  
...  

Abstract. Background: A major challenge in evaluating quantitative ChIP-seq analyses, such as peak calling and differential binding, is a lack of reliable ground truth data. Accurate simulation of ChIP-seq data can mitigate this challenge, but existing frameworks are either too cumbersome to apply genome-wide or unable to model a number of important experimental conditions in ChIP-seq. Results: We present ChIPs, a toolkit for rapidly simulating ChIP-seq data using statistical models of key experimental steps. We demonstrate how ChIPs can be used for a range of applications, including benchmarking analysis tools and evaluating the impact of various experimental parameters. ChIPs is implemented as a standalone command-line program written in C++ and is available from https://github.com/gymreklab/chips. Conclusions: ChIPs is an efficient ChIP-seq simulation framework that generates realistic datasets over a flexible range of experimental conditions. It can serve as an important component in various ChIP-seq analyses where ground truth data are needed.


2015 ◽  
Vol 15 (05) ◽  
pp. 1550085 ◽  
Author(s):  
MADHURI TASGAONKAR ◽  
MADHURI KHAMBETE

Diabetes affects the retinal structure of a diabetic patient by generating various lesions. Early detection of these lesions can avoid the loss of vision, and automating the detection process with fundus imaging makes screening easily feasible for large populations. Detection of exudates is significant in diabetic retinopathy (DR) because they are early signs and can cause blindness. Finding the exact location as well as the correct number of exudates plays a vital role in the overall treatment of a patient. This paper presents an algorithm for automatic detection of exudates for DR. The algorithm combines the advantages of supervised and unsupervised techniques. It uses fuzzy C-means (FCM) segmentation at a coarse level and the Mahalanobis metric for finer classification of segmented pixels. The Mahalanobis criterion gives significance to the most relevant features and thus proves to be a better classifier. The results are validated using the DIARETDB0 and DIARETDB1 databases and the ground truth provided with them. This evaluation yielded 95.77% detection accuracy.
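As a rough illustration of the finer classification step, the sketch below assigns a candidate pixel's feature vector to the class (exudate vs. background) with the smaller Mahalanobis distance to that class's estimated distribution. The two-dimensional toy features and class statistics are invented for the example; the paper's actual feature set is not reproduced here.

```python
# Illustrative sketch (not the authors' implementation): Mahalanobis-distance
# classification of a candidate pixel's feature vector.
import numpy as np

def mahalanobis_classify(x, means, inv_covs):
    """Assign x to the class with the smallest squared Mahalanobis distance."""
    dists = []
    for mu, inv_cov in zip(means, inv_covs):
        d = x - mu
        dists.append(float(d @ inv_cov @ d))  # squared Mahalanobis distance
    return int(np.argmin(dists))

# Toy 2-D features (e.g., intensity and local contrast; hypothetical values).
rng = np.random.default_rng(1)
exudate = rng.normal([0.9, 0.7], 0.05, size=(200, 2))
background = rng.normal([0.3, 0.2], 0.05, size=(200, 2))
means = [c.mean(axis=0) for c in (exudate, background)]
inv_covs = [np.linalg.inv(np.cov(c, rowvar=False)) for c in (exudate, background)]

print(mahalanobis_classify(np.array([0.88, 0.72]), means, inv_covs))  # → 0
```

Because the inverse covariance weights each feature by its reliability, features with low within-class variance dominate the decision, which is the sense in which the metric "gives significance to the most relevant features."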


2003 ◽  
Vol 13 ◽  
pp. S296
Author(s):  
P. Shaw ◽  
K. Kucharska-Pietura ◽  
T. Russell ◽  
F. Zelaya ◽  
E. Amaro ◽  
...  

2018 ◽  
Author(s):  
Sanchita Gargya

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT AUTHOR'S REQUEST.] An extensive literature on the influence of emotion on memory asserts that emotional information is remembered better than information lacking emotional content (Kensinger, 2009; Talmi et al., 2007; for review see Hamann, 2001). While decades of research agree on memory advantages for emotional versus neutral information, studies of the impact of emotion on memory for associated details have shown differential effects on associated neutral details (Erk et al., 2003; Righi et al., 2015; Steinmetz et al., 2015). Using emotional-neutral stimulus pairs, the current set of experiments presents novel findings from an aging perspective, systematically exploring the impact of embedded emotional information on the associative memory representation of associated neutral episodic details. Three experiments were conducted. In all three, younger and older participants were shown three types of emotional faces (happy, sad, and neutral) along with names. Experiment 1 investigated whether associative instructions and repetition of face-name pairs promote the formation of implicit emotional face-name associations. In Experiments 2 and 3, using intentional and incidental encoding instructions, respectively, we assessed whether names, shown with different facial expressions at study, can trigger the emotional content of the study episode in the absence of the original emotional context at test. Results indicate that while both younger and older adults integrated names better with happy facial expressions than with sad expressions, older adults failed to show this benefit in the absence of associative encoding instructions.
Overall, these results suggest that happy facial expressions can be implicitly learned alongside, or spill over to, associated neutral episodic details such as names. In older adults, however, this integration occurs only under instructions to form face-name associations.


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3415 ◽  
Author(s):  
Jinpeng Zhang ◽  
Jinming Zhang ◽  
Shan Yu

In the image object detection task, a huge number of candidate boxes are generated to match against a relatively small number of ground-truth boxes, and learning samples are created through this matching. In fact, the vast majority of candidate boxes do not contain valid object instances and should be recognized and rejected during training and evaluation of the network. This leads to a high computational burden and a serious imbalance between object and non-object samples, impeding the algorithm's performance. Here we propose a new heuristic sampling method to generate candidate boxes for two-stage detection algorithms. It is generally applicable to current two-stage detection algorithms and improves their detection performance. Experiments on the COCO dataset showed that, relative to the baseline model, the new method significantly increases detection accuracy and efficiency.
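For context, two-stage detectors typically label candidate boxes by their intersection-over-union (IoU) overlap with ground truth; the imbalance the abstract describes arises because almost all candidates fall below the positive threshold. The following is a minimal sketch of that standard matching criterion, not of the paper's heuristic sampling method; the 0.5 threshold is the conventional default, used here as an assumption.

```python
# Minimal sketch of IoU-based candidate/ground-truth matching. Boxes are
# (x1, y1, x2, y2) tuples; the 0.5 positive threshold is a common default.
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def label_candidates(candidates, gt_boxes, pos_thresh=0.5):
    """Mark each candidate positive if it sufficiently overlaps any ground truth."""
    return [any(iou(c, g) >= pos_thresh for g in gt_boxes) for c in candidates]

gt = [(10, 10, 50, 50)]
cands = [(12, 12, 48, 48), (60, 60, 90, 90)]
print(label_candidates(cands, gt))  # → [True, False]
```

With thousands of candidates per image and only a handful of ground-truth boxes, nearly every entry in the returned list is False, which is precisely the object/non-object imbalance the sampling method targets.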


2014 ◽  
Vol 19 (1) ◽  
pp. 15-22 ◽  
Author(s):  
Anna J Karmann ◽  
Stefan Lautenbacher ◽  
Florian Bauer ◽  
Miriam Kunz

BACKGROUND: Facial responses to pain are believed to be an act of communication and, as such, are likely to be affected by the relationship between sender and receiver. OBJECTIVES: To investigate this effect by examining the impact that variations in communicative relations (from being alone to being with an intimate other) have on the elements of the facial language used to communicate pain (types of facial responses), and on the degree of facial expressiveness. METHODS: Facial responses of 126 healthy participants to phasic heat pain were assessed in three different social situations: alone, but aware of video recording; in the presence of an experimenter; and in the presence of an intimate other. Furthermore, pain catastrophizing and sex (of participant and experimenter) were considered as additional influences. RESULTS: Whereas similar types of facial responses were elicited independent of the relationship between sender and observer, the degree of facial expressiveness varied significantly, with increased expressiveness occurring in the presence of the partner. Interestingly, being with an experimenter decreased facial expressiveness only in women. Pain catastrophizing and the sex of the experimenter exhibited no substantial influence on facial responses. CONCLUSION: Variations in communicative relations had no effect on the elements of the facial pain language. The degree of facial expressiveness, however, was adapted to the relationship between sender and observer. Individuals suppressed their facial communication of pain toward unfamiliar persons, whereas they overtly displayed it in the presence of an intimate other. Furthermore, when confronted with an unfamiliar person, different situational demands appeared to apply to the two sexes.


2011 ◽  
Vol 11 (12) ◽  
pp. 3135-3149 ◽  
Author(s):  
G. Panegrossi ◽  
R. Ferretti ◽  
L. Pulvirenti ◽  
N. Pierdicca

Abstract. The representation of land-atmosphere interactions in weather forecast models has a strong impact on the Planetary Boundary Layer (PBL) and, in turn, on the forecast. Soil moisture is one of the key variables in land surface modelling, and an inadequate initial soil moisture field can introduce major biases in the surface heat and moisture fluxes and have a long-lasting effect on the model behaviour. Detecting the variability of soil characteristics at small scales is particularly important in mesoscale models because of the continued increase of their spatial resolution. In this paper, the high resolution soil moisture field derived from ENVISAT/ASAR observations is used to derive the soil moisture initial condition for the MM5 simulation of the Tanaro flood event of April 2009. The ASAR-derived soil moisture field shows significantly drier conditions compared to the ECMWF analysis. The impact of soil moisture on the forecast has been evaluated in terms of predicted precipitation and rain gauge data available for this event have been used as ground truth. The use of the drier, highly resolved soil moisture content (SMC) shows a significant impact on the precipitation forecast, particularly evident during the early phase of the event. The timing of the onset of the precipitation, as well as the intensity of rainfall and the location of rain/no rain areas, are better predicted. The overall accuracy of the forecast using ASAR SMC data is significantly increased during the first 30 h of simulation. The impact of initial SMC on the precipitation has been related to the change in the water vapour field in the PBL prior to the onset of the precipitation, due to surface evaporation. This study represents a first attempt to establish whether high resolution SAR-based SMC data might be useful for operational use, in anticipation of the launch of the Sentinel-1 satellite.


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Yusra Khalid Bhatti ◽  
Afshan Jamil ◽  
Nudrat Nida ◽  
Muhammad Haroon Yousaf ◽  
Serestina Viriri ◽  
...  

Classroom communication involves teachers' behavior and students' responses. Extensive research has been done on the analysis of students' facial expressions, but the impact of instructors' facial expressions is still an unexplored area of research. Facial expression recognition has the potential to predict the impact of a teacher's emotions in a classroom environment. Intelligent assessment of instructor behavior during lecture delivery not only might improve the learning environment but could also save the time and resources spent on manual assessment strategies. To address the issue of manual assessment, we propose an instructor facial expression recognition approach for the classroom using a feedforward learning model. First, the face is detected from the acquired lecture videos and key frames are selected, discarding all redundant frames for effective high-level feature extraction. Then, deep features are extracted using multiple convolutional neural networks along with parameter tuning and are fed to a classifier. For fast learning and good generalization, a regularized extreme learning machine (RELM) classifier is employed, which classifies five different instructor expressions within the classroom. Experiments are conducted on a newly created instructor facial expression dataset in classroom environments as well as three benchmark facial expression datasets, i.e., Cohn–Kanade, the Japanese Female Facial Expression (JAFFE) dataset, and the Facial Expression Recognition 2013 (FER2013) dataset. Furthermore, the proposed method is compared with state-of-the-art techniques, traditional classifiers, and convolutional neural models. Experimental results indicate significant performance gains on parameters such as accuracy, F1-score, and recall.
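The abstract names a regularized extreme learning machine as the classifier. The sketch below follows the generic RELM recipe from the ELM literature (fixed random hidden-layer weights, ridge-regularized least squares for the output weights); the hidden-layer size, regularization constant, and toy features are assumptions for illustration, not the paper's configuration or its CNN feature pipeline.

```python
# Hedged sketch of a regularized extreme learning machine (RELM) classifier.
# Hidden size, regularization, and the toy data are illustrative assumptions.
import numpy as np

class RELM:
    def __init__(self, n_hidden=64, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y, n_classes):
        # Random input weights are fixed; only output weights are learned.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        T = np.eye(n_classes)[y]  # one-hot targets
        H = self._hidden(X)
        # Ridge-regularized least squares for the output weights beta.
        self.beta = np.linalg.solve(
            H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Toy two-class problem standing in for the deep-feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (50, 4)), rng.normal(1, 0.3, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
clf = RELM().fit(X, y, n_classes=2)
print((clf.predict(X) == y).mean())  # training accuracy on separable toy data
```

Because only the closed-form output weights are trained, fitting reduces to a single linear solve, which is what makes this family of classifiers attractive for "fast learning."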

