Reading Food Experiences from the Face: Effects of Familiarity and Branding of Soy Sauce on Facial Expressions and Video-Based RPPG Heart Rate

Foods ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 1345
Author(s):  
Rene A. de Wijk ◽  
Shota Ushiama ◽  
Meeke Ummels ◽  
Patrick Zimmerman ◽  
Daisuke Kaneko ◽  
...  

Food experiences are driven not only by the food's intrinsic properties, such as its taste, texture, and aroma, but also by extrinsic properties such as visual brand information and the consumers' previous experiences with the foods. Recent developments in automated facial expression analysis and heart rate detection based on skin color changes (remote photoplethysmography, or RPPG) allow food experiences to be monitored from video images of the face. RPPG offers the possibility of large-scale, non-laboratory, web-based testing of food products. In this study, results from the video-based analysis were compared to more conventional tests: valence and arousal scores using emojis, and contact photoplethysmography (PPG) heart rate. Forty participants with varying degrees of familiarity with soy sauce were presented with samples of rice and three commercial soy sauces, with and without brand information. The results showed that (1) liking and arousal were affected primarily by the specific tastes, but not by branding and familiarity; in contrast, facial expressions were affected by branding and familiarity, and to a lesser degree by specific tastes; and (2) RPPG and PPG heart rates both showed effects of branding and familiarity. However, RPPG needs further development because it underestimated heart rate compared to PPG and was less sensitive to changes over time and with activity (viewing of brand information and tasting). In conclusion, this study suggests that the recording of facial expressions and heart rates may no longer be limited to laboratories but can be done remotely using video images, which offers opportunities for large-scale testing in consumer science.
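The core RPPG idea the abstract relies on can be sketched compactly: average a color channel over the facial skin region per frame, then find the dominant frequency in the heart-rate band. This is a minimal illustrative sketch, not the pipeline used in the study; the green-channel choice, sampling rate, and band limits are common assumptions.

```python
# Minimal RPPG sketch: estimate heart rate (BPM) from the mean green-channel
# intensity of facial video frames via a band-limited FFT peak.
import numpy as np

def estimate_heart_rate(green_means, fps, lo_bpm=40.0, hi_bpm=180.0):
    """Return the dominant pulse frequency (in BPM) of a detrended signal."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                       # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))                # magnitude spectrum
    freqs_bpm = np.fft.rfftfreq(len(signal), d=1.0 / fps) * 60.0
    band = (freqs_bpm >= lo_bpm) & (freqs_bpm <= hi_bpm)  # plausible HR band
    peak = np.argmax(spectrum[band])
    return freqs_bpm[band][peak]

# Synthetic 72 BPM pulse sampled at 30 fps for 10 s
fps = 30.0
t = np.arange(0, 10, 1 / fps)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 100.0        # 1.2 Hz = 72 BPM
print(round(estimate_heart_rate(trace, fps)))             # → 72
```

Real pipelines add face tracking, skin-region masking, and bandpass filtering before the spectral step, which is where much of the robustness (and the sensitivity the study found lacking) comes from.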

2021 ◽  
Vol 2 ◽  
Author(s):  
Rene A. de Wijk ◽  
Shota Ushiama ◽  
Meeke J. Ummels ◽  
Patrick H. Zimmerman ◽  
Daisuke Kaneko ◽  
...  

Food experiences can be summarized along two main dimensions, valence and arousal, which can be measured explicitly with subjective ratings or implicitly with physiological and behavioral measures. Food experiences are driven not only by the food's intrinsic properties, such as its taste, texture, and aroma, but also by extrinsic properties such as brand information and the consumers' previous experiences with the foods. In this study, valence and arousal responses to intrinsic and extrinsic properties of soy sauce were measured in consumers who varied in their previous experience with soy sauce, using a combination of explicit (scores and emojis), implicit (heart rate and skin conductance), and behavioral (facial expressions) measures. Forty participants, high- and low-frequency users, were presented with samples of rice and three commercial soy sauces without and with brand information that either matched or did not match the taste of the soy sauce. In general, skin conductance and facial expressions showed relatively low arousal during exposure to the brand name and the lowest arousal during tasting. Heart rate was lowest during exposure to the brand name and increased during tasting, probably as a result of the motor activity of chewing. Furthermore, the results showed that explicit liking and arousal scores were affected primarily by the taste of the specific soy sauce and by the participants' previous experience with soy sauces. These scores were not affected by branding information. In contrast, facial expressions, skin conductance, and heart rate were affected primarily by (1) the participants' level of experience with soy sauce, (2) whether or not branding information was provided, and (3) whether or not the branding information matched the taste.
In conclusion, this study suggests that liking scores may be most sensitive to the food's intrinsic taste properties, whereas implicit measures and facial expressions may be most sensitive to extrinsic properties such as brand information. All measures were affected by the consumers' previous food experiences.


2019 ◽  
Vol 46 (1) ◽  
pp. 51-63 ◽  
Author(s):  
Christopher A. Thorstenson ◽  
Adam D. Pazda ◽  
Andrew J. Elliot

Typical human color vision is trichromatic because we have three distinct classes of photoreceptors. A recent evolutionary account posits that trichromacy facilitates detecting subtle skin color changes, the better to distinguish important social states related to proceptivity, health, and emotion in others. Across two experiments, we manipulated the facial color appearance of images consistent with a skin blood perfusion response and asked participants to evaluate the perceived attractiveness, health, and anger of the face (trichromatic condition). We additionally simulated what these faces would look like under three dichromatic conditions (protanopia, deuteranopia, tritanopia). The results demonstrated that flushed (relative to baseline) faces were perceived as more attractive, healthy, and angry in the trichromatic and tritanopia conditions, but not in the protanopia and deuteranopia conditions. The results provide empirical support for the social perception account of the evolution of trichromatic color vision and lead to systematic predictions of social perception based on ecological social perception theory.


2021 ◽  
Author(s):  
Jianxin Wang ◽  
Craig Poskanzer ◽  
Stefano Anzellotti

Facial expressions are critical in our daily interactions. Studying how humans recognize dynamic facial expressions is an important area of research in social perception, but advancements are hampered by the difficulty of creating well-controlled stimuli. Research on the perception of static faces has made significant progress thanks to techniques that make it possible to generate synthetic face stimuli. However, synthetic dynamic expressions are more difficult to generate; methods that yield realistic dynamics typically rely on the use of infrared markers applied on the face, making it expensive to create datasets that include large numbers of different expressions. In addition, the use of markers might interfere with facial dynamics. In this paper, we contribute a new method to generate large amounts of realistic and well-controlled facial expression videos. We use a deep convolutional neural network with attention and asymmetric loss to extract the dynamics of action units from videos, and demonstrate that this approach outperforms a baseline model based on convolutional neural networks without attention on the same stimuli. Next, we develop a pipeline to use the action unit dynamics to render realistic synthetic videos. This pipeline makes it possible to generate large scale naturalistic and controllable facial expression datasets to facilitate future research in social cognitive science.
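The asymmetric loss mentioned above addresses a practical fact of action-unit labels: most AUs are inactive in most frames, so easy negatives dominate training. A minimal sketch of one common asymmetric formulation is below; the exact loss used in the paper may differ, and the focusing exponents here are illustrative defaults.

```python
# Sketch of an asymmetric multi-label loss for action-unit detection:
# positives and negatives get different focusing exponents, so confident
# easy negatives (the overwhelming majority of AU labels) contribute little.
import numpy as np

def asymmetric_loss(probs, targets, gamma_pos=0.0, gamma_neg=4.0, eps=1e-8):
    """Mean asymmetric focal-style loss over predicted AU probabilities."""
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    targets = np.asarray(targets, dtype=float)
    pos = targets * (1 - probs) ** gamma_pos * np.log(probs)        # active AUs
    neg = (1 - targets) * probs ** gamma_neg * np.log(1 - probs)    # inactive AUs
    return -np.mean(pos + neg)

# An easy negative (p=0.1, label 0) is heavily down-weighted vs. gamma_neg=0
print(asymmetric_loss([0.1], [0]) < asymmetric_loss([0.1], [0], gamma_neg=0.0))
```

With `gamma_neg > gamma_pos`, the gradient budget shifts toward the rare active AUs, which is the stated motivation for asymmetric losses in this setting.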


2018 ◽  
Vol 115 (14) ◽  
pp. 3581-3586 ◽  
Author(s):  
Carlos F. Benitez-Quiroz ◽  
Ramprakash Srinivasan ◽  
Aleix M. Martinez

Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1098
Author(s):  
Mohammad Farukh Hashmi ◽  
B. Kiran Kumar Ashish ◽  
Vivek Sharma ◽  
Avinash G. Keskar ◽  
Neeraj Dhanraj Bokde ◽  
...  

Facial micro-expressions are brief, spontaneous emotions that reflect a person's actual thoughts at that moment. Humans can conceal their emotions to a large extent, but their actual intentions and emotions can be extracted at the micro level. Micro-expressions are organic compared with macro-expressions, which makes them challenging for both humans and machines to identify. In recent years, facial expression detection has been widely used in commercial complexes, hotels, restaurants, psychology, security, offices, and educational institutes. The aim and motivation of this paper are to provide an end-to-end architecture that accurately detects expressions from micro-scale features; a further goal is to analyze which specific parts of the face are crucial for detecting micro-expressions. Many state-of-the-art approaches have been trained on micro facial expressions and compared with our proposed Lossless Attention Residual Network (LARNet). Many CNN-based approaches extract features at the local level, digging deep into the face pixels. In LARNet, the spatial and temporal information extracted from the face is encoded through feature fusion at specific crucial locations, such as the nose, cheek, mouth, and eye regions. LARNet outperforms state-of-the-art methods by a slight margin, accurately detecting facial micro-expressions in real time. Lastly, the proposed LARNet becomes more accurate when trained with more annotated data.
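The region-fusion idea above can be illustrated with a toy sketch: extract features from eye, nose, cheek, and mouth crops separately, then concatenate them. The fractional crop boxes and the mean/std "features" below are placeholder assumptions standing in for LARNet's learned convolutional features.

```python
# Toy sketch of region-based feature fusion: per-region crops of a roughly
# aligned face image yield separate feature vectors that are concatenated.
import numpy as np

# Fractional (top, bottom, left, right) boxes, illustrative only
REGIONS = {
    "eyes":   (0.20, 0.40, 0.10, 0.90),
    "nose":   (0.35, 0.65, 0.35, 0.65),
    "cheeks": (0.45, 0.70, 0.05, 0.95),
    "mouth":  (0.65, 0.90, 0.25, 0.75),
}

def fuse_region_features(face):
    """Concatenate simple per-region statistics of a 2-D grayscale face."""
    h, w = face.shape[:2]
    feats = []
    for top, bot, left, right in REGIONS.values():
        crop = face[int(top * h):int(bot * h), int(left * w):int(right * w)]
        feats.extend([crop.mean(), crop.std()])   # stand-in for CNN features
    return np.array(feats)

print(fuse_region_features(np.zeros((128, 128))).shape)  # → (8,)
```

In the actual network, each region would contribute a learned feature map rather than two scalar statistics, but the fusion structure is the same.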


2013 ◽  
Vol 6 (1) ◽  
pp. 36-43 ◽  
Author(s):  
Ai Chi ◽  
Li Yuwei

A coal body is a type of fractured rock mass in which many cleat fractures have developed. Its mechanical properties vary with the parameters of the coal rock blocks, face cleats, and butt cleats. Based on linear elastic theory and the displacement-equivalence principle, and simplifying the face cleats and butt cleats as multi-bank penetrating and intermittent cracks, a model was established to calculate the elastic modulus and Poisson's ratio of a coal body with cleats. Analysis of the model also yielded the influence of parameter variation in the coal rock blocks, face cleats, and butt cleats on the elastic modulus and Poisson's ratio of the coal body. The results showed that the connectivity rate of the butt cleats and the spacing of the face cleats had a weak influence on the elastic modulus of the coal body. When the inclination of the face cleats was 90°, the elastic modulus of the coal body reached its maximum and equaled the elastic modulus of the coal rock block. When the inclination of the face cleats was 0°, the elastic modulus of the coal body depended exclusively on the elastic modulus of the coal rock block, the normal stiffness of the face cleats, and their spacing. When the spacing of the butt cleats or their connectivity rate was fixed, the Poisson's ratio of the coal body first increased and then decreased with increasing face cleat inclination.
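The 0° limiting case described above reduces to the textbook series-compliance relation for a rock mass with one through-going joint set loaded normal to the joints, 1/E_m = 1/E_r + 1/(k_n·s). The sketch below shows only that simplified relation, with illustrative input values; the paper's combined face/butt cleat model is richer.

```python
# Series-compliance sketch: intact block and one cleat set act as springs in
# series, so compliances (1/E) add. Units: moduli in GPa, stiffness in GPa/m,
# spacing in m. Values below are illustrative, not from the paper.
def coal_mass_modulus(e_block, k_n, spacing):
    """Effective modulus of intact coal plus one cleat set normal to loading."""
    return 1.0 / (1.0 / e_block + 1.0 / (k_n * spacing))

# Very stiff cleats recover the intact modulus, mirroring the 90-degree case
print(coal_mass_modulus(3.0, 1e9, 0.05))   # ~3.0 GPa
# Compliant, closely spaced cleats soften the mass well below e_block
print(coal_mass_modulus(3.0, 10.0, 0.05))  # < 3.0 GPa
```

The reported weak influence of face-cleat spacing on modulus corresponds to the regime where the k_n·s term is large relative to 1/E_r.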


Author(s):  
Manpreet Kaur ◽  
Jasdev Bhatti ◽  
Mohit Kumar Kakkar ◽  
Arun Upmanyu

Introduction: Face detection is used in many different streams, such as video conferencing, human-computer interfaces, and image database management. Therefore, the aim of our paper is to apply Red Green Blue (
Methods: Morphological operations are performed in the face region, with the number of pixels as the proposed parameter to check whether or not an input image contains a face region. Canny edge detection is also used to show the boundaries of a candidate face region; finally, the detected face is indicated by a bounding box around it.
Results: A reliability model has also been proposed for detecting faces in single and multiple images. The experimental results reflect that the proposed algorithm performs very well in each model for detecting faces in single and multiple images, and the reliability model provides the best fit as analyzed through precision and accuracy.
Discussion: The calculated results show that the HSV model works best for single-faced images, whereas the YCbCr and TSL models work best for multiple-faced images. The results evaluated in this paper also provide better testing strategies that help in developing new techniques, leading to an increase in research effectiveness.
Conclusion: The calculated values of all parameters show that the proposed algorithm performs very well in each model for detecting a face with a bounding box in single as well as multiple images. The precision and accuracy of all three models are analyzed through the reliability model. The comparison in this paper reflects that the HSV model works best for single-faced images, whereas the YCbCr and TSL models work best for multiple-faced images.
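The first stage of such color-space pipelines is a per-pixel skin test after converting RGB to the chosen space. A minimal sketch for the HSV case is below; the threshold values are common illustrative choices, not the ones tuned in the paper, which additionally applies morphology, Canny edges, and a bounding box on top of the resulting mask.

```python
# Per-pixel HSV skin test: convert an RGB pixel and apply simple thresholds.
# Thresholds are illustrative assumptions, not the paper's tuned values.
import colorsys

def is_skin_hsv(r, g, b):
    """Classify one RGB pixel (0-255 channels) as skin via HSV thresholds."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # Skin tones cluster at low hue with moderate saturation and brightness
    return h <= 50 / 360.0 and 0.15 <= s <= 0.68 and v >= 0.35

print(is_skin_hsv(220, 170, 140))  # typical skin tone
print(is_skin_hsv(30, 90, 200))    # saturated blue pixel
```

YCbCr and TSL variants have the same shape: a color-space conversion followed by per-channel thresholds, which is why the three models can be compared head to head on the same images.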


Author(s):  
Richard Gowan

During Ban Ki-moon’s tenure, the Security Council was shaken by P5 divisions over Kosovo, Georgia, Libya, Syria, and Ukraine. Yet it also continued to mandate and sustain large-scale peacekeeping operations in Africa, placing major burdens on the UN Secretariat. The chapter will argue that Ban initially took a cautious approach to controversies with the Council, and earned a reputation for excessive passivity in the face of crisis and deference to the United States. The second half of the chapter suggests that Ban shifted to a more activist pressure as his tenure went on, pressing the Council to act in cases including Côte d’Ivoire, Libya, and Syria. The chapter will argue that Ban had only a marginal impact on Council decision-making, even though he made a creditable effort to speak truth to power over cases such as the Central African Republic (CAR), challenging Council members to live up to their responsibilities.


Perception ◽  
2021 ◽  
pp. 030100662110270
Author(s):  
Kennon M. Sheldon ◽  
Ryan Goffredi ◽  
Mike Corcoran

Facial expressions of emotion have important communicative functions. It is likely that mask-wearing during pandemics disrupts these functions, especially for expressions defined by activity in the lower half of the face. We tested this by asking participants to rate both Duchenne smiles (DSs; defined by the mouth and eyes) and non-Duchenne or “social” smiles (SSs; defined by the mouth alone), within masked and unmasked target faces. As hypothesized, masked SSs were rated much lower in “a pleasant social smile” and much higher in “a merely neutral expression,” compared with unmasked SSs. Essentially, masked SSs became nonsmiles. Masked DSs were still rated as very happy and pleasant, although significantly less so than unmasked DSs. Masked DSs and SSs were both rated as displaying more disgust than the unmasked versions.


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and the studies that used facial expressions in profile view employed a between-subjects design or children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and in profile views using a within-subjects experimental design.
Method: The sample comprised 132 Italian university students (88 female, Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RTs) were recorded.
Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than the same emotions presented in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
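The emotion-specific accuracy scores described in the Method section are just per-condition means of correct responses. A minimal sketch of that computation follows; the trial tuples are invented for illustration and do not come from the study's data.

```python
# Per-(emotion, view) recognition accuracy: mean of correct responses,
# computed separately for frontal and profile presentations.
from collections import defaultdict

def accuracy_by_emotion(trials):
    """trials: iterable of (emotion, view, correct) -> {(emotion, view): acc}."""
    totals = defaultdict(lambda: [0, 0])            # (emotion, view) -> [hits, n]
    for emotion, view, correct in trials:
        totals[(emotion, view)][0] += int(correct)
        totals[(emotion, view)][1] += 1
    return {key: hits / n for key, (hits, n) in totals.items()}

trials = [("fear", "frontal", True), ("fear", "frontal", True),
          ("fear", "profile", False), ("fear", "profile", True)]
print(accuracy_by_emotion(trials))
# → {('fear', 'frontal'): 1.0, ('fear', 'profile'): 0.5}
```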

