Somatic Semiotics: Emotion and the Human Face in the Sagas and Þættir of Icelanders

Traditio ◽  
2014 ◽  
Vol 69 ◽  
pp. 125-145
Author(s):  
Kirsten Wolf

The human face has the capacity to generate expressions associated with a wide range of affective states. Despite the fact that there are few words to describe human facial behaviors, the facial muscles allow for more than a thousand different facial appearances. Some examples of feelings that can be expressed are anger, concentration, contempt, excitement, nervousness, and surprise. Regardless of culture or language, the same expressions are associated with the same emotions and vary only in intensity. Using modern psychological analyses as a point of departure, this essay examines descriptions of human facial expressions as well as such bodily “symptoms” as flushing, turning pale, and weeping in Old Norse-Icelandic literature. The aim is to analyze the manner in which facial signs are used as a means of non-verbal communication to convey the impression of an individual's internal state to observers. More specifically, this essay seeks to determine when and why characters in these works are described as expressing particular facial emotions and, especially, the range of emotions expressed. The Sagas and þættir of Icelanders are in the forefront of the analysis and yield well over one hundred references to human facial expression and color. The examples show that through gaze, smiling, weeping, brows that are raised or knitted, and coloration, the Sagas and þættir of Icelanders tell of happiness or amusement, pleasant and unpleasant surprise, fear, anger, rage, sadness, interest, concern, and even mixed emotions for which language has no words. The Sagas and þættir of Icelanders may be reticent in talking about emotions and poor in emotional vocabulary, but this poverty is compensated for by making facial expressions signifiers of emotion. This essay makes clear that the works are less emotionally barren than often supposed. It also shows that our understanding of Old Norse-Icelandic “somatic semiotics” may well depend on the universality of facial expressions and that culture-specific “display rules” or “elicitors” are virtually nonexistent.

2005 ◽  
Vol 16 (3) ◽  
pp. 184-189 ◽  
Author(s):  
Marie L. Smith ◽  
Garrison W. Cottrell ◽  
Frédéric Gosselin ◽  
Philippe G. Schyns

This article examines the human face as a transmitter of expression signals and the brain as a decoder of these expression signals. If the face has evolved to optimize transmission of such signals, the basic facial expressions should have minimal overlap in their information. If the brain has evolved to optimize categorization of expressions, it should be efficient with the information available from the transmitter for the task. In this article, we characterize the information underlying the recognition of the six basic facial expression signals and evaluate how efficiently each expression is decoded by the underlying brain structures.


2017 ◽  
Vol 49 (1) ◽  
pp. 130-148 ◽  
Author(s):  
Xia Fang ◽  
Disa A. Sauter ◽  
Gerben A. Van Kleef

Although perceivers often agree about the primary emotion that is conveyed by a particular expression, observers may concurrently perceive several additional emotions from a given facial expression. In the present research, we compared the perception of two types of nonintended emotions in Chinese and Dutch observers viewing facial expressions: emotions which were morphologically similar to the intended emotion and emotions which were morphologically dissimilar to the intended emotion. Findings were consistent across two studies and showed that (a) morphologically similar emotions were endorsed to a greater extent than dissimilar emotions and (b) Chinese observers endorsed nonintended emotions more than did Dutch observers. Furthermore, the difference between Chinese and Dutch observers was more pronounced for the endorsement of morphologically similar emotions than of dissimilar emotions. We also obtained consistent evidence that Dutch observers endorsed nonintended emotions that were congruent with the preceding expressions to a greater degree. These findings suggest that culture and morphological similarity both influence the extent to which perceivers see several emotions in a facial expression.


2019 ◽  
Vol 44 (1) ◽  
pp. 90-113 ◽  
Author(s):  
Ming-Yi Chen ◽  
Ching-I Teng ◽  
Kuo-Wei Chiou

Purpose: Online reviews are increasingly available for a wide range of products and services in e-commerce. Most consumers rely heavily on online reviews when making purchase decisions, so understanding what makes some online reviews helpful in the eyes of consumers is an important topic. Researchers have demonstrated the benefits to an online retailer of the presence of customer reviews; however, few studies have investigated how images in review content and the facial expressions of reviewers’ avatars influence judgments of online review helpfulness. This study draws on self-construal theory, attribution theory, and affect-as-information theory to empirically test a model of the interaction effects of images in review content and the facial expressions of reviewers’ avatars on online review helpfulness. A further purpose of this paper is to identify causal attribution toward store performance as an underlying mechanism of the above effects.

Design/methodology/approach: This study conducted two online experiments. Study 1 is a 2 (image in review content: one person with a product vs. a group of people with a product) × 2 (facial expression of the reviewer’s avatar: happy vs. angry) between-subjects design. Study 2 is a 3 (image: product alone vs. one person with a product vs. a group of people with a product) × 2 (facial expression of the reviewer’s avatar: happy vs. angry) × 3 (valence of the review: positive vs. negative vs. neutral) between-subjects design.

Findings: The results indicate that when consumers were exposed to a happy-looking avatar, they reported higher perceptions of online review helpfulness in response to an image showing a group of people in a restaurant than to an image of one person in the same situation. In contrast, when consumers were exposed to an angry-looking avatar, their perceptions of online review helpfulness did not differ between images of a group of people and of one person. Furthermore, causal attribution toward store performance mediated the interaction effect of images in review content and the facial expression of the reviewer’s avatar on perceptions of online review helpfulness.

Practical implications: The authors provide insights into how online reviews should be written so that readers perceive them to be helpful, and into how to design effective reward mechanisms for customer feedback.

Originality/value: Compared with previous studies, this study contributes in three ways. First, it contributes to the literature on review content by showing which images in reviews are deemed to be helpful. Second, it extends previous findings on online peer reviews by demonstrating the importance of the facial expressions of reviewers’ avatars (i.e., happy vs. angry) in explaining helpfulness, rather than the strength of purchase intent. Third, it highlights a novel mechanism whereby causal attribution toward store performance motivates perceptions of online review helpfulness.
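
As a rough illustration of how the factorial designs above could be analyzed, the following sketch fits a 2 × 2 between-subjects model with an image × avatar interaction. It is a minimal sketch, not the authors' analysis code; the data file and column names are hypothetical placeholders.

```python
# A minimal sketch, not the authors' analysis code: a 2 x 2 between-subjects
# ANOVA on helpfulness ratings with an image x avatar interaction term.
# "study1_responses.csv" and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("study1_responses.csv")  # columns: image, avatar, helpfulness
model = smf.ols("helpfulness ~ C(image) * C(avatar)", data=df).fit()
print(anova_lm(model, typ=2))             # main effects and the interaction
```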


2007 ◽  
Vol 19 (3) ◽  
pp. 315-323 ◽  
Author(s):  
Ayako Watanabe ◽  
Masaki Ogino ◽  
Minoru Asada ◽  
...

Sympathy is a key issue in interaction and communication between robots and their users. In developmental psychology, intuitive parenting is considered the maternal scaffolding upon which children develop sympathy when caregivers mimic or exaggerate the child’s emotional facial expressions [1]. We model human intuitive parenting using a robot that associates a caregiver’s mimicked or exaggerated facial expressions with the robot’s internal state to learn a sympathetic response. The internal state space and facial expressions are defined using psychological studies and change dynamically in response to external stimuli. After learning, the robot responds to the caregiver’s internal state by observing human facial expressions, and it facially expresses its own internal state when synchronization evokes a response to the caregiver’s state.
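
A toy illustration of the association described above might look like the following. The feature and state dimensions and the Hebbian-style update are illustrative assumptions, not the authors' model.

```python
# A minimal sketch, not the authors' implementation: Hebbian-style association
# between observed caregiver expression features and the robot's internal state,
# so a familiar expression later evokes (and can be re-expressed from) that state.
import numpy as np

N_EXPR, N_STATE = 8, 4           # hypothetical feature / state dimensions
W = np.zeros((N_STATE, N_EXPR))  # associative weights

def learn(expr_features, internal_state, lr=0.1):
    """Strengthen links between a mimicked expression and the current state."""
    global W
    W += lr * np.outer(internal_state, expr_features)

def evoke_state(expr_features):
    """After learning, an observed expression evokes an internal state."""
    s = W @ expr_features
    return s / (np.linalg.norm(s) + 1e-9)  # normalized evoked state
```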


2021 ◽  
Vol 12 ◽  
Author(s):  
Jennifer M. B. Fugate ◽  
Courtny L. Franco

Emoji faces, which are ubiquitous in our everyday communication, are thought to resemble human faces and aid emotional communication. Yet few studies examine whether emojis are perceived as a particular emotion and whether that perception changes based on rendering differences across electronic platforms. The current paper draws upon emotion theory to evaluate whether emoji faces depict anatomical differences that are proposed to differentiate human depictions of emotion (hereafter, “facial expressions”). We modified the existing Facial Action Coding System (FACS) (Ekman and Rosenberg, 1997) to apply to emoji faces. An equivalent “emoji FACS” rubric allowed us to evaluate two important questions: First, anatomically, does the same emoji face “look” the same across platforms and versions? Second, do emoji faces perceived as a particular emotion category resemble the proposed human facial expression for that emotion? To answer these questions, we compared the anatomically based codes for 31 emoji faces across three platforms and two version updates. We then compared those codes to the proposed human facial expression prototype for the emotion perceived within the emoji face. Overall, emoji faces across platforms and versions were not anatomically equivalent. Moreover, the majority of emoji faces did not conform to human facial expressions for an emotion, although the basic anatomical codes were shared among human and emoji faces. Some emotion categories were better predicted by the assortment of anatomical codes than others, with some individual differences among platforms. We discuss theories of emotion that help explain how emoji faces are perceived as an emotion, even when anatomical differences are not always consistent or specific to an emotion.
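
The cross-platform anatomical comparison could be sketched roughly as follows; the action-unit codes here are invented examples, not the paper's actual emoji codings.

```python
# A minimal sketch with illustrative AU codes (not the paper's data): comparing
# the FACS-style action-unit codes assigned to the "same" emoji on two platforms.
platform_a_grin = {"AU6", "AU12", "AU25"}   # hypothetical coding of one emoji
platform_b_grin = {"AU12", "AU25", "AU26"}

def jaccard(a, b):
    """Overlap of two action-unit code sets (1.0 = anatomically equivalent)."""
    return len(a & b) / len(a | b)

print(jaccard(platform_a_grin, platform_b_grin))  # e.g. 0.5 -> not equivalent
```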


Author(s):  
Samta Jain Goya ◽  
Dr. Arvind K. Upadhyay ◽  
Dr. R. S. Jadon ◽  
Rajeev Goyal

This paper introduces a facial expression detection method based on selected facial features and the optimization of those features. The human face is generally characterized by skin color, texture, and the shape and size of the face; this paper focuses on the skin color and texture of the human face. The process consists of two steps: first, expression detection using a partial feature extension function (PFEF), and second, feature optimization using the teaching-learning-based optimization (TLBO) algorithm, a population-based search technique. Soft computing techniques are also used, because human-related activity cannot be measured exactly and accurately. A variety of techniques have been applied to this problem; this paper adopts a hybrid approach to obtain better results.
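
As a rough sketch of the skin-color cue that this kind of method builds on, the following OpenCV snippet isolates candidate skin regions in HSV space; the threshold values and file names are illustrative assumptions, not the authors' parameters.

```python
# A minimal sketch, assuming OpenCV is available: isolating candidate face
# regions by skin color in HSV space (thresholds are illustrative only).
import cv2
import numpy as np

img = cv2.imread("face.jpg")                      # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower, upper = np.array([0, 30, 60]), np.array([25, 180, 255])
skin_mask = cv2.inRange(hsv, lower, upper)        # binary skin-color mask
skin_only = cv2.bitwise_and(img, img, mask=skin_mask)
cv2.imwrite("skin_regions.jpg", skin_only)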


Author(s):  
Dhruv Piyush Parikh

Abstract: Our world today is driven by machines of various complexities, from a basic computer to a highly complex humanoid robot, and all of them are products of human intelligence. Many industries benefit from such new technologies, and facial expression recognition is one of them: it has a wide range of applications and is an area that is constantly evolving. The analogy behind it is that when we gaze at someone, the eyes send signals to the brain; these messages carry the face patterns of that specific person, which are then compared to those stored in the brain's memory. Inspired by such mechanisms, our research collects human expressions, analyzes the underlying emotions using a large dataset, and offers strategies intended to change those expressions. Owing to today's competitive environment, the youth of our generation is prone to a variety of mental health problems such as anxiety and depression. Our system attempts to provide a relaxing atmosphere to a person based on his or her facial expressions. Keywords: Facial Expression, Face Recognition, Python, PyWhatkit, OpenCV.
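
A minimal sketch of the kind of pipeline the abstract describes, assuming OpenCV and PyWhatKit are installed, might look like the following; the emotion classifier is a hypothetical stub, not the paper's model or dataset.

```python
# A minimal sketch (not the paper's system): detect a face, classify its
# expression with a stub model, and play calming content if the mood is low.
import cv2
import pywhatkit

def classify_emotion(face_gray):
    """Stub standing in for the paper's emotion model; a real classifier goes here."""
    return "sad"

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("user.jpg")                        # hypothetical captured frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    if classify_emotion(gray[y:y + h, x:x + w]) in ("sad", "angry"):
        pywhatkit.playonyt("relaxing music")          # provide a relaxing atmosphere
```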


Author(s):  
Maja Pantic

The human face is involved in an impressive variety of different activities. It houses the majority of our sensory apparatus: eyes, ears, mouth, and nose, allowing the bearer to see, hear, taste, and smell. Apart from these biological functions, the human face provides a number of signals essential for interpersonal communication in our social life. The face houses the speech production apparatus and is used to identify other members of the species, to regulate the conversation by gazing or nodding, and to interpret what has been said by lip reading. It is our direct and naturally preeminent means of communicating and understanding somebody’s affective state and intentions on the basis of the shown facial expression (Lewis & Haviland-Jones, 2000). Personality, attractiveness, age, and gender can also be seen from someone’s face. Thus the face is a multisignal sender/receiver capable of tremendous flexibility and specificity. In general, the face conveys information via four kinds of signals listed in Table 1.

Automating the analysis of facial signals, especially rapid facial signals, would be highly beneficial for fields as diverse as security, behavioral science, medicine, communication, and education. In security contexts, facial expressions play a crucial role in establishing or detracting from credibility. In medicine, facial expressions are the direct means to identify when specific mental processes are occurring. In education, pupils’ facial expressions inform the teacher of the need to adjust the instructional message. As far as natural user interfaces between humans and computers (PCs/robots/machines) are concerned, facial expressions provide a way to communicate basic information about needs and demands to the machine. In fact, automatic analysis of rapid facial signals seems to have a natural place in various vision subsystems and vision-based interfaces (face-for-interface tools), including automated tools for gaze and focus of attention tracking, lip reading, bimodal speech processing, face/visual speech synthesis, face-based command issuing, and facial affect processing.

Where the user is looking (i.e., gaze tracking) can be effectively used to free computer users from the classic keyboard and mouse. Also, certain facial signals (e.g., a wink) can be associated with certain commands (e.g., a mouse click), offering an alternative to traditional keyboard and mouse commands. The human capability to “hear” in noisy environments by means of lip reading is the basis for bimodal (audiovisual) speech processing that can lead to the realization of robust speech-driven interfaces. To make a believable “talking head” (avatar) representing a real person, tracking the person’s facial signals and making the avatar mimic those using synthesized speech and facial expressions is compulsory. The human ability to read emotions from someone’s facial expressions is the basis of facial affect processing, which can lead to expanding user interfaces with emotional communication and, in turn, to more flexible, adaptable, and natural affective interfaces between humans and machines. More specifically, information about when the existing interaction/processing should be adapted, the importance of such an adaptation, and how the interaction/reasoning should be adapted involves information about how the user feels (e.g., confused, irritated, tired, interested).
Examples of affect-sensitive user interfaces are still rare, unfortunately, and include the systems of Lisetti and Nasoz (2002), Maat and Pantic (2006), and Kapoor, Burleson, and Picard (2007). It is this wide range of principal driving applications that has lent a special impetus to the research problem of automatic facial expression analysis and produced a surge of interest in this research topic.
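
The face-based command idea mentioned above (e.g., a wink standing in for a mouse click) can be sketched as a simple signal-to-command mapping; the detector below is a stub, not a real rapid-facial-signal analyzer, and the signal and command names are illustrative.

```python
# A minimal sketch of the face-for-interface idea: a detected facial signal
# (e.g., a wink) is mapped to a command (e.g., a mouse click). The detector
# is a stub; a real system would use an automatic facial-signal analyzer.
SIGNAL_TO_COMMAND = {
    "wink": "mouse_click",
    "brow_raise": "scroll_up",
    "brow_furrow": "scroll_down",
}

def detect_facial_signal(frame):
    """Stub standing in for a rapid-facial-signal detector."""
    return "wink"  # pretend a wink was detected in this frame

def handle_frame(frame):
    command = SIGNAL_TO_COMMAND.get(detect_facial_signal(frame))
    if command:
        print(f"issue command: {command}")  # e.g. trigger a mouse click

handle_frame(frame=None)
```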


2009 ◽  
Vol 2009 ◽  
pp. 1-13 ◽  
Author(s):  
Ali Arya ◽  
Steve DiPaola ◽  
Avi Parush

This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”
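
The mapping from emotion-space regions to "facial expression units" could be sketched roughly as follows; the dimensions (valence, arousal, dominance) and the action lists are illustrative assumptions, not the authors' empirically derived associations.

```python
# A minimal sketch of regionalized facial actions: regions of a 3-D emotion
# space are associated with facial actions, and an expression for a (possibly
# mixed) emotion is assembled from the regions its coordinates fall into.
FACIAL_EXPRESSION_UNITS = [
    # ((valence range), (arousal range), (dominance range)) -> facial actions
    (((0, 1), (0, 1), (0, 1)),    ["lip_corner_pull", "cheek_raise"]),    # joyful region
    (((-1, 0), (0, 1), (-1, 0)),  ["brow_raise", "jaw_drop"]),            # fearful region
    (((-1, 0), (-1, 0), (-1, 0)), ["brow_lower", "lip_corner_depress"]),  # sad region
]

def expression_for(valence, arousal, dominance):
    point = (valence, arousal, dominance)
    actions = []
    for ranges, acts in FACIAL_EXPRESSION_UNITS:
        if all(lo <= v <= hi for v, (lo, hi) in zip(point, ranges)):
            actions.extend(acts)  # a mixed emotion may fall into several regions
    return actions

print(expression_for(valence=-0.3, arousal=0.6, dominance=-0.4))  # fearful-region actions
```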


1970 ◽  
Vol 13 (3) ◽  
pp. 293-309
Author(s):  
Senka Kostić ◽  
Tijana Todić Jakšić ◽  
Oliver Tošković

Results of previous studies point to the importance of different face parts for recognizing certain emotions, and also show that emotions are better recognized in photographs than in caricatures of faces. Therefore, the aim of the study was to examine the accuracy of recognizing facial expressions of emotion in relation to the type of emotion and the type of visual presentation. Stimuli contained facial expressions shown as a photograph, a face drawing, or an emoticon. The task for the participant was to click on the emotion he or she thought was shown on the stimulus. The type of displayed emotion was varied as a factor (happiness, sorrow, surprise, anger, disgust, fear), as was the type of visual presentation (photo of a human face, drawing of a human face, and emoticon). As the dependent variable, we used the number of accurately recognized facial expressions in all 18 situations. The results showed an interaction between the type of emotion being evaluated and the type of visual presentation, F(10, 290) = 10.55, p < .01, η² = .27. The facial expression of fear was assessed most accurately in the drawing of the human face. The emotion of sorrow was recognized most accurately on the emoticon, whereas the expression of disgust was recognized worst on the emoticon. Other expressions of emotion were assessed equally well regardless of the type of visual presentation. The type of visual presentation has proven to be important for recognizing some emotions, but not for all of them.

