The Auditory Kuleshov Effect: Multisensory Integration in Movie Editing

Perception ◽  
2016 ◽  
Vol 46 (5) ◽  
pp. 624-631 ◽  
Author(s):  
Andreas M. Baranowski ◽  
H. Hecht

Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now-famous editing experiment in which different objects were added to a given film scene featuring a neutral face. It is said that the audience interpreted the unchanged facial expression as a function of the added object (e.g., an added bowl of soup made the face express hunger). This interaction effect has been dubbed the "Kuleshov effect." In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes featuring either happy music, sad music, or no music at all. This was crossed with facial expressions that were happy, sad, or neutral. We found that the music significantly influenced participants' emotional judgments of facial expression. Thus, the intersensory effects of music are more specific than previously thought: they alter the evaluation of film scenes and can give meaning to ambiguous situations.

2012 ◽  
Vol 25 (0) ◽  
pp. 46-47
Author(s):  
Kazumichi Matsumiya

Adaptation to a face belonging to a facial category, such as an expression, causes a subsequently presented neutral face to be perceived as belonging to the opposite facial category. This is referred to as the face aftereffect (FAE) (Leopold et al., 2001; Rhodes et al., 2004; Webster et al., 2004). The FAE is generally thought of as a visual phenomenon. However, recent studies have shown that humans can haptically recognize a face (Kilgour and Lederman, 2002; Lederman et al., 2007). Here, I investigated whether FAEs could occur in the haptic perception of faces. Three types of facial expressions (happy, sad, and neutral) were generated using computer-graphics software, and three-dimensional masks of these faces were made from epoxy-cured resin for use in the experiments. An adaptation facemask was positioned on the left side of a table in front of the participant, and a test facemask was placed on the right. During adaptation, participants haptically explored the adaptation facemask with their eyes closed for 20 s, after which they haptically explored the test facemask for 5 s. Participants were then asked to classify the test facemask as either happy or sad. The experiment was performed under two adaptation conditions: (1) adaptation to a happy facemask and (2) adaptation to a sad facemask. In both cases, the expression of the test facemask was neutral. The results indicate that adaptation to a haptic face with a specific facial expression causes a subsequently touched neutral face to be perceived as having the opposite facial expression, suggesting that FAEs can be observed in the haptic perception of faces.


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words "happy," "sad," and "city." Future thinking was video recorded and analysed with facial analysis software to classify participants' facial expressions as happy, sad, angry, surprised, scared, disgusted, or neutral. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word "happy" than by "sad" or "city." In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word "sad" than by "happy" or "city." Higher levels of neutral facial expressions were observed during future thinking cued by the word "city" than by "happy" or "sad." In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, these results suggest that emotional future thinking, at least for future scenarios cued by "happy" and "sad," triggers the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.
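As a rough illustration of this kind of analysis, the Python sketch below (using pandas) aggregates per-frame expression labels, such as those produced by any facial-coding tool, into the proportion of each expression per cue condition. The column names and input format are assumptions for illustration only, not the authors' actual pipeline.

```python
# A minimal sketch, assuming per-frame expression labels are available.
import pandas as pd

# One row per analysed video frame: participant, cue word, detected expression.
frames = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "cue":        ["happy", "sad", "city", "happy", "sad", "city"],
    "expression": ["happy", "sad", "neutral", "happy", "neutral", "neutral"],
})

# Proportion of each detected expression within each cue condition.
levels = (frames.groupby("cue")["expression"]
                .value_counts(normalize=True)
                .rename("proportion")
                .reset_index())
print(levels)
```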


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual selection of features. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the trends of its key parts. The first stage is the processing of video data. An ensemble of regression trees (ERT) is used to obtain the overall contour of the face. Then, an attention model is used to pick out the parts of the face that are most affected by expressions. Under the combined effect of these two methods, an image that can be called a local feature map is obtained. After that, the video data are sent to MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained through the sequence of images, the selection of key parts allows the network to better learn the changes in facial expressions brought about by subtle facial movements. By combining local features and global features, the proposed method can acquire more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
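A minimal sketch of the two-branch idea described above, assuming PyTorch: a global branch for whole-face appearance, a local branch for attention-weighted key parts, and late fusion of the two. Layer sizes, the attention form, and the fusion scheme are illustrative guesses, not the authors' exact MC-DCN architecture.

```python
# Sketch only: a two-branch (global + local) video FER network, assuming PyTorch.
import torch
import torch.nn as nn

class TwoBranchFER(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        # Global branch: whole-face appearance per frame.
        self.global_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Local branch: the "local feature map" of key parts (eyes, mouth, etc.).
        self.local_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.attn = nn.Linear(32, 1)          # scores each frame's local feature
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, frames, local_maps):
        # frames, local_maps: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        g = self.global_cnn(frames.flatten(0, 1)).view(b, t, -1).mean(1)
        l = self.local_cnn(local_maps.flatten(0, 1)).view(b, t, -1)
        w = torch.softmax(self.attn(l), dim=1)  # temporal attention weights
        l = (w * l).sum(1)
        return self.classifier(torch.cat([g, l], dim=-1))  # fuse global + local
```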


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

As a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset is a popular benchmark for emotion recognition tasks under various constraints, including uneven illumination, head deflection, and varied facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images. Then, the face images in each frame are aligned based on the position information of the facial feature points in the images. Second, the aligned face images are input to a residual neural network to extract the spatial features of facial expressions corresponding to the face images. The spatial features are input to the hybrid attention module to obtain the fusion features of facial expressions. Finally, the fusion features are input to a gated recurrent unit to extract the temporal features of facial expressions. The temporal features are input to a fully connected layer to classify and recognize facial expressions. Experiments using the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets obtained recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. These results demonstrate that the proposed method not only achieves performance competitive with state-of-the-art methods but also yields a greater than 2% improvement on the AFEW dataset, demonstrating its strength for facial expression recognition in natural environments.
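The cascade described above maps naturally onto a small PyTorch sketch: a ResNet backbone for spatial features, a simple gating layer standing in for the paper's hybrid attention module, a GRU for temporal features, and a fully connected classifier. This is an assumption-laden outline, not the authors' implementation.

```python
# Sketch only: spatial CNN -> attention gate -> GRU -> classifier cascade.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CascadeFER(nn.Module):
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()            # expose 512-d spatial features
        self.spatial = backbone
        self.attn = nn.Sequential(             # stand-in for hybrid attention
            nn.Linear(512, 512), nn.Sigmoid()
        )
        self.temporal = nn.GRU(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clip):                   # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        f = self.spatial(clip.flatten(0, 1))   # per-frame spatial features
        f = f * self.attn(f)                   # gate features by attention
        f = f.view(b, t, -1)
        out, _ = self.temporal(f)              # temporal features via GRU
        return self.fc(out[:, -1])             # classify from last time step
```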


2005 ◽  
Vol 16 (3) ◽  
pp. 184-189 ◽  
Author(s):  
Marie L. Smith ◽  
Garrison W. Cottrell ◽  
Frédéric Gosselin ◽  
Philippe G. Schyns

This article examines the human face as a transmitter of expression signals and the brain as a decoder of these expression signals. If the face has evolved to optimize transmission of such signals, the basic facial expressions should have minimal overlap in their information. If the brain has evolved to optimize categorization of expressions, it should be efficient with the information available from the transmitter for the task. In this article, we characterize the information underlying the recognition of the six basic facial expression signals and evaluate how efficiently each expression is decoded by the underlying brain structures.


Author(s):  
Sanjay Kumar Singh ◽  
V. Rastogi ◽  
S. K. Singh

Pain, often regarded as the fifth vital sign, is an important symptom that needs to be adequately assessed in health care. The visual changes reflected on the face of a person in pain may be apparent for only a few seconds and occur instinctively. Tracking these changes is a difficult and time-consuming process in a clinical setting, which is why researchers and experts from the medical, psychology, and computer science fields are motivated to conduct interdisciplinary research on capturing facial expressions. This chapter contains a comprehensive review of technologies for the study of facial expression along with their application in pain assessment. The facial expressions of pain in children (0-2 years) and in non-communicative patients need to be recognized, as they are of utmost importance for proper diagnosis. Well-designed computerized methodologies would streamline the process of patient assessment, increasing its accessibility to physicians and improving quality of care.


Author(s):  
Peggy Mason

Tracts descending from motor control centers in the brainstem and cortex target motor interneurons and, in select cases, motoneurons. The mechanisms and constraints of postural control are elaborated, and the effect of body mass on posture is discussed. Feed-forward reflexes that maintain posture during standing and other conditions of self-motion are described, as is the role of descending tracts in postural control and in pathological posturing. Pyramidal (corticospinal and corticobulbar) and extrapyramidal control of body and face movements is contrasted. Special emphasis is placed on cortical regions and tracts involved in the deliberate control of facial expression; these pathways are contrasted with the mechanisms for generating emotional facial expressions. The signs associated with lesions of either motoneurons or motor control centers are clearly detailed. The mechanisms and presentation of cerebral palsy are described. Finally, an understanding of how pre-motor cortical regions generate actions is used to introduce apraxia, a disorder of action.


Behaviour ◽  
1964 ◽  
Vol 22 (3-4) ◽  
pp. 167-192 ◽  
Author(s):  
Niels Bolwig

In this report of an unfinished study of the evolution of facial expressions, the author draws a brief comparison between the most important facial muscles of various primates and of two carnivores, the suricate and the dog. Before discussing the expressions, definitions of the various elementary emotions are given, along with the criteria by which the author judges the emotional condition of the animals. The main conclusions reached from the observations are:
1. Certain basic rules govern the facial expressions of the animals studied.
2. Joy and happiness are expressed by a general lifting of the face and a tightening of the upper lip. The expression originates from preparation for a play-bite. The posture has become completely ritualised in man.
3. Unhappiness expresses itself by a lowering of the face. In horror there is a general tension of the facial muscles, and the mouth tends to open while the animal screams. In sadness the animal tends to become less active.
4. Anger is recognisable from a tightening of the facial muscles, particularly those around the mouth, in preparation for a hard bite.
5. Threat varies in expression, but it contains components of anger and fear.
6. Love and affection find expression through such actions as lip-smacking, love-biting, sucking and kissing. This oral caressing has its origin in juvenile sucking for comfort.
7. Concentration is not an emotion, but it usually shows itself by a tension of the facial muscles.
8. There is a similarity between the two carnivores under discussion and some of the primates. A common pattern in the facial muscles of the suricate and the lemur indicates a common ancestry and brings the two animals to the same level in their ability to express their emotions. The dog, although very different from the monkey in its facial musculature, nevertheless resembles it in its mode of expression. This feature seems related to similarities in their biology, which have been facilitated by the development of binocular vision.


2012 ◽  
Vol 60 (4) ◽  
pp. 419-429 ◽  
Author(s):  
Brian A. Silvey

The purpose of this study was to explore whether conductor facial expression affected the expressivity ratings assigned to music excerpts by high school band students. Three actors were videotaped while portraying approving, neutral, and disapproving facial expressions. Each video was duplicated twice and then synchronized with one of three professional wind ensemble recordings. Participants (N = 133) viewed nine 1-min videos of varying facial expressions, actors, and excerpts and rated each ensemble's expressivity on a 10-point rating scale. Results of a one-way repeated measures ANOVA indicated that conductor facial expression significantly affected ratings of ensemble expressivity (p < .001, partial η² = .15). Post hoc comparisons revealed that participants' ensemble expressivity ratings were significantly higher for excerpts featuring approving facial expressions than for either neutral or disapproving expressions. Participants' mean ratings were lowest for neutral facial expression excerpts, indicating that an absence of facial affect influenced evaluations of ensemble expressivity most negatively.
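For readers who want to reproduce this style of analysis, a minimal Python sketch using statsmodels' AnovaRM is given below. The data frame is fabricated for illustration, and the factor and column names are assumptions, not the study's data.

```python
# A minimal sketch of a one-way repeated measures ANOVA on expressivity ratings.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

ratings = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "face":   ["approving", "neutral", "disapproving"] * 3,
    "rating": [8, 5, 6, 9, 4, 5, 7, 5, 4],   # 10-point expressivity ratings
})

# One within-subject factor (conductor facial expression), as in the study.
result = AnovaRM(ratings, depvar="rating",
                 subject="participant", within=["face"]).fit()
print(result)
```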


Facial expressions convey nonverbal cues that play an important role in interpersonal relationships. Although people perceive facial expressions virtually instantaneously, robust expression recognition by machine is still a challenge. From the point of view of automatic recognition, a facial expression may involve the configurations of the facial parts and their spatial relationships, or changes in the pigmentation of the face. The study of automatic facial expression recognition addresses issues relating to the static or dynamic qualities of such deformation or facial pigmentation. A camera is used to capture live images of people with autism, as sketched below.
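A bare-bones sketch of the capture step mentioned above, using OpenCV's standard webcam loop and Haar-cascade face detector. The downstream expression classifier is left as a placeholder, since the passage does not specify one.

```python
# Sketch only: live face capture with OpenCV; classifier is a placeholder.
import cv2

cap = cv2.VideoCapture(0)          # default webcam
face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_det.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y+h, x:x+w]  # crop to the detected face
        # classify_expression(face)  # hypothetical FER model goes here
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow("live", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```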

