Facial Gestures
Recently Published Documents


TOTAL DOCUMENTS: 77 (five years: 11)

H-INDEX: 14 (five years: 0)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Holly Rayson ◽  
Alice Massera ◽  
Mauro Belluardo ◽  
Suliann Ben Hamed ◽  
Pier Francesco Ferrari

Affect-biased attention may play a fundamental role in early socioemotional development, but factors influencing its emergence and associations with typical versus pathological outcomes remain unclear. Here, we adopted a nonhuman primate model of early social adversity (ESA) to: (1) establish whether juvenile, pre-adolescent macaques demonstrate attention biases to both threatening and reward-related dynamic facial gestures; (2) examine the effects of early social experience on such biases; and (3) investigate how this relation may be linked to socioemotional behaviour. Two groups of juvenile macaques (ESA exposed and non-ESA exposed) were presented with pairs of dynamic facial gestures comprising two conditions: neutral-threat and neutral-lipsmacking. Attention biases to threat and lipsmacking were calculated as the proportion of gaze to the affective versus neutral gesture. Measures of anxiety and social engagement were also acquired from videos of the subjects in their everyday social environment. Results revealed that while both groups demonstrated an attention bias towards threatening facial gestures, a greater bias linked to anxiety was demonstrated by the ESA group only. Only the non-ESA group demonstrated a significant attention bias towards lipsmacking, and the degree of this positive bias was related to duration and frequency of social engagement in this group. These findings offer important insights into the effects of early social experience on affect-biased attention and related socioemotional behaviour in nonhuman primates, and demonstrate the utility of this model for future investigations into the neural and learning mechanisms underlying this relationship across development.
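As a point of reference, the bias measure described here, the proportion of gaze directed to the affective versus the neutral gesture, can be sketched in a few lines of Python. The variable names, millisecond units, and the 0.5 chance level are illustrative assumptions, not details from the paper.

```python
def attention_bias(gaze_affective_ms: float, gaze_neutral_ms: float) -> float:
    """Proportion of total gaze time spent on the affective gesture.

    Values above 0.5 indicate a bias towards the affective (threat or
    lipsmacking) stimulus; values below 0.5 indicate a bias towards
    the neutral stimulus.
    """
    total = gaze_affective_ms + gaze_neutral_ms
    if total == 0:
        raise ValueError("no gaze recorded for either stimulus")
    return gaze_affective_ms / total

# Example: 1,800 ms on the threat gesture vs. 1,200 ms on the neutral one
print(attention_bias(1800, 1200))  # 0.6 -> bias towards threat
```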


Author(s):  
Aryansh Shrivastava

The goal of my project is to create a computer vision and AI-based system that interprets facial gestures in an intelligent and meaningful way, helping people with disabilities carry out two-way written and verbal communication. The system should also use facial gestures to control the precise navigation of wheelchairs and other guided robotic devices, aiding mobility. Finally, it should interpret and convert facial gestures into commands for home and office gadgets, allowing users to control the environment around them, such as lighting (on/off), temperature, and sound.
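A minimal sketch of how such a gesture-to-command mapping could be organised as a dispatch table; the gesture labels, device actions, and function names below are hypothetical, not part of the project described.

```python
from typing import Callable

# Hypothetical device actions triggered by recognised facial gestures.
def lights_on() -> None:
    print("lights: on")

def lights_off() -> None:
    print("lights: off")

def temperature_up() -> None:
    print("thermostat: +1 degree")

def wheelchair_forward() -> None:
    print("wheelchair: move forward")

# Dispatch table mapping gesture labels (produced by an upstream vision
# classifier) to environment-control commands.
COMMANDS: dict[str, Callable[[], None]] = {
    "eyebrow_raise": lights_on,
    "eye_close_long": lights_off,
    "smile": temperature_up,
    "head_nod": wheelchair_forward,
}

def execute(gesture: str) -> None:
    action = COMMANDS.get(gesture)
    if action is None:
        print(f"unrecognised gesture: {gesture}")
    else:
        action()

execute("eyebrow_raise")  # lights: on
```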


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e12237
Author(s):  
Brittany Florkiewicz ◽  
Matthew Campbell

Great ape manual gestures are described as communicative, flexible, intentional, and goal-oriented, and are thought to be an evolutionary precursor to human language. Conversely, facial expressions are thought to be inflexible, automatic, and derived from emotion. However, great apes can make a wide range of movements with their faces, and they may possess the control needed to gesture with their faces as well as their hands. We examined whether chimpanzee facial expressions possess the four important gesture properties and how they compare to manual gestures. To do this, we quantified variables that had previously been described through largely qualitative means. Chimpanzee facial expressions met all four gesture criteria and performed remarkably similarly to manual gestures. Facial gestures have implications for the evolution of language: if other mammals also show facial gestures, then the gestural origins of language may be much older than the human/great ape lineage.


Kinesic Humor ◽  
2021 ◽  
pp. 93-105
Author(s):  
Guillemette Bolens

Stendhal was deeply interested in comedy. Even in such a tragic novel as Le Rouge et le Noir, powerful emotions are interwoven with humorous effects. A remarkable passage of Le Rouge et le Noir stages Julien Sorel interacting with Amanda Binet, a barmaid, and one of her lovers. Kinesic humor is central to this scene in which the complexity of kinesic communication is thematized through Julien’s failure to emulate the swaggering gait, dynamic facial gestures, and kinesic know-how of Amanda’s lover. Stendhal’s style is carefully considered in this chapter in relation to the challenge it represents for translators.


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7206
Author(s):  
Jinhyuk Kim ◽  
Jaekwang Cha ◽  
Shiho Kim

The typical configuration of virtual reality (VR) devices consists of a head-mounted display (HMD) and handheld controllers, which limits their utility in tasks that require hands-free operation, such as surgical procedures or assembly work in cyberspace. We propose a user interface for a VR headset based on the wearer's facial gestures, enabling hands-free interaction similar to a touch interface. By sensing and recognizing the expressions associated with intentional, in situ movements of the user's facial muscles, we define a set of commands that combine predefined facial gestures with head movements. This is achieved using six pairs of infrared (IR) photocouplers positioned at the foam interface of the HMD. We demonstrate the usability, user experience, and performance of the proposed command set in an experimental VR game played without any additional controllers, obtaining more than 99% recognition accuracy for each facial gesture across three stages of experimental tests. The proposed input interface is a cost-effective and efficient solution that brings a smartphone-like touch experience to hands-free operation of a VR headset using photocouplers built into the foam interface.
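A toy sketch of how readings from six IR photocoupler channels might be turned into discrete gesture events by thresholding against a neutral-face baseline; the channel-to-gesture patterns, threshold value, and gesture names are assumptions rather than details from the paper.

```python
import numpy as np

# Six IR photocoupler channels around the HMD foam interface
# (channel positions and threshold are illustrative assumptions).
N_CHANNELS = 6
BASELINE = np.zeros(N_CHANNELS)  # calibrated while the face is neutral
THRESHOLD = 0.15                 # normalised skin-deformation threshold

# Hypothetical mapping from the set of triggered channels to a gesture.
GESTURE_PATTERNS = {
    frozenset({0, 1}): "smile",
    frozenset({2, 3}): "frown",
    frozenset({4}): "left_wink",
    frozenset({5}): "right_wink",
}

def classify(sample: np.ndarray) -> str:
    """Return the gesture whose channel pattern matches the active channels."""
    active = frozenset(np.flatnonzero(np.abs(sample - BASELINE) > THRESHOLD))
    return GESTURE_PATTERNS.get(active, "neutral")

print(classify(np.array([0.3, 0.25, 0.0, 0.0, 0.02, 0.01])))  # smile
```

In the paper itself these events are further combined with head movements to form compound commands; the sketch covers only the per-sample gesture decision.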


Author(s):  
Katsutoshi Masai ◽  
Kai Kunze ◽  
Daisuke Sakamoto ◽  
Yuta Sugiura ◽  
Maki Sugimoto

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4441
Author(s):  
Jaekwang Cha ◽  
Jinhyuk Kim ◽  
Shiho Kim

Developing a user interface (UI) suitable for headset environments is one of the challenges in the field of augmented reality (AR) technologies. This study proposes a hands-free UI for an AR headset that exploits the wearer's facial gestures to recognize user intentions. The facial gestures of the headset wearer are detected by a custom-designed sensor that measures skin deformation based on the infrared diffusion characteristics of human skin. We designed a deep neural network classifier to determine the user's intended gestures from the skin-deformation data, which are exploited as input commands for the proposed UI system. The classifier is composed of a spatiotemporal autoencoder and a deep embedded clustering algorithm, trained in an unsupervised manner. The UI device was embedded in a commercial AR headset, and several experiments were performed on live sensor data to verify its operation. The resulting hands-free UI achieved an average user-command recognition accuracy of 95.4% in tests with participants.
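A minimal PyTorch sketch of the kind of spatiotemporal autoencoder described, trained unsupervised on windows of multi-channel skin-deformation samples; the channel count, window length, layer sizes, and the omission of the deep embedded clustering stage are all simplifying assumptions.

```python
import torch
import torch.nn as nn

# Assumed sensor geometry: six channels, 32 samples per gesture window.
N_CHANNELS = 6
WINDOW = 32

class STAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 8):
        super().__init__()
        # 1-D convolutions mix the temporal axis; channels mix the spatial axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (WINDOW // 4), latent_dim),  # latent embedding
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * (WINDOW // 4)),
            nn.Unflatten(1, (32, WINDOW // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, N_CHANNELS, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        z = self.encoder(x)  # latent code, later fed to a clustering stage
        return self.decoder(z), z

# One unsupervised training step: minimise reconstruction error on
# unlabeled windows (random tensors stand in for real sensor data).
model = STAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(4, N_CHANNELS, WINDOW)
recon, z = model(batch)
loss = nn.functional.mse_loss(recon, batch)
opt.zero_grad()
loss.backward()
opt.step()
```

In the paper's pipeline, the latent codes `z` would then be clustered (deep embedded clustering) so that each cluster corresponds to one user command; that stage is left out of this sketch.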

