Facial Movements: Recently Published Documents

Total documents: 268 (last five years: 89)
H-index: 26 (last five years: 3)

Diagnostics, 2022, Vol. 12 (1), pp. 121
Author(s): Hanna Rüschenschmidt, Gerd Fabian Volk, Christoph Anders, Orlando Guntinas-Lichius

There are currently no data on the electromyography (EMG) of all intrinsic and extrinsic ear muscles. The aim of this work was to develop a standardized protocol for a reliable surface EMG examination of all nine ear muscles, tested in twelve healthy participants. The protocol was then applied in seven patients with unilateral postparalytic facial synkinesis. Based on anatomic preparations of all ear muscles in two cadavers, hot spots for needle EMG of each individual muscle were defined. Needle and surface EMG were then performed in one healthy participant, allowing facial movements to be defined that reliably activate individual ear muscles in the surface EMG. In the healthy participants, most tasks led to the activation of several ear muscles without any side difference. The greatest EMG activity was seen when smiling. Ipsilateral and contralateral gaze were the only movements resulting in very distinct activation of the transversus auriculae and obliquus auriculae muscles. In patients with facial synkinesis, EMG activation of the ear muscles was stronger on the postparalytic side than on the contralateral side for most tasks. Additionally, synkinetic activation was verifiable in the ear muscles. Surface EMG of all ear muscles is reliably feasible during distinct facial tasks, and ear muscle EMG enriches facial electrodiagnostics.
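
Activation comparisons of this kind are commonly based on the RMS amplitude of the surface EMG signal. Below is a minimal NumPy sketch of that quantification, with synthetic placeholder signals and an assumed sampling rate; it illustrates the general technique, not the authors' analysis pipeline.

```python
import numpy as np

def rms_envelope(emg, fs, win_ms=100):
    """Moving-window RMS amplitude of a (band-pass filtered) EMG signal."""
    n = max(1, int(fs * win_ms / 1000))
    squared = np.asarray(emg, dtype=float) ** 2
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

# Compare mean activation of one ear muscle across two facial tasks.
fs = 1000  # Hz, assumed sampling rate
rng = np.random.default_rng(0)
smile = rng.normal(0, 0.8, fs * 2)  # placeholder signals, not study data
gaze = rng.normal(0, 0.3, fs * 2)
print(rms_envelope(smile, fs).mean(), rms_envelope(gaze, fs).mean())
```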


Micromachines, 2021, Vol. 13 (1), pp. 7
Author(s): Penghua Zhu, Jie Zhu, Xiaofei Xue, Yongtao Song

Recently, stretchable piezoresistive composites have become a focus in biomechanical sensing and human posture recognition because they can be attached directly and conformally to bodies and clothing. Here, we present a stretchable piezoresistive thread sensor (SPTS) based on an Ag-plated glass microsphere (Ag@GM)/solid rubber (SR) composite, prepared using a new shear-dispersion and extrusion-vulcanization technology. The SPTS exhibits high gauge factors (7.8–11.1) over a large stretching range (0–50%) and an approximately linear relationship between the relative change in resistance and the applied strain. It also shows hysteresis as low as 2.6% and great stability over 1000 stretching/releasing cycles at 50% strain. Given this excellent mechanical strain-driven characteristic, the SPTS was used to monitor body postures and facial movements. Moreover, the novel SPTS can be integrated with software and hardware information modules into an intelligent gesture-recognition system, which promptly and accurately captures the electrical signals produced by finger gestures and translates them into text and voice. This work demonstrates great progress in stretchable piezoresistive sensors and provides a new strategy for a real-time, effective-communication intelligent gesture-recognition system.
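
For context, the gauge factor quoted above follows the standard piezoresistive definition GF = (ΔR/R0)/ε, so within the approximately linear range a resistance reading can be inverted to estimate strain. A minimal sketch with illustrative numbers; the resistance values are assumptions, not measurements from the paper.

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (dR/R0) / strain for a piezoresistive sensor."""
    return ((r - r0) / r0) / strain

def strain_from_resistance(r0, r, gf):
    """Invert the linear model to estimate strain from a resistance reading."""
    return ((r - r0) / r0) / gf

# Illustrative numbers only: a GF of 10 sits in the reported 7.8-11.1 range.
r0 = 100.0  # unstrained resistance, ohms (assumed)
r = 150.0   # resistance under some applied strain (assumed)
print(strain_from_resistance(r0, r, gf=10.0))  # -> 0.05, i.e. 5% strain
```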


2021, pp. 014556132110546
Author(s): Tom Shokri, Shivam Patel, Kasra Ziai, Jonathan Harounian, Jessyka G. Lighthall

Introduction: Synkinesis refers to abnormal involuntary facial movements that accompany volitional facial movements. Despite a reported 55% incidence of synkinesis in patients with enduring facial paralysis, this debilitating condition remains incompletely understood, leading to functional limitations and decreased quality of life [1]. This article reviews the diagnostic assessment, etiology, pathophysiology, rehabilitation, and nonsurgical and surgical treatments for facial synkinesis. Methods: A PubMed and Cochrane search with no date restrictions was performed for English-language literature on facial synkinesis, using the search terms "facial," "synkinesis," "palsy," and various combinations thereof. Results: The resultant inability to control the full extent of one's facial movements has functional and psychosocial consequences and may result in social withdrawal with a significant decrease in quality of life. An understanding of the facial mimetic musculature is imperative in guiding appropriate intervention. While chemodenervation with botulinum toxin and neurorehabilitation remain the primary treatment strategy for facial synkinesis, novel techniques such as selective myectomy, selective neurolysis, free-functioning muscle transfer, and nerve grafting are increasingly incorporated into treatment regimens. Facial rehabilitation, including neuromuscular retraining, soft-tissue massage, and relaxation therapy, in addition to chemodenervation with botulinum toxin, remains the cornerstone of treatment. In cases of severe, intractable synkinesis and non-flaccid facial paralysis, surgical interventions, including selective neurectomy, selective myectomy, nerve grafting, or free muscle transfer, may play a more significant role in alleviating symptoms. Discussion: A multidisciplinary approach involving therapists, clinicians, and surgeons is necessary to develop a comprehensive treatment regimen that will result in optimal outcomes. Ultimately, therapy should be tailored to the severity and pattern of synkinesis, with each patient approached on a case-by-case basis.


Perception, 2021, pp. 030100662110559
Author(s): Myron Tsikandilakis, Zhaoliang Yu, Leonie Kausel, Gonzalo Boncompte, Renzo C. Lanfranco, ...

The theory of universal emotions suggests that certain emotions, such as fear, anger, disgust, sadness, surprise and happiness, are encountered cross-culturally. These emotions are expressed using specific facial movements that enable human communication. More recently, theoretical and empirical models have proposed that universal emotions could be expressed via discretely different facial movements in different cultures, owing to the non-convergent social evolution that takes place in different geographical areas. This has prompted the suggestion that own-culture emotional faces have distinct, evolutionarily important sociobiological value and can be processed automatically and without conscious awareness. In this paper, we tested this hypothesis using backward masking. In two experiments per country of origin, we showed backward-masked own- and other-culture emotional faces to participants in Britain, Chile, New Zealand and Singapore, and assessed detection and recognition performance as well as self-reported emotionality and familiarity. Using Bayesian assessment of non-parametric receiver operating characteristics and hit-versus-miss analyses of detection and recognition responses, we present thorough cross-cultural experimental evidence that masked faces showing own-culture dialects of emotion were rated higher for emotionality and familiarity than other-culture emotional faces, and that this effect involved conscious awareness.
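
The non-parametric ROC ingredient of that analysis amounts to asking whether confidence ratings rank target-present trials above target-absent trials, with an area under the curve of 0.5 indicating chance-level detection. A toy sketch with placeholder responses follows; the paper's full Bayesian treatment is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: 1 = masked face present, 0 = absent; y_conf: confidence ratings.
# Placeholder responses for one participant/condition, not the study's data.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
y_conf = np.array([4, 3, 4, 2, 1, 2, 2, 1, 3, 3])

# Non-parametric AUC: probability that a present trial outranks an absent
# one; 0.5 indicates chance-level (unaware) detection, higher values
# indicate awareness of the masked stimulus.
print(roc_auc_score(y_true, y_conf))
```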


2021, Vol. 11 (1)
Author(s): Bruno Laeng, Sarjo Kuyateh, Tejaswinee Kelkar

Cross-modal integration is ubiquitous within perception and, in humans, the McGurk effect demonstrates that seeing a person articulating speech can change what we hear into a new auditory percept. It remains unclear whether cross-modal integration of sight and sound generalizes to other visible vocal articulations, like those made by singers. We surmise that perceptual integrative effects should involve music deeply, since there is ample indeterminacy and variability in its auditory signals. We show that switching the videos accompanying sung musical intervals systematically changes the estimated distance between the two notes: pairing the video of a smaller sung interval with a relatively larger auditory interval led to a compression effect on rated interval size, whereas the reverse pairing led to a stretching effect. In addition, after seeing a visually switched video of an equally tempered sung interval and then hearing the same interval played on the piano, the two intervals were often judged to be different, though they differed only in instrument. These findings reveal spontaneous cross-modal integration of vocal sounds and clearly indicate that strong integration of sound and sight can occur beyond the articulations of natural speech.
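
For readers unfamiliar with interval size, the distance between two notes is conventionally expressed in equal-tempered semitones as 12·log2(f2/f1). A one-line illustration using standard tuning frequencies, not stimuli from the study:

```python
import math

def interval_semitones(f1, f2):
    """Distance between two pitches in equal-tempered semitones."""
    return 12 * math.log2(f2 / f1)

# A perfect fifth, e.g. A4 (440 Hz) up to E5 (~659.26 Hz): ~7 semitones.
print(interval_semitones(440.0, 659.26))
```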


Author(s): Alexander Mielke, Bridget M. Waller, Claire Pérez, Alan V. Rincon, Julie Duboscq, ...

Understanding facial signals in humans and other species is crucial for understanding the evolution, complexity, and function of the face as a communication tool. The Facial Action Coding System (FACS) enables researchers to measure facial movements accurately, but we currently lack tools to reliably analyse the data and efficiently communicate results. Network analysis offers a way to use the information encoded in FACS datasets: by treating individual action units (AUs, the smallest units of facial movement) as nodes in a network and their co-occurrence as connections, we can analyse and visualise differences in the use of AU combinations across conditions. Here, we present 'NetFACS', a statistical package that uses occurrence probabilities and resampling methods to answer questions about the use of AUs, AU combinations, and the facial communication system as a whole, in humans and non-human animals. Using highly stereotyped facial signals as an example, we illustrate some of the current functionalities of NetFACS. We show that very few AUs are specific to certain stereotypical contexts; that AUs are not used independently of each other; that graph-level properties of stereotypical signals differ; and that clusters of AUs allow us to reconstruct facial signals even when blind to the underlying conditions. The flexibility and widespread use of network analysis allow us to move away from studying facial signals as stereotyped expressions and towards a dynamic and differentiated approach to facial communication.
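
As an illustration of the underlying idea, AUs as nodes and co-occurrence counts as weighted edges, here is a minimal Python sketch with made-up AU data; it uses networkx and is not the NetFACS API itself.

```python
from itertools import combinations
import networkx as nx

# Each entry: the set of action units (AUs) coded in one facial event.
# Toy data for illustration only; real FACS datasets are far larger.
events = [
    {"AU6", "AU12"}, {"AU6", "AU12", "AU25"},
    {"AU1", "AU2"}, {"AU1", "AU2", "AU5"}, {"AU6", "AU12"},
]

G = nx.Graph()
for aus in events:
    for a, b in combinations(sorted(aus), 2):
        # Edge weight counts how often two AUs co-occur.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Strongest connections first: AU6-AU12 (a smile core) should dominate.
print(sorted(G.edges(data="weight"), key=lambda e: -e[2]))
```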


Author(s): Kayley Birch-Hurst, Magdalena Rychlowska, Michael B. Lewis, Ross E. Vanderwert

People tend to automatically imitate others' facial expressions of emotion. That reaction, termed "facial mimicry", has been linked to sensorimotor simulation: a process in which the observer's brain recreates and mirrors the emotional experience of the other person, potentially enabling empathy and deep, motivated processing of social signals. However, the neural mechanisms that underlie sensorimotor simulation remain unclear. This study tests how interfering with facial mimicry, by asking participants to hold a pen in their mouth, influences the activity of the human mirror neuron system, indexed by desynchronization of the EEG mu rhythm. This response arises from sensorimotor brain areas during both observed and executed movements and has been linked with empathy. We recorded EEG during passive viewing of dynamic facial expressions of anger, fear, and happiness, as well as of nonbiological moving objects. We examine mu desynchronization under conditions of free versus altered facial mimicry and show that desynchronization is present when adult participants can move freely, but not when their facial movements are inhibited. Our findings highlight the importance of motor activity and facial expression in emotion communication. They also have important implications for behaviors that involve occupying or hiding the lower part of the face.
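
The mu-desynchronization index described here is conventionally computed as the relative change in 8-13 Hz power during observation versus a baseline period. A sketch with synthetic signals follows; the sampling rate and band edges are standard conventions, not necessarily the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import welch

def mu_band_power(x, fs):
    """Mean power in the 8-13 Hz mu band, from Welch's PSD."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    return pxx[(f >= 8) & (f <= 13)].mean()

def mu_desynchronization(baseline, viewing, fs):
    """Relative change in mu power; negative values mean suppression."""
    pb = mu_band_power(baseline, fs)
    return (mu_band_power(viewing, fs) - pb) / pb

fs = 250  # Hz, an assumed sampling rate
rng = np.random.default_rng(1)
t = np.arange(4 * fs) / fs
# Synthetic signals: a 10 Hz rhythm that weakens during viewing.
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
viewing = 0.8 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
print(mu_desynchronization(baseline, viewing, fs))  # expected < 0
```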


PLoS ONE, 2021, Vol. 16 (10), pp. e0258322
Author(s): Mareike Brych, Supriya Murali, Barbara Händel

The blink rate increases when a person engages in conversation compared to quiet rest. Since various factors have been suggested to explain this increase, the present series of studies tested the influence of different motor activities, cognitive processes and auditory input on blinking behavior, while minimizing visual stimulation as well as social influences. Our results suggest that neither cognitive demands without verbalization, nor isolated lip, jaw or tongue movements, nor auditory input during vocalization or listening influence blinking behavior. In three experiments, we provide evidence that the complex facial movements made during unvoiced speaking are the driving factor behind the increase in blinking. If the complexity of the motor output increased further, as during vocalized speech, the blink rate rose even more. Similarly, complex facial movements without cognitive demands, such as sucking on a lollipop, increased the blink rate. Such purely motor-related influences on blinking advise caution, particularly when blink rates assessed during patient interviews are used as a neurological indicator.

