facial electromyography
Recently Published Documents


TOTAL DOCUMENTS

154
(FIVE YEARS 57)

H-INDEX

19
(FIVE YEARS 4)

Author(s):  
Huihui Cai ◽  
Yakun Zhang ◽  
Liang Xie ◽  
Huijiong Yan ◽  
Wei Qin ◽  
...  

Nutrients ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 4216
Author(s):  
Wataru Sato ◽  
Akira Ikegami ◽  
Sayaka Ishihara ◽  
Makoto Nakauma ◽  
Takahiro Funami ◽  
...  

Sensing subjective hedonic or emotional experiences during eating using physiological activity is practically and theoretically important. A recent psychophysiological study has reported that facial electromyography (EMG) measured from the corrugator supercilii muscles was negatively associated with hedonic ratings, including liking, wanting, and valence, during the consumption of solid foods. However, the study protocol prevented participants from natural mastication (crushing of food between the teeth) during physiological data acquisition, which could hide associations between hedonic experiences and masticatory muscle activity during natural eating. We investigated this issue by assessing participants’ subjective ratings (liking, wanting, valence, and arousal) and recording physiological measures, including EMG of the corrugator supercilii, zygomatic major, masseter, and suprahyoid muscles while they consumed gel-type solid foods (water-based gellan gum jellies) of diverse flavors. Ratings of liking, wanting, and valence were negatively correlated with corrugator supercilii EMG and positively correlated with masseter and suprahyoid EMG. These findings imply that subjective hedonic experiences during food consumption can be sensed using EMG signals from the brow and masticatory muscles.
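The analysis reported above reduces, per trial, to correlating a summary EMG amplitude for each muscle with the corresponding hedonic rating. A minimal sketch of that kind of analysis, using synthetic per-trial data in place of real recordings (the effect directions are wired in for illustration and are not the authors' data):

```python
import numpy as np

def emg_amplitude(emg):
    """Mean rectified amplitude, a common summary of EMG activity."""
    emg = np.asarray(emg, dtype=float)
    return float(np.mean(np.abs(emg - emg.mean())))

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Synthetic per-trial data: liking ratings plus per-muscle EMG amplitudes
# whose signs mimic the reported findings (corrugator down, masseter up).
rng = np.random.default_rng(0)
liking = rng.uniform(1, 9, size=30)
corrugator_amp = -0.5 * liking + rng.normal(0, 0.5, size=30)
masseter_amp = 0.5 * liking + rng.normal(0, 0.5, size=30)

print(pearson_r(liking, corrugator_amp))  # negative in this toy setup
print(pearson_r(liking, masseter_amp))    # positive in this toy setup
```

In a real analysis `emg_amplitude` would be applied to the rectified EMG trace of each trial before correlating; here the amplitudes are generated directly for brevity.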


2021 ◽  
Author(s):  
Zhenyu Jin

Objective. Electroencephalography (EEG) signals suffer from a low signal-to-noise ratio and are highly susceptible to muscular activity, ambient noise, and other artifacts. Many artifact removal algorithms have been proposed to address this problem. However, these algorithms are conventionally evaluated only indirectly (e.g., black-box comparisons of brain-computer interface performance before and after removal), because it is unclear which part of the signal represents raw EEG and which is noise. This project objectively benchmarks popular artifact removal algorithms and evaluates the fundamental Independent Component Analysis (ICA) approach using a unique dataset in which EEG is recorded simultaneously with other physiological signals (facial electromyography (EMG), accelerometers, and a gyroscope) while ten subjects perform several repetitions of common artifact-inflicting tasks (blinking, speaking, etc.). Approach. I compared the correlation between EEG signals and the artifact-representing channels before and after applying an artifact removal algorithm, across the different artifact-inflicting tasks. The extent to which an artifact removal method reduces this correlation objectively quantifies its effectiveness for the different artifacts. In the same vein, I determined to what extent ICA successfully detects artifactual components in EEG by comparing the corresponding correlations for independent components labeled as artifacts with those labeled as EEG. Main results. FORCe was found to be the most effective and generic artifact removal method, cleaning almost 40% of artifacts. ICA is shown to isolate almost 70% of artifactual components. Significance. This work alleviates the problem of unreliable evaluation of EEG artifact removal frameworks and provides the first reliable benchmark for the most popular algorithms in this literature.
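The evaluation idea (correlation with an artifact reference channel before vs. after cleaning) can be sketched as follows. The signals and the scaling-based "cleaner" here are synthetic stand-ins, not the dataset or the algorithms benchmarked in the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def correlation_reduction(raw, cleaned, artifact_ref):
    """Fractional drop in |correlation| with an artifact reference channel.

    1.0 means the artifact's trace was removed entirely; 0.0 means the
    cleaning step changed nothing with respect to this reference.
    """
    r_before = abs(pearson_r(raw, artifact_ref))
    r_after = abs(pearson_r(cleaned, artifact_ref))
    return (r_before - r_after) / r_before

# Synthetic example: an EEG channel contaminated by an EMG-like reference.
rng = np.random.default_rng(1)
brain = rng.normal(size=5000)
emg_ref = rng.normal(size=5000)
raw_eeg = brain + 0.8 * emg_ref
cleaned_eeg = brain + 0.1 * emg_ref  # stand-in for a removal algorithm

print(correlation_reduction(raw_eeg, cleaned_eeg, emg_ref))
```

In the study this metric is computed per task and per reference channel (fEMG, accelerometer, gyroscope), so a method's effectiveness can be compared across artifact types.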


Author(s):  
Diana Kayser ◽  
Hauke Egermann ◽  
Nick E. Barraclough

An abundance of studies on emotional experiences in response to music has been published over the past decades; however, most have been carried out in controlled laboratory settings and rely on subjective reports. Facial expressions have occasionally been assessed, but measured using intrusive methods such as facial electromyography (fEMG). The present study investigated the emotional experiences of fifty participants at a live concert. Our aims were to explore whether automated face analysis could detect facial expressions of emotion in a group of people in an ecologically valid listening context, to determine whether emotions expressed by the music predicted specific facial expressions, and to examine whether facial expressions of emotion could be used to predict subjective ratings of pleasantness and activation. During the concert, participants were filmed, and their facial expressions were subsequently analyzed with automated face analysis software. Self-reports of participants' subjective experience of pleasantness and activation were collected after the concert for all pieces (two happy, two sad). Our results show that the pieces that expressed sadness elicited more facial expressions of sadness (compared to happiness), whereas the pieces that expressed happiness elicited more facial expressions of happiness (compared to sadness). Differences for the other facial expression categories (anger, fear, surprise, disgust, and neutral) were not found. Independent of the musical piece or the emotion expressed in the music, facial expressions of happiness predicted ratings of subjectively felt pleasantness, whilst facial expressions of sadness and disgust predicted low and high ratings of subjectively felt activation, respectively. Together, our results show that non-invasive measurements of audience facial expressions in a naturalistic concert setting are indicative of both the emotions expressed by the music and the subjective experiences of the audience members themselves.


2021 ◽  
Author(s):  
Tomoki Ishikura ◽  
Yuki Kitamura ◽  
Wataru Sato ◽  
Jun Takamatsu ◽  
Akishige Yuguchi ◽  
...  

Pleasant touch is an important aspect of social interaction that is widely used as a caregiving technique. To address problems resulting from a lack of available human caregivers, previous research has attempted to develop robots that can perform this kind of pleasant touch. However, it remains unclear whether robots can provide such a pleasant touch in a manner similar to humans. To investigate this issue, we compared the effect of the speed of gentle strokes on the back, delivered by human and robot agents, on the emotional responses of human participants (n = 28). A robot or a human stroked the participants' backs at slow and medium speeds (2.6 and 8.5 cm/s, respectively). Participants' subjective (valence and arousal ratings) and physiological (facial electromyography (EMG) recorded from the corrugator supercilii and zygomatic major muscles, and skin conductance response) emotional reactions were measured. The subjective ratings demonstrated that the medium speed was more pleasant and arousing than the slow speed for both human and robot strokes. The corrugator supercilii EMG showed that the medium speed resulted in reduced activity in response to both human and robot strokes. These results demonstrate similar speed-dependent modulation of subjective and physiological positive emotional responses to strokes across human and robot agents, and suggest that robots can provide a pleasant touch similar to that of humans.


2021 ◽  
Author(s):  
Yael Hanein

Facial expressions play a major role in human communication and provide a window into an individual's emotional state. While facial expressions can be consciously manipulated to conceal true emotions, very brief leaked expressions may occur, exposing one's true internal state. Leaked expressions are therefore considered an important hallmark in deception detection, a field with enormous social and economic impact. Challengingly, capturing these subtle and brief expressions has so far been limited to visual examination (manual or machine-based), with almost no electromyographic evidence. In this investigation we set out to explore whether the electromyography of leaked expressions can be faithfully recorded with specially designed wearable electrodes. Indeed, using facial electromyography based on soft multi-electrode arrays, we were able to record localized and brief signals in individuals instructed to suppress smiles. The electromyographic evidence was validated with high-speed video recordings. The recording approach reported here provides a new and sensitive tool for investigating leaked expressions and a basis for improved automated systems.


2021 ◽  
Vol 15 ◽  
Author(s):  
Bo Zhu ◽  
Daohui Zhang ◽  
Yaqi Chu ◽  
Xingang Zhao ◽  
Lixin Zhang ◽  
...  

Patients who have lost the ability to control their limbs, such as those with upper limb amputation or high paraplegia, are usually unable to take care of themselves. Establishing a natural, stable, and comfortable human-computer interface (HCI) for controlling rehabilitation assistance robots and other controllable equipment would resolve many of their difficulties. In this study, a complete limbs-free face-computer interface (FCI) framework based on facial electromyography (fEMG), covering both offline analysis and online control of mechanical equipment, was proposed. Six facial movements involving the eyebrows, eyes, and mouth were used in this FCI. In the offline stage, 12 models, eight types of features, and three different methods of combining features as model input were studied and compared in detail. In the online stage, four well-designed sessions were introduced in which a robotic arm was controlled to complete a water-drinking task in three ways (by touch screen, and by fEMG with and without audio feedback) to verify and compare the performance of the proposed FCI framework. Three features and one model, with an average offline recognition accuracy of 95.3%, a maximum of 98.8%, and a minimum of 91.4%, were selected for use in the online scenarios. Control with audio feedback performed better than control without it. All subjects completed the drinking task within a few minutes using the FCI. The average and smallest time differences between the touch screen and fEMG with audio feedback were only 1.24 and 0.37 min, respectively.


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253509
Author(s):  
Irene Jaén ◽  
Amanda Díaz-García ◽  
M. Carmen Pastor ◽  
Azucena García-Palacios

Cognitive reappraisal and acceptance strategies have been shown to be effective in reducing the pain experience and increasing pain tolerance. However, no systematic reviews have focused on the relationship between the use of these two strategies and peripheral physiological correlates when pain is experimentally induced. This systematic review aims to summarize the existing literature exploring the relationship between emotion regulation strategies (i.e., cognitive reappraisal and acceptance) and peripheral correlates of the autonomic nervous system and facial electromyography, such as affect-modulated responses and corrugator activity, in laboratory tasks where pain is induced. The systematic review identified nine experimental studies that met our inclusion criteria, none of which compared the two strategies. Although cognitive reappraisal and acceptance strategies appear to be associated with decreased psychological responses, mixed results were found for the effects of both strategies on all the physiological correlates. These inconsistencies between the studies might be explained by the high methodological heterogeneity of the task designs, as well as a lack of consistency between the instructions used for cognitive reappraisal, acceptance, and the control conditions in the different studies.


Author(s):  
Jayendhra Shiva ◽  
Navaneethakrishna Makaram ◽  
P.A. Karthick ◽  
Ramakrishnan Swaminathan

Recognition of the emotions expressed by human beings plays a crucial role in healthcare and human-machine interfaces. This paper reports an attempt to classify emotions along the valence affective dimension using a spectral feature derived from facial electromyography (facial EMG) signals. For this purpose, the facial EMG signals are obtained from the DEAP dataset. The signals are subjected to the short-time Fourier transform, and peak frequency values are extracted from the signal in intervals of one second. A support vector machine (SVM) classifier is used to classify the extracted features. The extracted feature can classify the signals along the valence dimension with an accuracy of 61.37%. The proposed feature could serve as an additional feature for emotion recognition, and this method of analysis could be extended to myoelectric control applications.
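The described feature (peak spectral frequency per one-second window) feeding an SVM can be sketched as below. The sampling rate, synthetic signals, and two-class structure are illustrative assumptions, not the DEAP data or the paper's exact pipeline:

```python
import numpy as np
from sklearn.svm import SVC

FS = 128  # Hz; illustrative sampling rate

def peak_frequencies(signal, fs=FS):
    """Peak spectral frequency in each non-overlapping 1 s window."""
    n = fs
    feats = []
    for start in range(0, len(signal) - n + 1, n):
        seg = signal[start:start + n] * np.hanning(n)
        mag = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        feats.append(freqs[np.argmax(mag[1:]) + 1])  # skip the DC bin
    return np.array(feats)

def make_trial(tone_hz, rng, seconds=5):
    """Synthetic 'EMG' trial: a dominant tone plus Gaussian noise."""
    t = np.arange(seconds * FS) / FS
    return np.sin(2 * np.pi * tone_hz * t) + 0.3 * rng.normal(size=t.size)

# Toy two-class set: "low valence" trials dominated by ~20 Hz activity,
# "high valence" trials by ~45 Hz (purely illustrative frequencies).
rng = np.random.default_rng(7)
X = [peak_frequencies(make_trial(hz, rng)) for hz in [20] * 20 + [45] * 20]
y = [0] * 20 + [1] * 20

clf = SVC(kernel="linear").fit(X[::2], y[::2])  # train on even trials
accuracy = clf.score(X[1::2], y[1::2])          # test on odd trials
print(accuracy)
```

On this cleanly separable toy data the accuracy is trivially high; on real facial EMG, the 61.37% reported above reflects how much weaker and noisier the valence signal is.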

