Focusing on Mouth Movement to Improve Genuine Smile Recognition

2020 ◽ Vol 11 ◽ Author(s): Qian-Nan Ruan, Jing Liang, Jin-Yu Hong, Wen-Jing Yan

1996 ◽ Vol 21 (5) ◽ pp. 545-551 ◽ Author(s): Marjon J.M. Theunissen, Jan H.A. Kroeze

1989 ◽ Vol 61 (4) ◽ pp. 814-832 ◽ Author(s): O. Hikosaka, M. Sakamoto, S. Usui

1. The present paper reports complex neural activities in the monkey caudate nucleus that precede and anticipate visual stimuli and reward in learned visuomotor paradigms. These activities were revealed typically in the delayed saccade task, in which memory and anticipation were required. We classified these activities according to their relationships to the task.
2. Activity related to expectation of a cue (n = 46) preceded the presentation of a spot of light (target cue) that signified the future location of the saccade target. When the target cue was delayed, the activity was prolonged accordingly. The same spot of light was preceded by no activity if it acted as a distracting stimulus.
3. The sustained activity (n = 80) was a tonic discharge starting after the target cue, as if holding the spatial information.
4. The activity related to expectation of target (n = 109) preceded the appearance of the target whose location was cued previously. It started with or after a saccade to the cued target location and ended with the appearance of the target. The activity was greater when the target was expected to appear in the contralateral visual field.
5. The activity related to expectation of reward (n = 57) preceded a task-specific reward. It started with the appearance of the final target and ended with the reward. In most cases, the activity was nonselective for how the monkey obtained the reward, i.e., by visual fixation only, by a saccade, or by a hand movement. The activity was dependent partly on visual fixation.
6. A few neurons showed tonic activity selectively before lever release and are thus considered to be related to the preparation of hand movements.
7. The activity related to breaking fixation (n = 33) occurred phasically if the monkey broke fixation, aborting the trial.
8. Activity related to reward (n = 104) was a phasic discharge that occurred before or after a reward of water was delivered. The activity was not simply related to a specific movement involved in the reward-obtaining behavior (eye, hand, or mouth movement).
9. Fixation-related activity (n = 72) was tonic activity continuing as long as the monkey attentively fixated a spot of light. It was dependent on reward expectancy in most cases.
10. The present results, together with those in the preceding papers, indicate that the activities of individual caudate neurons, whether sensory, motor, or cognitive, are dependent on specific contexts of learned behavior. (Abstract truncated at 400 words.)


2020 ◽ Vol 34 (10) ◽ pp. 13895-13896 ◽ Author(s): Shimeng Peng, Lujie Chen, Chufan Gao, Richard Jiarui Tong

Engaged learners are effective learners. Although it is widely recognized that engagement plays a vital role in learning effectiveness, engagement remains an elusive psychological construct that has yet to find a consensus definition and reliable measurement. In this study, we attempted to discover plausible operational definitions of engagement within an online learning context. We achieved this goal by first deriving a set of interpretable features describing the dynamics of eye, head, and mouth movement from facial landmarks extracted from video recordings of students interacting with an online tutoring system. We then assessed their predictive value for engagement, which was approximated by synchronized measurements from a commercial EEG brainwave headset worn by the students. Our preliminary results show that these features reduce root-mean-squared error by 29% compared with a default predictor, and that a random forest model performs better than a linear regressor.
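To make the comparison above concrete, here is a minimal, hypothetical Python sketch using scikit-learn: it regresses a synthetic engagement score on stand-in facial-dynamics features and compares a mean ("default") predictor, a linear regressor, and a random forest by RMSE. The feature count, synthetic data, and model settings are illustrative assumptions, not values or code from the paper.

```python
# Hypothetical sketch: predicting an EEG-approximated engagement score from
# facial-landmark dynamics, comparing a mean ("default") predictor, a linear
# regressor, and a random forest by root-mean-squared error (RMSE).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Toy stand-ins for per-window features on eye, head, and mouth dynamics
# (e.g. blink rate, head-pose variance, mouth-opening velocity).
X = rng.normal(size=(500, 6))
y = 0.5 * X[:, 0] - 0.3 * X[:, 3] + rng.normal(scale=0.5, size=500)  # engagement proxy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "default (mean)": DummyRegressor(strategy="mean"),
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
    print(f"{name}: RMSE = {rmse:.3f}")
```

In this setup the "29% RMSE reduction" reported in the abstract would correspond to comparing a feature-based model's RMSE against the mean predictor's RMSE on the same held-out split.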


2009 ◽ pp. 388-415 ◽ Author(s): Wai Chee Yau, Dinesh Kant Kumar, Hans Weghorn

The performance of a visual speech recognition technique is greatly influenced by the choice of visual speech features. Speech information in the visual domain can generally be categorized into static (mouth appearance) and motion (mouth movement) features. This chapter reviews a number of computer-based lip-reading approaches using motion features. Motion-based visual speech recognition techniques can be broadly categorized into two types of algorithms: optical flow and image subtraction. Image subtraction techniques have been demonstrated to outperform optical-flow-based methods in lip-reading. The problem with image subtraction methods using the difference of frames (DOF) is that these features capture changes in the images over time but do not indicate the direction of mouth movement. New motion features that overcome this limitation of conventional image subtraction techniques in visual speech recognition are presented in this chapter. The proposed approach extracts features by applying motion segmentation to image sequences. Video data are represented in a 2-D space using grayscale images known as motion history images (MHIs). MHIs are spatio-temporal templates that implicitly encode the temporal component of mouth movement. Zernike moments are computed from MHIs as image descriptors and classified using support vector machines (SVMs). Experimental results demonstrate that the proposed technique yields high accuracy in a phoneme classification task. The results suggest that dynamic information is important for visual speech recognition.
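As a rough illustration of the pipeline described in this chapter (not the authors' implementation), the following Python sketch accumulates a motion history image from thresholded frame differences, describes it with Zernike moments via the mahotas library, and classifies with an SVM from scikit-learn. The threshold, decay rate, moment degree, and kernel parameters are assumptions chosen for illustration.

```python
# Sketch: MHI -> Zernike moments -> SVM phoneme classifier.
import numpy as np
import mahotas
from sklearn.svm import SVC

def motion_history_image(frames, tau=255, threshold=30, decay=16):
    """Build an MHI from a list of grayscale frames (2-D uint8 arrays):
    pixels with recent motion are set to tau, older motion decays toward 0."""
    mhi = np.zeros_like(frames[0], dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        motion = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > threshold
        mhi = np.where(motion, tau, np.maximum(mhi - decay, 0))
    return mhi.astype(np.uint8)

def zernike_descriptor(mhi, degree=8):
    """Rotation-invariant Zernike moments of the MHI; the radius spans the image."""
    radius = min(mhi.shape) // 2
    return mahotas.features.zernike_moments(mhi, radius, degree=degree)

def train_classifier(sequences, labels):
    """`sequences` is a list of frame lists (one mouth-region clip per utterance)
    and `labels` holds the corresponding phoneme classes (illustrative inputs)."""
    X = np.array([zernike_descriptor(motion_history_image(seq)) for seq in sequences])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X, labels)
    return clf
```

Because the MHI keeps brighter values where motion occurred more recently, it preserves the direction and ordering of mouth movement that a plain difference-of-frames feature discards, which is the point the chapter makes about DOF-based methods.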

