Flexible Bi-modal Control Modes for Hands-Free Operation of a Wheelchair by Head Movements and Facial Expressions

2014 ◽  
Vol 4 (1) ◽  
pp. 59-76 ◽  
Author(s):  
Ericka Janet Rechy-Ramirez ◽  
Huosheng Hu

This paper presents a bio-signal based human machine interface (HMI) for hands-free control of an electric powered wheelchair. In this novel HMI, an Emotiv EPOC sensor is deployed to detect facial expressions and head movements of users, which are then recognized and converted to four uni-modal control modes and two bi-modal control modes to operate the wheelchair. Nine facial expressions and up-down head movements have been defined and tested, so that users can select some of these facial expressions and head movements to form the six control commands. The proposed HMI is user-friendly and allows users to select one of the available control modes according to their comfort. Experiments are conducted to show the feasibility and performance of the proposed HMI.
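The abstract does not spell out how the detected gestures are bound to wheelchair commands. As a rough illustration of a bi-modal mode (head movements and facial expressions feeding one command set), the sketch below maps detected gesture labels to commands; the gesture names and the particular mapping are hypothetical, not the configuration used in the study.

```python
from enum import Enum

class Command(Enum):
    FORWARD = 1
    BACKWARD = 2
    LEFT = 3
    RIGHT = 4
    STOP = 5
    NONE = 6

# Hypothetical bi-modal configuration: up/down head movements drive
# forward/backward motion, while user-chosen facial expressions handle
# turning and stopping.
HEAD_MAP = {"head_up": Command.FORWARD, "head_down": Command.BACKWARD}
FACE_MAP = {"smile": Command.RIGHT, "left_wink": Command.LEFT, "clench": Command.STOP}

def to_command(head_movement=None, facial_expression=None):
    """Map one detected head movement and/or facial expression to a command.
    Head movements take priority if both channels fire at once."""
    if head_movement in HEAD_MAP:
        return HEAD_MAP[head_movement]
    return FACE_MAP.get(facial_expression, Command.NONE)

print(to_command(head_movement="head_up"))        # Command.FORWARD
print(to_command(facial_expression="smile"))      # Command.RIGHT
```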


Author(s):  
Yongmian Zhang ◽  
Jixu Chen ◽  
Yan Tong ◽  
Qiang Ji

This chapter describes a probabilistic framework for faithful reproduction of spontaneous facial expressions on a synthetic face model in a real-time interactive application. The framework consists of a coupled Bayesian network (BN) that unifies facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the facial action coding system (FACS) into a dynamic Bayesian network (DBN) to capture the relationships between facial expressions and facial motions, as well as their uncertainties and dynamics. The observations fed into the DBN facial expression model are measurements of facial action units (AUs) generated by an AU model. Also implemented as a DBN, the AU model captures the rigid head movements and non-rigid facial muscular movements of a spontaneous facial expression. At the synthesis end, a static BN reconstructs the Facial Animation Parameters (FAPs) and their intensities through top-down inference, according to the current state of facial expression and the pose information output by the analysis end. The two BNs are connected statically through a data stream link. Using the coupled BN brings several benefits. First, a facial expression is inferred through both spatial and temporal inference, so the perceptual quality of the animation is less affected by misdetection of facial features. Second, more realistic-looking facial expressions can be reproduced by modeling the dynamics of human expressions during facial expression analysis. Third, a very low bit rate (9 bytes per frame) can be achieved in data transmission.
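The DBN at the analysis end performs temporal inference of the expression state from noisy AU measurements. A greatly simplified stand-in for that idea (not the authors' model) is recursive Bayesian filtering over a hypothetical three-state expression variable with two binary AU observations; all state names and probabilities below are made up for the example.

```python
import numpy as np

# Three hypothetical expression states; the chapter's model covers the full
# FACS-based expression set, so these names and numbers are illustrative only.
STATES = ["neutral", "happy", "surprise"]

# Transition model P(x_t | x_{t-1}): expressions tend to persist over time.
A = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.85, 0.05],
              [0.10, 0.05, 0.85]])

def au_likelihood(au, state_idx):
    """P(AU measurements | expression state) for two binary AU detections,
    e.g. a lip-corner puller and a brow raiser (placeholder probabilities)."""
    p_lip = [0.1, 0.9, 0.3][state_idx]
    p_brow = [0.1, 0.2, 0.9][state_idx]
    lip, brow = au
    return (p_lip if lip else 1 - p_lip) * (p_brow if brow else 1 - p_brow)

def forward_filter(au_sequence):
    """Recursive Bayesian (forward) filtering over the expression state."""
    belief = np.full(len(STATES), 1.0 / len(STATES))   # uniform prior
    for au in au_sequence:
        belief = A.T @ belief                          # temporal prediction
        belief *= [au_likelihood(au, i) for i in range(len(STATES))]
        belief /= belief.sum()                         # normalise
    return belief

# Even with one misdetected AU frame (third observation), the temporal model
# keeps the inferred expression stable.
observations = [(1, 0), (1, 0), (0, 0), (1, 0)]
print(dict(zip(STATES, forward_filter(observations).round(3))))
```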


2011 ◽  
pp. 1637-1654
Author(s):  
Hirohiko Sagawa ◽  
Masaru Takeuchi

We have developed a sign language teaching system that uses sign language recognition and generation methods to overcome three problems with current learning materials: a lack of information about non-manual gestures (facial expressions, glances, head movements, etc.), display of gestures from only one or two points of view, and a lack of feedback about the correctness of the learner’s gestures. Experimental evaluation by 24 non-hearing-impaired people demonstrated that the system is effective for learning sign language.


This project presents a Linux-based system to automatically detect emotional dichotomy and mixed emotional experience. Facial expressions, head movements, and facial gestures are captured from pictorial input in order to create attributes such as distances, coordinates, and movements of tracked points. A web camera is used to extract spectral attributes. Features are calculated using the Fisherface algorithm. Emotion is detected by a cascade classifier, and feature-level fusion is used to create a combined feature vector. Live user actions are recorded to capture emotions. Based on the detected emotion, the system plays songs and displays a book list.
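A pipeline of this kind (cascade-based face detection followed by Fisherface-based emotion classification) can be approximated with stock OpenCV components. The sketch below is an assumption about how it might be wired up, with placeholder random images standing in for a labelled emotion dataset; the Fisherface recognizer requires the opencv-contrib-python build.

```python
import cv2
import numpy as np

# Haar cascade face detector shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(gray, size=(128, 128)):
    """Detect the largest face in a grayscale frame; return it cropped and resized."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], size)

# Fisherface recognizer trained on emotion labels rather than identities.
# Placeholder random images stand in for a real labelled training set.
recognizer = cv2.face.FisherFaceRecognizer_create()
rng = np.random.default_rng(0)
train_faces = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in range(8)]
train_labels = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=np.int32)  # 0 = neutral, 1 = happy
recognizer.train(train_faces, train_labels)

# Classify one frame captured from the web camera (if one is available).
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()
if ret:
    face = extract_face(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if face is not None:
        label, distance = recognizer.predict(face)
        print("predicted emotion id:", label, "distance:", distance)
```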


Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3263 ◽  
Author(s):  
Lee ◽  
Cho ◽  
Lee ◽  
Whang

The development of vision-based measurement has made it possible to measure heart rate comfortably with a camera, without skin contact. Despite its potential, vision-based measurement is still limited by illumination variance and motion artifacts. Remote ballistocardiography (BCG) has been used to estimate heart rate from the ballistocardiographic head movements generated by the flow of blood through the carotid arteries. It is robust to illumination variance but remains sensitive to motion artifacts such as facial expressions and voluntary head motions. Recent studies on remote BCG focus on improving signal extraction by minimizing motion artifacts; they simply estimate the heart rate from the cardiac signal using peak detection or the fast Fourier transform (FFT). However, heart rate estimation based on peak detection and the FFT depends on robust signal estimation, so if the cardiac signal is contaminated with noise, the heart rate cannot be estimated accurately. This study aimed to develop a novel method to improve heart rate estimation from ballistocardiographic head movements using unsupervised clustering. First, the ballistocardiographic head movements were measured from facial video by detecting facial points with the good-features-to-track (GFTT) algorithm and tracking them with the Kanade–Lucas–Tomasi (KLT) tracker. Second, the cardiac signal was extracted from the ballistocardiographic head movements by a bandpass filter and principal component analysis (PCA), and the relative power density (RPD) was extracted from its power spectrum between 0.75 Hz and 2.5 Hz. Third, unsupervised clustering was performed to construct a model that estimates the heart rate from the RPD, using a dataset consisting of RPD values and heart rates measured from an electrocardiogram (ECG). Finally, the heart rate was estimated from the RPD using this model. The proposed method was verified by comparing it with previous methods based on peak detection and the FFT. As a result, the proposed method estimated the heart rate more accurately than the previous methods in three experiments with different levels of motion artifacts (facial expressions and voluntary head motions). The four main contributions are as follows: (1) the unsupervised clustering improved heart rate estimation by overcoming the motion artifacts (i.e., facial expressions and voluntary head motions); (2) the proposed method was verified by comparison with previous methods using peak detection and the FFT; (3) the proposed method can be combined with existing vision-based measurements and improve their performance; (4) the proposed method was tested in three experiments reflecting a realistic environment that includes motion artifacts, thus increasing the feasibility of non-contact measurement in daily life.
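The signal-extraction half of this pipeline (GFTT point detection, KLT tracking, bandpass filtering, PCA, and RPD computation) can be sketched with OpenCV, SciPy, and scikit-learn, as below. The frame rate, filter order, and the naive strongest-bin heart-rate readout are assumptions for illustration; the paper's clustering-based model, which needs paired RPD/ECG training data, is not reproduced here.

```python
import numpy as np
import cv2
from scipy.signal import butter, filtfilt, welch
from sklearn.decomposition import PCA

FPS = 30.0  # assumed camera frame rate

def track_head_points(frames):
    """Track GFTT corner points through a list of BGR frames with the KLT
    tracker and return their vertical (ballistocardiographic) trajectories."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01, minDistance=7)
    traj = [pts[:, 0, 1]]                        # y-coordinates only
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        traj.append(nxt[:, 0, 1])                # lost points kept for simplicity
        prev, pts = gray, nxt
    return np.asarray(traj)                      # shape: (frames, points)

def estimate_rpd(traj, fs=FPS):
    """Band-pass the trajectories, extract the dominant component with PCA,
    and compute the relative power density (RPD) in the 0.75-2.5 Hz band."""
    b, a = butter(4, [0.75 / (fs / 2), 2.5 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, traj, axis=0)
    cardiac = PCA(n_components=1).fit_transform(filtered).ravel()
    f, psd = welch(cardiac, fs=fs, nperseg=min(256, len(cardiac)))
    band = (f >= 0.75) & (f <= 2.5)
    return f[band], psd[band] / psd[band].sum()

def naive_heart_rate(freqs, rpd):
    """Naive stand-in for the clustering model: read off the strongest bin."""
    return 60.0 * freqs[np.argmax(rpd)]          # beats per minute

# Tiny synthetic demo: 20 s of head traces containing a 1.2 Hz (72 bpm) component.
t = np.arange(0, 20, 1 / FPS)
demo = 0.2 * np.sin(2 * np.pi * 1.2 * t)[:, None] + 0.05 * np.random.randn(len(t), 10)
freqs, rpd = estimate_rpd(demo)
print("estimated HR: %.1f bpm" % naive_heart_rate(freqs, rpd))
```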


Author(s):  
Ericka Janet Rechy-Ramirez ◽  
Huosheng Hu

A bio-signal-based human machine interface is proposed for hands-free control of a wheelchair. An Emotiv EPOC sensor is used to detect facial expressions and head movements of users. Nine facial expressions and up-down head movements can be chosen to form five commands: move forward, move backward, turn left, turn right, and stop. Four uni-modal modes, three bi-modal modes, and three fuzzy bi-modal modes are created to control the wheelchair. The fuzzy modes use the strength of the user's head movement and facial expression to adjust the wheelchair speed via a fuzzy logic system. Two subjects tested the ten modes with several command configurations. Mean, minimum, and maximum traveling times achieved by each subject in each mode were collected. Results showed that both subjects achieved their lowest mean, minimum, and maximum traveling times using the fuzzy modes. Statistical tests showed significant differences among the traveling times of subject B's fuzzy modes, and between the traveling times of the bi-modal modes and those of their respective fuzzy modes for both subjects.
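The fuzzy speed adjustment can be illustrated with a minimal, hand-rolled fuzzy inference step: gesture strength is fuzzified with triangular membership functions and defuzzified to a speed with a weighted average. The membership breakpoints, the three rules, and the output speeds below are illustrative assumptions, not the fuzzy logic system used in the study.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with breakpoints a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_speed(strength):
    """Map a normalised gesture strength (0..1) to a wheelchair speed (0..100)
    with three rules: weak -> slow, medium -> cruise, strong -> fast."""
    weak = trimf(strength, -0.5, 0.0, 0.5)
    medium = trimf(strength, 0.2, 0.5, 0.8)
    strong = trimf(strength, 0.5, 1.0, 1.5)
    # Defuzzify with a weighted average of representative speeds (Sugeno-style).
    speeds = np.array([20.0, 55.0, 90.0])        # slow, cruise, fast
    weights = np.array([weak, medium, strong])
    return float((weights * speeds).sum() / (weights.sum() + 1e-9))

for s in (0.1, 0.5, 0.9):
    print(f"strength={s:.1f} -> speed={fuzzy_speed(s):.1f}")
```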

