The Application of Affective Computing in Man-Machine Dialogue: Assisting Therapy of Children with Autism

Author(s):  
Lingling Han ◽  
Xiaodong Li
1980 ◽  
Vol 19 (04) ◽  
pp. 187-194
Author(s):  
J.-Ph. Berney ◽  
R. Baud ◽  
J.-R. Scherrer

It is well known that Frame Selection Systems (FSS) have proved both popular and effective in physician-machine and patient-machine dialogue. A formal algorithm for defining a Frame Selection System for handling man-machine dialogue is presented here. In addition, it is shown how natural medical language can be handled using a tree-branching logic. This logic is based upon ordered series of selections that enclose a syntactic structure. The external specifications are discussed with regard to convenience and efficiency. Because all communication between the user and the application programs is handled solely by the FSS software, the FSS contributes to modularity, and therefore also maintainability, in a transaction-oriented system with a large database and concurrent accesses.
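As a sketch of the tree-branching logic described above, in which an ordered series of selections traces a path that encloses a syntactic structure, the following is a minimal, hypothetical frame-selection dialogue. The class and function names (`Frame`, `select`, `run_dialogue`) and the example medical prompts are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of a frame-selection dialogue: each frame offers an
# ordered set of selections, and each selection leads to a child frame.

class Frame:
    def __init__(self, prompt, selections=None):
        self.prompt = prompt                # text shown to the user
        self.selections = selections or {}  # selection label -> child Frame

    def select(self, label):
        """Follow one branch of the tree; raises KeyError on invalid input."""
        return self.selections[label]

# A tiny medical dialogue tree: the syntactic structure of the recorded
# finding is implied by the path of selections taken through the tree.
leaf_acute = Frame("Record: acute chest pain")
leaf_chronic = Frame("Record: chronic chest pain")
root = Frame("Symptom?", {
    "chest pain": Frame("Onset?", {"acute": leaf_acute,
                                   "chronic": leaf_chronic}),
})

def run_dialogue(frame, answers):
    """Walk the tree with a list of user answers; return the final prompt."""
    for answer in answers:
        frame = frame.select(answer)
    return frame.prompt
```

Because every user interaction passes through this one traversal routine, application code never talks to the terminal directly, which is one way to read the paper's claim that the FSS layer yields modularity.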


2021 ◽  
Author(s):  
Intissar Khalifa ◽  
Ridha Ejbali ◽  
Raimondo Schettini ◽  
Mourad Zaied

Abstract: Affective computing is a key research topic in artificial intelligence which is applied to psychology and machines. It consists of the estimation and measurement of human emotions. A person's body language is one of the most significant sources of information during a job interview, and it reflects a deep psychological state that is often missing from other data sources. In our work, we combine the two tasks of pose estimation and emotion classification for emotional body gesture recognition, proposing a deep multi-stage architecture that is able to deal with both tasks. Our deep pose decoding method detects and tracks the candidate's skeleton in a video using a combination of a depthwise convolutional network and a detection-based method for 2D pose reconstruction. Moreover, we propose a representation technique based on the superposition of skeletons that generates, for each video sequence, a single image synthesizing the different poses of the subject. We call this image the 'history pose image', and it is used as input to a convolutional neural network model based on the Visual Geometry Group architecture. We demonstrate the effectiveness of our method in comparison with state-of-the-art methods on the standard Common Objects in Context keypoint dataset and the Face and Body gesture video database.


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4222
Author(s):  
Shushi Namba ◽  
Wataru Sato ◽  
Masaki Osumi ◽  
Koh Shimokawa

In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of such systems against dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs from the dynamic facial database at a level above chance. Moreover, OpenFace and AFAR provided higher area under the receiver operating characteristic curve values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
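Comparing the systems above by area under the receiver operating characteristic curve reduces, for each AU, to a rank statistic over detector scores: the probability that a randomly chosen positive frame scores higher than a randomly chosen negative one. A minimal self-contained sketch of that computation (illustrative only, not the authors' evaluation code):

```python
def auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation: the fraction
    of positive/negative pairs the detector orders correctly, counting
    ties between a positive and a negative score as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level detection, which is the baseline the study's "above chance" claim is measured against.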


2008 ◽  
pp. 293-310 ◽  
Author(s):  
Christian Peter ◽  
Eric Ebert ◽  
Helmut Beikirch

i-com ◽  
2020 ◽  
Vol 19 (2) ◽  
pp. 139-151
Author(s):  
Thomas Schmidt ◽  
Miriam Schlindwein ◽  
Katharina Lichtner ◽  
Christian Wolff

Abstract: Due to progress in affective computing, various forms of general-purpose sentiment/emotion recognition software have become available. However, such tools are rarely applied in usability engineering (UE) to measure the emotional state of participants. We investigate whether sentiment/emotion recognition software is beneficial for gathering objective and intuitive data that can predict usability similarly to traditional usability metrics. We present the results of a UE project examining this question for the three modalities text, speech and face. We performed a large-scale usability test (N = 125) with a counterbalanced within-subject design on two websites of varying usability. We identified a weak but significant correlation between text-based sentiment analysis of the text acquired via thinking aloud and SUS scores, as well as a weak positive correlation between the proportion of neutrality in users' voices and SUS scores. However, for the majority of the output of the emotion recognition software, we could not find any significant results. Emotion metrics could not be used to successfully differentiate between the two websites of varying usability, and regression models, whether unimodal or multimodal, could not predict usability metrics. We discuss reasons for these results and how to continue this research with more sophisticated methods.
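The reported associations between sentiment/neutrality scores and SUS scores are correlation analyses; the study does not publish its analysis code, so the following is only a minimal sketch of a Pearson correlation between a per-participant sentiment score and a SUS score:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. per-participant sentiment scores vs. per-participant SUS scores."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```

A "weak" correlation in the study's sense would be a small |r|; significance additionally depends on the sample size (here N = 125), which this sketch does not test.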


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a wearable electroencephalography (EEG) custom-designed device. The system collects EEG signals using an eight-electrode placement on the scalp; two of these electrodes were placed on the frontal lobe, and the other six electrodes were placed on the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed for extracting suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers for emotion classification: a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN); both subject-dependent and subject-independent strategies were used. Our experimental results showed that the highest average accuracies achieved in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; these accuracies were achieved using a combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identified the T8 position (above the right ear) on the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification. Our results demonstrate the feasibility and efficiency of our proposed EEG-based affective computing method for emotion recognition in real-world applications.
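Sample entropy, the feature that performed best above, is defined as the negative logarithm of the ratio between counts of template matches of length m+1 and length m under a tolerance r. A minimal sketch follows; note that r is treated here as an absolute Chebyshev tolerance, which is an assumption, since implementations often scale r by the signal's standard deviation:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1D signal: -ln(A / B), where B counts pairs of
    matching templates of length m and A counts pairs of matching templates
    of length m + 1, using the Chebyshev (max-abs) distance with tolerance r.
    Lower values indicate a more regular, self-similar signal."""
    def match_count(mm):
        n = len(x) - mm + 1
        templates = [x[i:i + mm] for i in range(n)]
        count = 0
        for i in range(n):
            for j in range(i + 1, n):  # distinct pairs, no self-matches
                if max(abs(a - b) for a, b in
                       zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b, a = match_count(m), match_count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

In a pipeline like the one above, this scalar would be computed per channel (e.g. per EEG electrode) and per analysis window, then concatenated into the feature vector fed to the SVM, MLP, or 1D-CNN.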

