APPLICATION OF SUPPORT VECTOR MACHINE BASED SPEECH RECOGNITION TECHNOLOGY IN HUMAN-COMPUTER INTERACTION TECHNOLOGY

2021 · Vol. 11 (3) · pp. 948-954
Author(s): Xiang Chen, Lijun Xu, Ming Cao, Tinghua Zhang, Zhongan Shang, ...

At present, the demand for intelligent human-computer interaction systems (HCIS) has become increasingly prominent. The ability to recognize the emotions of users is a distinguishing feature of an intelligent interactive system. An intelligent HCIS can analyze the emotional changes of patients with depression, interact with them in a more appropriate manner, and provide recognition results that help family members or medical personnel take measures in response to the patient's emotional changes. Against this background, this paper proposes an emotion recognition (ER) method based on transfer support vector machines (TSVM) and EEG signals. The ER results obtained with this method are applied to an HCIS intended mainly for interaction with patients with depression. When a new domain related to an existing domain appears, relabeling the new-domain data is expensive, while discarding all of the old-domain data is wasteful. The main innovation of this research is the introduced classification model, TSVM, a transfer learning strategy based on SVM. Transfer learning aims to solve related but different target-domain problems by exploiting a large amount of labeled source-domain data. The transfer support vector machine can therefore combine a small amount of labeled target-domain data with a large amount of old data from the related domain to build a high-quality classification model for the target domain, which effectively improves classification accuracy. Comparing the classification results with those of other classification models shows that TSVM effectively improves the accuracy of ER for patients with depression, and that the HCIS based on this classification model has higher accuracy and better stability.
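As a rough illustration of the transfer idea described above, the sketch below trains an instance-weighted SVM on pooled source- and target-domain data, up-weighting the few labeled target samples so the decision boundary is adapted to the new domain. It is a minimal sketch with synthetic feature matrices and a hypothetical weight ratio, not the authors' exact TSVM formulation, which the abstract does not specify.

    # Minimal sketch of an instance-weighted "transfer" SVM.
    # All data, dimensions and weights here are illustrative assumptions.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical features: many labeled source-domain samples (old subjects)
    # and only a few labeled target-domain samples (new patient).
    X_src, y_src = rng.normal(size=(500, 32)), rng.integers(0, 2, 500)
    X_tgt, y_tgt = rng.normal(size=(20, 32)), rng.integers(0, 2, 20)

    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])

    # Up-weight target-domain samples so the boundary is pulled toward the
    # new domain while still exploiting the abundant source-domain data.
    w = np.concatenate([np.full(len(y_src), 1.0), np.full(len(y_tgt), 10.0)])

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, y, svc__sample_weight=w)
    print("target-domain accuracy:", clf.score(X_tgt, y_tgt))

In practice, the weight ratio (here 10:1) would be tuned on held-out target-domain data rather than fixed in advance.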


2021
Author(s): Abdel-Gawad A. Abdel-Samei, Ahmed S. Ali, Fathi E. Abd El-Samie, Ayman M. Brisha

Abstract: Human-computer interaction (HCI) using electrooculography (EOG) has been a growing area of research in recent years. HCI provides communication channels between a human and an external device. Today, EOG is one of the most important biomedical signals for measuring and analyzing the direction of eye movements, capturing eye-movement activity in both the vertical and horizontal directions. In this paper, different human eye-movement tasks in the vertical and horizontal directions are studied. The dataset of EOG signals was obtained from electroencephalography (EEG) electrodes on 27 healthy people, 14 males and 13 females. This process produced two dipole signals, the vertical-EOG signal and the horizontal-EOG signal, which were band-pass filtered at 0.5–5 Hz. A total of 54 datasets from these 27 healthy individuals, each lasting 30 seconds, were provided. The Bo-Hjorth parameters were used for feature extraction on the preprocessed EOG signals. For classification, Decision Tree (DT), K-Nearest Neighbor (KNN), Ensemble Classifier (EC), Kernel Naive Bayes (KNB) and Support Vector Machine (SVM) classifiers were utilized. The obtained results reveal that the best classifiers on the horizontal and vertical signals are the SVM, the Cosine KNN and the Ensemble Subspace Discriminant, each reaching 100% accuracy. With the proposed feature-extraction algorithm, high classification performance can be obtained for rehabilitation purposes and other applications that allow handicapped users to interact with a computer and make decisions for a better quality of life.
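The pipeline described above (0.5–5 Hz band-pass filtering, Hjorth-style feature extraction, and off-the-shelf classifiers) can be sketched roughly as follows. The sampling rate, epoch shapes and labels are assumptions for illustration, not values taken from the paper, and the SVM stands in for the several classifiers the authors compare.

    # Sketch of the described EOG pipeline under assumed parameters.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    FS = 250  # assumed sampling rate in Hz

    def bandpass(x, low=0.5, high=5.0, fs=FS, order=4):
        # 0.5-5 Hz band-pass, as stated in the abstract
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def hjorth(x):
        # Hjorth activity, mobility and complexity of a 1-D signal
        dx = np.diff(x)
        ddx = np.diff(dx)
        activity = np.var(x)
        mobility = np.sqrt(np.var(dx) / activity)
        complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
        return np.array([activity, mobility, complexity])

    # Hypothetical EOG epochs: (n_trials, n_samples), one direction label per trial.
    rng = np.random.default_rng(0)
    epochs = rng.normal(size=(54, 30 * FS))   # 54 recordings of 30 s each
    labels = rng.integers(0, 2, size=54)

    features = np.array([hjorth(bandpass(e)) for e in epochs])
    scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)
    print("cross-validated accuracy:", scores.mean())

With real vertical- and horizontal-EOG recordings in place of the synthetic epochs, the same three-dimensional Hjorth feature vector would feed each of the compared classifiers.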

