Algorithms for classification of a single channel EMG signal for human-computer interaction

2018, Vol 18, pp. 02001
Author(s): Andrei Lukyanchikov, Alexei Melnikov, Oleg Lukyanchikov

One of the most accurate and effective ways to recognize gestures is to track muscle activity, which accompanies any movement. Electromyography (EMG) is used to record such activity. This article compares SVM, perceptron, random trees, and probability-density classification algorithms applied to the EMG signal. An Arduino Leonardo with a single-channel EMG shield is used to record the signal. The aim of this paper is to demonstrate the feasibility of a cheap and accessible biointerface based on the EMG signal.
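As an illustration of the comparison described above, the sketch below trains the four classifier families on time-domain features of windowed single-channel EMG. It is a minimal sketch, not the authors' code: the feature set (RMS, zero crossings, waveform length), GaussianNB as the probability-density method, and the placeholder random data are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import Perceptron
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB  # stands in for the probability-density method
from sklearn.model_selection import cross_val_score

def emg_features(window: np.ndarray) -> np.ndarray:
    """Common time-domain EMG features for one window of samples (assumed set)."""
    rms = np.sqrt(np.mean(window ** 2))
    zero_crossings = np.sum(np.diff(np.signbit(window).astype(int)) != 0)
    waveform_length = np.sum(np.abs(np.diff(window)))
    return np.array([rms, zero_crossings, waveform_length])

# Placeholder data: 200 windows of 256 raw EMG samples, binary gesture labels.
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 256))
y = rng.integers(0, 2, size=200)
X = np.array([emg_features(w) for w in windows])

for name, clf in [("SVM", SVC()),
                  ("Perceptron", Perceptron()),
                  ("Random trees", RandomForestClassifier()),
                  ("Probability density (Gaussian NB)", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```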

Photonics, 2019, Vol 6 (3), pp. 90
Author(s): Bosworth, Russell, Jacob

Over the past decade, the Human–Computer Interaction (HCI) Lab at Tufts University has been developing real-time, implicit Brain–Computer Interfaces (BCIs) using functional near-infrared spectroscopy (fNIRS). This paper reviews the work of the lab; we explore how we have used fNIRS to develop BCIs that are based on a variety of human states, including cognitive workload, multitasking, musical learning applications, and preference detection. Our work indicates that fNIRS is a robust tool for real-time classification of brain states, and it can provide programmers with useful information for developing interfaces that are more intuitive and beneficial for the user than is currently possible with today's input devices (e.g., mouse and keyboard).
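To make the pipeline concrete, here is a minimal sketch of a real-time implicit BCI loop of the kind reviewed above: windowed fNIRS signals are reduced to features and fed to a classifier whose output can drive the interface. The per-channel mean-and-slope features, the LDA classifier, and the placeholder data are assumptions for illustration, not the Tufts lab's actual pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fnirs_features(window: np.ndarray) -> np.ndarray:
    """window shape: (samples, channels); returns mean and linear slope per channel."""
    t = np.arange(window.shape[0])
    means = window.mean(axis=0)
    slopes = np.polyfit(t, window, deg=1)[0]  # per-channel trend over the window
    return np.concatenate([means, slopes])

# Offline calibration on labelled windows (placeholder random data, 8 channels).
rng = np.random.default_rng(1)
train_windows = rng.standard_normal((100, 50, 8))
train_labels = rng.integers(0, 2, size=100)  # 0 = low workload, 1 = high workload
clf = LinearDiscriminantAnalysis()
clf.fit(np.array([fnirs_features(w) for w in train_windows]), train_labels)

# Real-time use: classify each incoming window and adapt the interface accordingly.
new_window = rng.standard_normal((50, 8))
state = clf.predict([fnirs_features(new_window)])[0]
print("high workload" if state else "low workload")
```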


Sensors, 2021, Vol 21 (17), pp. 5963
Author(s): Agata Kołakowska, Agnieszka Landowska

This paper deals with the analysis of behavioural patterns in human–computer interaction. In the study, keystroke dynamics were analysed while participants were writing positive and negative opinions. A semi-experiment with 50 participants was performed. The participants were asked to recall their most negative and most positive learning experiences (a subject and a teacher) and to write an opinion about each. Keystroke dynamics were captured, and over 50 diverse features were calculated and evaluated for their ability to differentiate positive from negative opinions. Classification of the opinions achieved accuracy only slightly above the random-guess level. A second classification approach used self-reported labels of pleasure and arousal and showed more accurate results. The study confirmed that it is possible to recognize positive and negative opinions from keystroke patterns with accuracy above random guessing; however, combining keystroke dynamics with other modalities might produce more accurate results.
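The kind of features involved can be illustrated with a short sketch that derives classic keystroke-dynamics measures (dwell time, flight time, typing speed) from a log of timestamped key events. The event format and the specific features are assumptions for illustration; the paper's own set of 50+ features is not reproduced here.

```python
from statistics import mean

# Each event: (timestamp_seconds, key, "down" or "up") — placeholder log of
# a few keystrokes, with strictly alternating down/up events.
events = [
    (0.00, "g", "down"), (0.09, "g", "up"),
    (0.15, "o", "down"), (0.22, "o", "up"),
    (0.40, "o", "down"), (0.51, "o", "up"),
    (0.58, "d", "down"), (0.70, "d", "up"),
]

down_times = [t for t, _, e in events if e == "down"]
up_times = [t for t, _, e in events if e == "up"]

# Dwell time: how long each key is held (down -> up of the same keystroke).
dwell = [u - d for d, u in zip(down_times, up_times)]
# Flight time: release of one key to press of the next.
flight = [d - u for u, d in zip(up_times, down_times[1:])]
# Typing speed: keystrokes per second over the whole opinion.
speed = len(down_times) / (events[-1][0] - events[0][0])

features = {
    "mean_dwell": mean(dwell),
    "mean_flight": mean(flight),
    "keys_per_second": speed,
}
print(features)
```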


2021
Author(s): Céline Jost, Brigitte Le Pévédic, Gérard Uzan

This paper discusses the value of using multisensory technologies for human cognition training. First, it introduces multisensory interaction, focusing on advances in two fields: Human-Computer Interaction and mulsemedia. Second, it presents two different multisensory systems, resulting from the Robadom and StimSense projects, that could be adapted for the community. Finally, this paper defines the concept of the scenagram and gives its application scopes, boundaries, and use cases, offering a first classification of this new concept.


2021, Vol 2021, pp. 1-10
Author(s): Junhao Huang, Zhicheng Zhang, Guoping Xie, Hui He

Noncontact human-computer interaction has important value in wireless sensor networks. This work aims at achieving accurate interaction with a computer through automatic eye control, using a cheap webcam as the video source. A real-time, accurate human-computer interaction system based on eye-state recognition, rough gaze estimation, and tracking is proposed. First, binary classification of the eye state (open or closed) is carried out using an SVM classifier with HOG features of the input eye image. Second, rough appearance-based gaze estimation is implemented with a simple CNN model, and the head pose is estimated to judge whether the user is facing the screen. Based on these recognition results, noncontact mouse control and character input methods are designed and developed to replace the standard mouse and keyboard hardware. The accuracy and speed of the proposed interaction system are evaluated with four subjects. The experimental results show that users can achieve gaze estimation and tracking with only a common monocular camera, and can perform most functions of real-time, precise human-computer interaction through automatic eye control.
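The first stage (eye-state classification) can be sketched briefly: an SVM over HOG features of a cropped eye image, as the abstract describes. The crop size, HOG parameters, and placeholder training data below are assumptions; the sketch illustrates the technique, not the authors' implementation.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def eye_descriptor(eye_img: np.ndarray) -> np.ndarray:
    """HOG descriptor for a grayscale eye crop (24x48 pixels assumed)."""
    return hog(eye_img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Placeholder training data: 60 random "eye crops" with open/closed labels.
rng = np.random.default_rng(2)
train_imgs = rng.random((60, 24, 48))
labels = rng.integers(0, 2, size=60)  # 0 = closed, 1 = open
clf = SVC(kernel="linear")
clf.fit(np.array([eye_descriptor(im) for im in train_imgs]), labels)

# At runtime: crop the eye region from each webcam frame and classify it.
state = clf.predict([eye_descriptor(rng.random((24, 48)))])[0]
print("open" if state else "closed")
```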


Sensors, 2020, Vol 20 (9), pp. 2599
Author(s): Leire Francés-Morcillo, Paz Morer-Camo, María Isabel Rodríguez-Ferradas, Aitor Cazón-Martín

Wearable electronics make it possible to monitor human activity and behavior. Most of these devices have not taken human factors into account, focusing instead on technological issues. This can affect not only human–computer interaction and user experience but also the devices' use cycle. Firstly, this paper presents a classification of wearable design requirements, carried out by combining a quantitative and a qualitative methodology. Secondly, we present some evaluation procedures based on design methodologies and human–computer interaction measurement tools. Thus, this contribution aims to provide a roadmap for wearable designers and researchers, helping them find more efficient processes through a classification of design requirements and evaluation tools. These resources save time and effort: designers and researchers no longer need to review the literature or carry out exploratory studies to identify requirements or evaluation tools.


Author(s): Zhiwen Yang, Du Jiang, Ying Sun, Bo Tao, Xiliang Tong, ...

Gesture recognition technology is widely used for flexible and precise control of manipulators in the assisted medical field. Much current gesture recognition research using surface EMG (sEMG) has focused on static gestures, and recognition accuracy depends on the extraction and selection of features. However, static gesture research cannot meet the requirements of natural human-computer interaction and dexterous manipulator control. Therefore, a multi-stream residual network (MResLSTM) is proposed for dynamic hand movement recognition. This study aims to improve the accuracy and stability of dynamic gesture recognition and, at the same time, to advance research on smooth manipulator control. We combine the residual model and the convolutional long short-term memory model into a unified framework. The architecture extracts spatiotemporal features at two levels, global and deep, and uses feature fusion to retain essential information. Pointwise group convolution and channel shuffle are used to reduce the amount of network computation. A dataset containing six dynamic gestures is constructed for model training. The experimental results show that, with the same recognition model, fusing the sEMG signal with the acceleration signal yields better gesture recognition than using the sEMG signal alone. The proposed approach achieves a recognition accuracy of 93.52% on our dataset and state-of-the-art performance with 89.65% precision on the Ninapro DB1 dataset. The decoded sEMG result is applied to the controller, which improves the continuity of human-computer interaction and the fluency and flexibility of manipulator control.
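The architectural ideas named above (residual convolution, recurrent memory, and two-stream sEMG/acceleration fusion) can be sketched in a greatly simplified form. This is not the authors' MResLSTM: the layer sizes, the plain LSTM standing in for the convolutional LSTM, the omission of pointwise group convolution and channel shuffle, and fusion by simple concatenation are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """1-D convolutional block with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class TwoStreamGestureNet(nn.Module):
    """Simplified two-stream net: sEMG and acceleration fused before an LSTM."""
    def __init__(self, emg_ch=8, acc_ch=3, hidden=64, n_gestures=6):
        super().__init__()
        self.emg_in = nn.Conv1d(emg_ch, 32, kernel_size=3, padding=1)
        self.acc_in = nn.Conv1d(acc_ch, 32, kernel_size=3, padding=1)
        self.emg_res = ResidualBlock(32)
        self.acc_res = ResidualBlock(32)
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_gestures)

    def forward(self, emg, acc):
        # emg: (batch, emg_ch, time); acc: (batch, acc_ch, time)
        e = self.emg_res(self.emg_in(emg))
        a = self.acc_res(self.acc_in(acc))
        fused = torch.cat([e, a], dim=1).transpose(1, 2)  # (batch, time, 64)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])  # classify from the last time step

model = TwoStreamGestureNet()
logits = model(torch.randn(4, 8, 200), torch.randn(4, 3, 200))
print(logits.shape)  # torch.Size([4, 6]): one score per dynamic gesture
```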

