Inkjet-printed fully customizable and low-cost electrodes matrix for gesture recognition

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Giulio Rosati ◽  
Giulia Cisotto ◽  
Daniele Sili ◽  
Luca Compagnucci ◽  
Chiara De Giorgi ◽  
...  

Abstract The use of surface electromyography (sEMG) is rapidly spreading, from robotic prostheses and muscle computer interfaces to rehabilitation devices controlled by residual muscular activities. In this context, sEMG-based gesture recognition plays an enabling role in controlling prosthetics and devices in real-life settings. Our work aimed at developing a low-cost, print-and-play platform to acquire and analyse sEMG signals that can be arranged in a fully customized way, depending on the application and the users’ needs. We produced 8-channel sEMG matrices to measure the muscular activity of the forearm using innovative nanoparticle-based inks to print the sensors embedded into each matrix using a commercial inkjet printer. Then, we acquired the multi-channel sEMG data from 12 participants while repeatedly performing twelve standard finger movements (six extensions and six flexions). Our results showed that inkjet printing-based sEMG signals ensured significant similarity values across repetitions in every participant, a large enough difference between movements (dissimilarity index above 0.2), and an overall classification accuracy of 93–95% for flexion and extension, respectively.
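The abstract reports a dissimilarity index above 0.2 between movements without defining the metric. A common choice in sEMG work is one minus the cosine similarity between per-movement feature vectors; a minimal stdlib sketch under that assumption, with made-up 8-channel feature values, looks like this:

```python
import math

def cosine_similarity(a, b):
    # Dot product over the product of Euclidean norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def dissimilarity(a, b):
    # 1 - cosine similarity: 0 for identical directions, larger for distinct ones.
    return 1.0 - cosine_similarity(a, b)

# Hypothetical 8-channel RMS feature vectors for two finger movements.
index_flexion = [0.9, 0.7, 0.2, 0.1, 0.4, 0.3, 0.2, 0.1]
thumb_extension = [0.2, 0.3, 0.8, 0.9, 0.1, 0.2, 0.5, 0.4]

d = dissimilarity(index_flexion, thumb_extension)
print(round(d, 3))
```

With vectors this different, the index comfortably clears the 0.2 threshold the paper uses as its separability criterion.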



Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 672 ◽  
Author(s):  
Lin Chen ◽  
Jianting Fu ◽  
Yuheng Wu ◽  
Haochen Li ◽  
Bin Zheng

By training a deep neural network model, the hidden features in Surface Electromyography (sEMG) signals can be extracted, and the motion intention of the human can be predicted by analysing the sEMG. However, the models recently proposed by researchers often have a large number of parameters. Therefore, we designed a compact Convolutional Neural Network (CNN) model, which not only improves the classification accuracy but also reduces the number of parameters in the model. Our proposed model was validated on the Ninapro DB5 dataset and the Myo dataset, achieving good gesture-recognition classification accuracy on both.
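The abstract does not detail how the CNN was made compact. One common technique is replacing standard convolutions with depthwise-separable ones, and the parameter saving can be checked with simple arithmetic; a sketch with a hypothetical 64-to-128-channel 3x3 layer:

```python
def standard_conv_params(c_in, c_out, k):
    # Full convolution: one k x k kernel per (input, output) channel pair, plus biases.
    return c_in * c_out * k * k + c_out

def separable_conv_params(c_in, c_out, k):
    # Depthwise (one k x k kernel per input channel) followed by a 1x1 pointwise conv.
    depthwise = c_in * k * k + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

# Hypothetical layer: 64 -> 128 channels with 3x3 kernels.
std = standard_conv_params(64, 128, 3)
sep = separable_conv_params(64, 128, 3)
print(std, sep)
```

For this layer the separable variant uses roughly 8x fewer parameters, which is the kind of reduction that lets a model stay compact without giving up much accuracy.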


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Le Cao ◽  
Wenyan Zhang ◽  
Xiu Kan ◽  
Wei Yao

In the field of noncontact human-computer interaction, it is of crucial importance to distinguish different surface electromyography (sEMG) gestures accurately for intelligent prosthetic control. Gesture recognition based on low-sampling-frequency sEMG signals can extend the application of wearable, low-cost EMG sensors (for example, the MYO bracelet) to motion control. In this paper, a combined sEMG gesture-recognition pipeline consisting of feature extraction, a genetic algorithm (GA), and a support vector machine (SVM) model is proposed. In particular, a novel adaptive mutation particle swarm optimization (AMPSO) algorithm is proposed to optimize the parameters of the SVM, and a new calculation method for the mutation probability is defined. The AMPSO-SVM model is successfully applied to the MYO bracelet dataset to classify four gestures, and AMPSO-SVM is compared with PSO-SVM, GS-SVM, and BP. The sEMG gesture recognition rate of AMPSO-SVM is 0.975, versus 0.9463 for PSO-SVM, 0.9093 for GS-SVM, and 0.9019 for BP. The experimental results show that AMPSO-SVM is effective for low-frequency sEMG signals of different gestures.
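The abstract does not reproduce the AMPSO update rules. The general shape of the idea can be sketched as a standard PSO with a mutation step whose probability adapts (here, simply decaying as iterations progress), optimizing a toy 2-D function that stands in for the SVM (C, gamma) search; all constants are illustrative, not the paper's:

```python
import random

def ampso_minimise(f, bounds, n_particles=20, iters=60, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        # Adaptive mutation probability: high early for exploration, decaying later.
        p_mut = 0.3 * (1 - t / iters)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Standard PSO velocity update: inertia + cognitive + social terms.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                if rng.random() < p_mut:
                    # Mutation: re-seed the coordinate uniformly inside its bounds.
                    pos[i][d] = rng.uniform(*bounds[d])
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for SVM cross-validation error over (C, gamma); minimum at (1.0, 0.1).
toy = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.1) ** 2
best, best_val = ampso_minimise(toy, [(0.01, 10.0), (0.001, 1.0)])
print(best, best_val)
```

In the paper the objective would be the SVM's cross-validated error rather than this quadratic, and the mutation-probability formula is the paper's own novel contribution, not the linear decay shown here.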


Author(s):  
Yu Du ◽  
Yongkang Wong ◽  
Wenguang Jin ◽  
Wentao Wei ◽  
Yu Hu ◽  
...  

Conventionally, gesture recognition based on non-intrusive muscle-computer interfaces has required a strongly supervised learning algorithm and a large amount of labeled surface electromyography (sEMG) training signals. In this work, we show that the temporal relationship between sEMG signals and data-glove readings provides an implicit supervisory signal for learning the gesture recognition model. To demonstrate this, we present a semi-supervised learning framework with a novel Siamese architecture for sEMG-based gesture recognition. Specifically, we employ auxiliary tasks to learn the representation: predicting the temporal order of two consecutive sEMG frames and, optionally, predicting the statistics of the 3D hand pose from an sEMG frame. Experiments on the NinaPro, CapgMyo and csl-hdemg datasets validate the efficacy of our proposed approach, especially when labeled samples are very scarce.
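The temporal-order auxiliary task can be illustrated by how its self-supervised labels are built: take two consecutive sEMG frames and, half the time, swap them; the label is whether the pair is in its original order, so no manual annotation is needed. A sketch with made-up frames (real frames would be multi-channel windows):

```python
import random

def make_order_pairs(frames, rng):
    """Build (frame_a, frame_b, label) triples from consecutive frames.

    label 1 means the pair is in its original temporal order, 0 means swapped.
    The label comes for free from the recording itself (implicit supervision).
    """
    pairs = []
    for a, b in zip(frames, frames[1:]):
        if rng.random() < 0.5:
            pairs.append((a, b, 1))   # kept in order
        else:
            pairs.append((b, a, 0))   # swapped
    return pairs

rng = random.Random(42)
# Hypothetical 1-D sEMG "frames", monotonically drifting so order is visible.
frames = [[0.1 * i, 0.2 * i] for i in range(6)]
pairs = make_order_pairs(frames, rng)
print(len(pairs))
```

A Siamese network would then see both frames of each pair and be trained to predict the label, learning sEMG representations without gesture annotations.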


2020 ◽  
Author(s):  
Andrew Fang ◽  
Jonathan Kia-Sheng Phua ◽  
Terrence Chiew ◽  
Daniel De-Liang Loh ◽  
Lincoln Ming Han Liow ◽  
...  

BACKGROUND During the Coronavirus Disease 2019 (COVID-19) outbreak, community care facilities (CCF) were set up as temporary out-of-hospital isolation facilities to contain the surge of cases in Singapore. Confined living spaces within CCFs posed an increased risk of communicable disease spread among residents. OBJECTIVE This inspired our healthcare team managing a CCF operation to design a low-cost communicable disease outbreak surveillance system (CDOSS). METHODS Our CDOSS was designed with the following considerations: (1) comprehensiveness, (2) efficiency through passive reconnoitering from electronic medical record (EMR) data, (3) ability to provide spatiotemporal insights, (4) low cost, and (5) ease of use. We used Python to develop a lightweight application – the Python-based Communicable Disease Outbreak Surveillance System (PyDOSS) – that was able to perform syndromic surveillance and fever monitoring. With minimal user actions, its data pipeline would generate daily control charts and geospatial heat maps of cases from raw EMR data and logged vital signs. PyDOSS was successfully implemented as part of our CCF workflow. We also simulated a gastroenteritis (GE) outbreak to test the effectiveness of the system. RESULTS PyDOSS was used throughout the entire duration of the operation; the output was reviewed daily by senior management. No disease outbreaks were identified during our medical operation. In the simulated GE outbreak, PyDOSS was able to detect the outbreak within 24 hours and provided information about cluster progression that could aid contact tracing. The code for a stock version of PyDOSS has been made publicly available. CONCLUSIONS PyDOSS is an effective surveillance system that was successfully implemented in a real-life medical operation. With the system developed using open-source technology and the code made freely available, it significantly reduces the cost of developing and operating a CDOSS and may be useful for similar temporary medical operations or in resource-limited settings.
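The abstract does not specify PyDOSS's control-chart rule. A common syndromic-surveillance choice is to flag any day whose case count exceeds the mean plus three standard deviations of a preceding baseline window; a minimal sketch of that rule, with invented daily gastroenteritis counts:

```python
import statistics

def outbreak_days(daily_counts, baseline_days=7):
    """Flag days whose count exceeds mean + 3*stdev of the preceding baseline window."""
    flagged = []
    for day in range(baseline_days, len(daily_counts)):
        baseline = daily_counts[day - baseline_days:day]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        upper_limit = mean + 3 * stdev
        if daily_counts[day] > upper_limit:
            flagged.append(day)
    return flagged

# Hypothetical daily gastroenteritis case counts: stable baseline, spike on day 9.
counts = [2, 3, 1, 2, 3, 2, 2, 3, 2, 15, 3]
print(outbreak_days(counts))
```

A rule of this shape, run once per day over EMR-derived counts, is how a pipeline can surface an outbreak within 24 hours of the spike.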


Author(s):  
Chenguang Li ◽  
Hongjun Yang ◽  
Long Cheng

Abstract As a relatively new physiological signal of the brain, functional near-infrared spectroscopy (fNIRS) is increasingly used in the brain–computer interface field, especially for motor imagery tasks. However, the classification accuracy based on this signal is relatively low. To improve it, this paper proposes a new experimental paradigm and uses only fNIRS signals to complete the classification task for six subjects. Notably, the experiment is carried out in a non-laboratory environment, and the imagined movements are carefully designed. While imagining the motions, the subjects also subvocalize the movements to prevent distraction. Accordingly, following the motor-area theory of the cerebral cortex, the positions of the fNIRS probes have been slightly adjusted compared with other methods. Next, the signals are classified by nine classification methods, and the different features and classification methods are compared. The results show that under this new experimental paradigm, classification accuracies of 89.12% and 88.47% can be achieved using the support vector machine method and the random forest method, respectively, which shows that the paradigm is effective. Finally, by selecting the five channels with the largest variance after empirical mode decomposition of the original signal, similar classification results can be achieved.
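The final channel-selection step can be sketched independently of the decomposition: after EMD (omitted here), rank channels by signal variance and keep the five largest. A stdlib sketch with synthetic channels whose amplitude, and hence variance, grows with the channel index:

```python
import math
import statistics

def top_channels_by_variance(channels, k=5):
    """Return the indices of the k channels with the largest variance.

    In the paper this ranking is applied after empirical mode decomposition;
    the EMD step itself is omitted in this sketch.
    """
    variances = [(statistics.pvariance(sig), idx) for idx, sig in enumerate(channels)]
    variances.sort(reverse=True)
    return sorted(idx for _, idx in variances[:k])

# Hypothetical fNIRS channels: sinusoids with amplitude proportional to the index.
channels = [[(0.1 * c) * math.sin(0.3 * t) for t in range(200)] for c in range(8)]
print(top_channels_by_variance(channels))
```

Keeping only high-variance channels reduces the feature dimension while, per the paper, preserving most of the classification performance.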


Author(s):  
Hsein Kew

Abstract In this paper, we propose a method to generate an audio output based on spectroscopy data in order to discriminate two classes of data, based on the features of our spectral dataset. To do this, we first perform spectral pre-processing and feature extraction, followed by machine learning for dimensionality reduction. The features are then mapped to the parameters of a sound synthesiser, as part of the audio processing, to generate audio samples from which we compute statistical results and identify important descriptors for the classification of the dataset. To optimise the process, we compare Amplitude Modulation (AM) and Frequency Modulation (FM) synthesis, as applied to two real-life datasets, to evaluate the performance of sonification as a method for discriminating data. FM synthesis provides a higher subjective classification accuracy compared with AM synthesis. We then compare the dimensionality-reduction methods of Principal Component Analysis (PCA) and Linear Discriminant Analysis in order to optimise our sonification algorithm. Using FM synthesis as the sound synthesiser and PCA for dimensionality reduction yields mean classification accuracies of 93.81% and 88.57% for the coffee dataset and the fruit puree dataset, respectively. These results indicate that this spectroscopic analysis model provides relevant information on the spectral data and, most importantly, discriminates accurately between the two spectra, thus providing a complementary tool to supplement current methods.
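The feature-to-synthesiser mapping can be sketched with the textbook FM equation y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)), driving the modulation index I from a data feature (e.g. a normalised PCA score). The frequencies and the 0-to-5 index range below are illustrative, not the paper's settings:

```python
import math

def fm_tone(feature_value, carrier_hz=440.0, modulator_hz=110.0,
            sample_rate=8000, n_samples=800):
    """Synthesise an FM tone whose modulation index is driven by a data feature.

    y(t) = sin(2*pi*fc*t + I * sin(2*pi*fm*t)), with I mapped from the feature.
    """
    # Map the (assumed normalised, 0..1) feature to a modulation index in [0, 5].
    index = 5.0 * max(0.0, min(1.0, feature_value))
    samples = []
    for n in range(n_samples):
        t = n / sample_rate
        samples.append(math.sin(2 * math.pi * carrier_hz * t
                                + index * math.sin(2 * math.pi * modulator_hz * t)))
    return samples

# Hypothetical PCA score scaled to 0..1: a larger score yields a brighter timbre.
tone = fm_tone(0.8)
print(len(tone))
```

Because the modulation index controls sideband energy, samples from the two classes end up with audibly different timbres, which is what makes the sonification discriminable by a listener.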


Author(s):  
Xinyi Li ◽  
Liqiong Chang ◽  
Fangfang Song ◽  
Ju Wang ◽  
Xiaojiang Chen ◽  
...  

This paper focuses on a fundamental question in Wi-Fi-based gesture recognition: "Can we use the knowledge learned from some users to perform gesture recognition for others?" This problem is also known as cross-target recognition. It arises in many practical deployments of Wi-Fi-based gesture recognition where it is prohibitively expensive to collect training data from every single user. We present CrossGR, a low-cost cross-target gesture recognition system. As a departure from existing approaches, CrossGR does not require prior knowledge (such as who is currently performing a gesture) of the target user. Instead, CrossGR employs a deep neural network to extract user-agnostic but gesture-related Wi-Fi signal characteristics to perform gesture recognition. To provide sufficient training data for an effective deep learning model, CrossGR employs a generative adversarial network to automatically generate abundant synthetic training data from a small set of real-world examples collected from a small number of users. This strategy allows CrossGR to minimize user involvement and the associated cost of collecting training examples for building an accurate gesture recognition system. We evaluate CrossGR by applying it to recognize 15 gestures across 10 users. Experimental results show that CrossGR achieves an accuracy of over 82.6% (up to 99.75%). We demonstrate that CrossGR delivers comparable recognition accuracy while using an order of magnitude fewer training samples collected from end-users, compared to state-of-the-art recognition systems.

