Intention Realization of Intermittent Angular Positions of Elbow using Myo Armband Sensor

Author(s):  
Amna Khan ◽  
Zareena Kausar
Author(s):  
I. Mendez ◽  
B. W. Hansen ◽  
C. M. Grabow ◽  
E. J. L. Smedegaard ◽  
N. B. Skogberg ◽  
...  

Author(s):  
Zongkai Fu ◽  
Huiyong Li ◽  
Zhenchao Ouyang ◽  
Xuefeng Liu ◽  
Jianwei Niu

Author(s):  
Angga Rahagiyanto

The Indonesian Sign Language System (SIBI) has been widely studied by researchers using different types of cameras and sensors, with the ultimate goal of a robust, fast, and accurate movement-recognition process. One approach captures movement with the sensors of the MYO Armband. This paper explains how the raw data generated by the MYO Armband sensors is processed and how features are extracted so that complete hand, arm, and combined movements from the SIBI sign-language dictionary can be recognized. The MYO Armband provides five sensor streams: accelerometer, gyroscope, orientation, Euler orientation, and EMG. Each sensor produces data on a different scale and of a different size, which requires a process to make the data uniform. This study uses the min-max method to normalize the data from every MYO Armband sensor and the Moment Invariant method to extract feature vectors of hand movements. Testing is performed on both static and dynamic sign-language movements, using cross-validation.
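The abstract does not give the normalization formula, but min-max scaling across heterogeneous sensor channels (EMG values are orders of magnitude larger than accelerometer readings) is a standard step. A minimal sketch, assuming per-channel scaling to [0, 1] over a window of samples (the function name and the example values are illustrative, not from the paper):

```python
import numpy as np

def min_max_normalize(x, feature_range=(0.0, 1.0)):
    """Scale each sensor channel (column) to a common range.

    x: 2D array of shape (samples, channels), e.g. stacked
    accelerometer, gyroscope, orientation and EMG readings.
    Constant channels are left at the lower bound rather than
    dividing by zero.
    """
    x = np.asarray(x, dtype=float)
    lo, hi = feature_range
    col_min = x.min(axis=0)
    col_max = x.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    return lo + (x - col_min) / span * (hi - lo)

# Illustrative window: one EMG-scale channel, one accelerometer-scale channel
raw = np.array([[120.0, 0.02],
                [480.0, 0.98],
                [300.0, 0.50]])
print(min_max_normalize(raw))
# each column now spans exactly [0, 1]
```

After this step every channel contributes on the same scale, so the Moment Invariant features computed downstream are not dominated by the channel with the largest raw magnitude.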


2018 ◽  
Vol 27 ◽  
pp. 150-156 ◽  
Author(s):  
Shabnam Sadeghi Esfahlani ◽  
Bogdan Muresan ◽  
Alireza Sanaei ◽  
George Wilson

Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3099 ◽  
Author(s):  
Daniele Di Mitri ◽  
Jan Schneider ◽  
Marcus Specht ◽  
Hendrik Drachsler

This study investigated to what extent multimodal data can be used to detect mistakes during Cardiopulmonary Resuscitation (CPR) training. We complemented the Laerdal QCPR ResusciAnne manikin with the Multimodal Tutor for CPR, a multi-sensor system consisting of a Microsoft Kinect for tracking body position and a Myo armband for collecting electromyogram information. We collected multimodal data from 11 medical students, each of them performing two sessions of two-minute chest compressions (CCs). We gathered in total 5254 CCs, all labelled according to five performance indicators corresponding to common CPR training mistakes. Three of the five indicators, CC rate, CC depth and CC release, were assessed automatically by the ResusciAnne manikin. The remaining two, related to arm and body position, were annotated manually by the research team. We trained five neural networks, one for classifying each of the five indicators. The results of the experiment show that multimodal data can provide accurate mistake detection compared to the ResusciAnne manikin baseline. We also show that the Multimodal Tutor for CPR can detect additional CPR training mistakes, related to the use of arms and body weight, that thus far were identified only by human instructors. Finally, to inform future implementations of the Multimodal Tutor for CPR, we administered a questionnaire collecting user feedback on aspects of CPR training.
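The abstract describes training one classifier per performance indicator on multimodal feature windows, but does not specify the architectures. As a hedged sketch of the per-indicator setup, the following stand-in trains a single logistic-regression classifier on synthetic feature vectors (the features, labels, and hyperparameters are all illustrative; the study itself used neural networks and real Kinect/Myo data):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logistic(X, y, lr=0.1, epochs=200):
    """Tiny gradient-descent logistic regression standing in for one
    per-indicator classifier (e.g. a binary 'CC depth OK' detector)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)        # log-loss gradient step
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic feature windows: imagine Kinect joint statistics + EMG statistics
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # toy linearly separable label

w, b = train_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print("training accuracy:", np.mean(pred == y))
```

In the study's setting, five such classifiers would be trained independently, one per indicator, each receiving the same multimodal feature window but its own binary label.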

