WR-Hand

Author(s):  
Yang Liu ◽  
Chengdong Lin ◽  
Zhenjiang Li

This paper presents WR-Hand, a wearable system that tracks the 3D hand pose of 14 hand skeleton points over time using Electromyography (EMG) and gyroscope data from a commercial armband. The system provides a significant leap in wearable sensing and enables new applications in medical care, human-computer interaction, and beyond. A challenge is that the armband's EMG sensors inevitably collect mixed EMG signals from multiple forearm muscles, because the sensor positions on the device are fixed, whereas prior bio-medical models for hand pose tracking are built on isolated EMG inputs measured at separate forearm spots for different muscles. In this paper, we leverage the recent success of neural networks to enhance the existing bio-medical model using the armband's EMG data, and we visualize our design to understand why our solution is effective. Moreover, we propose solutions to place the constructed hand pose reliably in a global coordinate system, and we address two practical issues by providing a general plug-and-play version for new users without training and by compensating for differences in how users position their armbands. We implement a prototype using different commercial armbands, which is lightweight enough to execute on a user's phone in real time. Extensive evaluation shows the efficacy of the WR-Hand design.
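The data flow described above — mixed multi-channel EMG plus gyroscope readings mapped to 14 skeleton points — can be sketched as follows. All shapes, window sizes, and the stand-in linear model below are illustrative assumptions, not WR-Hand's actual network:

```python
import numpy as np

# Hypothetical armband layout: 8 EMG channels (mixed forearm muscles) plus a
# 3-axis gyroscope, windowed and regressed onto 14 skeleton points (x, y, z).
# A random linear map stands in for the paper's trained neural network.
N_EMG_CH, N_GYRO_CH = 8, 3   # channels on a typical commercial armband
WIN = 50                     # samples per window (assumed)
N_JOINTS = 14                # skeleton points tracked by WR-Hand

def window_features(emg, gyro):
    """Flatten one window of mixed EMG + gyro samples into a feature vector."""
    return np.concatenate([emg.ravel(), gyro.ravel()])

rng = np.random.default_rng(0)
emg = rng.standard_normal((WIN, N_EMG_CH))    # mixed-muscle EMG window
gyro = rng.standard_normal((WIN, N_GYRO_CH))  # wrist rotation rates

x = window_features(emg, gyro)                          # shape: (WIN * 11,)
W = rng.standard_normal((N_JOINTS * 3, x.size)) * 0.01  # stand-in for a model
pose = (W @ x).reshape(N_JOINTS, 3)  # one 3D pose estimate per window
print(pose.shape)  # (14, 3)
```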

2021 ◽  
Vol 15 (6) ◽  
pp. 1-17
Author(s):  
Chenglin Li ◽  
Carrie Lu Tong ◽  
Di Niu ◽  
Bei Jiang ◽  
Xiao Zuo ◽  
...  

Deep learning models for human activity recognition (HAR) based on sensor data have been heavily studied recently. However, the generalization ability of deep models on complex real-world HAR data is limited by the availability of high-quality labeled activity data, which are hard to obtain. In this article, we design a similarity embedding neural network that maps input sensor signals onto real vectors through carefully designed convolutional and Long Short-Term Memory (LSTM) layers. The embedding network is trained with a pairwise similarity loss, encouraging the clustering of samples from the same class in the embedded real space, and can be effectively trained on a small dataset and even on a noisy dataset with mislabeled samples. Based on the learned embeddings, we further propose both nonparametric and parametric approaches for activity recognition. Extensive evaluation based on two public datasets has shown that the proposed similarity embedding network significantly outperforms state-of-the-art deep models on HAR classification tasks, is robust to mislabeled samples in the training set, and can also be used to effectively denoise a noisy dataset.
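The pairwise similarity loss described above can be illustrated with a minimal contrastive-style formulation: embeddings of same-class pairs are pulled together and different-class pairs are pushed apart up to a margin. The exact loss, margin, and network in the paper are not reproduced here; this is only a sketch of the idea:

```python
import numpy as np

def pairwise_similarity_loss(z1, z2, same_class, margin=1.0):
    """Contrastive-style pairwise loss on two embedding vectors (an
    illustrative stand-in for the paper's similarity loss)."""
    d = np.linalg.norm(z1 - z2)
    if same_class:
        return d ** 2                    # pull same-class samples together
    return max(0.0, margin - d) ** 2     # push different classes apart

za = np.array([0.1, 0.2])
zb = np.array([0.1, 0.25])   # same class, already close -> small loss
zc = np.array([0.9, 0.8])    # different class, far -> near-zero loss

print(pairwise_similarity_loss(za, zb, True))
print(pairwise_similarity_loss(za, zc, False))
```

Minimizing this over many pairs clusters same-class samples in the embedded space, which is what makes the later nonparametric (e.g. nearest-neighbor) classification step effective.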


2020 ◽  
Vol 68 ◽  
pp. 2713-2723 ◽  
Author(s):  
Sumit A. Raurale ◽  
John McAllister ◽  
Jesus Martinez del Rincon

2016 ◽  
Vol 55 (1) ◽  
pp. 013101
Author(s):  
Junyeong Choi ◽  
Jong-Il Park ◽  
Hanhoon Park

2014 ◽  
Vol 19 (6) ◽  
pp. 942-956
Author(s):  
Junyeong Choi ◽  
Jong-Il Park
Keyword(s):  

2021 ◽  
Author(s):  
Mohammed hashim B.A ◽  
Amutha R

Human activity recognition has been among the most popular research areas in pervasive computing in recent years, and sensor data plays a vital role in identifying human actions. Convolutional Neural Networks (CNNs) have become the dominant technique in computer vision, but it is still premature to apply CNNs directly to sensor data, particularly in ubiquitous and wearable computing. In this paper, we propose transforming raw accelerometer and gyroscope sensor data into the visual domain using our novel activity image creation method (NAICM). A pre-trained CNN (AlexNet) is then applied to the converted image-domain information. The proposed method is evaluated on several publicly available human activity recognition datasets. The results show that NAICM successfully creates activity images, achieving a classification accuracy of 98.36% with the pre-trained CNN.
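The sensor-to-image transformation described above can be sketched as stacking the six inertial channels into a 2D array and rescaling it to 8-bit grayscale for a pretrained CNN. The stacking order and min-max scaling below are assumptions for illustration, not NAICM itself:

```python
import numpy as np

def to_activity_image(acc, gyro):
    """acc, gyro: (T, 3) windows -> (T, 6) uint8 grayscale 'activity image'.
    An illustrative stand-in for the paper's image creation method."""
    sig = np.hstack([acc, gyro]).astype(float)   # six channels side by side
    lo, hi = sig.min(), sig.max()
    img = (sig - lo) / (hi - lo + 1e-9) * 255.0  # per-window min-max scaling
    return img.astype(np.uint8)

rng = np.random.default_rng(1)
img = to_activity_image(rng.standard_normal((128, 3)),
                        rng.standard_normal((128, 3)))
print(img.shape, img.dtype)  # (128, 6) uint8
```

In practice the resulting image would still be resized and channel-replicated to match the pretrained network's expected input (e.g. 227×227×3 for AlexNet).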


2019 ◽  
Vol 38 (10-11) ◽  
pp. 1286-1306 ◽  
Author(s):  
Adrian Battiston ◽  
Inna Sharf ◽  
Meyer Nahon

An extensive evaluation of attitude estimation algorithms in simulation and experiments is performed to determine their suitability for a collision recovery pipeline of a quadcopter unmanned aerial vehicle. A multiplicative extended Kalman filter (MEKF), unscented Kalman filter (UKF), complementary filter, H∞ filter, and novel adaptive varieties of the selected filters are compared. The experimental quadcopter uses a PixHawk flight controller, and the algorithms are implemented using data from only the PixHawk inertial measurement unit (IMU). Performance of the aforementioned filters is first evaluated in a simulation environment using modified sensor models to capture the effects of collision on inertial measurements. Simulation results help define the efficacy and use cases of the conventional and novel algorithms in a quadcopter collision scenario. An analogous evaluation is then conducted by post-processing logged sensor data from collision flight tests, to gain new insights into the algorithms' performance in the transition from simulated to real data. The post-processing evaluation compares each algorithm's attitude estimate, including that of the PixHawk controller's stock attitude estimator, to data collected by an offboard infrared motion capture system. Based on this evaluation, two promising algorithms, the MEKF and an adaptive H∞ filter, are selected for implementation on the physical quadcopter in the control loop of the collision recovery pipeline. Experimental results show an improvement in the metric used to evaluate experimental performance, the time taken to recover from the collision, when compared with the stock attitude estimator in the PixHawk (PX4) software.
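Of the filters compared above, the complementary filter is the simplest: the gyroscope rate is integrated for short-term accuracy and blended with an accelerometer-derived tilt angle for long-term stability. The one-axis sketch below uses an illustrative gain and an idealized accelerometer measurement, not the authors' implementation:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One-axis complementary filter (radians): mostly trust the integrated
    gyro, and correct slow drift with the accelerometer tilt estimate."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

true = 0.0   # ground-truth tilt angle
est = 0.0    # filter estimate
dt = 0.01    # 100 Hz IMU update rate (assumed)
for _ in range(100):                        # 1 s of constant 0.1 rad/s rotation
    true += 0.1 * dt
    est = complementary_filter(est, 0.1, true, dt)  # accel reads the true tilt
print(round(est, 3), round(true, 3))
```

The Kalman-family filters in the paper replace the fixed blend gain `alpha` with a gain computed from process and measurement noise models, which is what the adaptive variants tune online during collisions.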


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3612 ◽  
Author(s):  
Vu Sang ◽  
Shiro Yano ◽  
Toshiyuki Kondo

Many motion sensor-based applications have been developed in recent years because they provide useful information about users' daily activities and current health status. However, most of these applications require knowledge of the sensor positions, so this research focuses on the problem of detecting sensor positions. We collected standing-still and walking sensor data at various body positions from ten subjects. Offset values were removed by subtracting the standing-still sensor data from the walking data for each axis of each sensor unit. Our hierarchical classification technique is based on optimizing local classifiers: many common features are computed, and informative features are selected for each specific classification. In this approach, local classifiers such as arm-side and hand-side discrimination yielded F1-scores of 0.99 and 1.00, respectively. Overall, the proposed method achieved F1-scores of 0.81 and 0.84 using accelerometers and gyroscopes, respectively. Furthermore, we also discuss contributive features and parameter tuning in this analysis.
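The offset-removal step described above amounts to a per-axis baseline subtraction: the mean of the standing-still phase is subtracted from the walking-phase samples of the same sensor unit. The shapes and use of the mean below are assumptions for illustration:

```python
import numpy as np

def remove_offset(walking, standing):
    """walking, standing: (T, 3) arrays for one 3-axis sensor unit.
    Subtract the standing-still baseline per axis from the walking data."""
    return walking - standing.mean(axis=0)

# Toy data: gravity plus a mounting offset appears in both phases,
# while walking adds a motion component of +1.0 on every axis.
standing = np.full((50, 3), [0.1, -0.2, 9.8])
walking = np.full((100, 3), [0.1, -0.2, 9.8]) + 1.0
corrected = remove_offset(walking, standing)
print(corrected.mean(axis=0))  # only the motion component remains
```

Removing the static baseline this way makes the downstream features reflect motion rather than each position's particular gravity/mounting offset.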


Author(s):  
Yilin Liu ◽  
Shijia Zhang ◽  
Mahanth Gowda
Keyword(s):  

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Chunlong Zhang ◽  
Hongtao He

Existing motion recognition systems achieve low athlete-tracking accuracy because their recognition algorithms handle edge detection poorly. To address this problem, a machine-vision-based gymnast pose-tracking recognition system is designed. The software component optimizes the tracking recognition algorithm: it uses a spatio-temporal graph convolution algorithm to construct a sequence graph structure over the human joints, applies a label-subset division strategy, and performs pose tracking according to changes in the information dimension. System performance tests show that the designed machine-vision-based gymnast pose-tracking recognition system enhances tracking recognition accuracy and reduces convergence time compared with the original system.
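The spatial step of a spatio-temporal graph convolution over a joint skeleton can be sketched as propagating per-joint features along a normalized adjacency matrix built from bone connections. The 5-joint toy skeleton below is an assumption for illustration, not the gymnast skeleton used by the system:

```python
import numpy as np

bones = [(0, 1), (1, 2), (1, 3), (3, 4)]  # toy skeleton edges
n = 5
A = np.eye(n)                             # self-loops so each joint keeps itself
for i, j in bones:
    A[i, j] = A[j, i] = 1.0
D_inv = np.diag(1.0 / A.sum(axis=1))
A_hat = D_inv @ A                         # row-normalized adjacency

X = np.arange(n, dtype=float).reshape(n, 1)  # one scalar feature per joint
out = A_hat @ X                              # neighborhood-averaged features
print(out.ravel())
```

Stacking this spatial averaging with learned per-feature weights and a convolution along the time axis yields the spatio-temporal graph convolution used for skeleton sequences.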

