Towards Location Independent Gesture Recognition with Commodity WiFi Devices

Electronics, 2019, Vol 8 (10), pp. 1069
Author(s): Yong Lu, Shaohe Lv, Xiaodong Wang

Recently, WiFi-based gesture recognition has attracted increasing attention. Because WiFi signals are sensitive to the environment, an activity recognition model trained at a specific place can hardly work well elsewhere. To tackle this challenge, we propose WiHand, a location-independent gesture recognition system based on commodity WiFi devices. Leveraging low-rank and sparse decomposition, WiHand separates the gesture signal from background information, making it resilient to location variation. Extensive evaluations showed that WiHand achieves an average accuracy of 93% across various locations. In addition, WiHand works well in through-the-wall scenarios.
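The low-rank and sparse decomposition that WiHand relies on can be sketched with a simple alternating-projection scheme (a GoDec-style approximation, not the authors' actual implementation): the slowly varying background of a CSI measurement matrix is modeled as low-rank, while the gesture-induced perturbation is modeled as sparse.

```python
import numpy as np

def lowrank_sparse_split(M, rank=2, sparsity=0.05, iters=30):
    """Split M into a low-rank background L and a sparse component S
    by alternating a truncated SVD with hard thresholding.
    A sketch only; the paper's exact solver is not specified here."""
    S = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank step: best rank-r approximation of the residual.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep only the largest-magnitude residual entries.
        R = M - L
        thresh = np.quantile(np.abs(R), 1 - sparsity)
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S
```

In this picture, `S` carries the location-independent gesture signature while `L` absorbs the place-specific background, which is the intuition behind WiHand's resilience to location change.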

2018, Vol 218, pp. 02014
Author(s): Arief Ramadhani, Achmad Rizal, Erwin Susanto

Computer vision is a field of research that can be applied to various subjects. One application of computer vision is the hand gesture recognition system. The hand gesture is one way to interact with computers or machines. In this study, hand gesture recognition was used as a password for an electronic key system. The hand gesture recognition in this study utilized the depth sensor of the Microsoft Kinect Xbox 360. The depth sensor captured the hand image, which was segmented using a threshold. By scanning each pixel, we detected the thumb and the number of other fingers that are open. The hand gesture recognition result was used as a password to unlock the electronic key. This system could recognize nine types of hand gesture, representing the numbers 1 through 9. The average accuracy of the hand gesture recognition system was 97.78% for a single hand sign and 86.5% for a password of three hand signs.
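The depth-threshold segmentation and per-pixel finger scan described above can be sketched as follows. This is a crude stand-in for the paper's method: the depth range and the single-row run-counting heuristic are assumptions for illustration only.

```python
import numpy as np

def segment_hand(depth, near=400, far=700):
    """Keep pixels whose depth (mm) falls within the hand's assumed range."""
    return (depth >= near) & (depth <= far)

def count_fingers(mask, row):
    """Count contiguous runs of hand pixels along one scan row --
    a simplified version of scanning each pixel for open fingers."""
    line = mask[row].astype(int)
    # A run starts wherever the line steps 0 -> 1; prepend 0 so a
    # run beginning at column 0 is also counted.
    return int(np.sum(np.diff(np.concatenate(([0], line))) == 1))
```

A real system would scan multiple rows and treat the thumb separately, as the paper does, before mapping the finger count to one of the nine digit gestures.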


2021, Vol 336, pp. 06003
Author(s): Na Wu, Hao Jin, Xiachuan Pei, Shurong Dong, Jikui Luo, ...

Surface electromyography (sEMG), as a key technology of the non-invasive muscle-computer interface, is an important method of human-computer interaction. We propose a CNN-IndRNN (Convolutional Neural Network-Independent Recurrent Neural Network) hybrid algorithm to analyse sEMG signals and classify hand gestures. Ninapro's dataset of 10 volunteers was used to develop the model, and by using only one time-domain feature (the root mean square of the sEMG), an average accuracy of 87.43% on 18 gestures is achieved. The proposed algorithm obtains state-of-the-art classification performance with a significantly smaller model. To verify the robustness of the CNN-IndRNN model, a compact real-time recognition system was constructed, based on open-source hardware (OpenBCI) and custom Python-based software. Results show that the 10-subject rock-paper-scissors gesture recognition accuracy reaches 99.1%.
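The single time-domain feature the model consumes, the windowed root mean square of the sEMG, is straightforward to compute. A minimal sketch (window and step sizes are assumptions, not the paper's values):

```python
import numpy as np

def rms_windows(emg, win=200, step=100):
    """Root-mean-square of each sliding window, per channel.
    emg: array of shape (samples, channels).
    Returns an array of shape (num_windows, channels)."""
    feats = []
    for start in range(0, emg.shape[0] - win + 1, step):
        seg = emg[start:start + win]
        feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))
    return np.asarray(feats)
```

Each row of the result would then be fed to the CNN-IndRNN classifier in place of a larger hand-crafted feature set, which is what keeps the model small.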


2018, Vol 2018, pp. 1-15
Author(s): Linlin Guo, Lei Wang, Jialin Liu, Wei Zhou, Bingxian Lu

The combination of WiFi-based and vision-based human activity recognition has attracted increasing attention in the human-computer interaction, smart home, and security monitoring fields. We propose HuAc, a combined WiFi-based and Kinect-based activity recognition system, to sense human activity in indoor environments with occlusion, weak light, and different perspectives. We first construct a WiFi-based activity recognition dataset, named WiAR, to provide a benchmark for WiFi-based activity recognition. Then, we design a subcarrier selection mechanism based on the sensitivity of subcarriers to human activities. Moreover, we optimize the spatial relationship of adjacent skeleton joints and derive a correspondence between CSI and skeleton-based activity recognition. Finally, we fuse CSI and crowdsourced skeleton joint information to achieve robust human activity recognition. We implemented HuAc using commercial WiFi devices and evaluated it in three kinds of scenarios. Our results show that HuAc achieves an average accuracy greater than 93% on the WiAR dataset.
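One plausible form of the subcarrier selection step is to rank subcarriers by how strongly their amplitude fluctuates during activity relative to an idle baseline. This sketch is an assumption about the mechanism, not HuAc's published criterion:

```python
import numpy as np

def select_subcarriers(csi_active, csi_idle, k=10):
    """Rank subcarriers by activity-to-idle variance ratio and keep
    the top k. csi_*: arrays of shape (time, subcarriers), amplitudes."""
    sensitivity = csi_active.var(axis=0) / (csi_idle.var(axis=0) + 1e-9)
    # Indices of the k most activity-sensitive subcarriers.
    return np.argsort(sensitivity)[::-1][:k]
```

Only the selected subcarriers would then be passed to the CSI-skeleton fusion stage, reducing noise from insensitive channels.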


2020, Vol 5 (2), pp. 609
Author(s): Segun Aina, Kofoworola V. Sholesi, Aderonke R. Lawal, Samuel D. Okegbile, Adeniran I. Oluwaranti

This paper presents the application of Gaussian blur filters and Support Vector Machine (SVM) techniques to greeting recognition among the Yoruba tribe of Nigeria. Existing efforts have considered different gesture recognition tasks; however, recognition of tribal greeting postures or gestures in the Nigerian geographical space has not been studied before. Some cultural gestures are not correctly identified by people of the same tribe, let alone people from different tribes, posing a risk of misinterpretation. Also, some cultural gestures are unknown to most people outside a tribe, which can hinder human interaction; hence there is a need to automate the recognition of Nigerian tribal greeting gestures. This work therefore develops a Gaussian blur-SVM based system capable of recognizing Yoruba greeting postures for men and women. Videos of individuals performing various greeting gestures were collected and processed into image frames. The images were resized, and a Gaussian blur filter was used to remove noise. A moment-based feature extraction algorithm was used to extract shape features, which were passed as input to an SVM trained to recognize two Nigerian tribal greeting postures. To confirm the robustness of the system, 20%, 25% and 30% of the dataset acquired from the preprocessed images were used to test it. A recognition rate of 94% was achieved, showing that the proposed method is efficient.
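The preprocessing and feature-extraction stages can be sketched in plain numpy: a separable Gaussian filter for denoising, then normalized central moments as translation-invariant shape features. The kernel size and the particular moment orders here are illustrative assumptions; the resulting feature vectors would be fed to an SVM classifier (e.g. scikit-learn's `SVC`).

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filter: convolve rows, then columns."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, tmp)

def moment_features(img):
    """Normalized central moments: translation-invariant shape features."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00
    feats = []
    for p, q in [(2, 0), (0, 2), (1, 1), (3, 0), (0, 3)]:
        mu = ((xs - cx) ** p * (ys - cy) ** q * img).sum()
        feats.append(mu / m00 ** (1 + (p + q) / 2))
    return np.array(feats)
```

Because the moments are centered on the shape's centroid, the same greeting posture produces the same feature vector regardless of where it appears in the frame.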


2020, Vol 14
Author(s): Vasu Mehra, Dhiraj Pandey, Aayush Rastogi, Aditya Singh, Harsh Preet Singh

Background: People suffering from hearing and speaking disabilities have few ways of communicating with other people. One of these is to communicate through the use of sign language. Objective: Developing a system for sign language recognition is essential for deaf as well as mute persons. The recognition system acts as a translator between a disabled and an able person, eliminating hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech systems are introduced to further assist the affected people. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine learning-based sign recognition models trained using the TensorFlow and Keras libraries. Result: The proposed architecture works better than several gesture recognition techniques, such as background elimination and conversion to HSV, because of the sharply defined image provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the features a deaf or mute person needs in day-to-day tasks. Conclusion: Current technological advances call for reliable solutions that can be deployed to help deaf and mute people adjust to normal life. Instead of focusing on a standalone technology, a range of them has been combined in this work. The proposed sign recognition system is based on feature extraction and classification; the trained model helps identify different gestures.
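The background-elimination baseline that the proposed architecture is compared against can be sketched as median-frame differencing: pixels that deviate from a static background estimate are kept as the moving hand. The threshold value here is an illustrative assumption.

```python
import numpy as np

def eliminate_background(frames, thresh=25):
    """Frame-differencing baseline: the median frame approximates the
    static background, and pixels deviating from it are foreground."""
    background = np.median(frames, axis=0)
    return [np.abs(f - background) > thresh for f in frames]
```

The paper's argument is that feeding the classifier a sharply defined sign image outperforms masks produced by this kind of differencing or by HSV skin-color conversion.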


2021, Vol 17 (7), pp. 155014772110248
Author(s): Miaoyu Li, Zhuohan Jiang, Yutong Liu, Shuheng Chen, Marcin Wozniak, ...

Physical health problems caused by poor sitting posture are becoming increasingly serious and widespread, especially for sedentary students and workers. Existing video-based and sensor-based approaches can achieve high accuracy, but they have limitations such as breaching privacy and relying on specific sensor devices. In this work, we propose Sitsen, a non-contact wireless sitting posture recognition system that uses radio frequency signals alone, neither compromising privacy nor requiring specific sensors. We demonstrate that Sitsen can successfully recognize five habitual sitting postures with just one lightweight and low-cost radio frequency identification (RFID) tag. The intuition is that different postures induce different phase variations. Because the received phase readings are corrupted by environmental noise and hardware imperfections, we employ a series of signal processing schemes to obtain clean phase readings. By extracting effective features from the measured phase sequences with a sliding-window approach and employing an appropriate machine learning algorithm, Sitsen achieves robust and high performance. Extensive experiments were conducted in an office with 10 volunteers. The results show that our system can recognize different sitting postures with an average accuracy of 97.02%.
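The sliding-window feature extraction over the cleaned phase sequence might look like the following. The specific statistics (mean, standard deviation, range) and window sizes are assumptions for illustration; the paper does not enumerate its exact features here.

```python
import numpy as np

def window_features(phase, win=50, step=25):
    """Per-window statistics of a cleaned RFID phase sequence,
    to be used as classifier inputs. phase: 1-D array of readings."""
    feats = []
    for s in range(0, len(phase) - win + 1, step):
        w = phase[s:s + win]
        # Mean captures the posture's static offset; std and range
        # capture how much the posture perturbs the phase.
        feats.append([w.mean(), w.std(), w.max() - w.min()])
    return np.asarray(feats)
```

Each feature row would then be labeled with one of the five habitual postures and passed to the chosen machine learning algorithm for training.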


Sensors, 2020, Vol 21 (1), pp. 222
Author(s): Tao Li, Chenqi Shi, Peihao Li, Pengpeng Chen

In this paper, we propose a novel gesture recognition system based on a smartphone. Owing to the limitations of Channel State Information (CSI) extraction equipment, existing WiFi-based gesture recognition is limited to microcomputer terminals equipped with Intel 5300 or Atheros 9580 network cards; accurate gesture recognition can therefore only be performed in an area relatively fixed to the transceiver link. The new gesture recognition system proposed here breaks this limitation. First, we use the nexmon firmware to obtain 256 CSI subcarriers from the bottom layer of the smartphone in IEEE 802.11ac mode over an 80 MHz bandwidth, making the gesture recognition system mobile. Second, we adopt a cross-correlation method to integrate the extracted CSI features in the time and frequency domains, reducing the influence of changes in smartphone location. Third, we use a new, improved DTW (Dynamic Time Warping) algorithm to classify and recognize gestures. We conducted extensive experiments to verify the system's recognition accuracy at different distances, in different directions, and in different environments. The results show that the system can effectively improve recognition accuracy.
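The classification step builds on dynamic time warping, which aligns two gesture sequences of different speeds before measuring their distance. A sketch of the classic DTW the improved algorithm starts from (not the paper's modified version):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.
    D[i, j] holds the cheapest alignment cost of a[:i] against b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the best of: match, stretch a, or stretch b.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A gesture is then labeled with the class of the template sequence yielding the smallest DTW distance, which tolerates the speed variation inherent in hand gestures.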

