Stretchable Filler/Solid Rubber Piezoresistive Thread Sensor for Gesture Recognition

Micromachines ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 7
Author(s):  
Penghua Zhu ◽  
Jie Zhu ◽  
Xiaofei Xue ◽  
Yongtao Song

Recently, stretchable piezoresistive composites have become a focus in the fields of biomechanical sensing and human posture recognition because they can be directly and conformally attached to bodies and clothes. Here, we present a stretchable piezoresistive thread sensor (SPTS) based on an Ag-plated glass microspheres (Ag@GMs)/solid rubber (SR) composite, which was prepared using a new shear-dispersion and extrusion-vulcanization technology. The SPTS exhibits high gauge factors (7.8~11.1) over a large stretching range (0–50%) and an approximately linear relationship between the relative change in resistance and the applied strain. Meanwhile, the SPTS shows hysteresis as low as 2.6% and great stability over 1000 stretching/releasing cycles at 50% strain. Considering this excellent mechanical strain-driven characteristic, the SPTS was employed to monitor postures and facial movements. Moreover, the novel SPTS can be integrated with software and hardware information modules to realize an intelligent gesture recognition system, which promptly and accurately captures the electrical signals produced by digital gestures and translates them into text and voice. This work demonstrates great progress in stretchable piezoresistive sensors and provides a new strategy for achieving a real-time, effective-communication intelligent gesture recognition system.
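The gauge factor quoted above is, in essence, the slope of the relative resistance change versus applied strain over the 0–50% stretching range. A minimal Python sketch of how such a figure can be extracted from stretch-test readings is given below; the resistance values are hypothetical, chosen only to illustrate a gauge factor near the reported range, and are not the authors' data.

```python
# Minimal sketch: estimating a gauge factor from a stretch test.
# All numbers below are illustrative assumptions, not measured data.
import numpy as np

def gauge_factor(strain, resistance, r0):
    """Fit the gauge factor as the slope of (R - R0)/R0 versus applied strain."""
    rel_change = (np.asarray(resistance) - r0) / r0
    slope, _intercept = np.polyfit(strain, rel_change, 1)
    return slope

# Hypothetical readings over a 0-50% stretching range (strain as a fraction).
strain = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
resistance = np.array([100.0, 180.0, 262.0, 338.0, 421.0, 500.0])  # ohms, illustrative
print(f"gauge factor ~ {gauge_factor(strain, resistance, resistance[0]):.1f}")
```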

Author(s):  
D. A. Kalina ◽  
R. V. Golovanov ◽  
D. V. Vorotnev

We present a monocular-camera approach to static hand gesture recognition based on skeletonization. The problem of constructing a skeleton of the human hand, as well as of the body, became solvable a few years ago with the invention of so-called convolutional pose machines, a novel artificial neural network architecture. Our solution uses such a pretrained convolutional network to extract hand-joint keypoints and then reconstructs the skeleton. In this work we also propose a special skeleton descriptor and prove its stability and distinguishability in terms of classification. We considered several widespread machine learning algorithms to build and verify different classifiers. The quality of recognition is estimated using the well-known accuracy metric, which showed that a classical SVM (Support Vector Machine) with a radial basis function kernel gives the best results. The whole system was tested on public databases containing about 3000 test images for more than 10 types of gestures. The results of a comparative analysis of the proposed system with existing approaches are demonstrated; it is shown that our gesture recognition system provides better quality than existing solutions. The performance of the proposed system was estimated for two configurations of a standard personal computer: with a CPU (Central Processing Unit) only, and with a GPU (Graphics Processing Unit) in addition, where the latter provides real-time processing at up to 60 frames per second. Thus, we demonstrate that the proposed approach can find application in practice.
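As a rough illustration of the classification stage described above, the sketch below builds a toy scale-normalized descriptor from 21 hand-joint keypoints and feeds it to an SVM with an RBF kernel via scikit-learn. The descriptor layout, keypoint indexing, and random training data are assumptions for illustration only, not the authors' exact pipeline.

```python
# Illustrative sketch: skeleton-descriptor classification with an RBF-kernel SVM.
# The descriptor and data below are toy assumptions, not the published method.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def skeleton_descriptor(keypoints):
    """Toy descriptor: pairwise distances between 21 hand keypoints,
    normalized by the wrist-to-middle-fingertip length for scale invariance."""
    kp = np.asarray(keypoints)                     # shape (21, 2)
    scale = np.linalg.norm(kp[12] - kp[0]) + 1e-8  # assumed keypoint indexing
    d = np.linalg.norm(kp[:, None, :] - kp[None, :, :], axis=-1) / scale
    return d[np.triu_indices(21, k=1)]             # 210-dimensional vector

# Hypothetical dataset: descriptors for N labelled gesture images.
X = np.random.rand(300, 210)
y = np.random.randint(0, 10, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```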


Author(s):  
M. Favorskaya ◽  
A. Nosov ◽  
A. Popov

Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers: trajectory classifiers applied at any time instant and posture classifiers of sub-gestures applied at selected time instants. The trajectory classifiers include a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors quantized over a predetermined number of directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples that do not satisfy the current trajectory. The posture classifiers involve a normalized skeleton representation of the palm and fingers and relative finger positions derived from fingertips. A min-max criterion is used for trajectory recognition, and a decision-tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84–91% recognition accuracy, on average, for a restricted set of dynamic gestures.
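The motion-history normalization mentioned above quantizes motion vectors into a predetermined number of directions (8 or 16). A minimal sketch of one plausible way to perform this quantization is shown below; it is an illustrative interpretation rather than the authors' exact formulation.

```python
# Illustrative sketch: quantizing 2-D motion vectors into 8 or 16 direction bins.
import numpy as np

def quantize_directions(vectors, n_bins=8):
    """Map 2-D motion vectors to direction bin indices in [0, n_bins)."""
    v = np.asarray(vectors, dtype=float)
    angles = np.arctan2(v[:, 1], v[:, 0])                 # angle in (-pi, pi]
    bins = np.round(angles / (2 * np.pi / n_bins)) % n_bins
    return bins.astype(int)

motion = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, -1.0]])
print(quantize_directions(motion, n_bins=8))              # -> [0 1 6]
```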


2014 ◽  
Vol 511-512 ◽  
pp. 936-940
Author(s):  
Yi Zhang ◽  
Sheng Hui Li ◽  
Yuan Luo

Because traditional acceleration-based gesture recognition systems on the PC platform suffer from high power consumption, poor portability, and low recognition rates, this paper proposes a novel gesture recognition algorithm. The algorithm first samples the gesture acceleration signal with an acceleration sensor, then segments and smooth-filters the collected raw signal. After preprocessing, feature values are extracted and partitioned into segments according to segment signal energy. Finally, for all segments, an improved DTW (Dynamic Time Warping) algorithm [1] matches the extracted signal features against the predefined template features, the matching results are integrated, and the final recognition result is obtained. We implement the proposed algorithm on a smartphone and test the system. The test results show that the novel algorithm improves the recognition rate and enables the system to recognize gestures accurately in real time.
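The matching step relies on DTW. The sketch below implements the classic DTW formulation and a nearest-template decision rule as a baseline illustration; the paper's improved DTW variant [1] and its exact feature definition are not reproduced here.

```python
# Baseline sketch: classic DTW matching of a feature segment against templates.
# The improved DTW variant used in the paper is not reproduced here.
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def recognize(segment, templates):
    """Return the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(segment, templates[label]))

templates = {"circle": np.sin(np.linspace(0, 2 * np.pi, 30)),
             "shake": np.sign(np.sin(np.linspace(0, 6 * np.pi, 30)))}
query = np.sin(np.linspace(0, 2 * np.pi, 25)) + 0.05    # hypothetical segment
print(recognize(query, templates))
```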


2020 ◽  
Vol 5 (2) ◽  
pp. 609
Author(s):  
Segun Aina ◽  
Kofoworola V. Sholesi ◽  
Aderonke R. Lawal ◽  
Samuel D. Okegbile ◽  
Adeniran I. Oluwaranti

This paper presents the application of Gaussian blur filters and Support Vector Machine (SVM) techniques for greeting recognition among the Yoruba tribe of Nigeria. Existing efforts have considered the recognition of various gestures; however, the recognition of tribal greeting postures or gestures in the Nigerian geographical space has not been studied before. Some cultural gestures are not correctly identified by people of the same tribe, not to mention people from different tribes, posing a risk of misinterpretation. Also, some cultural gestures are unknown to most people outside a tribe, which can hinder human interaction; hence there is a need to automate the recognition of Nigerian tribal greeting gestures. This work therefore develops a Gaussian blur and SVM based system capable of recognizing Yoruba greeting postures for men and women. Videos of individuals performing various greeting gestures were collected and processed into image frames. The images were resized, and a Gaussian blur filter was used to remove noise from them. A moment-based feature extraction algorithm was used to extract shape features that were passed as input to the SVM, which was trained to recognize two Nigerian tribal greeting postures. To confirm the robustness of the system, 20%, 25% and 30% of the dataset obtained from the preprocessed images were used to test the system. A recognition rate of 94% was achieved with the SVM, which shows that the proposed method is effective.
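A hedged sketch of the preprocessing and classification pipeline described above is given below: resize, Gaussian blur, moment-based shape features, then an SVM. The specific filter size, the choice of Hu moments, the SVM settings, and the synthetic stand-in frames are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch: blur + moment-based shape features + SVM classification.
# Synthetic frames stand in for the real greeting-gesture images.
import cv2
import numpy as np
from sklearn.svm import SVC

def extract_features(img, size=(128, 128)):
    """Resize, Gaussian-blur, and compute log-scaled Hu moments as shape features."""
    img = cv2.resize(img, size)
    img = cv2.GaussianBlur(img, (5, 5), 0)            # noise removal
    hu = cv2.HuMoments(cv2.moments(img)).flatten()    # 7 moment-based descriptors
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Synthetic stand-ins for grayscale frames of two greeting postures.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(40, 160, 160), dtype=np.uint8)
labels = np.array([0] * 20 + [1] * 20)

X = np.array([extract_features(f) for f in frames])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print("feature matrix:", X.shape)
```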


2020 ◽  
Vol 14 ◽  
Author(s):  
Vasu Mehra ◽  
Dhiraj Pandey ◽  
Aayush Rastogi ◽  
Aditya Singh ◽  
Harsh Preet Singh

Background: People suffering from hearing and speaking disabilities have only a few ways of communicating with other people. One of these is to communicate through sign language. Objective: Developing a system for sign language recognition becomes essential for deaf as well as mute persons. The recognition system acts as a translator between a disabled and an able person, eliminating hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, a speech-to-text as well as a text-to-speech system is introduced to further assist the affected people. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign recognition models trained using TensorFlow and the Keras library. Results: The proposed architecture works better than several gesture recognition techniques, such as background elimination and conversion to HSV, because of the sharply defined images provided to the model for classification. The testing results indicate a reliable recognition system with high accuracy that covers most of the essential features a deaf and mute person needs in day-to-day tasks. Conclusion: Current technological advances make it necessary to develop reliable solutions that can be deployed to help deaf and mute people adjust to normal life. Instead of focusing on a standalone technology, a combination of them has been introduced in this work. The proposed sign recognition system is based on feature extraction and classification, and the trained model helps identify different gestures.
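As an illustration of the kind of TensorFlow/Keras model the abstract refers to, the sketch below defines a small convolutional classifier for sign images. The input shape, layer sizes, and number of classes are assumptions for illustration, not the authors' trained architecture.

```python
# Illustrative sketch: a small Keras CNN for sign-image classification.
# Input shape and class count are assumptions, not the published model.
import tensorflow as tf

NUM_CLASSES = 26  # assumption: one class per alphabet sign

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```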


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 222
Author(s):  
Tao Li ◽  
Chenqi Shi ◽  
Peihao Li ◽  
Pengpeng Chen

In this paper, we propose a novel gesture recognition system based on a smartphone. Due to the limitations of Channel State Information (CSI) extraction equipment, existing WiFi-based gesture recognition is limited to microcomputer terminals equipped with Intel 5300 or Atheros 9580 network cards; therefore, accurate gesture recognition can only be performed in an area relatively fixed with respect to the transceiver link. The gesture recognition system proposed here breaks this limitation. First, we use nexmon firmware to obtain 256 CSI subcarriers from the bottom layer of the smartphone in IEEE 802.11ac mode on an 80 MHz bandwidth, giving the gesture recognition system mobility. Second, we adopt a cross-correlation method to integrate the extracted CSI features in the time and frequency domains, reducing the influence of changes in the smartphone's location. Third, we use a new, improved DTW algorithm to classify and recognize gestures. We conducted extensive experiments to verify the system's recognition accuracy at different distances, in different directions, and in different environments. The results show that the system can effectively improve recognition accuracy.
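The sketch below shows one plausible form of the cross-correlation step over CSI amplitude streams mentioned above. Since the paper's exact feature definition is not reproduced here, the normalization and the synthetic subcarrier traces should be read as assumptions.

```python
# Illustrative sketch: normalized cross-correlation between CSI amplitude series.
# Synthetic traces stand in for real subcarrier measurements.
import numpy as np

def normalized_xcorr(a, b):
    """Normalized cross-correlation between two equal-length amplitude series."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.correlate(a, b, mode="full") / len(a)

# Hypothetical amplitudes for two of the 256 subcarriers over one gesture window.
t = np.linspace(0, 1, 500)
sc1 = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(500)
sc2 = np.sin(2 * np.pi * 3 * t + 0.4) + 0.1 * np.random.randn(500)

xc = normalized_xcorr(sc1, sc2)
lag = xc.argmax() - (len(sc1) - 1)
print(f"peak correlation {xc.max():.2f} at lag {lag} samples")
```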


Author(s):  
Xinyi Li ◽  
Liqiong Chang ◽  
Fangfang Song ◽  
Ju Wang ◽  
Xiaojiang Chen ◽  
...  

This paper focuses on a fundamental question in Wi-Fi-based gesture recognition: "Can we use the knowledge learned from some users to perform gesture recognition for others?" This problem is also known as cross-target recognition. It arises in many practical deployments of Wi-Fi-based gesture recognition where it is prohibitively expensive to collect training data from every single user. We present CrossGR, a low-cost cross-target gesture recognition system. As a departure from existing approaches, CrossGR does not require prior knowledge (such as who is currently performing a gesture) of the target user. Instead, CrossGR employs a deep neural network to extract user-agnostic but gesture-related Wi-Fi signal characteristics to perform gesture recognition. To provide sufficient training data for an effective deep learning model, CrossGR employs a generative adversarial network to automatically generate many synthetic training samples from a small set of real-world examples collected from a small number of users. This strategy allows CrossGR to minimize user involvement and the associated cost of collecting training examples for building an accurate gesture recognition system. We evaluate CrossGR by applying it to perform gesture recognition across 10 users and 15 gestures. Experimental results show that CrossGR achieves an accuracy of over 82.6% (up to 99.75%). We demonstrate that CrossGR delivers comparable recognition accuracy while using an order of magnitude fewer training samples collected from end-users compared to state-of-the-art recognition systems.
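The data-augmentation idea can be illustrated with a minimal generative adversarial network skeleton, sketched below: a generator maps noise to synthetic Wi-Fi feature vectors while a discriminator separates real from synthetic samples. The feature dimension, network sizes, and training setup are illustrative assumptions, not the CrossGR implementation.

```python
# Minimal GAN skeleton for synthetic feature generation (illustrative only).
import tensorflow as tf

FEATURE_DIM = 128   # assumed length of a gesture-related CSI feature vector
NOISE_DIM = 32

generator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NOISE_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(FEATURE_DIM, activation="tanh"),
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FEATURE_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model used to train the generator (discriminator weights frozen).
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")
# Alternating real/fake training loop omitted for brevity.
```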


Open Medicine ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. 749-753
Author(s):  
Wenyuan Li ◽  
Beibei Huang ◽  
Qiang Shen ◽  
Shouwei Jiang ◽  
Kun Jin ◽  
...  

In recent months, the novel coronavirus disease 2019 (COVID-19) pandemic has become a major public health crisis, claiming more than 1 million lives worldwide. The long-lasting persistence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has not yet been reported. Herein, we report a case of SARS-CoV-2 infection that remained intermittently positive on viral polymerase chain reaction (PCR) testing for more than 4 months after clinical recovery. A 35-year-old male was diagnosed with COVID-19 pneumonia with fever but without other specific symptoms. Treatment with lopinavir-ritonavir, oxygen inhalation, and other symptomatic supportive care facilitated recovery, and the patient was discharged. However, his oropharyngeal swab PCR tests remained intermittently positive for more than 4 months thereafter; at the end of June 2020, he was still under quarantine and observation. The contribution of current antiviral therapy may be limited, and the prognosis of COVID-19 patients may be unrelated to virus status. Thus, further investigation is essential to evaluate the contagiousness of convalescent patients and the mechanism underlying the persistence of SARS-CoV-2 after recovery. A new disease control strategy, especially extending the follow-up period for recovered COVID-19 patients, is necessary to adapt to the current pandemic situation.

