Foot Gesture Recognition Using High-Compression Radar Signature Image and Deep Learning

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3937
Author(s):  
Seungeon Song ◽  
Bongseok Kim ◽  
Sangdong Kim ◽  
Jonghun Lee

Recently, Doppler radar-based foot gesture recognition has attracted attention as a hands-free interaction tool, but recognizing a variety of foot gestures with Doppler radar remains very challenging, and no prior studies have dealt deeply with recognizing various foot gestures using Doppler radar and a deep learning model. In this paper, we propose a foot gesture recognition method that combines a new high-compression radar signature image with deep learning. The high-compression radar signature is created by extracting the dominant features of the original signature via Singular Value Decomposition (SVD); an AlexNet deep learning model then recognizes four foot gestures: kicking, swinging, sliding, and tapping. By using the high-compression signature instead of the original radar signature, the proposed method improves the memory efficiency of deep learning training. Original radar images and reconstructions at compression levels of 90%, 95%, and 99% were fed to the AlexNet model. In experiments, the four foot gestures and the movement of a rolling baseball were recognized with an accuracy of approximately 98.64%. Owing to radar's inherent robustness to the surrounding environment, this foot gesture recognition sensor based on Doppler radar and deep learning should prove widely useful in future automotive and smart home applications.
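The abstract does not include code; the following is a minimal numpy sketch of SVD-based image compression of the kind described, under the assumptions that the radar signature is a 2-D array and that "99% compression" means retaining the top 1% of singular values:

```python
import numpy as np

def compress_signature(image: np.ndarray, keep_ratio: float = 0.01) -> np.ndarray:
    """Reconstruct a radar signature image from its dominant singular values.

    keep_ratio=0.01 corresponds (under our assumption) to the 99% compression
    setting: only the top 1% of singular values are retained.
    """
    U, s, Vt = np.linalg.svd(image.astype(np.float64), full_matrices=False)
    k = max(1, int(len(s) * keep_ratio))      # number of singular values to keep
    return (U[:, :k] * s[:k]) @ Vt[:k, :]     # rank-k reconstruction

# Example: reconstruct a 256x256 signature at the paper's three compression levels
signature = np.random.rand(256, 256)          # placeholder for a real radar image
for ratio in (0.10, 0.05, 0.01):
    reconstructed = compress_signature(signature, keep_ratio=ratio)
```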

2021 ◽  
Author(s):  
Anastase Charantonis ◽  
Vincent Bouget ◽  
Dominique Béréziat ◽  
Julien Brajard ◽  
Arthur Filoche

Short- or mid-term rainfall forecasting is a major task with several environmental applications, such as agricultural management or flood risk monitoring. Existing data-driven approaches, especially deep learning models, have shown significant skill at this task using only rainfall radar images as inputs. To determine whether other meteorological parameters such as wind would improve forecasts, we trained a deep learning model on a fusion of rainfall radar images and wind velocity fields produced by a weather forecast model. The network was compared to a similar architecture trained only on radar data, to a basic persistence model, and to an approach based on optical flow. On moderate and heavier rain events, at a forecast horizon of 30 minutes, our network outperforms the optical flow approach by 8% in F1-score and the same architecture trained on rainfall radar images alone by 7%. Merging rain and wind data also proved to stabilize the training process and enabled significant improvement, especially on the difficult-to-predict high-precipitation rainfalls. These results can also be found in Bouget, V., Béréziat, D., Brajard, J., Charantonis, A., & Filoche, A. (2020). Fusion of rain radar images and wind forecasts in a deep learning model applied to rain nowcasting. arXiv preprint arXiv:2012.05015.
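The exact architecture is described in the cited preprint; the following is only a minimal PyTorch sketch of the channel-wise fusion idea, assuming past rainfall frames and the two wind-velocity components are stacked as input channels (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class RainWindNowcaster(nn.Module):
    """Toy fusion network: past rainfall frames plus u/v wind fields in, next rain map out."""
    def __init__(self, n_rain_frames: int = 4):
        super().__init__()
        in_channels = n_rain_frames + 2           # rain history + u and v wind components
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),       # predicted rainfall at t + 30 min
        )

    def forward(self, rain: torch.Tensor, wind: torch.Tensor) -> torch.Tensor:
        # rain: (batch, n_rain_frames, H, W); wind: (batch, 2, H, W)
        return self.net(torch.cat([rain, wind], dim=1))

model = RainWindNowcaster()
out = model(torch.randn(1, 4, 64, 64), torch.randn(1, 2, 64, 64))  # -> (1, 1, 64, 64)
```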


Author(s):  
Hovannes Kulhandjian ◽  
Prakshi Sharma ◽  
Michel Kulhandjian ◽  
Claude D'Amours

2022 ◽  
Vol 2022 ◽  
pp. 1-22
Author(s):  
Olutosin Taiwo ◽  
Absalom E. Ezugwu ◽  
Olaide N. Oyelade ◽  
Mubarak S. Almutairi

Security of lives and property is vital to quality living. Smart home automation has made considerable progress towards convenience, comfort, safety, and home security. With advances in technology and the Internet of Things (IoT), the home environment has seen improved remote control of appliances, monitoring, and security over the internet. Several home automation systems have been developed to monitor movements in the home and report them to the user, and existing systems detect motion and provide surveillance for home security. However, averting unnecessary or false notifications remains a major challenge, and intelligent response and monitoring are what make smart home automation efficient. This work presents an intelligent home automation system for controlling home appliances, monitoring environmental factors, and detecting movement in the home and its surroundings. A deep learning model is proposed for motion recognition and classification based on the detected movement patterns, and an algorithm built on that model enhances the system's intruder detection while forestalling false alarms: a human detected by the surveillance camera is classified as an intruder or a home occupant based on their walking pattern. The proposed method's prototype was implemented using an ESP32 camera for surveillance, a PIR motion sensor, an ESP8266 development board, a 5 V four-channel relay module, and a DHT11 temperature and humidity sensor. The measured environmental conditions were evaluated with a mathematical model of response time to demonstrate the accuracy of the DHT sensor for weather monitoring and future prediction. An experimental analysis of human motion patterns was performed with the CNN model to evaluate its classification of humans; the CNN classifier achieved an accuracy of 99.8%.
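The abstract names a CNN classifier but not its layers; the following is a hypothetical PyTorch sketch of a binary occupant-vs-intruder classifier over surveillance frames (the architecture and input size are assumptions, not the paper's model):

```python
import torch
import torch.nn as nn

class GaitClassifier(nn.Module):
    """Illustrative CNN: classifies a surveillance frame as home occupant (0) or intruder (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),           # assumes 64x64 RGB input frames
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = GaitClassifier()
logits = model(torch.randn(1, 3, 64, 64))          # -> (1, 2) occupant/intruder scores
```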


2021 ◽  
Author(s):  
Albert Rego ◽  
Pedro Luis González Ramírez ◽  
Jose M. Jimenez ◽  
Jaime Lloret

The Internet of Things (IoT) has introduced new applications and environments. The Smart Home provides new ways of communication and service consumption. In addition, Artificial Intelligence (AI) and deep learning have improved different services and tasks by automating them. In this field, reinforcement learning (RL) provides an unsupervised way to learn from the environment. In this paper, a new intelligent system based on RL and deep learning is proposed for Smart Home environments to guarantee good levels of Quality of Experience (QoE), focused on multimedia services. The system aims to reduce the impact on user experience when the classifying system achieves low accuracy. The experiments performed show that the proposed deep learning model achieves better accuracy than the KNN algorithm and that the RL system increases the user's QoE by up to 3.8 points on a 10-point scale.
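The abstract gives no detail on the RL agent itself; the following is a minimal tabular Q-learning sketch of the general idea, with hypothetical discretized states (network conditions) and actions (service configurations) standing in for whatever the paper actually uses:

```python
import random

import numpy as np

# Hypothetical discretization: 5 network-condition states, 3 service configurations
N_STATES, N_ACTIONS = 5, 3
q_table = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.9, 0.1              # learning rate, discount, exploration

def step(state: int, action: int) -> tuple[int, float]:
    """Placeholder environment: returns the next state and a QoE-based reward."""
    next_state = random.randrange(N_STATES)
    reward = random.uniform(0, 10)                 # stand-in for a measured QoE score
    return next_state, reward

state = 0
for _ in range(1000):
    # epsilon-greedy action selection
    action = random.randrange(N_ACTIONS) if random.random() < epsilon \
        else int(np.argmax(q_table[state]))
    next_state, reward = step(state, action)
    # standard Q-learning update toward the QoE reward
    q_table[state, action] += alpha * (
        reward + gamma * q_table[next_state].max() - q_table[state, action]
    )
    state = next_state
```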


2021 ◽  
Vol 38 (3) ◽  
pp. 565-572
Author(s):  
Yukun Jia ◽  
Rongtao Ding ◽  
Wei Ren ◽  
Jianfeng Shu ◽  
Aixiang Jin

During rehabilitation, many postoperative patients need to perform autonomous massage on time and on demand. This paper therefore develops an individualized, intelligent, and independent acupoint massage rehabilitation training system based on a deep learning image feature model, excluding human factors. The system innovatively integrates massage gesture recognition with human pose recognition. It relies on the Kinect DK binocular depth camera and the Google MediaPipe Holistic pipeline to collect real-time image feature data on the joints and gestures of the patient during autonomous massage. The system then calculates the coordinates of each finger joint and computes the human poses with VGG-16, a convolutional neural network (CNN); the results are translated and presented in a virtual reality (VR) model based on Unity 3D, aiming to guide the patient's actions during autonomous massage. This guidance is needed because gesture and pose recognition are hindered when the hand or the person is occluded by the body or other objects, owing to the limited recognition range of the hardware. The experimental results show that the proposed system correctly recognized up to 84% of non-occluded gestures and up to 93% of non-occluded poses; the system also exhibited good real-time performance, high operability, and low cost. Given the shortage of medical staff, our system can effectively improve patients' quality of life.
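The system builds on Google's MediaPipe Holistic pipeline; the following is a minimal sketch of extracting the hand and body landmarks it describes (the downstream VGG-16 classification and Unity 3D VR rendering are omitted, and a generic webcam stands in for the Kinect DK):

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

cap = cv2.VideoCapture(0)                          # camera stream; the paper uses a Kinect DK
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.right_hand_landmarks:           # 21 finger-joint landmarks per hand
            for lm in results.right_hand_landmarks.landmark:
                x, y = lm.x, lm.y                  # normalized image coordinates
        # results.pose_landmarks holds the 33 body-pose landmarks, if detected
cap.release()
```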


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6451
Author(s):  
Nadia Nasri ◽  
Sergio Orts-Escolano ◽  
Miguel Cazorla

In recent years, advances in Artificial Intelligence (AI) have played an important role in human well-being, in particular by enabling novel forms of human-computer interaction for people with a disability. In this paper, we propose an sEMG-controlled 3D game that leverages a deep learning-based architecture for real-time gesture recognition. The 3D game experience developed in the study focuses on rehabilitation exercises, allowing individuals with certain disabilities to control the game with low-cost sEMG sensors. For this purpose, we acquired a novel dataset of seven gestures using the Myo armband device, which we used to train the proposed deep learning model. The captured signals were fed as input to a Conv-GRU architecture to classify the gestures. Further, we ran a live system with the participation of different individuals and analyzed the neural network's classification of hand gestures. Finally, we evaluated the system over 20 rounds with new participants and analyzed the results in a user study.
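The paper's exact Conv-GRU is not reproduced here; the following is a hypothetical PyTorch sketch of that architecture family, assuming 8-channel Myo sEMG windows and the seven gesture classes mentioned (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class ConvGRU(nn.Module):
    """Illustrative Conv-GRU: 1-D convolutions over the sEMG window, then a GRU over time."""
    def __init__(self, n_channels: int = 8, n_classes: int = 7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 8 sEMG channels, time samples)
        h = self.conv(x).transpose(1, 2)           # -> (batch, time, 64)
        _, hidden = self.gru(h)                    # hidden: (1, batch, 64)
        return self.fc(hidden[-1])                 # gesture logits

model = ConvGRU()
logits = model(torch.randn(1, 8, 200))             # 200-sample window -> (1, 7)
```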

