Deep Learning Based Human to Human Interaction Detection Using Wireless Fidelity

2021 ◽  
pp. 22-31
Author(s):  
Ravi Hosamani ◽  
Shridhar Devamane ◽  
T Yerriswamy ◽  
Shreya Bagalwadi


Author(s):  
Titus Issac ◽  
Salaja Silas ◽  
Elijah Blessing Rajsingh

The 21st century is witnessing the emergence of a wide variety of wireless sensor network (WSN) applications, ranging from simple environmental monitoring to complex satellite monitoring. The advent of complex WSN applications has led to a massive transition in the development, functioning, and capabilities of wireless sensor nodes. Contemporary nodes have multi-functional capabilities that enable heterogeneous WSN applications, and the future of WSN task assignment envisions heterogeneous networks with minimal human interaction. This motivated the investigation of a deep learning-based task assignment algorithm. The algorithm employs a multilayer feed-forward neural network (MLFFNN) trained by particle swarm optimization (PSO) to solve the task assignment problem in a dynamic, centralized, heterogeneous WSN. The analyses include a study of the hidden layers and the effectiveness of the task assignment algorithms. The chapter will be highly beneficial to a wide range of audiences employing machine and deep learning in WSNs.
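The core idea, training an MLFFNN's weights with PSO instead of backpropagation, can be illustrated with a minimal sketch. The network sizes, toy fitness function, and PSO constants below are illustrative assumptions, not the chapter's actual formulation:

```python
# Minimal sketch (assumed): a 2-layer MLFFNN whose flat weight vector is a
# PSO particle; fitness is a toy task-misassignment rate, lower is better.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 4, 8, 3                       # hypothetical sizes
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT # total weight count

tasks = rng.random((32, N_IN))                     # synthetic task features
targets = rng.integers(0, N_OUT, size=32)          # toy "best node" labels

def forward(w, x):
    """Unpack a flat weight vector and run the feed-forward network."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return np.tanh(x @ W1 + b1) @ W2 + b2          # per-node scores

def fitness(w):
    """Fraction of tasks assigned to the wrong node on the toy set."""
    return np.mean(forward(w, tasks).argmax(axis=1) != targets)

# Standard global-best PSO over the flattened weights.
SWARM, ITERS, INERTIA, C1, C2 = 30, 200, 0.72, 1.49, 1.49
pos = rng.normal(size=(SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((2, SWARM, DIM))
    vel = INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best misassignment rate:", pbest_f.min())
```

Because PSO only needs fitness evaluations, not gradients, the same loop works for discontinuous assignment costs where backpropagation does not apply.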


Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1754
Author(s):  
József Sütő

Flying insect detection, identification, and counting are key components of agricultural pest management, and insect identification is among the most challenging tasks in agricultural image processing. With the aid of machine vision and machine learning, traditional (manual) identification and counting can be automated. To achieve this goal, a suitable data acquisition device and an accurate insect recognition algorithm (model) are necessary. In this work, we propose a new embedded-system-based insect trap built around an OpenMV Cam H7 microcontroller board, which can be used anywhere in the field without restrictions (AC power supply, Wi-Fi coverage, human interaction, etc.). In addition, we propose a deep learning-based insect-counting method that offers solutions to problems such as the lack of data and false insect detections. By means of the proposed trap and insect-counting method, spraying can then be accurately scheduled around pest swarming.
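As a rough illustration of detection-based counting with false-detection suppression, the sketch below filters low-confidence and implausibly small detections and counts only the increase between trap images. The `detect_insects` stub and all thresholds are hypothetical, standing in for whatever trained model the trap runs:

```python
# Minimal sketch (assumed, not the paper's implementation) of insect counting
# with simple false-detection filtering on a sticky-trap image stream.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float
    y: float
    w: float
    h: float
    score: float

CONF_THRESH = 0.6   # drop low-confidence (likely false) detections
MIN_AREA = 25.0     # drop specks smaller than a plausible insect (px^2)

def detect_insects(image) -> list[Detection]:
    """Hypothetical detector: in practice, a CNN trained on trap images."""
    raise NotImplementedError

def count_insects(image) -> int:
    kept = [d for d in detect_insects(image)
            if d.score >= CONF_THRESH and d.w * d.h >= MIN_AREA]
    return len(kept)

def newly_trapped(prev_count: int, image) -> int:
    """A sticky trap only accumulates insects, so report only the increase;
    this avoids double-counting the same insect across captures."""
    return max(0, count_insects(image) - prev_count)
```

Counting only the increase over the previous capture is one simple way to keep a cumulative tally robust to re-detections of already-trapped insects.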


2021 ◽  
Vol 192 ◽  
pp. 5093-5103
Author(s):  
Oumaima Moutik ◽  
Smail Tigani ◽  
Rachid Saadane ◽  
Abdellah Chehri

2021 ◽  
Vol 11 (24) ◽  
pp. 11938
Author(s):  
Denis Zherdev ◽  
Larisa Zherdeva ◽  
Sergey Agapov ◽  
Anton Sapozhnikov ◽  
Artem Nikonorov ◽  
...  

Estimating human poses and behaviour for different activities in virtual and augmented reality (VR/AR) could have numerous beneficial applications. Human fall monitoring is especially important for elderly people and for non-typical activities in VR/AR applications. There are many approaches to improving the fidelity of fall monitoring systems through novel sensors and deep learning architectures; however, there is still a lack of detailed and diverse datasets for training deep learning fall detectors on monocular images. In this paper, synthetic data generation based on digital human simulation is implemented and examined using the Unreal Engine. The proposed pipeline provides automatic “playback” of various scenarios of digital human behaviour, and the resulting modular pipeline for generating synthetic data of digital human interaction with 3D environments is demonstrated. We used the generated synthetic data to train a Mask R-CNN-based segmentation model for the falling person's interaction area. We show that, by training the model with simulation data, it is possible to recognize a falling person with an accuracy of 97.6% and to classify the type of the person's interaction impact. The proposed approach also covers a variety of scenarios that can have a positive effect at the deep learning training stage in other human action estimation tasks in a VR/AR environment.
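The training step maps naturally onto torchvision's Mask R-CNN fine-tuning pattern. The sketch below is an assumed setup, not the authors' exact pipeline: the two-class configuration (background plus falling person) and the data loader yielding Unreal-rendered frames with boxes, labels, and masks are illustrative assumptions:

```python
# Minimal sketch (assumed): fine-tune torchvision's Mask R-CNN on synthetic
# frames to segment the falling person's interaction area.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_model(num_classes: int = 2):  # background + falling person (assumed)
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box and mask heads so the pretrained network predicts our classes.
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_ch = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_ch, 256, num_classes)
    return model

def train_one_epoch(model, loader, optimizer, device):
    """`loader` is assumed to yield (images, targets), each target holding
    "boxes", "labels", and "masks" rendered from the Unreal Engine scenes."""
    model.train()
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss = sum(model(images, targets).values())  # box + class + mask losses
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Because the simulator produces pixel-perfect masks for free, the usual annotation bottleneck disappears, which is the main appeal of training on synthetic scenes.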


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 161123-161130 ◽  
Author(s):  
An Gong ◽  
Chen Chen ◽  
Mengtang Peng

2018 ◽  
Author(s):  
Chaoxin Wang ◽  
Xukun Li ◽  
Doina Caragea ◽  
Raju Bheemanahalli ◽  
S.V. Krishna Jagadish

Aboveground plant efficiency has improved significantly in recent years, and the improvement has led to a steady increase in global food production. Improving belowground plant efficiency has the potential to increase food production further. However, belowground plant roots are harder to study, owing to the inherent challenges of root phenotyping. Several tools for identifying root anatomical features in root cross-section images have been proposed; however, the existing tools are not fully automated and require significant human effort to produce accurate results. To address this limitation, we propose a fully automated approach, called Deep Learning for Root Anatomy (DL-RootAnatomy), for identifying anatomical traits in root cross-section images. Using the Faster Region-based Convolutional Neural Network (Faster R-CNN), the DL-RootAnatomy models detect objects such as the root, stele, and late metaxylem, and predict rectangular bounding boxes around them. The bounding boxes are then used to estimate the root diameter, stele diameter, and late metaxylem number and average diameter. Experimental evaluation using standard object detection metrics, such as intersection-over-union and mean average precision, has shown that our models can accurately detect the root, stele, and late metaxylem objects. Furthermore, the results have shown that measurements estimated from the predicted bounding boxes have a very small root mean square error when compared with the corresponding ground-truth values, suggesting that DL-RootAnatomy can be used to accurately measure anatomical features. Finally, a comparison with existing approaches, which involve some degree of human interaction, has shown that the proposed approach is more accurate on a subset of our data. A webserver for performing root anatomy with our pre-trained deep learning models is available at https://rootanatomy.org, together with a link to a GitHub repository containing code that can be used to re-train or fine-tune our network with other types of root cross-section images. The labeled images used for training and evaluating our models are also available from the same repository.
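The step from detector output to trait measurements can be sketched as a small post-processing function. The class ids, the pixel-to-millimetre scale, and the choice to approximate a round structure's diameter as the mean box side length are illustrative assumptions, not the DL-RootAnatomy code:

```python
# Minimal sketch (assumed): derive the four reported anatomical traits from
# Faster R-CNN bounding boxes over a root cross-section image.
ROOT, STELE, METAXYLEM = 1, 2, 3  # hypothetical class ids

def box_diameter(box, px_per_mm):
    """Approximate a round object's diameter as the mean box side length."""
    x1, y1, x2, y2 = box
    return ((x2 - x1) + (y2 - y1)) / 2 / px_per_mm

def root_traits(boxes, labels, px_per_mm=100.0):
    """Turn detections into root/stele diameters and metaxylem statistics."""
    per_class = {c: [box_diameter(b, px_per_mm)
                     for b, l in zip(boxes, labels) if l == c]
                 for c in (ROOT, STELE, METAXYLEM)}
    mx = per_class[METAXYLEM]
    return {
        "root_diameter_mm": per_class[ROOT][0] if per_class[ROOT] else None,
        "stele_diameter_mm": per_class[STELE][0] if per_class[STELE] else None,
        "late_metaxylem_count": len(mx),
        "late_metaxylem_avg_diameter_mm": sum(mx) / len(mx) if mx else None,
    }

# Example with made-up detections (x1, y1, x2, y2) in pixels:
print(root_traits(
    boxes=[(10, 10, 410, 420), (110, 120, 310, 320), (180, 190, 210, 220)],
    labels=[ROOT, STELE, METAXYLEM],
))
```

Since root, stele, and metaxylem cross-sections are roughly circular, a box-based diameter estimate is a reasonable proxy, which is consistent with the small RMSE the abstract reports against ground truth.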

