Human Movement Detection using Recurrent Convolutional Neural Networks

Human movement detection is vital in telepresence robots, animation, games, and robotic motion. Traditional methods based on sensor suits make movements difficult to capture and interpret: they produce large volumes of sensor data that are hard to analyze, map to actions, and transmit over long distances, and the hardware is expensive and bulky. Image processing and computer vision provide a solution for detecting and interpreting human movement with an R-CNN-based approach that is cheap, simple, and lightweight. The method takes a video as input, divides it into frames, and separates the human body from the background in each frame. This paper focuses on the skeleton, its major keypoints, and their relative positions across successive frames. A set of frames (a video) is given as input to the model, which compares the coordinates of successive frames to estimate the movement. First, the human is identified and separated from the rest of the image by drawing a bounding box around the person using a convolutional neural network (CNN); then, by applying R-CNN, the person is segmented and converted to a skeleton. The shape of the skeleton indicates whether it belongs to a human. Comparing the relative coordinates of skeletons extracted from frames captured over time gives the movement of the human and its direction.
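As a concrete illustration of the final step described above, the following minimal Python sketch compares skeleton keypoint coordinates from two successive frames and returns a coarse movement direction. The keypoint layout, threshold, and labels are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: estimating movement direction from skeleton keypoints
# extracted from two successive frames. Keypoint count and the displacement
# threshold are illustrative assumptions.
import numpy as np

def estimate_movement(prev_keypoints: np.ndarray, curr_keypoints: np.ndarray,
                      threshold: float = 5.0) -> str:
    """Compare (N, 2) arrays of joint coordinates (x, y) in pixels and
    return a coarse movement label based on the mean displacement."""
    displacement = (curr_keypoints - prev_keypoints).mean(axis=0)
    dx, dy = displacement
    if np.hypot(dx, dy) < threshold:
        return "stationary"
    # Image coordinates: x grows rightward, y grows downward.
    if abs(dx) >= abs(dy):
        return "moving right" if dx > 0 else "moving left"
    return "moving down" if dy > 0 else "moving up"

# Example: 17 COCO-style joints shifted 8 px to the right between frames.
prev = np.random.rand(17, 2) * 100
curr = prev + np.array([8.0, 0.0])
print(estimate_movement(prev, curr))  # -> "moving right"
```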

Author(s):  
Bernhard R. Kämmerer

We propose a method to incorporate the uncertainty of data in the computation process of neural networks. A measure of certainty is used on each input element in order to modulate the element's contribution to the whole input activity. The amount of certainty may result from knowledge about sensor data (e.g. detectable hardware faults or information from preprocessing steps) or may be determined in previous neurons. The method is developed and studied within the scope of the perceptron model and tested on an image processing application.
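A minimal sketch of the core idea, assuming per-element certainty values in [0, 1] and a single sigmoid neuron; the variable names and activation choice are illustrative, not taken from the paper.

```python
# Sketch: each input element is scaled by a per-element certainty value
# before it contributes to the neuron's net input, so uncertain elements
# (e.g. from a faulty sensor) contribute less.
import numpy as np

def certainty_weighted_neuron(x, certainty, w, b=0.0):
    """x, certainty, w are 1-D arrays of equal length; certainty in [0, 1]."""
    net = np.sum(w * (certainty * x)) + b
    return 1.0 / (1.0 + np.exp(-net))   # sigmoid activation

x = np.array([0.9, 0.4, 0.7])
certainty = np.array([1.0, 0.2, 0.0])    # third element flagged as unreliable
w = np.array([0.5, -0.3, 0.8])
print(certainty_weighted_neuron(x, certainty, w))
```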


Author(s):  
Y.A. Hamad ◽  
K.V. Simonov ◽  
A.S. Kents

The paper considers general approaches to image processing, the analysis of visual data, and computer vision. The main methods for detecting features and edges associated with these approaches are presented. A brief description is also given of modern edge detection and classification algorithms suitable for isolating and characterizing lung pathology in medical images.
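For context, the sketch below runs one classical edge detector (Canny, via OpenCV) of the kind such algorithms build on; the file name, blur kernel, and thresholds are placeholder assumptions rather than values from the paper.

```python
# Illustrative edge detection on a (hypothetical) grayscale medical image.
import cv2

img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
img = cv2.GaussianBlur(img, (5, 5), 0)          # suppress noise before detection
edges = cv2.Canny(img, threshold1=50, threshold2=150)
cv2.imwrite("chest_xray_edges.png", edges)
```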


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1685
Author(s):  
Sakorn Mekruksavanich ◽  
Anuchit Jitpattanakul

Sensor-based human activity recognition (S-HAR) has become an important and high-impact topic of research within human-centered computing. In the last decade, successful applications of S-HAR have been presented through fruitful academic research and industrial applications, including healthcare monitoring, smart home control, and daily sport tracking. However, the growing requirement of many current applications to recognize complex human activities (CHA), as opposed to simple human activities (SHA), has begun to attract the attention of the HAR research field. S-HAR work has shown that deep learning (DL), a type of machine learning based on deep artificial neural networks, achieves a significant degree of recognition efficiency. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two types of DL methods that have been successfully applied to the S-HAR challenge in recent years. In this paper, we focus on four RNN-based DL models (LSTMs, BiLSTMs, GRUs, and BiGRUs) that perform complex activity recognition tasks. The efficiency of four hybrid DL models that combine convolutional layers with these RNN-based models is also studied. Experimental studies on the UTwente dataset demonstrate that the suggested hybrid RNN-based models achieve a high level of recognition performance, evaluated with a variety of performance indicators, including accuracy, F1-score, and the confusion matrix. The experimental results show that the hybrid DL model called CNN-BiGRU outperformed the other DL models with a high accuracy of 98.89% when using only complex activity data. Moreover, the CNN-BiGRU model also achieved the highest recognition performance in the other scenarios (99.44% using only simple activity data and 98.78% with a combination of simple and complex activities).
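A minimal PyTorch sketch of a CNN-BiGRU hybrid of the kind evaluated above: convolutional layers extract local features from raw sensor windows and a bidirectional GRU models their temporal dependencies. Layer sizes, window length, and channel counts are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    def __init__(self, n_channels=6, n_classes=7, hidden=64):
        super().__init__()
        # 1-D convolutions over the time axis of the sensor window
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(128, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        feats = self.conv(x)              # (batch, 128, time / 4)
        feats = feats.permute(0, 2, 1)    # (batch, time / 4, 128)
        out, _ = self.gru(feats)
        return self.fc(out[:, -1, :])     # classify from the last time step

model = CNNBiGRU()
window = torch.randn(8, 6, 128)           # 8 windows, 6 sensor axes, 128 samples
print(model(window).shape)                # -> torch.Size([8, 7])
```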


2021 ◽  
Vol 10 (6) ◽  
pp. 377
Author(s):  
Chiao-Ling Kuo ◽  
Ming-Hua Tsai

The importance of road characteristics has been highlighted, as road characteristics are fundamental structures established to support many transportation-relevant services. However, there is still considerable room for improvement in the types of road characteristics that can be detected and in detection performance. Taking advantage of geographically tiled maps with high update rates, remarkable accessibility, and increasing availability, this paper proposes a simple, novel deep-learning-based approach, namely joint convolutional neural networks (CNNs) adopting adaptive squares with combination rules, to detect road characteristics from roadmap tiles. The proposed joint CNNs are responsible for foreground/background image classification and for classifying the various types of road characteristics in the resulting foreground images, raising detection accuracy. The adaptive squares with combination rules help focus efficiently on road characteristics, augmenting the ability to detect them and providing optimal detection results. Five types of road characteristics—crossroads, T-junctions, Y-junctions, corners, and curves—are exploited, and experimental results demonstrate successful outcomes with outstanding performance on real data. The information on the detected road characteristics, with location and type, is thus converted from human-readable to machine-readable form; the results will benefit many applications such as feature-point reminders, road condition reports, and alert detection for users, drivers, and even autonomous vehicles. We believe this approach will also open a new path for object detection and geospatial information extraction from valuable map tiles.
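A schematic, runnable sketch of the two-stage decision described above: one classifier separates foreground (road-characteristic) squares from background, and a second assigns a type to each foreground square. The stub predictors below stand in for the two trained CNNs; everything here is an illustrative assumption.

```python
import numpy as np

ROAD_TYPES = ["crossroad", "T-junction", "Y-junction", "corner", "curve"]

def stub_fg_classifier(square):
    """Placeholder for the foreground/background CNN: call a square
    'foreground' if it contains enough dark (road) pixels."""
    return (square < 128).mean() > 0.05

def stub_type_classifier(square):
    """Placeholder for the type CNN: returns an index into ROAD_TYPES."""
    return int((square < 128).sum()) % len(ROAD_TYPES)

def classify_square(square):
    if not stub_fg_classifier(square):
        return None                      # background: no road characteristic
    return ROAD_TYPES[stub_type_classifier(square)]

tile_square = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(classify_square(tile_square))
```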


2021 ◽  
Vol 26 (1) ◽  
pp. 200-215
Author(s):  
Muhammad Alam ◽  
Jian-Feng Wang ◽  
Cong Guangpei ◽  
LV Yunrong ◽  
Yuanfang Chen

In recent years, the success of deep learning in natural scene image processing has boosted its application in the analysis of remote sensing images. In this paper, we apply Convolutional Neural Networks (CNN) to the semantic segmentation of remote sensing images. We improve the encoder-decoder CNN structures SegNet (with index pooling) and U-Net to make them suitable for multi-target semantic segmentation of remote sensing images. The results show that these two models have their own advantages and disadvantages in segmenting different objects. In addition, we propose an integrated algorithm that combines these two models. Experimental results show that the presented integrated algorithm can exploit the advantages of both models for multi-target segmentation and achieve better segmentation than either model alone.
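The abstract does not spell out the combination rule of the integrated algorithm; the sketch below shows one common way to integrate two semantic segmentation models (a SegNet-style and a U-Net-style network): average their per-pixel class probabilities and take the argmax. Shapes and class count are illustrative assumptions.

```python
import numpy as np

def fuse_predictions(logits_segnet, logits_unet):
    """Both inputs have shape (classes, H, W); returns an (H, W) label map."""
    def softmax(z):
        e = np.exp(z - z.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)
    probs = 0.5 * softmax(logits_segnet) + 0.5 * softmax(logits_unet)
    return probs.argmax(axis=0)

a = np.random.randn(5, 256, 256)    # 5 hypothetical target classes
b = np.random.randn(5, 256, 256)
print(fuse_predictions(a, b).shape)  # -> (256, 256)
```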


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4867
Author(s):  
Lu Chen ◽  
Hongjun Wang ◽  
Xianghao Meng

With the development of science and technology, neural networks, as an effective tool in image processing, are gradually playing an important role in remote-sensing image processing. However, training neural networks requires a large sample database; therefore, expanding datasets with limited samples has gradually become a research hotspot. The emergence of the generative adversarial network (GAN) provides new ideas for data expansion. Traditional GANs either require a large amount of input data or lack detail in the pictures they generate. In this paper, we modify a shuffle attention network and introduce it into a GAN to generate higher-quality pictures from limited inputs. In addition, we improve the existing resize method and propose an equal stretch resize method to solve the problem of image distortion caused by different input sizes. In the experiments, we also embed the newly proposed coordinate attention (CA) module into the backbone network as a control test. Qualitative assessment and six quantitative evaluation indexes were used to evaluate the experimental results, which show that, compared with other GANs used for picture generation, the modified Shuffle Attention GAN proposed in this paper can generate more refined, high-quality, and diversified aircraft pictures with more detailed object features from limited datasets.
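The abstract does not define the proposed equal stretch resize; as a point of reference for the distortion problem it addresses, the sketch below shows a standard distortion-free alternative that resizes while preserving the aspect ratio and pads to the target size. All values are illustrative assumptions, and this is not presented as the paper's method.

```python
import cv2
import numpy as np

def resize_keep_aspect(img, target=256, pad_value=0):
    """Resize so the longer side equals `target`, then pad to a square."""
    h, w = img.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.full((target, target, 3), pad_value, dtype=img.dtype)
    rh, rw = resized.shape[:2]
    top, left = (target - rh) // 2, (target - rw) // 2
    canvas[top:top + rh, left:left + rw] = resized
    return canvas

aircraft = np.random.randint(0, 256, (300, 500, 3), dtype=np.uint8)
print(resize_keep_aspect(aircraft).shape)  # -> (256, 256, 3)
```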

