Prediction of Human Activities Based on a New Structure of Skeleton Features and Deep Learning Model

Sensors, 2020, Vol 20 (17), pp. 4944
Author(s): Neziha Jaouedi, Francisco J. Perales, José Maria Buades, Noureddine Boujnah, Med Salim Bouhlel

The recognition of human activities is often considered a simple procedure, but problems arise in complex scenes involving high speeds. Activity prediction using Artificial Intelligence (AI) through numerical analysis has attracted the attention of several researchers. Human activity recognition poses an important challenge in various fields, with many valuable applications including smart homes, assistive robotics, human–computer interaction, and improved protection in areas such as security, transport, education, and medicine, for example through fall detection or assistance with medication intake for elderly people. The recent success of deep learning techniques in various computer vision applications encourages their use in video processing. Representing the person is a key challenge in analyzing human behavior through activity. A person in a video sequence can be described by their motion, skeleton, and/or spatial characteristics. In this paper, we present a novel approach to human activity recognition from videos that uses a Recurrent Neural Network (RNN) for activity classification and a Convolutional Neural Network (CNN), together with a new structure of the human skeleton, for feature representation. The aims of this work are to improve the human representation by combining different features and to exploit the new RNN structure for activity classification. The performance of the proposed approach is evaluated on the RGB-D sensor dataset CAD-60. The experimental results demonstrate the performance of the proposed approach through the average error rate obtained (4.5%).
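
The pipeline described — convolutional features computed from per-frame skeleton data, fed to a recurrent network for classification — can be sketched roughly as follows. This is an illustrative toy with untrained random weights, not the authors' architecture; the dimensions (15 joints × 3 coordinates, 12 CAD-60 activity classes) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(frame, kernels):
    """1D convolution of a frame's flattened joint coordinates (valid mode)."""
    return np.concatenate([np.convolve(frame, k, mode="valid") for k in kernels])

def rnn_classify(frames, Wx, Wh, Wo):
    """Elman RNN over per-frame feature vectors; returns activity probabilities."""
    h = np.zeros(Wh.shape[0])
    for f in frames:
        h = np.tanh(Wx @ f + Wh @ h)
    scores = Wo @ h
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Toy dimensions: 15 joints x 3 coordinates flattened = 45 values per frame.
T, J = 8, 45
kernels = [rng.standard_normal(3) for _ in range(2)]
feat_dim = 2 * (J - 3 + 1)                 # two kernels, valid convolution
Wx = rng.standard_normal((16, feat_dim)) * 0.1
Wh = rng.standard_normal((16, 16)) * 0.1
Wo = rng.standard_normal((12, 16)) * 0.1   # CAD-60 defines 12 activities

video = [conv_features(rng.standard_normal(J), kernels) for _ in range(T)]
probs = rnn_classify(video, Wx, Wh, Wo)
```

The softmax at the end turns the final hidden state into a distribution over activity labels, one probability per class.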

Computers, 2020, Vol 9 (4), pp. 99
Author(s): Sultan Daud Khan, Louai Alarabi, Saleh Basalamah

COVID-19 caused the largest economic recession in history by placing more than one third of the world's population in lockdown. The prolonged restrictions on economic and business activities caused huge economic turmoil that significantly affected the financial markets. To ease the growing pressure on the economy, scientists proposed intermittent lockdowns, commonly known as "smart lockdowns". Under a smart lockdown, areas that contain infected clusters of population, namely hotspots, are placed under lockdown, while economic activities are allowed to operate in uninfected areas. In this study, we propose a novel deep learning framework for the accurate prediction of hotspots. We exploit the benefits of two deep learning models, the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), and propose a hybrid framework that extracts multi-time-scale features from the convolutional layers of the CNN. The multi-time-scale features are then concatenated and provided as input to a two-layer LSTM model. The LSTM model identifies short-, medium-, and long-term dependencies by learning a representation of the time-series data. We perform a series of experiments and compare the proposed framework with other state-of-the-art statistical and machine-learning-based prediction models. The experimental results demonstrate that the proposed framework beats the existing methods by a clear margin.
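
The multi-time-scale feature idea — convolving the same case-count series with kernels of several widths and concatenating the results before the LSTM — can be sketched minimally. The smoothing kernels and window widths (3/7/14 days) below are assumptions for illustration, not the paper's actual filters.

```python
import numpy as np

def multi_scale_features(series, kernel_sizes):
    """Convolve a 1D case-count series with mean kernels of several widths
    and stack the results, padded to a common length, as LSTM input."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                  # simple moving-average kernel
        feats.append(np.convolve(series, kernel, mode="same"))
    return np.stack(feats, axis=-1)              # shape: (time, n_scales)

# Fake daily case counts for a single hotspot region.
daily_cases = np.abs(np.random.default_rng(1).normal(100, 20, size=60))
X = multi_scale_features(daily_cases, kernel_sizes=[3, 7, 14])
```

Each time step now carries one feature per scale, so a recurrent layer sees short- and long-horizon context simultaneously.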


Author(s): Antoine Van Biesbroeck, Feifei Shang, David Bassir

Computer-aided design (CAD) models are widely employed in current computer-aided engineering and finite element analysis (FEA) systems, which require an optimal mesh as a function of the model geometry. To this effect, the sub-mapping method is advantageous, as it segments the CAD model into different sub-parts with the aim of meshing them independently. Many of the existing 3D shape segmentation methods in the literature are not suited to CAD models. Therefore, we propose a novel approach for the segmentation of CAD models that harnesses deep learning technologies. First, we refine the model and extract local geometric features from its shape. Subsequently, we devise a convolutional neural network (CNN)-inspired neural network trained on a custom dataset. Experimental results demonstrate the robustness of our approach and its potential to adapt to augmented datasets in the future.
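
The abstract does not say which local geometric features are extracted; one standard choice on a triangulated CAD surface is the per-face unit normal. A minimal sketch under that assumption:

```python
import numpy as np

def face_normals(vertices, faces):
    """Unit normals of triangular faces — a basic local geometric feature
    often fed to mesh segmentation networks."""
    v = vertices[faces]                                  # (n_faces, 3, 3)
    n = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])   # face normal direction
    return n / np.linalg.norm(n, axis=1, keepdims=True)  # normalize to unit length

# Two triangles tiling the unit square in the z = 0 plane.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
normals = face_normals(verts, faces)
```

For a flat patch both normals point along +z; discontinuities in this feature across neighboring faces are exactly what a segmentation network can pick up on.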


2020
Author(s): Y.L. Wang, Y.-C. Lin

Abstract
Cells interact mechanically with their surroundings by exerting forces and sensing forces or force-induced displacements. Traction force microscopy (TFM), purported to map cell-generated forces or stresses, represents an important tool that has powered the rapid advances in mechanobiology. However, to solve the ill-posed mathematical problem, its implementation has involved regularization and the associated compromises in accuracy and resolution. Here we applied neural-network-based deep learning as a novel approach for TFM. We modified a network for processing images to process vector fields of stress and strain. Furthermore, we adapted a mathematical model of cell migration to generate large sets of simulated stresses and strains for training the network. We found that deep-learning-based TFM yielded results qualitatively similar to those from conventional methods but at a higher accuracy and resolution. The speed and performance of deep learning TFM make it an appealing alternative to conventional methods for characterizing mechanical interactions between cells and their environment.

Statement of Significance
Traction force microscopy has served as a fundamental driving force for mechanobiology. However, its nature as an ill-posed inverse problem has posed serious challenges for conventional mathematical approaches. The present study, facilitated by large sets of simulated stresses and strains, describes a novel approach using deep learning for the calculation of the traction stress distribution. By adapting the UNet neural network to handle vector fields, we show that deep learning is able to overcome many of the limitations of conventional approaches and to generate results with speed, accuracy, and resolution.
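
Adapting an image network to vector fields largely amounts to feeding the two displacement components as two input channels, the way RGB channels are fed. A minimal sketch of that packing, with toy sinusoidal fields standing in for real strain data:

```python
import numpy as np

def to_two_channel(u, v):
    """Pack x- and y-displacement maps into one channels-first array,
    the input layout a UNet-style image network expects."""
    assert u.shape == v.shape
    return np.stack([u, v], axis=0)          # shape: (2, H, W)

H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
u = np.sin(2 * np.pi * xx / W)               # toy x-displacement field
v = np.cos(2 * np.pi * yy / H)               # toy y-displacement field
field = to_two_channel(u, v)
```

The network's output would use the same two-channel layout for the predicted stress field.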


2021, Vol 2021, pp. 1-11
Author(s): Bin Zheng, Tao Huang

In order to achieve accurate mango grading, a grading system was designed using deep learning. The system mainly includes CCD camera image acquisition, image preprocessing, model training, and model evaluation. Whereas traditional deep learning requires large sample datasets for neural network training, a convolutional neural network is proposed here to achieve efficient grading of mangoes through continuous adjustment and optimization of the hyperparameters and batch size. The ultra-lightweight SqueezeNet architecture is introduced; compared with AlexNet and other algorithms of the same accuracy level, it has the advantages of a small model size and fast operation. The experimental results show that the convolutional neural network model, after hyperparameter optimization and adjustment, performs excellently in deep learning image processing on a small sample dataset. Two hundred thirty-four Jinhuang mangoes from Panzhihua were picked in the natural environment and tested. The analysis results meet the requirements of the agricultural industry standard of the People's Republic of China for mango and mango grade specification. The average accuracy rate was 97.37%, the average error rate was 2.63%, and the average loss value of the model was 0.44. The processing time for an original image with a resolution of 500 × 374 was only 2.57 milliseconds. This method has important theoretical and application value and can provide a powerful means for automatic mango grading.
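
The "continuous adjustment and optimization of hyperparameters and batch size" can be sketched as a plain grid search over (learning rate, batch size) pairs. The candidate values and the scoring function below are hypothetical stand-ins for a real train-and-validate run:

```python
import itertools

def grid_search(train_eval, learning_rates, batch_sizes):
    """Try every (learning rate, batch size) pair and keep the best accuracy."""
    best = None
    for lr, bs in itertools.product(learning_rates, batch_sizes):
        acc = train_eval(lr, bs)
        if best is None or acc > best[0]:
            best = (acc, lr, bs)
    return best

# Hypothetical stand-in for training a CNN and reading validation accuracy.
def fake_train_eval(lr, bs):
    return 0.9 - abs(lr - 0.001) * 10 - abs(bs - 32) / 1000

best_acc, best_lr, best_bs = grid_search(fake_train_eval, [0.01, 0.001], [16, 32])
```

On a small dataset, each candidate configuration is cheap enough to evaluate exhaustively, which is what makes this brute-force loop practical.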


Author(s): S. Karthickkumar, K. Kumar

In recent years, deep learning for human action recognition has been one of the most popular research areas. It has a variety of applications, such as surveillance, health care, consumer behavior analysis, and robotics. In this paper, we propose a Two-Dimensional (2D) Convolutional Neural Network for recognizing human activities. The WISDM dataset is used to train and test the model. The activities include sitting, standing, downstairs, upstairs, and running. Our 2D-CNN based method achieves a human activity recognition accuracy of 93.17%.
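
Before a 2D CNN can see accelerometer streams like WISDM's, the raw (x, y, z) samples are typically cut into fixed-length overlapping windows that act as the "images". A minimal sketch of that preprocessing (the window and step sizes are illustrative choices, not the paper's):

```python
def sliding_windows(samples, window, step):
    """Split a stream of (x, y, z) accelerometer samples into fixed-length,
    possibly overlapping windows — the usual input units for a 2D CNN."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, step)]

# Fake 3-axis readings: 100 samples of (x, y, z).
stream = [(0.1 * i, 9.8, 0.0) for i in range(100)]
windows = sliding_windows(stream, window=40, step=20)
```

With a step of half the window length, consecutive windows overlap by 50%, a common trade-off between dataset size and redundancy.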


Author(s): Nina Narodytska

Understanding the properties of deep neural networks is an important challenge in deep learning. Deep learning networks are among the most successful artificial intelligence technologies and are making an impact in a variety of practical applications. However, many concerns have been raised about the `magical' power of these networks. It is disturbing that we really lack an understanding of the decision-making process behind this technology. Therefore, a natural question is whether we can trust the decisions that neural networks make. One way to address this issue is to define properties that we want a neural network to satisfy. Verifying whether a neural network fulfills these properties sheds light on the properties of the function that it represents. In this work, we take the verification approach. Our goal is to design a framework for the analysis of properties of neural networks. We start by defining a set of interesting properties to analyze. Then we focus on Binarized Neural Networks, which can be represented and analyzed using the well-developed machinery of Boolean Satisfiability and Integer Linear Programming. One of our main results is an exact representation of a binarized neural network as a Boolean formula. We also discuss how we can take advantage of the structure of neural networks in the search procedure.
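
The key observation behind such encodings is that a single binarized neuron is a cardinality constraint over Boolean literals, which any SAT solver can handle. A toy sketch (not the paper's actual encoding) that checks, by exhaustive enumeration over all inputs, that a neuron and its Boolean-formula representation agree:

```python
import itertools

def bnn_neuron(x, w, threshold):
    """Binarized neuron: fires iff the number of inputs agreeing with the
    weights (x_i == w_i) reaches the threshold — a cardinality constraint."""
    return sum(xi == wi for xi, wi in zip(x, w)) >= threshold

def boolean_formula(x, w, threshold):
    """Equivalent Boolean formula: a disjunction over all literal subsets of
    size >= threshold in which every chosen literal agrees with its weight."""
    n = len(w)
    lits = [xi == wi for xi, wi in zip(x, w)]
    return any(all(lits[i] for i in subset)
               for k in range(threshold, n + 1)
               for subset in itertools.combinations(range(n), k))

w = (True, False, True)
agree = all(bnn_neuron(x, w, 2) == boolean_formula(x, w, 2)
            for x in itertools.product([False, True], repeat=3))
```

The naive disjunction above blows up combinatorially; practical encodings use compact cardinality circuits instead, but the semantics checked here is the same.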


Author(s): Kalyani A. Sonwane

Self-driving cars have become a trending subject thanks to big improvements in technology over the last decade. The project aims to train a neural network to drive an autonomous car agent on the tracks of Udacity's Car Simulator. Udacity released the simulator as open-source software, and enthusiasts have hosted a competition (challenge) to teach a car how to drive using only camera images and deep learning. Driving a car autonomously requires learning to control the steering angle, throttle, and brakes. The behavioral cloning technique is employed to mimic human driving behavior when training the model on the track: a dataset is generated in the simulator by a user-driven car in training mode, and the deep neural network model then drives the car in autonomous mode. Three architectures are compared with regard to their performance. Although the models performed well on the track they were trained with, the real challenge was to generalize this behavior to a second track available in the simulator. The dataset for Track_1, which was straightforward with favorable road conditions, was used as the training set to drive the car autonomously on Track_2, which consists of sharp turns, barriers, elevations, and shadows. Image processing and different augmentation techniques were used to tackle this problem, allowing as much information and as many features as possible to be extracted from the data. Ultimately, the car was able to run on Track_2, generalizing well. The project aims at reaching equivalent accuracy on real-time data in the future.
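
One augmentation that is standard in behavioral cloning setups like this (though the abstract does not name the specific techniques used) is horizontal flipping: mirroring the camera image and negating the steering label doubles the data and removes left/right turn bias. A minimal sketch:

```python
import numpy as np

def augment(image, steering):
    """Mirror the camera frame left-to-right and negate the steering angle —
    a flipped recording of the same driving behavior."""
    return image[:, ::-1].copy(), -steering

# Tiny stand-in for an H x W x C camera image.
frame = np.arange(12, dtype=float).reshape(2, 3, 2)
flipped, angle = augment(frame, steering=0.25)
```

Brightness jitter and random translations (with a corresponding steering correction) are other common additions for handling shadows and off-center recovery.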


2020, Vol 10 (1)
Author(s): L. Hervé, D. C. A. Kraemer, O. Cioni, O. Mandula, M. Menneteau, ...

Abstract
A lens-free microscope is a simple imaging device that performs in-line holographic measurements. In the absence of focusing optics, a reconstruction algorithm is used to retrieve the sample image by solving the inverse problem. This is usually performed by optimization algorithms relying on gradient computation. However, the presence of local minima leads to unsatisfactory convergence when phase wrapping errors occur. This is particularly the case for samples with large optical thickness, for example cells in suspension and cells undergoing mitosis. To date, the occurrence of phase wrapping errors in the holographic reconstruction has limited the application of lens-free microscopy in live cell imaging. To overcome this issue, we propose a novel approach in which the reconstruction alternates between two methods: an inverse problem optimization and deep learning. The computation starts with a first reconstruction guess of the cell sample image. The result is then fed into a neural network trained to correct phase wrapping errors. The neural network prediction is next used as the initialization of a second and last reconstruction step, which corrects to a certain extent the errors of the neural network prediction. We demonstrate the applicability of this approach in solving the phase wrapping problem that occurs with cells in suspension at large densities, a challenging sample that typically cannot be reconstructed without phase wrapping errors when using inverse problem optimization alone.
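
The underlying difficulty is easy to reproduce in one dimension: a measurement only observes the optical phase modulo 2π, so any phase excursion beyond ±π is "wrapped" and must be recovered. A minimal numpy demonstration (a toy 1D ramp, not holographic data):

```python
import numpy as np

# A thick sample produces a phase well beyond pi; the detector effectively
# sees only the wrapped phase in (-pi, pi], creating the ambiguity the
# reconstruction has to resolve.
true_phase = np.linspace(0, 6 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))   # wrap into (-pi, pi]
recovered = np.unwrap(wrapped)                # classic 1D phase unwrapping
```

Classic unwrapping works here because the 1D samples are dense and noise-free; in 2D reconstructions with dense cell samples the local-minimum problem described above is what the neural network step is meant to fix.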


Sensors, 2021, Vol 21 (11), pp. 3845
Author(s): Ankita, Shalli Rani, Himanshi Babbar, Sonya Coleman, Aman Singh, ...

Traditional pattern recognition approaches have gained a lot of popularity. However, they depend largely on manual feature extraction, which makes the resulting models hard to generalize. Sequences of accelerometer data recorded by smartphones can be classified into well-known movements, which is the task of human activity recognition (HAR). With the high success and wide adoption of deep learning approaches for the recognition of human activities, these techniques are widely used in wearable devices and smartphones. In this paper, convolutional layers are combined with long short-term memory (LSTM) in a deep neural network for human activity recognition. The proposed model extracts the features in an automated way and categorizes them with some model attributes. In general, LSTM is a variant of the recurrent neural network (RNN) known for processing temporal sequences. In the proposed architecture, the UCI-HAR dataset, recorded with a Samsung Galaxy S2, is used for various human activities. The CNN and LSTM models are arranged in series: for each input, the CNN model is applied, and its output for each input image is passed to the LSTM classifier as a time step. The number of filter maps, which map the various portions of the image, is the most important hyperparameter used. The observations are transformed using Gaussian standardization. The proposed CNN-LSTM model is efficient and lightweight and has shown higher robustness and better activity detection capability than traditional algorithms, achieving an accuracy of 97.89%.
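
The Gaussian standardization mentioned above is per-channel zero-mean, unit-variance scaling of the sensor data. A minimal sketch on synthetic 3-axis accelerometer readings:

```python
import numpy as np

def gaussian_standardize(X):
    """Scale each sensor channel to zero mean and unit variance."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma

# Synthetic 3-axis accelerometer data: 500 samples, gravity on the y axis.
rng = np.random.default_rng(2)
accel = rng.normal(loc=[0.0, 9.8, 0.3], scale=[1.0, 0.5, 2.0], size=(500, 3))
Z = gaussian_standardize(accel)
```

In practice the mean and standard deviation are computed on the training split only and reused for validation and test data, so no test-set statistics leak into training.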


2021
Author(s): Itoro Udofort Koffi

Abstract
Accurate knowledge of Pressure-Volume-Temperature (PVT) properties is crucial in reservoir and production engineering computational applications. One of these properties is the oil formation volume factor (Bo), which plays a significant role in calculating prominent petroleum engineering terms and parameters, such as the depletion rate, oil in place, reservoir simulation, the material balance equation, well testing, and reservoir production calculations. These properties are ideally measured experimentally in the laboratory, based on downhole or recommended surface samples. Faster and cheaper methods are important for real-time decision making, and empirically developed correlations are used in the prediction of this property. This work is aimed at developing a more accurate prediction method than the common ones. The prediction method is based on a supervised deep neural network that estimates the oil formation volume factor at bubble point pressure as a function of the gas-oil ratio, gas gravity, specific oil gravity, and reservoir temperature. Deep learning is applied in this paper to address the inaccuracy of empirically derived correlations used for predicting the oil formation volume factor; neural networks can find hidden patterns in the data that cannot be found otherwise. A multi-layer neural network was used for the prediction via the Anaconda programming environment. Two deep learning frameworks, TensorFlow and Keras, were utilized, with PVT variables selected as input neurons, while employing early stopping, which uses a part of the data not fed to the model to monitor its performance and prevent overfitting. In the modelling process, a dataset of 2994 samples retrieved from the Niger Delta region was used. The dataset was randomly divided into three parts: 60% for training, 20% for validation, and 20% for testing.
The results predicted by the network outperformed existing correlations on the statistical parameters used for the same set of field data. The network has a mean absolute error of 0.05, which is the lowest compared with the errors generated by other correlation models. Based on the findings of this work, the predictive capability of this network is higher than that of existing models.
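
The data-handling steps described — a random 60/20/20 train/validation/test split and early stopping on held-out performance — can be sketched framework-independently. The patience value and loss curve below are illustrative, not from the paper:

```python
import random

def split_60_20_20(data, seed=0):
    """Shuffle and split a dataset 60/20/20 into train/validation/test."""
    data = data[:]                        # copy so the caller's list is untouched
    random.Random(seed).shuffle(data)
    n = len(data)
    a, b = int(0.6 * n), int(0.8 * n)
    return data[:a], data[a:b], data[b:]

def early_stopping(val_losses, patience=3):
    """Epoch index at which to stop: validation loss has not improved
    for `patience` consecutive epochs."""
    best, best_i = float("inf"), 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, best_i = loss, i
        elif i - best_i >= patience:
            return i
    return len(val_losses) - 1

train, val, test = split_60_20_20(list(range(2994)))
stop = early_stopping([0.9, 0.7, 0.6, 0.62, 0.61, 0.63, 0.64])
```

Keras users would express the same idea with the `EarlyStopping` callback monitoring `val_loss`; the logic is identical to the loop above.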

