Fall detection system based on BiLSTM neural network

2019 ◽  
Vol 7 (5) ◽  
pp. 01-12
Author(s):  
Biao YE ◽  
Lasheng Yu

The purpose of this article is to analyze the characteristics of human fall behavior in order to design a fall detection system. Existing fall detection algorithms suffer from poor adaptability, single-purpose design, and difficulty in handling large volumes of highly random data. Therefore, a bidirectional long short-term memory (BiLSTM) recurrent neural network is used to improve fall detection by exploiting the internal correlations within sensor data. First, a serialized representation of the sensor data, training data, and detection input data is designed; the BiLSTM network, with its strong sequence-modeling ability, is used to reduce the dimensionality of the data required by the fall detection model. Then, the BiLSTM training algorithm and the BiLSTM-based fall detection algorithm recast fall detection as a classification problem over input sequences. Finally, the BiLSTM-based fall detection system was implemented on the TensorFlow platform and evaluated on a bionic experiment data set that mimics falls. The experimental results verify that the system effectively improves fall detection accuracy to 90.47%. It can also effectively detect near-fall behavior, which helps in taking timely protective measures.
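The serialization step above, which turns a continuous sensor stream into fixed-length sequences for the BiLSTM classifier, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, window length, stride, and 50 Hz / 3-axis assumptions are all hypothetical.

```python
import numpy as np

def serialize_windows(samples, window=128, stride=64):
    """Slice a (T, channels) sensor stream into overlapping
    fixed-length sequences suitable for a BiLSTM classifier.
    Window and stride values here are illustrative only."""
    windows = []
    for start in range(0, len(samples) - window + 1, stride):
        windows.append(samples[start:start + window])
    return np.stack(windows)  # shape: (n_windows, window, channels)

# 10 s of synthetic 3-axis accelerometer data at an assumed 50 Hz
stream = np.random.randn(500, 3)
seqs = serialize_windows(stream)
print(seqs.shape)  # (6, 128, 3)
```

Each row of `seqs` is one input sequence for the classifier; overlapping windows increase the number of training examples from a limited recording.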

2021 ◽  
pp. 1-12
Author(s):  
Qian Wang ◽  
Wenfang Zhao ◽  
Jiadong Ren

An Intrusion Detection System (IDS) can reduce the losses caused by intrusions and protect users' information security. The effectiveness of an IDS depends on the performance of the algorithm used to identify intrusions, and traditional machine learning algorithms struggle with intrusion data that is high-dimensional, nonlinear, and imbalanced. Therefore, this paper proposes an Intrusion Detection algorithm based on an Image-Enhanced Convolutional Neural Network (ID-IE-CNN). First, drawing on deep learning image-processing techniques, an oversampling method increases the amount of original data to balance the classes. Second, the one-dimensional data is converted into two-dimensional image data, and convolutional and pooling layers extract the main features of the image to reduce dimensionality. Third, the Tanh activation function is introduced to fit nonlinear data, a fully connected layer integrates local information, and the Dropout method improves the generalization ability of the prediction model. Finally, a Softmax classifier predicts the intrusion class. Evaluated on the KDDCup99 data set, ID-IE-CNN outperforms competitive algorithms in both binary and multi-class classification, which verifies its superiority.
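The 1-D to 2-D conversion described above is commonly done by zero-padding the feature vector to a perfect square and reshaping it; a minimal sketch under that assumption (the paper does not specify its exact mapping, and the 41-feature example simply reflects the raw KDDCup99 column count):

```python
import numpy as np

def to_image(record, side=None):
    """Pad a 1-D feature vector with zeros and reshape it into a
    square 2-D 'image' for a CNN. The zero-padding-to-square scheme
    is an assumption, not necessarily the paper's exact mapping."""
    n = len(record)
    if side is None:
        side = int(np.ceil(np.sqrt(n)))  # smallest square that fits
    padded = np.zeros(side * side, dtype=float)
    padded[:n] = record
    return padded.reshape(side, side)

img = to_image(np.arange(1, 42, dtype=float))  # 41 features -> 7x7 image
print(img.shape)  # (7, 7)
```

The resulting 2-D array can then be fed to standard convolution/pooling layers exactly like a grayscale image.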


Webology ◽  
2021 ◽  
Vol 18 (2) ◽  
pp. 509-518
Author(s):  
Payman Hussein Hussan ◽  
Syefy Mohammed Mangj Al-Razoky ◽  
Hasanain Mohammed Manji Al-Rzoky

This paper presents an efficient method for finding fractures in bones. The pre-processing stage increases image quality, removes extraneous objects, removes noise, and rotates the images. The images then enter the machine learning phase for final fracture detection. At this stage, a Convolutional Neural Network is constructed by Genetic Programming (GP): learning models are implemented as GP programs, evolve over the course of the GP run, and the best program for classifying incoming images is finally selected. The data set in this work is divided into disjoint training and test sets with no samples in common, at a ratio of 80 to 20. Experimental results show that the proposed method performs well on bone fracture detection.
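The disjoint 80/20 split mentioned above can be sketched as a random index partition; this is a generic illustration (the function name and seed are hypothetical), not the paper's own splitting code.

```python
import numpy as np

def disjoint_split(n_samples, train_ratio=0.8, seed=0):
    """Randomly partition sample indices into disjoint train/test
    sets at an 80/20 ratio. Seed fixed only for reproducibility."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(n_samples * train_ratio)
    return idx[:cut], idx[cut:]

train_idx, test_idx = disjoint_split(100)
print(len(train_idx), len(test_idx))  # 80 20
```

Because the indices come from a single permutation, the two sets are guaranteed to have nothing in common.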


Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2481
Author(s):  
Ngoc-Bao-Van Le ◽  
Jun-Ho Huh

Product reviews have become increasingly important in customers' buying decisions, and exploiting and analyzing the sentiment of customer product reviews has become an advantage for businesses and researchers on e-commerce platforms. This study proposes a sentiment evaluation model for customer reviews that extracts objects and emotional words for sentiment-level analysis using machine learning algorithms. The research object is the Vietnamese language, which has distinctive semantic structures and characteristics. In this model, emotional dictionaries and sets of extraction rules are combined to build a training data set based on the semantic dependency relationships between words in Vietnamese sentences. A recurrent neural network (RNN), specifically a long short-term memory (LSTM) network, solves the sentiment analysis task; the model combines vector representations of words learned with a continuous bag-of-words (CBOW) architecture. Our system crawls real data from an e-commerce website and aggregates it automatically. The data is stored in MongoDB before being processed and fed to the model on the server. The system then extracts features from product reviews and classifies them; these features come from feedback at different shopping steps and depend on the kind of product. Finally, a web app connects to the server and visualizes the results. Based on these results, enterprises can follow their customers in real time and receive recommendations to understand them better, improving their services and providing sustainable consumer service.
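The dictionary-plus-rules labeling idea can be sketched with a toy polarity dictionary and a single negation rule. Everything here is hypothetical: the entries stand in for the paper's Vietnamese emotional dictionaries (English tokens used for readability), and the adjacency-based negation check is only a crude stand-in for true dependency parsing.

```python
# Hypothetical emotion dictionary and one extraction rule (negation
# flips polarity), standing in for the paper's Vietnamese resources.
EMOTION_DICT = {"good": 1, "great": 2, "bad": -1, "terrible": -2}
NEGATIONS = {"not", "never"}

def score_review(tokens):
    """Sum dictionary polarities; a negation word directly before an
    emotional word inverts its sign (a simplified dependency rule)."""
    score = 0
    for i, tok in enumerate(tokens):
        if tok in EMOTION_DICT:
            polarity = EMOTION_DICT[tok]
            if i > 0 and tokens[i - 1] in NEGATIONS:
                polarity = -polarity
            score += polarity
    return score

print(score_review("the battery is not good".split()))  # -1
print(score_review("great product".split()))            # 2
```

Labels produced this way can bootstrap a training set for the LSTM, which then generalizes beyond the dictionary.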


CONVERTER ◽  
2021 ◽  
pp. 64-73
Author(s):  
Yang Dong

Many algorithms, especially deep learning models, are used to improve the performance of intrusion detection systems (IDS). This paper presents an algorithm based on a multi-layer perceptron (MLP) model trained on the KDD99 data set: the raw categorical data is vectorized by one-hot encoding, the numeric features are standardized by Z-score, the resulting feature vectors are fed to the multi-layer perceptron network for feature learning, and finally a classifier model is trained for detection. Traditional network anomaly detection models rely mainly on manual feature selection, and their classification accuracy and efficiency are low. This article applies a multilayer perceptron trained with the Adam optimizer and completes tests on the KDD99 data set, reaching an accuracy of 99%. For future network anomaly detection work, this provides an algorithm model capable of real-time online detection, with higher accuracy and better real-time performance.
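The one-hot and Z-score preprocessing steps named above can be sketched as follows; the `protocol_type` example and toy values are illustrative, not taken from the paper.

```python
import numpy as np

def one_hot(values, vocab):
    """One-hot encode a categorical column (e.g. KDD99 'protocol_type')."""
    out = np.zeros((len(values), len(vocab)))
    index = {v: i for i, v in enumerate(vocab)}
    for row, v in enumerate(values):
        out[row, index[v]] = 1.0
    return out

def z_score(col):
    """Standardize a numeric column to zero mean, unit variance."""
    std = col.std()
    return (col - col.mean()) / std if std > 0 else col - col.mean()

proto = one_hot(["tcp", "udp", "tcp", "icmp"], vocab=["tcp", "udp", "icmp"])
duration = z_score(np.array([0.0, 2.0, 4.0, 6.0]))
print(proto.shape, round(float(duration.mean()), 6))  # (4, 3) 0.0
```

Concatenating the one-hot blocks with the standardized numeric columns yields the feature vector the MLP consumes.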


2019 ◽  
Vol 11 (11) ◽  
pp. 243 ◽  
Author(s):  
Wenjie Zhang ◽  
Pin Wu ◽  
Yan Peng ◽  
Dongke Liu

The prediction of roll motion in unmanned surface vehicles (USVs) is vital for marine safety and the efficiency of USV operations. However, USV roll motion at sea is a complex, time-varying, nonlinear and non-stationary dynamic system that varies with environmental disturbances and sailing conditions. Conventional methods suffer from low accuracy, poor robustness, and limited practical applicability. The rise of deep learning provides new opportunities for USV motion modeling and prediction. In this paper, a data-driven neural network model is constructed by combining a convolutional neural network (CNN) with long short-term memory (LSTM) for USV roll motion prediction. The CNN extracts spatially relevant and local time-series features from the USV sensor data. The LSTM layer captures the long-term movement process of the USV and predicts the roll motion at the next moment. A fully connected layer decodes the LSTM output and computes the final prediction. The effectiveness of the proposed model was demonstrated in roll motion prediction experiments based on two case studies from the “JingHai-VI” and “JingHai-III” USVs of Shanghai University. Experimental results on a real data set indicate that the proposed model clearly outperforms state-of-the-art methods.
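Training data for a next-moment predictor like the one above is typically built as (history window, next value) pairs; a minimal sketch with a synthetic roll signal (the window length and sine-wave stand-in are assumptions, not the paper's setup):

```python
import numpy as np

def make_forecast_pairs(series, window=32):
    """Turn a roll-angle time series into (history, next value) pairs:
    the supervised examples a CNN-LSTM predictor trains on."""
    X, y = [], []
    for t in range(len(series) - window):
        X.append(series[t:t + window])   # past `window` samples
        y.append(series[t + window])     # roll at the next moment
    return np.array(X), np.array(y)

roll = np.sin(np.linspace(0, 20, 200))  # synthetic roll motion
X, y = make_forecast_pairs(roll)
print(X.shape, y.shape)  # (168, 32) (168,)
```

Each `X[i]` is the model input and `y[i]` the one-step-ahead target; at sea, the same windowing runs online over the live sensor stream.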


Author(s):  
R. Zahn ◽  
C. Breitsamter

In the present study, a nonlinear system identification approach based on a long short-term memory (LSTM) neural network is applied to the prediction of transonic buffet aerodynamics. The identification approach is applied as a reduced-order modeling (ROM) technique for an efficient computation of time-varying integral quantities such as aerodynamic force and moment coefficients. The nonlinear identification procedure as well as the generalization of the ROM are presented. The training data set for the LSTM–ROM is provided by performing forced-motion unsteady Reynolds-averaged Navier–Stokes simulations. Subsequent to the training process, the ROM is applied to the computation of the aerodynamic integral quantities associated with transonic buffet. The performance of the trained ROM is demonstrated by computing the aerodynamic loads of the NACA0012 airfoil at transonic freestream conditions. In contrast to previous studies considering only a pitching excitation, both the pitch and plunge degrees of freedom of the airfoil are individually and simultaneously excited by means of a user-defined training signal; therefore, strong nonlinear effects are considered in the training of the ROM. Comparison with a full-order computational fluid dynamics solution indicates a good prediction capability of the presented ROM method. However, compared to the results of previous studies including only the airfoil pitching excitation, a slightly reduced prediction performance is shown.
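A user-defined training signal of the kind mentioned above is often a multi-sine sweep covering the frequency band of interest; the sketch below illustrates that generic idea only. All frequencies, amplitudes, and phases are invented, and the paper's actual signal design is not reproduced here.

```python
import numpy as np

def excitation_signal(t, freqs, amps, phases):
    """A multi-sine excitation: a sum of sinusoids covering a chosen
    frequency band, used to force one degree of freedom for training."""
    return sum(a * np.sin(2 * np.pi * f * t + p)
               for f, a, p in zip(freqs, amps, phases))

t = np.linspace(0.0, 10.0, 1000)
pitch = excitation_signal(t, freqs=[0.5, 1.2, 2.0],
                          amps=[1.0, 0.5, 0.25], phases=[0.0, 0.7, 1.4])
plunge = excitation_signal(t, freqs=[0.3, 0.9],
                           amps=[0.8, 0.4], phases=[0.2, 1.1])
combined = np.stack([pitch, plunge], axis=1)  # simultaneous excitation
print(combined.shape)  # (1000, 2)
```

Feeding the forced-motion CFD solver both columns at once is what exposes the ROM to the combined pitch-plunge nonlinearities.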


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time-series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, motivated by the Fourier conversion. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependencies in the signal and produces encoded sequences. The sequences, once arranged into a 2D array, represent fingerprints of the signals. The benefit of this transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. On the first, more standardized data set, our model outperforms previous works or at least matches them. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show accuracy above 95% in some cases. We also analyze the effect of these parameters on performance.
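The window-size / sliding-size / labeling-scheme data generation can be sketched as below. The two labeling schemes shown (majority vote vs. last sample) are common conventions offered as plausible examples; the paper's exact schemes and parameter values are not specified here.

```python
import numpy as np

def label_windows(signal, labels, window=64, slide=32, scheme="majority"):
    """Segment a labeled activity signal into windows, assigning each
    window a label by majority vote or by the last sample's label.
    Window/slide values and scheme names are illustrative."""
    X, y = [], []
    for start in range(0, len(signal) - window + 1, slide):
        seg_labels = labels[start:start + window]
        if scheme == "majority":
            lab = np.bincount(seg_labels).argmax()
        else:  # "last"
            lab = seg_labels[-1]
        X.append(signal[start:start + window])
        y.append(lab)
    return np.array(X), np.array(y)

sig = np.random.randn(300)
labs = np.array([0] * 150 + [1] * 150)  # activity switch at sample 150
X, y = label_windows(sig, labs)
print(X.shape, y[0], y[-1])  # (8, 64) 0 1
```

Varying `window`, `slide`, and `scheme` regenerates the training/testing data, which is how the parameter study in the paper's second experiment can be reproduced in spirit.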


AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 48-70
Author(s):  
Wei Ming Tan ◽  
T. Hui Teo

Prognostic techniques attempt to predict the Remaining Useful Life (RUL) of a subsystem or component. Such techniques often use sensor data that is periodically measured and recorded into a time-series data set; such multivariate data sets form complex, nonlinear inter-dependencies across recorded time steps and between sensors. Many existing prognostic algorithms have started to explore Deep Neural Networks (DNNs) and their effectiveness in this field. Although Deep Learning (DL) techniques outperform traditional prognostic algorithms, the networks are generally complex to deploy or train. This paper proposes a Multi-variable Time Series (MTS)-focused approach to prognostics that implements a lightweight Convolutional Neural Network (CNN) with an attention mechanism. The convolution filters extract abstract temporal patterns from the multiple time series, while the attention mechanism reviews the information across the time axis and selects the relevant information. The results suggest that the proposed method not only produces superior RUL estimation accuracy but also trains many times faster than the reported works. The superiority of deploying the network is also demonstrated on a lightweight hardware platform, where it is not just more compact but also more efficient for resource-restricted environments.
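The "review across the time axis and select" behavior of attention can be sketched as a softmax-weighted sum over time steps. This is a generic dot-product attention illustration, not the paper's specific mechanism; the query vector and dimensions are arbitrary.

```python
import numpy as np

def temporal_attention(features, query):
    """Score each time step against a query vector, softmax the scores
    over the time axis, and return the weighted sum as a context vector:
    the core 'review and select' step of an attention mechanism."""
    scores = features @ query                 # one score per time step, (T,)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ features, weights        # context: (d,), weights: (T,)

T, d = 30, 8
feats = np.random.randn(T, d)                 # e.g. CNN feature maps over time
context, w = temporal_attention(feats, query=np.ones(d))
print(context.shape, round(float(w.sum()), 6))  # (8,) 1.0
```

Time steps with high scores dominate the context vector, so the downstream RUL regressor sees mostly the informative part of the history.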


Author(s):  
Yanxiang Yu ◽  
◽  
Chicheng Xu ◽  
Siddharth Misra ◽  
Weichang Li ◽  
...  

Compressional and shear sonic traveltime logs (DTC and DTS, respectively) are crucial for subsurface characterization and seismic-well tie. However, these two logs are often missing or incomplete in many oil and gas wells. Therefore, many petrophysical and geophysical workflows include sonic log synthetization or pseudo-log generation based on multivariate regression or rock physics relations. From March 1 to May 7, 2020, the SPWLA PDDA SIG hosted a contest aiming to predict the DTC and DTS logs from seven “easy-to-acquire” conventional logs using machine-learning methods (GitHub, 2020). In the contest, a total of 20,525 data points with half-foot resolution from three wells were collected to train regression models using machine-learning techniques. Each data point had seven features, consisting of the conventional “easy-to-acquire” logs: caliper, neutron porosity, gamma ray (GR), deep resistivity, medium resistivity, photoelectric factor, and bulk density, as well as the two sonic logs (DTC and DTS) as the target. A separate data set of 11,089 samples from a fourth well was used as the blind test set. The prediction performance of the model was evaluated using the root mean square error (RMSE) over both target logs:

RMSE = sqrt( (1/(2m)) * Σ_{i=1}^{m} [ (DTC_pred^i - DTC_true^i)^2 + (DTS_pred^i - DTS_true^i)^2 ] )

In the benchmark model (Yu et al., 2020), we used a Random Forest regressor and applied minimal preprocessing to the training data set; an RMSE score of 17.93 was achieved on the test data set. The top five models from the contest, on average, beat our benchmark by 27% in RMSE. In this paper, we review these five solutions, including their preprocessing techniques and machine-learning models such as neural networks, long short-term memory (LSTM), and ensemble trees. We found that data cleaning and clustering were critical for improving performance in all models.
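The contest's pooled RMSE metric is straightforward to implement directly from its definition; the toy traveltime values below are invented purely to exercise the function.

```python
import numpy as np

def joint_rmse(dtc_pred, dtc_true, dts_pred, dts_true):
    """Contest metric pooled over both target logs:
    RMSE = sqrt( (1/(2m)) * sum_i [ (DTC_pred_i - DTC_true_i)^2
                                  + (DTS_pred_i - DTS_true_i)^2 ] )."""
    sq = (dtc_pred - dtc_true) ** 2 + (dts_pred - dts_true) ** 2
    return float(np.sqrt(sq.mean() / 2.0))

# Toy example: every prediction off by exactly 1 us/ft on each log
dtc_t = np.array([60.0, 62.0, 64.0])
dts_t = np.array([100.0, 104.0, 108.0])
score = joint_rmse(dtc_t + 1.0, dtc_t, dts_t - 1.0, dts_t)
print(score)  # 1.0
```

Because the squared errors of both logs share one normalization by 2m, an error on DTC trades off one-for-one against an error on DTS in the final score.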


Author(s):  
M. Takadoya ◽  
M. Notake ◽  
M. Kitahara ◽  
J. D. Achenbach ◽  
Q. C. Guo ◽  
...  
