An approach towards data change rule based classification of driving maneuver with LSTM network

2020 ◽  
Author(s):  
Supriya Sarker ◽  
Md Mokammel Haque

The proposed work develops a Long Short-Term Memory (LSTM) model for multi-class classification of driving maneuvers from a sensor fusion time-series dataset. The work also analyzes the significance of sensor fusion data change rules and applies the idea to deep-learning-based multi-class classification of driving maneuvers. We also propose several hypotheses, which are supported by the experimental results. The proposed model achieves a train accuracy of 99.98%, a test accuracy of 97.2021%, a precision of 0.974848, a recall of 0.960154, and an F1 score of 0.967028. The Mean Per-Class Error (MPCE) is 0.01386. The significant rules can accelerate the feature extraction process for driving data. Moreover, they help in the automatic labeling of unlabeled datasets. Our future approach is to develop a tool that generates categorical labels for unlabeled datasets. We also plan to optimize the proposed classifier using grid search.
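The gating mechanism that lets an LSTM carry long-range structure across a sensor window can be sketched in a few lines of NumPy. This is a generic single-cell forward pass with random placeholder weights and made-up dimensions (6 fused sensor channels, 20 time steps), not the authors' trained model:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: x is the input, h the hidden state, c the cell state.
    W, U, b stack the input/forget/cell/output gate parameters."""
    z = W @ x + U @ h + b             # pre-activations for all four gates
    n = h.size
    i = 1 / (1 + np.exp(-z[:n]))      # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))   # forget gate
    g = np.tanh(z[2*n:3*n])           # candidate cell update
    o = 1 / (1 + np.exp(-z[3*n:]))    # output gate
    c_new = f * c + i * g             # cell state carries long-term memory
    h_new = o * np.tanh(c_new)        # hidden state is the step's output
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid, T = 6, 4, 20             # assumed: 6 sensor channels, 20 steps
W = rng.normal(size=(4 * n_hid, n_in)) * 0.1
U = rng.normal(size=(4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(T, n_in)):  # run one maneuver window through the cell
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # final hidden state, which a softmax head would classify
```

In a full classifier, the final hidden state would feed a softmax layer over the maneuver classes.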


Author(s):  
Saksham Bassi ◽  
Kaushal Sharma ◽  
Atharva Gomekar

Owing to current and upcoming extensive surveys of stellar variability, astronomers require faster and more accurate methods to automate the classification of variable stars. The traditional approach requires calculating the period of the observed light curve and assigning the different variability patterns of phase-folded light curves to different classes. However, applying these methods becomes difficult if the light curves are sparse or contain temporal gaps, and period-finding algorithms slow down and become unreliable in such scenarios. In this work, we present a new automated method, 1D CNN-LSTM, for classifying variable stars using a hybrid neural network of a one-dimensional CNN and an LSTM network, which operates on the raw time-series data from the variable stars. We apply the network to classify time-series data obtained from the OGLE and CRTS surveys. We report a best average accuracy of 85% and an F1 score of 0.71 for classifying five classes from the OGLE survey. We also apply other existing classification methods to our dataset and compare the results.
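The 1D-convolution front end of such a hybrid can be illustrated directly on a raw light curve. The filter weights here are random placeholders and the light curve is synthetic, so this only shows the data flow, not the trained network:

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid-mode 1D convolution: each kernel slides over the raw series,
    followed by a ReLU, as in a typical 1D CNN feature extractor."""
    k = kernels.shape[1]
    out_len = (len(x) - k) // stride + 1
    out = np.empty((kernels.shape[0], out_len))
    for j in range(out_len):
        window = x[j * stride : j * stride + k]
        out[:, j] = kernels @ window
    return np.maximum(out, 0.0)        # ReLU

rng = np.random.default_rng(1)
# Synthetic "light curve": a periodic signal plus observational noise
lightcurve = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * rng.normal(size=200)
kernels = rng.normal(size=(8, 5))      # 8 filters of width 5 (placeholder weights)
features = conv1d(lightcurve, kernels)
print(features.shape)
```

In the hybrid, this (filters × time) feature map would be read step by step by an LSTM layer and finally a softmax over the variable-star classes.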


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tuan D. Pham

Automated analysis of physiological time series is utilized for many clinical applications in medicine and the life sciences. Long short-term memory (LSTM) is a deep recurrent neural network architecture used for classification of time-series data. Here time–frequency and time–space properties of time series are introduced as a robust tool for LSTM processing of long sequential data in physiology. Based on classification results obtained from two databases of sensor-induced physiological signals, the proposed approach has the potential for (1) achieving very high classification accuracy, (2) saving tremendous time for data learning, and (3) being cost-effective and user-comfortable for clinical trials by reducing multiple wearable sensors for data recording.
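The time–frequency idea can be sketched as a short-time Fourier magnitude transform: a long 1D physiological sequence becomes a compact two-dimensional representation, so the LSTM reads tens of frames instead of thousands of raw samples. The signal, sampling rate, and window sizes below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def stft_magnitude(signal, win=64, hop=32):
    """Short-time Fourier magnitudes: each column is one windowed frame,
    turning a long 1D sequence into a small time-frequency image."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * np.hanning(win)
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames).T           # (frequency bins, time frames)

fs = 256                                # assumed sampling rate (Hz)
t = np.arange(4 * fs) / fs
# Toy physiological-style signal: slow rhythm plus a faster component
sig = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 25 * t)
tf = stft_magnitude(sig)
print(tf.shape)                         # 31 frames instead of 1024 raw samples
```

The reduction from 1024 samples to ~30 frames is the source of the kind of training-time savings the abstract describes.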


Author(s):  
Nachiketa Chakraborty

With an explosion of data in the near future from observatories spanning radio to gamma-rays, we have entered the era of time-domain astronomy. Historically, this field has been limited to modeling temporal structure with time-series simulations in energy ranges blessed with excellent statistics, as in X-rays. In addition to the ever-increasing volume and variety of astronomical lightcurves, there is a plethora of different types of transients detected not only across the electromagnetic spectrum but also across multiple messengers, such as counterparts of neutrino and gravitational-wave sources. As a result, precise, fast forecasting and modeling of lightcurves will play a crucial role both in understanding the physical processes and in coordinating multiwavelength and multimessenger campaigns. In this regard, deep learning algorithms such as recurrent neural networks (RNNs) should prove extremely powerful for forecasting, as they have in several other domains. Here we test the performance of a very successful class of RNNs, Long Short-Term Memory (LSTM) networks, on simulated lightcurves. We focus on univariate forecasting of the types of lightcurves typically found in observations of active galactic nuclei (AGN). Specifically, we explore the sensitivity of training and test losses to key parameters of the LSTM network and to data characteristics, namely gaps and complexity measured in terms of the number of Fourier components. We find that LSTMs typically perform better on pink- or flicker-noise-type sources. The key parameters on which performance depends are the LSTM batch size and the gap percentage of the lightcurves. While a batch size of 10-30 seems optimal, the best test and train losses are obtained with under 10% missing data, for both periodic and random gaps in pink noise. The performance is far worse for red noise, which compromises the detectability of transients. Performance also degrades monotonically with data complexity, measured in terms of the number of Fourier components, which is especially relevant for complicated quasi-periodic signals buried under noise. Thus, we show that time-series simulations are excellent guides for the use of RNN-LSTMs in forecasting.
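Sensitivity experiments of this kind need synthetic colored-noise lightcurves with a controlled gap percentage. A minimal generator, assuming unit sampling and Fourier-space shaping (the exact simulation setup in the paper may differ):

```python
import numpy as np

def colored_noise(n, beta, rng):
    """Time series with power spectrum ~ 1/f**beta:
    beta=1 gives pink/flicker noise, beta=2 gives red noise."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2)           # shape the spectrum
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)         # random phases
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()              # standardize

def add_random_gaps(x, frac, rng):
    """Mask a given fraction of points, mimicking observational gaps."""
    y = x.copy()
    y[rng.random(len(x)) < frac] = np.nan
    return y

rng = np.random.default_rng(2)
pink = colored_noise(1024, beta=1.0, rng=rng)    # flicker-noise lightcurve
gappy = add_random_gaps(pink, frac=0.10, rng=rng)  # ~10% missing data
print(np.isnan(gappy).mean())
```

Sweeping `beta`, `frac`, and the number of retained Fourier components reproduces the axes along which the abstract reports the LSTM's performance varying.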


2019 ◽  
Vol 9 (12) ◽  
pp. 348 ◽  
Author(s):  
Ji-Hoon Jeong ◽  
Baek-Woon Yu ◽  
Dae-Hyeok Lee ◽  
Seong-Whan Lee

Non-invasive brain-computer interfaces (BCIs) have been developed to recognize human mental states with high accuracy and to decode various types of mental conditions. In particular, accurately decoding a pilot's mental state is a critical issue, as more than 70% of aviation accidents are caused by human factors such as fatigue or drowsiness. In this study, we report the classification not only of two mental states (i.e., alert and drowsy) but also of five drowsiness levels from electroencephalogram (EEG) signals. To the best of our knowledge, this approach is the first to classify drowsiness levels in this detail using only EEG signals. We acquired EEG data from ten pilots in a simulated night-flight environment. For accurate detection, we propose a deep spatio-temporal convolutional bidirectional long short-term memory network (DSTCLN) model. We evaluated classification performance using Karolinska Sleepiness Scale (KSS) values for the two mental states and the five drowsiness levels. The grand-averaged classification accuracies were 0.87 (±0.01) and 0.69 (±0.02), respectively. We thus demonstrate the feasibility of classifying five drowsiness levels with high accuracy using deep learning.
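Both label sets derive from the same KSS score (1-9). The abstract does not give the exact banding, so the thresholds below are an illustrative assumption of how a binary state and a five-level class might be obtained from one score:

```python
def kss_to_labels(kss):
    """Map a Karolinska Sleepiness Scale score (1-9) to a
    (binary state, five-level class) pair. The banding here is an
    illustrative assumption; the paper defines its own KSS grouping."""
    if not 1 <= kss <= 9:
        raise ValueError("KSS is defined on 1..9")
    binary = "alert" if kss <= 5 else "drowsy"
    level = min((kss - 1) // 2, 4)   # 1-2 -> 0, 3-4 -> 1, 5-6 -> 2, 7-8 -> 3, 9 -> 4
    return binary, level

print([kss_to_labels(k) for k in (1, 5, 9)])
```

With labels derived this way, the same EEG windows can train the two-class and the five-class heads of the classifier.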


Author(s):  
Sawsan Morkos Gharghory

An enhanced recurrent neural network architecture based on Long Short-Term Memory (LSTM) is suggested in this paper for predicting the microclimate inside a greenhouse from its time-series data. The microclimate inside the greenhouse is largely affected by external weather variations and has a great impact on greenhouse crops and their production. It is therefore of great importance to predict the greenhouse microclimate as a preliminary stage in the accurate design of a control system that can maintain a suitable environment for the plants and support crop management. The LSTM network is trained and tested on temperature and relative humidity data measured inside the greenhouse, generated by a mathematical greenhouse model driven by outside weather data over 27 days. To evaluate the prediction accuracy of the suggested LSTM network, measurements such as the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) are calculated and compared with those of conventional networks in the literature. The simulation results show that the LSTM network outperforms traditional methods in forecasting the temperature and relative humidity inside the greenhouse: the prediction errors for temperature and humidity are approximately 0.16 and 0.62 in terms of RMSE, and 0.11 and 0.4 in terms of MAE, respectively.
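The two evaluation metrics are standard and easy to state exactly. A minimal computation, using made-up temperature readings rather than the paper's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error: penalizes large deviations quadratically."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of the prediction error."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy check with hypothetical greenhouse temperatures (degrees C)
t_true = np.array([21.0, 22.5, 23.0, 24.2])
t_pred = np.array([21.2, 22.4, 23.5, 24.0])
print(rmse(t_true, t_pred), mae(t_true, t_pred))
```

RMSE is always at least as large as MAE, which is why papers typically report both: the gap between them indicates whether the error budget is dominated by a few large misses.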


2020 ◽  
Vol 6 (7) ◽  
pp. 68
Author(s):  
Merve Bozo ◽  
Erchan Aptoula ◽  
Zehra Çataltepe

In this article, we propose an end-to-end deep network for the classification of multi-spectral time series and apply it to crop type mapping. Long short-term memory networks (LSTMs) are well established in this regard, thanks to their capacity to capture both long- and short-term temporal dependencies. Nevertheless, dealing with high intra-class variance and inter-class similarity remains a significant challenge. To address these issues, we propose a straightforward approach in which LSTMs are combined with metric learning. The proposed architecture accommodates three distinct branches with shared weights, each containing an LSTM module, that are merged through a triplet loss. It thus not only minimizes classification error but also forces the sub-networks to produce more discriminative deep features. It is validated on Breizhcrops, a recently introduced and challenging time-series dataset for crop type mapping.
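The triplet loss at the heart of the metric-learning branch has a compact closed form: pull the anchor toward a same-class ("positive") embedding and push it at least a margin away from a different-class ("negative") one. A sketch on toy two-dimensional embeddings (the real branches emit higher-dimensional LSTM features):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss on embedding vectors: same-class pairs are pulled
    together, different-class pairs pushed at least `margin` apart."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)   # squared anchor-negative distance
    return float(max(d_pos - d_neg + margin, 0.0))

# Toy embeddings such as the three shared-weight LSTM branches might emit
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])    # same crop class: close to the anchor
n = np.array([-1.0, 0.5])   # different class: far from the anchor
print(triplet_loss(a, p, n))
```

When the negative is already more than a margin farther away than the positive, the loss is zero, so only "hard" triplets contribute gradient, which is what sharpens inter-class separation.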


2020 ◽  
Vol 77 (4) ◽  
pp. 1379-1390 ◽  
Author(s):  
Roland Proud ◽  
Richard Mangeni-Sande ◽  
Robert J Kayanda ◽  
Martin J Cox ◽  
Chrisphine Nyamweya ◽  
...  

Biomass of the schooling fish Rastrineobola argentea (dagaa) is presently estimated in Lake Victoria by acoustic survey following the simple “rule” that dagaa is the source of most echo energy returned from the top third of the water column. Dagaa have, however, been caught in the bottom two-thirds, and other species occur towards the surface: a more robust discrimination technique is required. We explored the utility of a school-based random forest (RF) classifier applied to 120 kHz data from a lake-wide survey. Dagaa schools were first identified manually using expert opinion informed by fishing. These schools contained a lake-wide biomass of 0.68 million tonnes (MT). Only 43.4% of identified dagaa schools occurred in the top third of the water column, and 37.3% of all schools in the bottom two-thirds were classified as dagaa. School metrics (e.g. length, echo energy) for 49 081 manually classified dagaa and non-dagaa schools were used to build an RF school classifier. The best RF model had a classification test accuracy of 85.4%, driven largely by school length, and yielded a biomass of 0.71 MT, only c. 4% different from the manual estimate. The RF classifier offers an efficient method to generate a consistent dagaa biomass time series.
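The legacy depth rule that the random forest replaces can be written out directly, which makes its failure mode obvious: any dagaa school below the top third is misclassified by construction. The depths below are hypothetical:

```python
def top_third_rule(school_depth, water_depth):
    """The legacy acoustic-survey rule: treat echoes from the top third of
    the water column as dagaa. The paper's RF classifier replaces this with
    school metrics (length, echo energy, ...)."""
    return "dagaa" if school_depth <= water_depth / 3 else "non-dagaa"

# Hypothetical schools (depths in metres) in a 60 m water column:
# the rule necessarily misses the deep dagaa schools the survey caught.
print([top_third_rule(d, water_depth=60.0) for d in (10.0, 25.0, 50.0)])
```

Since the abstract reports that only 43.4% of manually identified dagaa schools sat in the top third, a classifier built on school metrics rather than depth alone is the natural replacement.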


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 517 ◽  
Author(s):  
Ali M. Hasan ◽  
Mohammed M. AL-Jawad ◽  
Hamid A. Jalab ◽  
Hadil Shaiba ◽  
Rabha W. Ibrahim ◽  
...  

Many health systems around the world have collapsed due to limited capacity and a dramatic increase in suspected COVID-19 cases. What has emerged is the need for an efficient, quick, and accurate method to reduce the load on radiologists diagnosing suspected cases. This study presents the combination of deep-learned features with handcrafted Q-deformed entropy features for discriminating between COVID-19, pneumonia, and healthy computed tomography (CT) lung scans. Pre-processing is used to reduce the effect of intensity variations between CT slices, and histogram thresholding is used to isolate the background of each CT lung scan. Each scan then undergoes feature extraction involving deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network. Combining all extracted features significantly improves the ability of the LSTM network to discriminate precisely between COVID-19, pneumonia, and healthy cases. The maximum accuracy achieved on the collected dataset of 321 patients is 99.68%.
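The feature-combination step can be sketched with a Tsallis-type q-entropy, a common q-deformed entropy form, used here purely as a stand-in for the paper's specific Q-deformed entropy; the histogram and the "deep" feature vector are random placeholders:

```python
import numpy as np

def q_entropy(p, q=1.5):
    """Tsallis-type q-deformed entropy of a probability distribution;
    a stand-in here for the paper's Q-deformed entropy features."""
    p = p[p > 0]
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

rng = np.random.default_rng(3)
# Grey-level histogram of one hypothetical CT slice, normalized
hist = rng.random(16)
hist /= hist.sum()
handcrafted = np.array([q_entropy(hist, q) for q in (1.2, 1.5, 2.0)])
deep = rng.normal(size=128)             # placeholder CNN feature vector
combined = np.concatenate([deep, handcrafted])  # joint input to the LSTM
print(combined.shape)
```

Concatenating the handcrafted entropies with the learned features is what gives the LSTM classifier the enriched input the abstract credits for the accuracy gain.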


2021 ◽  
Vol 14 (4) ◽  
pp. 2408-2418 ◽  
Author(s):  
Tonny I. Okedi ◽  
Adrian C. Fisher

LSTM networks are shown to predict the seasonal component of biophotovoltaic current density and photoresponse with high accuracy.
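Extracting the seasonal component that such a network is trained to predict can be sketched with a classical decomposition step: average each phase of an assumed cycle and tile the profile back over the series. The daily period, hourly sampling, and synthetic current trace below are all illustrative assumptions:

```python
import numpy as np

def seasonal_component(x, period):
    """Estimate a repeating seasonal profile by averaging each phase of the
    cycle, then tile it back over the series (classical decomposition)."""
    n_cycles = len(x) // period
    profile = x[: n_cycles * period].reshape(n_cycles, period).mean(axis=0)
    return np.tile(profile, n_cycles)

rng = np.random.default_rng(4)
period = 24                             # assumed daily cycle, hourly samples
t = np.arange(period * 10)
# Synthetic current density: baseline + daily oscillation + noise
current = 5 + 2 * np.sin(2 * np.pi * t / period) + 0.2 * rng.normal(size=t.size)
season = seasonal_component(current, period)
print(season.shape)
```

The residual `current - season` is then the non-seasonal part, which is typically the harder target for a forecaster.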

