Testing RNN-LSTM Forecasting with Simulated Astronomical Lightcurves

Author(s):  
Nachiketa Chakraborty

With an explosion of data in the near future, from observatories spanning radio to gamma-rays, we have entered the era of time domain astronomy. Historically, this field has been limited to modeling temporal structure with time-series simulations, restricted to energy ranges blessed with excellent statistics, as in X-rays. In addition to ever increasing volumes and variety of astronomical lightcurves, there is a plethora of different types of transients detected not only across the electromagnetic spectrum, but across multiple messengers, such as counterparts of neutrino and gravitational wave sources. As a result, precise, fast forecasting and modeling of lightcurves or time-series will play a crucial role both in understanding the physical processes and in coordinating multiwavelength and multimessenger campaigns. In this regard, deep learning algorithms such as recurrent neural networks (RNNs) should prove extremely powerful for forecasting, as they have in several other domains. Here we test the performance of a very successful class of RNNs, the Long Short Term Memory (LSTM) algorithms, with simulated lightcurves. We focus on univariate forecasting of the types of lightcurves typically found in active galactic nuclei (AGN) observations. Specifically, we explore the sensitivity of training and test losses to key parameters of the LSTM network and to data characteristics, namely gaps and complexity measured in terms of the number of Fourier components. We find that LSTM performance is typically better for pink or flicker noise type sources. The key parameters on which performance depends are the LSTM batch size and the gap percentage of the lightcurves. While a batch size of $10-30$ seems optimal, the best test and train losses are obtained with under $10 \%$ missing data, for both periodic and random gaps in pink noise. The performance is far worse for red noise, which compromises the detectability of transients. Performance also degrades monotonically with data complexity measured in terms of the number of Fourier components, which is especially relevant in the context of complicated quasi-periodic signals buried under noise. Thus, we show that time-series simulations are excellent guides for the use of RNN-LSTMs in forecasting.
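Lightcurves with a prescribed noise colour, such as the pink- and red-noise test cases above, can be simulated by shaping white Gaussian noise in the Fourier domain and then masking out a fraction of points as gaps. The sketch below is an illustration of that recipe, not the authors' code; the spectral index `beta` (1 for pink, 2 for red) and the gap fraction are the two knobs the abstract discusses.

```python
import numpy as np

def simulate_lightcurve(n, beta, gap_frac=0.0, seed=0):
    """Simulate a 1/f^beta noise lightcurve (beta=1: pink, beta=2: red),
    optionally masking a random fraction of points as gaps (NaN)."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)   # amplitude ~ f^(-beta/2) => power ~ f^(-beta)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)   # random phases, fixed power-law amplitude
    lc = np.fft.irfft(spectrum, n)
    lc = (lc - lc.mean()) / lc.std()       # normalise to zero mean, unit variance
    if gap_frac > 0:
        gaps = rng.random(n) < gap_frac    # random gaps, as in the tests above
        lc[gaps] = np.nan
    return lc

pink = simulate_lightcurve(1024, beta=1.0, gap_frac=0.1, seed=42)
```

Windows of such a series (with gaps imputed or dropped) would then form the univariate training sequences for the LSTM.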

Author(s):  
Sawsan Morkos Gharghory

An enhanced recurrent neural network architecture based on Long Short-Term Memory (LSTM) is suggested in this paper for predicting the microclimate inside a greenhouse from its time series data. The microclimate inside the greenhouse is largely affected by external weather variations and has a great impact on greenhouse crops and their production. It is therefore of great importance to predict the microclimate inside the greenhouse as a preceding stage in the accurate design of a control system that can fulfill the requirements of a suitable environment for the plants and crop management. The LSTM network is trained and tested on temperature and relative humidity data measured inside the greenhouse, generated by a mathematical greenhouse model driven by outside weather data over 27 days. To evaluate the prediction accuracy of the suggested LSTM network, different measures, such as Root Mean Square Error (RMSE) and Mean Absolute Error (MAE), are calculated and compared to those of conventional networks in the literature. The simulation results of the LSTM network for forecasting the temperature and relative humidity inside the greenhouse outperform those of the traditional methods: the prediction results in terms of RMSE are approximately 0.16 and 0.62, and in terms of MAE are 0.11 and 0.4, for temperature and humidity respectively.
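The RMSE and MAE figures quoted above follow the standard definitions; a minimal sketch (the temperature arrays below are illustrative placeholders, not the paper's greenhouse data):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error: square root of the mean squared residual."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean Absolute Error: mean magnitude of the residual."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Illustrative true vs. predicted temperatures (not the paper's data)
t_true = [21.0, 21.5, 22.1, 22.4]
t_pred = [21.1, 21.4, 22.3, 22.2]
err_rmse = rmse(t_true, t_pred)
err_mae = mae(t_true, t_pred)
```

RMSE penalises large residuals more heavily than MAE, which is why both are usually reported together, as here.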


2020 ◽  
Author(s):  
Supriya Sarker ◽  
Md Mokammel Haque

The proposed work develops a Long Short Term Memory (LSTM) model for multi-class classification of driving maneuvers from a sensor fusion time series dataset. The work also analyzes the significance of the sensor fusion data change rule and applies the idea to deep learning time series multi-class classification of driving maneuvers. We also propose some hypotheses, which are supported by the experimental results. The proposed model achieves a train accuracy of 99.98, a test accuracy of 97.2021, a precision of 0.974848, a recall of 0.960154, and an F1 score of 0.967028. The Mean Per Class Error (MPCE) is 0.01386. The significant rules can accelerate the feature extraction process for driving data. Moreover, they help in automatic labeling of unlabeled datasets. Our future approach is to develop a tool for generating categorical labels for unlabeled datasets. Besides, we plan to optimize the proposed classifier using grid search.
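Of the metrics reported above, MPCE is the least standard; assuming it denotes the average per-class misclassification rate (one minus recall, averaged over classes), it can be computed from a confusion matrix as below. The 3-class matrix is a made-up illustration, not the paper's maneuver results.

```python
import numpy as np

def mean_per_class_error(confusion):
    """MPCE: average over classes of (1 - per-class recall).
    Rows = true class, columns = predicted class."""
    confusion = np.asarray(confusion, dtype=float)
    per_class_recall = np.diag(confusion) / confusion.sum(axis=1)
    return float(np.mean(1.0 - per_class_recall))

# Hypothetical confusion matrix for 3 maneuver classes (100 samples each)
cm = [[98, 1, 1],
      [2, 96, 2],
      [0, 3, 97]]
mpce = mean_per_class_error(cm)
```

Unlike plain accuracy, this metric weights every class equally, which matters when maneuver classes are imbalanced.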


2021 ◽  
Vol 14 (4) ◽  
pp. 2408-2418 ◽  
Author(s):  
Tonny I. Okedi ◽  
Adrian C. Fisher

LSTM networks are shown to predict the seasonal component of biophotovoltaic current density and photoresponse to high accuracy.


Author(s):  
Ahmed Ben Said ◽  
Abdelkarim Erradi ◽  
Hussein Ahmed Aly ◽  
Abdelmonem Mohamed

Abstract To assist policymakers in making adequate decisions to stop the spread of the COVID-19 pandemic, accurate forecasting of the disease propagation is of paramount importance. This paper presents a deep learning approach to forecast the cumulative number of COVID-19 cases using a bidirectional Long Short-Term Memory (Bi-LSTM) network applied to multivariate time series. Unlike other forecasting techniques, our proposed approach first groups the countries having similar demographic and socioeconomic aspects and health sector indicators using the K-means clustering algorithm. The cumulative case data of the clustered countries, enriched with data related to the lockdown measures, are fed to the bidirectional LSTM to train the forecasting model. We validate the effectiveness of the proposed approach by studying the disease outbreak in Qatar, with the proposed model predicting from December 1st to December 31st, 2020. The quantitative evaluation shows that the proposed technique outperforms state-of-the-art forecasting approaches.
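The clustering step can be sketched with a plain Lloyd's-iteration K-means over country feature vectors. The synthetic two-group features below stand in for the demographic, socioeconomic, and health indicators; this is an illustration of the grouping idea, not the authors' pipeline.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's K-means: returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(n_iter):
        # distance of every point to every centroid, then nearest assignment
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic "country" indicator vectors: two well-separated groups of five
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (5, 3)),
               np.random.default_rng(2).normal(5.0, 0.1, (5, 3))])
labels, _ = kmeans(X, k=2)
```

Each resulting cluster's case series would then be pooled to train one Bi-LSTM forecaster per cluster.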


Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4612 ◽  
Author(s):  
Pangun Park ◽  
Piergiuseppe Di Marco ◽  
Hyejeon Shin ◽  
Junseong Bang

Fault detection and diagnosis is one of the most critical components of preventing accidents and ensuring the system safety of industrial processes. In this paper, we propose an integrated learning approach for jointly achieving fault detection and fault diagnosis of rare events in multivariate time series data. The proposed approach combines an autoencoder to detect a rare fault event and a long short-term memory (LSTM) network to classify different types of faults. The autoencoder is trained on offline normal data and then used for anomaly detection. The predicted faulty data, captured by the autoencoder, are passed to the LSTM network to identify the types of faults. The approach combines the strong low-dimensional nonlinear representations of the autoencoder for rare event detection with the strong time series learning ability of the LSTM for fault diagnosis. The proposed approach is compared with a deep convolutional neural network approach for fault detection and identification on the Tennessee Eastman process. Experimental results show that the combined approach accurately detects deviations from normal behaviour and identifies the types of faults within a useful time.
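The detection stage can be illustrated with a linear stand-in for the autoencoder: encode normal data onto its top principal components, decode back, set a threshold on reconstruction error from normal data only, and flag samples exceeding it. The PCA projection and synthetic sensor data below are assumptions of the sketch; the paper uses a learned deep autoencoder and Tennessee Eastman data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" operating data: 2 latent factors observed through 5 sensors
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 5))
normal = latent @ mixing + 0.05 * rng.normal(size=(500, 5))

# Linear "autoencoder": encode/decode with the top-2 principal components
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                               # encoder/decoder weights

def reconstruction_error(x):
    z = (x - mean) @ components.T                 # encode
    x_hat = z @ components + mean                 # decode
    return np.sum((x - x_hat) ** 2, axis=-1)

# Threshold: 99th percentile of reconstruction error on normal data only
threshold = np.percentile(reconstruction_error(normal), 99)

# A faulty sample that breaks the learned sensor correlations
fault = np.full(5, 3.0)
is_fault = bool(reconstruction_error(fault) > threshold)
```

Samples flagged this way would then be routed to the LSTM classifier to name the fault type.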


2021 ◽  
Author(s):  
Praveen Verma ◽  
Chetan Singh Negi ◽  
Maneesh Panth ◽  
Anuj Saxena

Abstract Covid-19, a small virus, has created havoc in the world. The pandemic has already taken over 4 lakh lives. The tests to detect a Covid-19 positive case take time and are costly. Moreover, the ability of the virus to mutate surprises doctors every day. The present paper proposes a saliency-based model called Deep_Saliency. The model works on chest x-rays of healthy, unhealthy, and Covid-19 patients. An x-ray repository of Covid-19, available in the public domain, is used for the study. Deep_Saliency uses visual, disparity, and motion saliency to create a feature dataset from the x-rays. The collected features are trained and tested using a Long Short-Term Memory (LSTM) network. A predictive analysis is performed using the x-ray of a new patient to confirm a Covid-19 positive case. The first objective of the paper is to detect Covid-19 positive cases from x-rays; the other is to provide a benchmark dataset of biomarkers. The proposed work achieved an accuracy of 96.66%.


2020 ◽  
Vol 494 (3) ◽  
pp. 3432-3448
Author(s):  
Arti Goyal

ABSTRACT We present the results of the power spectral density (PSD) analysis for the blazars Mrk 421 and PKS 2155−304, using good-quality, densely sampled light curves at multiple frequencies, covering 17 decades of the electromagnetic spectrum, and variability time-scales from weeks up to a decade. The data were collected from publicly available archives of observatories at radio from the Owens Valley Radio Observatory, optical and infrared (B, V, R, I, J, H, and K bands), X-rays from the Swift and the Rossi X-ray Timing Explorer, and high and very high energy (VHE) γ-rays from the Fermi and Very Energetic Radiation Imaging Telescope Array System as well as the High Energy Stereoscopic System. Our results are: (1) the variability power spectra at radio, infrared, and optical frequencies have power-law slopes ∼1.8, indicative of random-walk-type noise processes; (2) the variability power spectra at higher frequencies, from X-rays to VHE γ-rays, however, have power-law slopes ∼1.2, suggesting a flicker noise-type process; and (3) there is significantly more variability power at X-rays, high and VHE γ-rays on time-scales ≲ 100 d, as compared to lower energies. Our results do not easily fit into a simple model, in which a single compact emission zone dominates the radiative output of the blazars across all the time-scales probed in our analysis. Instead, we argue that the frequency-dependent shape of the variability power spectra points to a more complex picture, with a highly inhomogeneous outflow producing non-thermal emission over an extended, stratified volume.
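A PSD slope of the kind quoted above can be estimated from an evenly sampled light curve by a least-squares fit to the periodogram in log-log space. The sketch below applies this to synthetic random-walk noise (expected slope ∼2), not to the paper's multiwavelength data; the low-frequency cutoff `fmax`, which keeps the fit in the regime where the asymptotic power law holds, is an assumption of the sketch.

```python
import numpy as np

def psd_slope(lc, dt=1.0, fmax=0.1):
    """Estimate the power-law slope a of P(f) ~ f^(-a) by ordinary
    least squares on log10(periodogram) vs log10(frequency)."""
    lc = np.asarray(lc, dtype=float)
    freqs = np.fft.rfftfreq(len(lc), d=dt)[1:]          # drop the zero frequency
    power = np.abs(np.fft.rfft(lc - lc.mean()))[1:] ** 2
    keep = freqs < fmax                                 # low-frequency power-law regime
    coef, _ = np.polyfit(np.log10(freqs[keep]), np.log10(power[keep]), 1)
    return -coef

# A random walk (integrated white noise) has a ~ f^-2 spectrum
rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(size=4096))
slope = psd_slope(walk)
```

Real blazar analyses (irregular sampling, red-noise leakage) need considerably more machinery than this, but the log-log fit is the core of the slope measurement.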


Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 861
Author(s):  
Xijie Xu ◽  
Xiaoping Rui ◽  
Yonglei Fan ◽  
Tian Yu ◽  
Yiwen Ju

Accurately forecasting the daily production of coalbed methane (CBM) is important for formulating associated drainage parameters and evaluating the economic benefit of CBM mining. Daily production of CBM depends on many factors, making it difficult to predict using conventional mathematical models. Because traditional methods do not reflect the long-term time series characteristics of CBM production, this study first used a long short-term memory neural network (LSTM) and transfer learning (TL) method for time series forecasting of CBM daily production. Based on the LSTM model, we introduced the idea of transfer learning and proposed a Transfer-LSTM (T-LSTM) CBM production forecasting model. This approach first uses a large amount of data similar to the target to pretrain the weights of the LSTM network, then uses transfer learning to fine-tune the LSTM network parameters a second time, so as to obtain the final T-LSTM model. Experiments were carried out using daily CBM production data for the Panhe Demonstration Zone at southern Qinshui basin in China. Based on the results, the idea of transfer learning can solve the problem of insufficient samples during LSTM training. Prediction results for wells that entered the stable period earlier were more accurate, whereas results for types with unstable production in the early stage require further exploration. Because daily production data from CBM wells have symmetrical similarities that can provide a reference for the prediction of other wells, our proposed T-LSTM network achieves good results for production forecasting and can provide guidance for forecasting the production of CBM wells.
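The pretrain-then-fine-tune idea can be sketched with a toy gradient-descent model standing in for the LSTM: fit weights on abundant "source well" data, then take a few gradient steps on scarce "target well" data starting from those weights instead of from scratch. All data and the linear model below are synthetic assumptions of the sketch.

```python
import numpy as np

def gd_fit(X, y, w0, lr=0.01, steps=200):
    """Plain gradient descent on mean-squared error for a linear model."""
    w = w0.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.5, -0.7, 0.3])

# Abundant source-well data; scarce target-well data with similar dynamics
Xs = rng.normal(size=(500, 3)); ys = Xs @ w_true + 0.1 * rng.normal(size=500)
Xt = rng.normal(size=(10, 3));  yt = Xt @ (w_true + 0.1) + 0.1 * rng.normal(size=10)

w_pre = gd_fit(Xs, ys, np.zeros(3))                  # "pretraining" on source data
w_tl = gd_fit(Xt, yt, w_pre, steps=50)               # fine-tune from pretrained weights
w_scratch = gd_fit(Xt, yt, np.zeros(3), steps=50)    # same budget, from scratch

err_tl = float(np.mean((Xt @ w_tl - yt) ** 2))
err_scratch = float(np.mean((Xt @ w_scratch - yt) ** 2))
```

With the same small-data training budget, the fine-tuned model starts close to the target optimum and ends with a lower error, which is the effect the T-LSTM exploits across similar wells.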



