A Novel Approach to Time Series Forecasting using Deep Learning and Linear Model

2016 ◽  
Vol 136 (3) ◽  
pp. 348-356 ◽  
Author(s):  
Takaomi Hirata ◽  
Takashi Kuremoto ◽  
Masanao Obayashi ◽  
Shingo Mabu ◽  
Kunikazu Kobayashi
2020 ◽  
Author(s):  
Pathikkumar Patel ◽  
Bhargav Lad ◽  
Jinan Fiaidhi

During the last few years, RNN models have been used extensively and have proven well suited to sequence and text data. RNNs have achieved state-of-the-art performance in several applications such as text classification, sequence-to-sequence modelling, and time series forecasting. In this article we review different Machine Learning and Deep Learning based approaches for text data and examine the results obtained from these methods. This work also explores the use of transfer learning in NLP and how it affects model performance on a specific application: sentiment analysis.
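To make the recurrent-model idea concrete, here is a minimal sketch of a vanilla RNN cell applied to sequence classification. All dimensions, weight names, and the toy input are hypothetical illustrations, not taken from any of the papers listed here; real systems would use a trained model (e.g. an LSTM or GRU) rather than random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the cited work).
vocab, embed_dim, hidden_dim, n_classes = 50, 8, 16, 2
E = rng.normal(0, 0.1, (vocab, embed_dim))          # embedding table
W_xh = rng.normal(0, 0.1, (embed_dim, hidden_dim))  # input -> hidden
W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim)) # hidden -> hidden (recurrence)
W_hy = rng.normal(0, 0.1, (hidden_dim, n_classes))  # hidden -> class logits

def rnn_classify(token_ids):
    """Run a vanilla RNN over a token-id sequence; return class probabilities."""
    h = np.zeros(hidden_dim)
    for t in token_ids:
        # Recurrent update: combine current token embedding with previous state.
        h = np.tanh(E[t] @ W_xh + h @ W_hh)
    logits = h @ W_hy
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

probs = rnn_classify([3, 14, 7, 29])  # a hypothetical 4-token sentence
```

The final hidden state summarizes the whole sequence, which is what makes this architecture a natural fit for text classification and, with a different output head, for forecasting.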


Author(s):  
Mohammed Atef ◽  
Ahmed Khattab ◽  
Essam A. Agamy ◽  
Mohamed M. Khairy

Author(s):  
Imran Qureshi ◽  
Burhanuddin Mohammad ◽  
Mohammed Abdul Habeeb ◽  
Mohammed Ali Shaik

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 59311-59320 ◽  
Author(s):  
Mohsen Dorraki ◽  
Anahita Fouladzadeh ◽  
Stephen J. Salamon ◽  
Andrew Allison ◽  
Brendon J. Coventry ◽  
...  

2020 ◽  
Vol 34 (04) ◽  
pp. 6005-6012 ◽  
Author(s):  
Jayaraman J. Thiagarajan ◽  
Bindya Venkatesh ◽  
Prasanna Sattigeri ◽  
Peer-Timo Bremer

With the rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify the inherent uncertainties. While identifying all sources that account for the stochasticity of models is challenging, it is common to augment predictions with confidence intervals to convey the expected variations in a model's behavior. We require prediction intervals to be well-calibrated, to reflect the true uncertainties, and to be sharp. However, existing techniques for obtaining prediction intervals are known to produce unsatisfactory results on at least one of these criteria. To address this challenge, we develop a novel approach for building calibrated estimators. More specifically, we use separate models for prediction and interval estimation, and pose a bi-level optimization problem that allows the former to leverage estimates from the latter through an uncertainty matching strategy. Using experiments in regression, time-series forecasting, and object localization, we show that our approach achieves significant improvements over existing uncertainty quantification methods, both in terms of model fidelity and calibration error.
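The two evaluation criteria named in this abstract, calibration and sharpness, have standard empirical estimates: coverage probability (the fraction of observations falling inside their intervals, which should match the nominal level) and mean interval width (smaller is sharper). The sketch below computes both on synthetic data; the data generator and the ideal point predictor are assumptions for illustration, and this is an evaluation of intervals, not the authors' bi-level uncertainty-matching method itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synthetic regression task: y = sin(x) + Gaussian noise (std = 1).
n = 5000
x = rng.uniform(-3, 3, n)
y = np.sin(x) + rng.normal(0.0, 1.0, n)

pred = np.sin(x)                      # assumed point predictor (the true mean)
lo, hi = pred - 1.96, pred + 1.96     # nominal 95% interval for unit noise

def coverage(y_obs, lower, upper):
    """Empirical coverage: fraction of observations inside their interval."""
    return float(np.mean((y_obs >= lower) & (y_obs <= upper)))

def sharpness(lower, upper):
    """Mean interval width; smaller values mean sharper intervals."""
    return float(np.mean(upper - lower))

picp = coverage(y, lo, hi)     # should be close to the nominal 0.95
width = sharpness(lo, hi)      # 2 * 1.96 = 3.92 here
```

A well-calibrated but uselessly wide interval and a sharp but under-covering interval both fail one of the two criteria, which is why methods in this area are judged on the pair jointly.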

