AutoSS: A Deep Learning-Based Soft Sensor for Handling Time-Series Input Data

2021 ◽  
Vol 6 (3) ◽  
pp. 6100-6107
Author(s):  
Nicolo Bargellesi ◽  
Alessandro Beghi ◽  
Mirco Rampazzo ◽  
Gian Antonio Susto
Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Nahla F. Omran ◽  
Sara F. Abd-el Ghany ◽  
Hager Saleh ◽  
Abdelmgeid A. Ali ◽  
Abdu Gumaei ◽  
...  

The novel coronavirus disease (COVID-19) is regarded as one of the most imminent disease outbreaks threatening public health on various levels worldwide. Because of the outbreak's unpredictable nature and the pandemic's intensity, people are experiencing depression, anxiety, and other stress reactions. The response to prevent and control the new coronavirus pneumonia has reached a crucial point. It is therefore essential, for safety and prevention purposes, to promptly predict and forecast the virus outbreak during this troublesome time in order to control its mortality. Deep learning models have recently played essential roles in handling time-series data across many applications. This paper presents a comparative study of two deep learning methods for forecasting the confirmed cases and death cases of COVID-19. Long short-term memory (LSTM) and gated recurrent unit (GRU) networks were applied to time-series data from three countries, Egypt, Saudi Arabia, and Kuwait, covering 1/5/2020 to 6/12/2020. The results show that LSTM achieved the best performance on confirmed cases in all three countries, while GRU achieved the best performance on death cases in Egypt and Kuwait.
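The two recurrent cells compared in the abstract above differ mainly in their gating. A minimal numpy sketch of one forward step of each cell (gate ordering and weight shapes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. x: input (d,), h: hidden (n,), c: cell state (n,).
    W: (4n, d), U: (4n, n), b: (4n,), gates stacked as [i, f, o, g]."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:n])        # input gate
    f = sigmoid(z[n:2*n])      # forget gate
    o = sigmoid(z[2*n:3*n])    # output gate
    g = np.tanh(z[3*n:4*n])    # candidate cell state
    c_new = f * c + i * g      # gated memory update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def gru_step(x, h, W, U, b):
    """One GRU step; fewer parameters, no separate cell state.
    W: (3n, d), U: (3n, n), b: (3n,), gates stacked as [z, r, n_hat]."""
    n = h.shape[0]
    zr = sigmoid(W[:2*n] @ x + U[:2*n] @ h + b[:2*n])
    z, r = zr[:n], zr[n:]                  # update and reset gates
    n_hat = np.tanh(W[2*n:] @ x + U[2*n:] @ (r * h) + b[2*n:])
    return (1 - z) * h + z * n_hat
```

For forecasting, such cells are unrolled over a sliding window of past case counts and trained to emit the next value; the library implementations (e.g., in Keras or PyTorch) add batching and learned initialization on top of this core step.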


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3137
Author(s):  
Kevin Fauvel ◽  
Tao Lin ◽  
Véronique Masson ◽  
Élisa Fromont ◽  
Alexandre Termier

Multivariate Time Series (MTS) classification has gained importance over the past decade with the increase in the number of temporal datasets across multiple domains. The current state-of-the-art MTS classifier is a heavyweight deep learning approach, which outperforms the second-best MTS classifier only on large datasets. Moreover, this deep learning approach cannot provide faithful explanations, as it relies on post hoc model-agnostic explainability methods, which could prevent its use in numerous applications. In this paper, we present XCM, an eXplainable Convolutional neural network for MTS classification. XCM is a new compact convolutional neural network that extracts information relative to the observed variables and time directly from the input data. The XCM architecture thus generalizes well on both large and small datasets, while allowing the full exploitation of a faithful post hoc model-specific explainability method (Gradient-weighted Class Activation Mapping) by precisely identifying the observed variables and timestamps of the input data that are important for predictions. We first show that XCM outperforms the state-of-the-art MTS classifiers on both the large and small public UEA datasets. Then, on a synthetic dataset, we illustrate how XCM reconciles performance and explainability, identifying the regions of the input data that are important for predictions more precisely than the current deep learning MTS classifier that also provides faithful explainability. Finally, we show how XCM can outperform the current most accurate state-of-the-art algorithm on a real-world application while enhancing explainability by providing faithful and more informative explanations.
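The core idea named in the abstract, separate feature extraction along the time axis and across the observed variables, plus a Grad-CAM-style importance map, can be sketched in numpy. This is a simplified illustration under assumed shapes, not the XCM architecture itself:

```python
import numpy as np

def conv_time(X, k):
    """Temporal convolution applied independently to each variable.
    X: (T, D) multivariate series, k: (w,) kernel. Returns (T-w+1, D)."""
    T, D = X.shape
    w = k.shape[0]
    out = np.empty((T - w + 1, D))
    for d in range(D):
        # np.convolve with the reversed kernel computes cross-correlation,
        # which is what a conv layer does
        out[:, d] = np.convolve(X[:, d], k[::-1], mode="valid")
    return out

def conv_vars(X, k):
    """Convolution across variables at each timestamp (captures
    cross-variable patterns). X: (T, D), k: (D,). Returns (T,)."""
    return X @ k

def grad_cam_1d(feature_maps, weights):
    """Grad-CAM-style importance: ReLU of the class-weighted sum of
    feature maps. feature_maps: (T, C), weights: (C,). Returns (T,)."""
    return np.maximum(feature_maps @ weights, 0.0)
```

Because each feature map here is tied to a specific variable or timestamp, the resulting importance map can be traced back to the input, which is the property the paper exploits for faithful explanations.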


2018 ◽  
Vol 29 (1) ◽  
pp. 941-958 ◽  
Author(s):  
Yun-Cheng Tsai ◽  
Jun-Hao Chen ◽  
Jun-Jie Wang

Abstract Deep learning is an effective approach to solving image recognition problems. People draw intuitive conclusions from trading charts. This study uses the characteristics of deep learning to train computers to imitate this kind of intuition in the context of trading charts. The main goal of our approach is to combine time-series modeling and convolutional neural networks (CNNs) to build a trading model. We propose three steps to build the trading model. First, we preprocess the input data from quantitative data into images. Second, we use a CNN, a type of deep learning model, to train our trading model. Third, we evaluate the model's performance in terms of classification accuracy. The experimental results show that, if the strategy is clear enough to make the images obviously distinguishable, the CNN model can predict the prices of a financial asset. Hence, our approach can help devise trading strategies and help clients automatically obtain personalized trading strategies.
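The first step above, turning quantitative price data into images a CNN can consume, can be sketched as a simple rasterizer. The resolution and encoding here are illustrative assumptions, not the authors' preprocessing:

```python
import numpy as np

def series_to_image(prices, height=32):
    """Rasterize a price series into a binary (height, T) image with one
    lit pixel per time step, so a CNN can 'look at' the chart."""
    prices = np.asarray(prices, dtype=float)
    lo, hi = prices.min(), prices.max()
    span = hi - lo if hi > lo else 1.0
    # map each price to a pixel row, highest price on top (chart convention)
    rows = ((prices - lo) / span * (height - 1)).round().astype(int)
    img = np.zeros((height, prices.size), dtype=np.uint8)
    img[height - 1 - rows, np.arange(prices.size)] = 1
    return img
```

Windows of such images, labeled by the subsequent price move, then form the training set for the classification step.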


Water ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 575
Author(s):  
Zhenghe Li ◽  
Ling Kang ◽  
Liwei Zhou ◽  
Modi Zhu

Recent advances in deep learning, especially long short-term memory (LSTM) networks, provide useful insights into tackling time-series prediction problems, beyond the development of the time-series models themselves. Runoff forecasting is a time-series prediction problem with a series of past runoff data (water level and discharge series) as inputs and a fixed-length series of future runoff as output. Most previous work has focused on the sufficiency of input data and the structural complexity of deep learning models, while less effort has gone into assessing data quantity or processing the original input data, such as through time-series decomposition, which can better capture the trend of runoff and unleash the effective potential of deep learning. Mutual information and seasonal-trend decomposition are two useful time-series methods for analyzing data quantity and processing original data. Building on a former study, we propose a deep learning model combined with time-series analysis methods for daily runoff prediction in the middle Yangtze River and analyze its feasibility and usability against frequently used counterpart models. This research also explores how data quality affects the performance of the deep learning model. With the time-series methods, we can effectively assess the quality and amount of the data adopted in the deep learning model. Comparison experiments at two different sites imply that the proposed model improves the precision of runoff prediction and is easier and more effective to apply in practice. In short, time-series analysis methods can exert the great potential of deep learning in daily runoff prediction and may unleash the great potential of artificial intelligence in hydrology research.
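The seasonal-trend decomposition mentioned above splits a series into trend, seasonal, and residual parts before the deep model sees it. A minimal additive sketch with a moving-average trend and period-averaged seasonality (a simplification; the study's actual decomposition method may differ):

```python
import numpy as np

def decompose(series, period):
    """Additive seasonal-trend decomposition:
    series = trend + seasonal + residual."""
    x = np.asarray(series, dtype=float)
    # centered moving-average trend; edges padded with edge values
    k = np.ones(period) / period
    pad = period // 2
    xp = np.pad(x, pad, mode="edge")
    trend = np.convolve(xp, k, mode="valid")[: x.size]
    # seasonal component: average the detrended series per phase
    detrended = x - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, x.size // period + 1)[: x.size]
    residual = x - trend - seasonal
    return trend, seasonal, residual
```

Feeding the smoother components to the forecaster separately, rather than the raw noisy series, is the kind of input processing the abstract argues is under-explored.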


2020 ◽  
Author(s):  
Pathikkumar Patel ◽  
Bhargav Lad ◽  
Jinan Fiaidhi

During the last few years, RNN models have been used extensively and have proven well suited to sequence and text data. RNNs have achieved state-of-the-art performance in several applications such as text classification, sequence-to-sequence modelling, and time-series forecasting. In this article, we review different machine learning and deep learning approaches for text data and examine the results obtained with these methods. This work also explores the use of transfer learning in NLP and how it affects model performance on a specific application, sentiment analysis.


2020 ◽  
Vol 16 (6) ◽  
pp. 3721-3730 ◽  
Author(s):  
Xiaofeng Yuan ◽  
Jiao Zhou ◽  
Biao Huang ◽  
Yalin Wang ◽  
Chunhua Yang ◽  
...  

Open Physics ◽  
2021 ◽  
Vol 19 (1) ◽  
pp. 360-374
Author(s):  
Yuan Pei ◽  
Lei Zhenglin ◽  
Zeng Qinghui ◽  
Wu Yixiao ◽  
Lu Yanli ◽  
...  

Abstract The load of a refrigerated showcase is nonlinear, unstable time-series data, for which traditional forecasting methods are not applicable, so deep learning algorithms are introduced to predict it. Based on the combined CEEMD–IPSO–LSTM algorithm, this paper builds a refrigerated display cabinet load forecasting model. Comparison with the forecast results of other models shows that the CEEMD–IPSO–LSTM model has the highest load forecasting accuracy, with a determination coefficient of 0.9105. Unlike the other models, the model constructed in this paper can predict the load of showcases and can provide a reference for energy saving and consumption reduction in display cabinets.
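The determination coefficient reported above (0.9105) is the standard R² score comparing forecasts against observed loads. A minimal sketch of how such a score is computed (not the authors' evaluation code):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2: 1 minus the ratio of residual
    sum of squares to total sum of squares around the mean."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A value of 1.0 means a perfect fit, 0.0 means no better than predicting the mean, so 0.9105 indicates the model explains most of the load variance.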


Author(s):  
Andreas Kanavos ◽  
Fotios Kounelis ◽  
Lazaros Iliadis ◽  
Christos Makris

Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1151
Author(s):  
Carolina Gijón ◽  
Matías Toril ◽  
Salvador Luna-Ramírez ◽  
María Luisa Marí-Altozano ◽  
José María Ruiz-Avilés

Network dimensioning is a critical task in current mobile networks, as any failure in this process leads to degraded user experience or unnecessary upgrades of network resources. For this purpose, radio planning tools often predict monthly busy-hour data traffic to detect capacity bottlenecks in advance. Supervised Learning (SL) arises as a promising solution to improve predictions obtained with legacy approaches. Previous works have shown that deep learning outperforms classical time series analysis when predicting data traffic in cellular networks in the short term (seconds/minutes) and medium term (hours/days) from long historical data series. However, long-term forecasting (a horizon of several months) performed in radio planning tools relies on short and noisy time series, thus requiring a separate analysis. In this work, we present the first study comparing SL and time series analysis approaches to predict monthly busy-hour data traffic on a cell basis in a live LTE network. To this end, an extensive dataset is collected, comprising data traffic per cell for a whole country over 30 months. The considered methods include Random Forest, different Neural Networks, Support Vector Regression, Seasonal Autoregressive Integrated Moving Average, and Additive Holt–Winters. Results show that SL models outperform time series approaches while reducing data storage capacity requirements. More importantly, unlike in short-term and medium-term traffic forecasting, non-deep SL approaches are competitive with deep learning while being more computationally efficient.
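Among the time-series baselines listed, Additive Holt–Winters is simple enough to sketch in full: it maintains level, trend, and seasonal components updated recursively. A minimal numpy version with illustrative smoothing parameters (not the study's configuration):

```python
import numpy as np

def holt_winters_additive(x, period, alpha=0.3, beta=0.1, gamma=0.2, horizon=12):
    """Additive Holt-Winters: exponentially smoothed level, trend, and
    seasonal components; returns point forecasts for `horizon` steps."""
    x = np.asarray(x, dtype=float)
    # initialize components from the first two seasons
    level = x[:period].mean()
    trend = (x[period:2 * period].mean() - x[:period].mean()) / period
    seasonal = list(x[:period] - level)
    for t in range(x.size):
        s = seasonal[t % period]
        new_level = alpha * (x[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        seasonal[t % period] = gamma * (x[t] - new_level) + (1 - gamma) * s
        level = new_level
    # extrapolate level + trend and reuse the matching seasonal slot
    return np.array([level + (h + 1) * trend + seasonal[(x.size + h) % period]
                     for h in range(horizon)])
```

Its appeal for busy-hour traffic is that it needs only a short per-cell history and a handful of parameters, which is exactly the low-storage regime the SL models are compared against.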

