Air Pollution Matter Prediction Using Recurrent Neural Networks with Sequential Data

Author(s):  
Yong Beom Lim ◽  
Ibrahim Aliyu ◽  
Chang Gyoon Lim

Information ◽
2020 ◽  
Vol 11 (8) ◽  
pp. 388


Author(s):
Baocheng Wang ◽  
Wentao Cai

With the rapid growth of big data and internet technology, sequential recommendation has become an important way to help people find items they are potentially interested in. Traditional methods use only recurrent neural networks (RNNs) to process sequential data. Although effective, they may fail to adequately capture both semantic-based preferences and the complex transitions between items. In this paper, we model separated session sequences as session graphs and capture complex transitions with graph neural networks (GNNs). We further link items in interaction sequences to entities in an existing external knowledge base (KB) and integrate the GNN-based recommender with key-value memory networks (KV-MNs) to incorporate KB knowledge. Specifically, we set up a key matrix holding the relation embeddings learned from the KB, each corresponding to an entity attribute, and a set of value matrices storing each user's semantic-based preference for the corresponding attribute. By hybridizing a GNN with a KV-MN, each session is represented as the combination of its current interest (i.e., sequential preference) and its global preference (i.e., semantic-based preference). Extensive experiments on three public real-world datasets show that our method consistently outperforms baseline algorithms.
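
To make the key-value read concrete, here is a minimal NumPy sketch of an attention-based memory read in the spirit of a KV-MN: the keys stand for KB relation embeddings and the values for one user's per-attribute preferences. All names and dimensions are hypothetical illustrations, not specifics from the paper.

```python
import numpy as np

def kv_memory_read(query, keys, values):
    """Attention-weighted read from a key-value memory.

    query : (d,)   session/item embedding used to address the memory
    keys  : (m, d) relation embeddings learned from the KB
    values: (m, d) one user's preference slots, one per KB attribute
    """
    scores = keys @ query                 # similarity to each relation key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over the memory slots
    return weights @ values               # semantic-based preference vector

# Toy usage: 5 hypothetical KB relations, embedding dimension 8.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
semantic_pref = kv_memory_read(q, K, V)
print(semantic_pref.shape)  # (8,)
```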


2019 ◽  
Vol 31 (7) ◽  
pp. 1235-1270 ◽  
Author(s):  
Yong Yu ◽  
Xiaosheng Si ◽  
Changhua Hu ◽  
Jianxun Zhang

Recurrent neural networks (RNNs) have been widely adopted in research areas concerned with sequential data, such as text, audio, and video. However, RNNs built from sigmoid or tanh cells are unable to learn the relevant information in the input when the gap between dependent inputs is large. By introducing gate functions into the cell structure, the long short-term memory (LSTM) network handles the problem of long-term dependencies well. Since its introduction, almost all of the exciting results based on RNNs have been achieved with the LSTM, which has become a focal point of deep learning research. We review the LSTM cell and its variants to explore the learning capacity of the LSTM cell. Furthermore, we divide LSTM networks into two broad categories: LSTM-dominated networks and integrated LSTM networks. In addition, we discuss their various applications. Finally, we present future research directions for LSTM networks.
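
For reference, the gating mechanism at the heart of the review can be written in a few lines. Below is a minimal NumPy sketch of one step of a standard LSTM cell (forget, input, and output gates plus a candidate state); the stacked-parameter layout and sizes are illustrative conventions, not specifics from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell.

    x: (d_in,) input; h, c: (d_h,) hidden and cell states.
    W: (4*d_h, d_in), U: (4*d_h, d_h), b: (4*d_h,) hold the
    parameters of all four gates stacked together.
    """
    z = W @ x + U @ h + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # forget, input, output gates
    g = np.tanh(g)                                # candidate cell state
    c_new = f * c + i * g                         # gated update of the memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy usage with hypothetical sizes.
d_in, d_h = 3, 4
rng = np.random.default_rng(1)
W = rng.normal(size=(4 * d_h, d_in))
U = rng.normal(size=(4 * d_h, d_h))
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.normal(size=d_in), h, c, W, U, b)
```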


Author(s):  
Yu Pan ◽  
Jing Xu ◽  
Maolin Wang ◽  
Jinmian Ye ◽  
Fei Wang ◽  
...  

Recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks, have achieved promising performance in sequential data modeling. The hidden layers in RNNs can be regarded as memory units that help store information from sequential contexts. However, for high-dimensional input data such as video and text, the input-to-hidden linear transformation incurs high memory usage and huge computational cost, making RNNs very difficult to train. To address this challenge, we propose a novel compact LSTM model, named TR-LSTM, which uses a low-rank tensor ring decomposition (TRD) to reformulate the input-to-hidden transformation. Compared with other tensor decomposition methods, TR-LSTM is more stable. In addition, TR-LSTM supports end-to-end training and provides a fundamental building block for RNNs that handle large inputs. Experiments on real-world action recognition datasets demonstrate the promising performance of TR-LSTM compared with the tensor-train LSTM and other state-of-the-art competitors.
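
The core idea, compressing a large weight matrix by representing it as a ring of small tensor cores, can be sketched briefly. The following NumPy snippet reconstructs a dense matrix from hypothetical tensor-ring cores and compares parameter counts; it illustrates the decomposition itself (with an assumed mode factorization), not the paper's full TR-LSTM layer.

```python
import numpy as np

def tr_reconstruct(cores):
    """Rebuild a full tensor from tensor-ring cores.

    cores[k] has shape (r_k, n_k, r_{k+1}), with the last rank
    wrapping around to the first (hence the "ring").
    """
    t = cores[0]
    for core in cores[1:]:
        # Contract the running tensor with the next core's left rank.
        t = np.einsum('a...b,bnc->a...nc', t, core)
    # Close the ring: trace over the matching boundary ranks.
    return np.einsum('a...a->...', t)

# Hypothetical example: a 64x64 input-to-hidden matrix viewed as an
# order-4 tensor with mode sizes (8, 8, 8, 8) and TR-rank 3 throughout.
rng = np.random.default_rng(2)
rank, modes = 3, [8, 8, 8, 8]
cores = [rng.normal(size=(rank, n, rank)) for n in modes]
W_full = tr_reconstruct(cores).reshape(64, 64)
n_tr = sum(c.size for c in cores)
print(W_full.shape, n_tr, 64 * 64)  # (64, 64) 288 4096 -- ~14x fewer parameters
```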


2020 ◽  
Vol 34 (04) ◽  
pp. 5101-5108
Author(s):  
Xiao Ma ◽  
Peter Karkus ◽  
David Hsu ◽  
Wee Sun Lee

Recurrent neural networks (RNNs) have been extraordinarily successful for prediction with sequential data. To tackle highly variable and multi-modal real-world data, we introduce particle filter recurrent neural networks (PF-RNNs), a new RNN family that explicitly models uncertainty in its internal structure: while a standard RNN relies on a long, deterministic latent state vector, a PF-RNN maintains a latent state distribution approximated as a set of particles. For effective learning, we provide a fully differentiable particle filter algorithm that updates the PF-RNN latent state distribution according to Bayes' rule. Experiments demonstrate that the proposed PF-RNNs outperform the corresponding standard gated RNNs on a synthetic robot localization dataset and on 10 real-world sequence prediction datasets for text classification, stock price prediction, and more.
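
A rough NumPy sketch of the particle-filter belief update is below. The `transition` and `obs_loglik` callables are hypothetical stand-ins for the learned networks, and the hard resampling with an importance-weight correction only approximates the paper's soft, fully differentiable resampling, which would be implemented in an autodiff framework.

```python
import numpy as np

def pf_rnn_step(particles, log_weights, transition, obs_loglik,
                alpha=0.5, rng=None):
    """One belief update over a set of latent-state particles.

    particles: (K, d) latent states; log_weights: (K,) log weights.
    transition propagates particles; obs_loglik scores them against
    the current observation for the Bayes-rule reweighting.
    """
    if rng is None:
        rng = np.random.default_rng()
    particles = transition(particles)                  # propagate particles
    log_weights = log_weights + obs_loglik(particles)  # Bayes-rule update
    log_weights -= np.logaddexp.reduce(log_weights)    # renormalize
    # Resample from a mixture of the posterior and a uniform, then
    # correct the weights with importance ratios to stay unbiased.
    K = len(particles)
    probs = alpha * np.exp(log_weights) + (1 - alpha) / K
    idx = rng.choice(K, size=K, p=probs)
    new_log_w = log_weights[idx] - np.log(probs[idx])
    new_log_w -= np.logaddexp.reduce(new_log_w)
    return particles[idx], new_log_w

# Toy usage with placeholder transition and observation models.
K, d = 32, 4
rng = np.random.default_rng(4)
parts = rng.normal(size=(K, d))
logw = np.full(K, -np.log(K))
trans = lambda p: np.tanh(p) + 0.1 * rng.normal(size=p.shape)
loglik = lambda p: -0.5 * np.sum(p ** 2, axis=1)  # stand-in Gaussian score
parts, logw = pf_rnn_step(parts, logw, trans, loglik, rng=rng)
belief = np.exp(logw) @ parts  # weighted mean of the latent distribution
```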


2021 ◽  
Vol 14 (2) ◽  
pp. 245-259
Author(s):  
Daniele Di Sarli ◽  
Claudio Gallicchio ◽  
Alessio Micheli

Recurrent neural networks (RNNs) are a natural paradigm for modeling sequential data such as text written in natural language. Indeed, RNNs and their variants have long been the architecture of choice in many applications; in practice, however, they require elaborate architectures (such as gating mechanisms) and computationally heavy training. In this paper we ask whether it is possible to generate sentence embeddings from completely untrained recurrent dynamics, on top of which a simple learning algorithm is applied for text classification. This would yield extremely efficient models in terms of training time. We investigate the extent to which this approach can be used by analyzing results on different tasks. Finally, we show that, within certain limits, it is possible to build extremely efficient text classification models that remain competitive in accuracy with state-of-the-art reference models.
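
The untrained-dynamics idea is close in spirit to reservoir computing; a minimal sketch follows. The random weights are fixed (never trained), and only a simple classifier on the resulting embedding would be fit to labels. The leak rate, scaling constants, and sizes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def reservoir_embedding(token_ids, emb, W_in, W_rec, leak=0.5):
    """Embed a token sequence with fixed, untrained recurrent dynamics.

    W_in and W_rec are random and never trained; only a simple
    classifier on top of the returned state would be fit to labels.
    """
    h = np.zeros(W_rec.shape[0])
    for t in token_ids:
        pre = W_in @ emb[t] + W_rec @ h
        h = (1 - leak) * h + leak * np.tanh(pre)  # leaky-integrator update
    return h

# Hypothetical sizes and scalings.
rng = np.random.default_rng(5)
vocab, d_emb, d_res = 100, 16, 64
emb = rng.normal(size=(vocab, d_emb))
W_in = 0.5 * rng.normal(size=(d_res, d_emb))
W_rec = rng.normal(size=(d_res, d_res))
W_rec *= 0.9 / np.abs(np.linalg.eigvals(W_rec)).max()  # spectral radius < 1
z = reservoir_embedding([3, 17, 42, 7], emb, W_in, W_rec)  # sentence vector
```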


2018 ◽  
Vol 30 (11) ◽  
pp. 2855-2881 ◽  
Author(s):  
Yingyi Chen ◽  
Qianqian Cheng ◽  
Yanjun Cheng ◽  
Hao Yang ◽  
Huihui Yu

Analysis and forecasting of sequential data are key problems in many domains of engineering and science and have attracted researchers from different communities. For predicting future events from time series, recurrent neural networks (RNNs) are an effective tool: they retain the learning ability of feedforward neural networks while extending their expressive power through dynamic equations, and they can model a variety of computational structures. Researchers have developed RNNs with many different architectures and topologies. To summarize the work on RNNs in forecasting and to provide guidelines for modeling and novel applications in future studies, this review focuses on applications of RNNs to time series forecasting of environmental factors. We present the structure, processing flow, and advantages of RNNs; analyze applications of various RNNs to time series forecasting; discuss the limitations and challenges of RNN-based applications and future research directions; and, finally, summarize applications of RNNs in forecasting.
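
As a shape-level illustration of the dynamic equations involved, here is a tiny NumPy sketch of one-step-ahead forecasting with a vanilla RNN. The weights are random placeholders (in practice they would be fit by backpropagation through time on historical data), and all names and sizes are hypothetical.

```python
import numpy as np

def rnn_forecast(series, W_x, W_h, w_out):
    """One-step-ahead forecast with a vanilla RNN.

    Rolls the dynamic equation h_t = tanh(W_x * x_t + W_h @ h_{t-1})
    over the history, then reads out y_hat = w_out . h_T. The weights
    here are random placeholders, not trained parameters.
    """
    h = np.zeros(W_h.shape[0])
    for x in series:
        h = np.tanh(W_x * x + W_h @ h)  # x is scalar, W_x has shape (d,)
    return float(w_out @ h)

rng = np.random.default_rng(6)
d = 8
W_x = rng.normal(size=d)
W_h = 0.3 * rng.normal(size=(d, d))
w_out = rng.normal(size=d)
history = np.sin(np.linspace(0, 6, 50))          # toy environmental signal
x_next = rnn_forecast(history, W_x, W_h, w_out)  # forecast of the next value
```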


Author(s):  
Ching-Fang Lee ◽  
Chao-Tung Yang ◽  
Endah Kristiani ◽  
Yu-Tse Tsan ◽  
Wei-Cheng Chan ◽  
...  
