Bidirectional Long Short-Term Memory-Based Spatio-Temporal in Community Question Answering

Author(s):
Nivid Limbasiya
Prateek Agrawal

2021
Author(s):
Seyed Vahid Moravvej
Mohammad Javad Maleki Kahaki
Moein Salimi Sartakhti
Abdolreza Mirzaei

2021
Author(s):
P. Jiang
I. Bychkov
J. Liu
A. Hmelnov
Forecasting of air pollutant concentration, which is influenced by air pollution accumulation, traffic flow and industrial emissions, has attracted extensive attention for decades. In this paper, we propose a spatio-temporal attention convolutional long short-term memory neural network (Attention-CNN-LSTM) for air pollutant concentration forecasting. Firstly, we analyze the Granger causalities between different stations and establish a hyperparametric Gaussian vector weight function to determine spatial autocorrelation variables, which are used as part of the input features. Secondly, a convolutional neural network (CNN) is employed to extract the temporal dependence and spatial correlation of the input, while feature maps and channels are weighted by an attention mechanism to improve the effectiveness of the features. Finally, a deep long short-term memory (LSTM) based time series predictor is established to learn the long-term and short-term dependence of pollutant concentration. To reduce the effect of diverse complex factors on the LSTM, inherent features are extracted from historical air pollutant concentration data, and meteorological data and timestamp information are incorporated into the proposed model. Extensive experiments were performed with Attention-CNN-LSTM, autoregressive integrated moving average (ARIMA), support vector regression (SVR), and traditional LSTM and CNN models. The results demonstrated the feasibility and practicability of Attention-CNN-LSTM in estimating CO and NO concentrations.
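The pipeline outlined above (a 1-D CNN over the input window, channel-wise attention over the resulting feature maps, and a deep LSTM predictor) could be sketched roughly as below. The layer sizes, the squeeze-and-excitation-style channel attention, and the (batch, time, features) input layout are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class AttentionCNNLSTM(nn.Module):
    """Rough sketch: Conv1d feature extractor, channel attention, LSTM predictor."""
    def __init__(self, n_features, conv_channels=32, lstm_hidden=64):
        super().__init__()
        # 1-D convolution over the time axis extracts local temporal/spatial patterns
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        # Simple channel attention (squeeze-and-excitation style) reweights the feature maps
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(conv_channels, conv_channels),
            nn.Sigmoid(),
        )
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)  # next-step pollutant concentration

    def forward(self, x):
        # x: (batch, time, features); Conv1d expects (batch, features, time)
        h = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, channels, time)
        w = self.attn(h).unsqueeze(-1)                # (batch, channels, 1)
        h = (h * w).transpose(1, 2)                   # re-weighted, back to (batch, time, channels)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])                  # prediction from the last time step

# Example: a 24-hour window of 10 assumed input features (station readings,
# spatial-autocorrelation variables, meteorological data, timestamp encodings)
model = AttentionCNNLSTM(n_features=10)
y_hat = model(torch.randn(8, 24, 10))                 # shape (8, 1)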


Author(s):  
Peng Wang
Qi Wu
Chunhua Shen
Anthony Dick
Anton van den Hengel

We describe a method for visual question answering which is capable of reasoning about an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can explain the reasoning by which it developed its answer. It is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in testing. We also provide a dataset and a protocol by which to evaluate general visual question answering methods.


Author(s):  
Weijie Yang
Hong Ma

In this paper, for open-domain Chinese automatic question answering, the correlation between questions and answers is considered in addition to the traditional association between questions. The cosine similarity between questions and answers is used as their semantic similarity. A bidirectional long short-term memory network (BiLSTM) is applied between question and question, and between answer and answer, to capture the association between their contexts, and an attention mechanism is added to relate questions and answers. Finally, experimental verification shows that the accuracy of automatic question answering with the proposed method reaches 70%.
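As an illustration of the similarity step described above, the following minimal sketch encodes a question and a candidate answer with a shared BiLSTM and scores them with cosine similarity. The vocabulary size, embedding and hidden dimensions, and mean pooling are assumptions made for the example, and the attention mechanism from the paper is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMEncoder(nn.Module):
    """Encodes a token-id sequence into a single vector with a bidirectional LSTM."""
    def __init__(self, vocab_size, emb_dim=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        out, _ = self.bilstm(self.embed(token_ids))   # (batch, seq, 2*hidden)
        return out.mean(dim=1)                        # mean-pool over the sequence

encoder = BiLSTMEncoder(vocab_size=5000)              # toy vocabulary size
question = torch.randint(0, 5000, (1, 12))            # toy token ids
answer = torch.randint(0, 5000, (1, 30))
# Cosine similarity of the encoded texts serves as the semantic-similarity score
score = F.cosine_similarity(encoder(question), encoder(answer))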


The objective is to develop a time-series image representation of skeletal action data and use it for recognition through a convolutional long short-term memory deep learning framework. Consequently, Kinect-captured human skeletal data are transformed into a Joint Change Distance Image (JCDI) descriptor which maps the time changes in the joints. Subsequently, the JCDIs are decoded spatially with a convolutional neural network (CNN). Temporal decomposition is executed with a long short-term memory (LSTM) network on the data changes along the x, y and z position vectors of the skeleton. We propose a combination of CNN and LSTM which maps the spatio-temporal information to generate generalized time-series features for recognition. Finally, scores from the spatially vibrant CNNs and the temporally sound LSTMs are fused for action recognition. Publicly available action datasets such as NTU RGB+D, MSR Action, UTKinect and G3D were used as test inputs for experimentation. The results showed better performance, owing to spatio-temporal modeling at both the representation and recognition stages, when compared to other state-of-the-art methods.
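The abstract does not give the exact JCDI formula, so the sketch below shows one plausible JCDI-style construction, purely as an illustrative assumption: the per-joint Euclidean displacement between consecutive frames, arranged as a joints-by-time grey-scale map that a CNN can consume.

import numpy as np

def jcdi_descriptor(skeleton):
    """Hypothetical JCDI-style image (an assumption, not the paper's exact descriptor):
    per-joint Euclidean displacement between consecutive frames, laid out as a
    (joints x time) grey-scale map.

    skeleton: array of shape (frames, joints, 3) with x, y, z joint positions.
    """
    diffs = np.diff(skeleton, axis=0)                  # frame-to-frame change, (frames-1, joints, 3)
    dist = np.linalg.norm(diffs, axis=-1)              # Euclidean change per joint, (frames-1, joints)
    image = dist.T                                     # rows = joints, columns = time
    # Normalise to [0, 255] so the map can be fed to a CNN as an image
    image = 255.0 * (image - image.min()) / (image.max() - image.min() + 1e-8)
    return image.astype(np.uint8)

# Example: 60 frames of a 25-joint Kinect skeleton
jcdi = jcdi_descriptor(np.random.rand(60, 25, 3))      # shape (25, 59)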

