DAN: Deep Attention Neural Network for News Recommendation

Author(s):  
Qiannan Zhu ◽  
Xiaofei Zhou ◽  
Zeliang Song ◽  
Jianlong Tan ◽  
Li Guo

With the rapid explosion of online news, making personalized news recommendations for users becomes an increasingly challenging problem. Many existing recommendation methods that treat the recommendation procedure as a static process have achieved good performance. However, they usually fail to handle the dynamic diversity of news and user interests, or ignore the sequential information in a user's click history. In this paper, taking full advantage of the convolutional neural network (CNN), the recurrent neural network (RNN), and the attention mechanism, we propose a deep attention neural network, DAN, for news recommendation. Our DAN model uses an attention-based parallel CNN to aggregate the user's interest features and an attention-based RNN to capture richer hidden sequential features of the user's clicks, and combines these features for news recommendation. We conduct experiments on real-world news data sets, and the experimental results demonstrate the superiority and effectiveness of our proposed DAN model.
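
A minimal PyTorch sketch of a DAN-style recommender, assuming clicked news items arrive as pre-computed embeddings; the layer sizes, kernel sizes, and scoring function are illustrative placeholders, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DANSketch(nn.Module):
    def __init__(self, emb_dim=128, hidden=128):
        super().__init__()
        # Parallel CNN branches with different kernel sizes over the click history.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, hidden, kernel_size=k, padding=k // 2) for k in (1, 3, 5)]
        )
        self.cnn_attn = nn.Linear(hidden, 1)      # attention over CNN branch features
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.rnn_attn = nn.Linear(hidden, 1)      # attention over RNN hidden states
        self.score = nn.Bilinear(2 * hidden, emb_dim, 1)

    def forward(self, clicked, candidate):
        # clicked: (B, T, emb_dim) embeddings of previously clicked news
        # candidate: (B, emb_dim) embedding of the candidate news item
        x = clicked.transpose(1, 2)               # (B, emb_dim, T) for Conv1d
        cnn_feats = torch.stack([F.relu(c(x)).mean(dim=2) for c in self.convs], dim=1)
        a = F.softmax(self.cnn_attn(cnn_feats), dim=1)
        interest = (a * cnn_feats).sum(dim=1)     # aggregated user-interest features
        states, _ = self.rnn(clicked)             # (B, T, hidden)
        b = F.softmax(self.rnn_attn(states), dim=1)
        seq = (b * states).sum(dim=1)             # attention-pooled sequential features
        user = torch.cat([interest, seq], dim=1)
        return self.score(user, candidate).squeeze(-1)  # click-score logit per candidate
```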

2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Yan Chu ◽  
Xiao Yue ◽  
Lei Yu ◽  
Mikhailov Sergei ◽  
Zhengkui Wang

Automatically captioning images with proper descriptions has become an interesting and challenging problem. In this paper, we present a joint model, AICRL, which performs automatic image captioning based on ResNet50 and LSTM with soft attention. AICRL consists of one encoder and one decoder. The encoder adopts ResNet50, a convolutional neural network, to create an extensive representation of the given image by embedding it into a fixed-length vector. The decoder is designed with LSTM, a recurrent neural network, and a soft attention mechanism that selectively focuses on certain parts of the image to predict each word of the caption. We trained AICRL on the large MS COCO 2014 dataset to maximize the likelihood of the target description sentence given the training images, and evaluated it with metrics such as BLEU, METEOR, and CIDEr. Our experimental results indicate that AICRL is effective at generating captions for images.
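
A rough sketch of an AICRL-style encoder-decoder in PyTorch, assuming a torchvision ResNet50 backbone and a single-layer LSTMCell decoder with additive soft attention over the spatial feature grid; the vocabulary size, dimensions, and teacher-forced training loop are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50()                    # pretrained weights can be loaded separately
        self.features = nn.Sequential(*list(backbone.children())[:-2])

    def forward(self, images):                          # images: (B, 3, 224, 224)
        f = self.features(images)                       # (B, 2048, 7, 7) spatial feature map
        return f.flatten(2).transpose(1, 2)             # (B, 49, 2048) grid of region features

class DecoderLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden=512, feat_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attn = nn.Linear(hidden + feat_dim, 1)     # additive soft attention scorer
        self.lstm = nn.LSTMCell(emb_dim + feat_dim, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, feats, captions):                 # feats: (B, 49, 2048); captions: (B, L)
        B, L = captions.shape
        h = feats.new_zeros(B, self.lstm.hidden_size)
        c = feats.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(L):
            # Attend over the 49 spatial locations given the current hidden state.
            query = h.unsqueeze(1).expand(-1, feats.size(1), -1)
            scores = self.attn(torch.cat([query, feats], dim=2))
            context = (torch.softmax(scores, dim=1) * feats).sum(dim=1)
            # Teacher forcing: feed the ground-truth word at step t during training.
            h, c = self.lstm(torch.cat([self.embed(captions[:, t]), context], dim=1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)               # (B, L, vocab_size) word logits
```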


2019 ◽  
Vol 11 (12) ◽  
pp. 247
Author(s):  
Xin Zhou ◽  
Peixin Dong ◽  
Jianping Xing ◽  
Peijia Sun

Accurate prediction of bus arrival times is a challenging problem in public transportation. Previous studies have shown that incorporating more heterogeneous measurements improves prediction accuracy, which raises the question of which additional factors should be added to the prediction model. Traditional prediction methods mainly use the arrival time and the distance between stations, but do not make full use of dynamic factors such as passenger number, dwell time, and bus driving efficiency. We propose a novel approach, based on a Recurrent Neural Network (RNN), that takes full advantage of these dynamic factors. The experimental results indicate that a variety of prediction algorithms (such as the Support Vector Machine, Kalman filter, Multilayer Perceptron, and RNN) show significantly improved performance after incorporating dynamic factors. Further, we introduce an RNN with an attention mechanism to adaptively select the most relevant input factors. Experiments demonstrate that, when the input factors are heterogeneous, the RNN with an attention mechanism achieves better prediction accuracy than the RNN without one. The experimental results on a data set provided by the Jinan Public Transportation Corporation show the superior performance of our approach.
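
An illustrative sketch of feature-level attention over heterogeneous dynamic factors feeding an RNN, in PyTorch; the factor names, counts, and the gating form are assumptions for illustration, not the paper's exact inputs or architecture.

```python
import torch
import torch.nn as nn

class AttentiveArrivalRNN(nn.Module):
    def __init__(self, n_factors=5, hidden=64):
        super().__init__()
        self.factor_attn = nn.Linear(n_factors, n_factors)   # per-step weights over input factors
        self.rnn = nn.GRU(n_factors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                      # regresses the arrival time

    def forward(self, x):
        # x: (B, T, n_factors), e.g. distance to next stop, dwell time, passenger
        # count, driving efficiency, and previous travel time at each stop.
        weights = torch.softmax(self.factor_attn(x), dim=-1)  # adaptively re-weight factors
        out, _ = self.rnn(weights * x)                        # RNN over re-weighted inputs
        return self.head(out[:, -1])                          # (B, 1) predicted arrival time

# Example usage with random data: 5 factors observed across 10 stops.
model = AttentiveArrivalRNN()
pred = model(torch.randn(8, 10, 5))
```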


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Haiyan Wang ◽  
Kaiming Yao ◽  
Jian Luo ◽  
Yi Lin

Sequential recommendation systems have received widespread attention due to their good performance in alleviating information overload. However, most sequential recommendation methods assume that a user's preferences depend only on specific items in the current sequence and do not consider the user's implicit interests. In addition, most previous works focus mainly on exploiting relationships between items in the sequence and seldom quantify the degree of preference for items implied by the user's different behaviors. To address these two problems, we propose an implicit preference-aware sequential recommendation method based on a knowledge graph (IPAKG). First, the method introduces a knowledge graph to obtain the user's implicit preference representations. Second, we integrate a recurrent neural network and an attention mechanism to capture the user's evolving interests and the relationships between different items in the sequence. Third, we introduce the concept of behavior intensity and design a behavior activation unit to exploit the degree of preference for items implied by the user's different behaviors. Through this activation unit, the user's preferences for different items are further quantified. Finally, we conduct experiments on an Amazon electronics dataset and a Tmall dataset to evaluate the performance of our method. Experimental results demonstrate that our proposed method outperforms the baseline methods.
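
A minimal sketch of a behavior activation unit in the spirit of IPAKG, assuming knowledge-graph-enriched item embeddings and learned behavior-type embeddings (e.g., view, click, add-to-cart, purchase); the sigmoid gating form is an illustration of quantifying behavior intensity, not the authors' exact design.

```python
import torch
import torch.nn as nn

class BehaviorActivationUnit(nn.Module):
    def __init__(self, emb_dim=64, n_behaviors=4):
        super().__init__()
        self.behavior_emb = nn.Embedding(n_behaviors, emb_dim)   # one vector per behavior type
        self.gate = nn.Sequential(
            nn.Linear(2 * emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, 1), nn.Sigmoid()                  # behavior intensity in (0, 1)
        )

    def forward(self, item_emb, behavior_ids):
        # item_emb: (B, T, emb_dim) KG-enriched embeddings of items in the sequence
        # behavior_ids: (B, T) integer codes of the behavior taken on each item
        b = self.behavior_emb(behavior_ids)
        intensity = self.gate(torch.cat([item_emb, b], dim=-1))  # (B, T, 1) preference degree
        return intensity * item_emb                              # preference-weighted item features
```

The weighted item features would then feed the RNN-plus-attention sequence encoder described in the abstract, so that items associated with stronger behaviors contribute more to the user representation.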


2020 ◽  
Vol 131 ◽  
pp. 291-299 ◽  
Author(s):  
Hang Su ◽  
Yingbai Hu ◽  
Hamid Reza Karimi ◽  
Alois Knoll ◽  
Giancarlo Ferrigno ◽  
...  

2017 ◽  
Vol 29 (4) ◽  
pp. 685-696 ◽  
Author(s):  
Adi Sujiwo ◽  
Eijiro Takeuchi ◽  
Luis Yoichi Morales ◽  
Naoki Akai ◽  
Hatem Darweesh ◽  
...  

This paper describes our approach to robust monocular camera metric localization in the dynamic environments of Tsukuba Challenge 2016. We address two issues related to vision-based navigation. First, we improved coverage by building a custom vocabulary from the scene and improving the place recognition routine, which is key for global localization. Second, we established the possibility of lifelong localization by reusing the previous year's map. Experimental results show that localization coverage was higher than 90% for six data sets taken in different years, while the average localization error was under 0.2 m. Finally, the average coverage for data sets tested against maps taken in different years was 75%.
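
A rough illustration of building a scene-specific visual vocabulary for place recognition, using ORB descriptors and k-means clustering as generic stand-ins; the paper's actual vocabulary tool and place recognition pipeline are not specified here, so treat this purely as a sketch of the bag-of-visual-words idea.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_vocabulary(image_paths, n_words=1000):
    """Cluster local descriptors from the target scene into visual words."""
    orb = cv2.ORB_create(nfeatures=1000)
    descs = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = orb.detectAndCompute(img, None)
        if des is not None:
            descs.append(des)
    all_desc = np.vstack(descs).astype(np.float32)
    return MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(all_desc)

def bow_histogram(vocab, des):
    """Convert one image's descriptors into a normalized bag-of-words vector."""
    words = vocab.predict(des.astype(np.float32))
    hist, _ = np.histogram(words, bins=vocab.n_clusters, range=(0, vocab.n_clusters))
    return hist / max(hist.sum(), 1)
```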

