Leveraging Title-Abstract Attentive Semantics for Paper Recommendation

2020 · Vol 34 (01) · pp. 67-74
Author(s): Guibing Guo, Bowei Chen, Xiaoyan Zhang, Zhirong Liu, Zhenhua Dong, ...

Paper recommendation aims to provide users with personalized papers of interest. However, most existing approaches treat the title and abstract equally as input when learning the representation of a paper, ignoring the semantic relationship between them. In this paper, we regard the abstract as a sequence of sentences and propose a two-level attentive neural network to capture: (1) how strongly each word within a sentence is semantically close to the words of the title; and (2) the relevance of each sentence in the abstract to the title, which is often a good summary of the abstract. Specifically, we propose a Long Short-Term Memory (LSTM) network with attention to learn sentence representations, and integrate a Gated Recurrent Unit (GRU) network with a memory network to learn the long-term sequential sentence patterns of interacted papers for both user and item (paper) modeling. We conduct extensive experiments on two real-world datasets and show that our approach outperforms state-of-the-art approaches in terms of accuracy.
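As a rough illustration of the two-level idea, the sketch below (PyTorch; module and variable names are invented, and plain dot-product attention stands in for whatever scoring function the paper actually uses) encodes each abstract sentence with a word-level LSTM, attends to words using a title vector, then attends over the resulting sentence vectors. The GRU-plus-memory-network user/item modeling is omitted.

```python
# Minimal sketch of title-conditioned two-level attention (not the authors' code).
import torch
import torch.nn as nn

class TwoLevelAttentiveEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.title_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.word_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, title_ids, sent_ids):
        # title_ids: (title_len,); sent_ids: (num_sents, sent_len)
        _, (t_h, _) = self.title_lstm(self.embed(title_ids).unsqueeze(0))
        title_vec = t_h[-1].squeeze(0)                       # (H,) title summary
        word_h, _ = self.word_lstm(self.embed(sent_ids))     # (S, L, H)
        # Level 1: word attention -- score each word against the title vector.
        w_scores = (word_h * title_vec.view(1, 1, -1)).sum(-1, keepdim=True)
        w_alpha = torch.softmax(w_scores, dim=1)             # (S, L, 1)
        sent_vecs = (w_alpha * word_h).sum(dim=1)            # (S, H)
        # Level 2: sentence attention -- weight sentences by title relevance.
        s_alpha = torch.softmax(sent_vecs @ title_vec, dim=0).unsqueeze(-1)
        return (s_alpha * sent_vecs).sum(dim=0)              # paper vector (H,)
```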

2006 · Vol 15 (04) · pp. 623-650
Author(s): Judy A. Franklin

Recurrent neural networks have been deployed as models for learning musical processes by computational scientists who study dynamic systems. Over time, more intricate music has been learned as the state of the art in recurrent networks has improved. One particular recurrent network, the Long Short-Term Memory (LSTM) network, shows promise for learning long songs and generating new ones. We experiment with a module containing two inter-recurrent LSTM networks that cooperatively learn several human melodies, based on the songs' harmonic structures and on the feedback inherent in the network. We show that these networks can learn to reproduce four human melodies. We then present new harmonizations as input in order to generate new songs. We describe the reharmonizations and show the new melodies that result. We also present a hierarchical structure that uses reinforcement learning to choose among LSTM modules during the course of melody generation.
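A minimal sketch of the harmony-conditioned setup described here might look as follows; the module name, vocabulary sizes, and the exact way chords are fed in are assumptions, not Franklin's architecture. At each step the network sees the current chord and the previous melody note, and is trained to predict the next note; reharmonization then amounts to feeding a new chord sequence at generation time.

```python
# Sketch of harmony-conditioned melody learning with an LSTM (assumed setup).
import torch
import torch.nn as nn

class ChordConditionedMelodyLSTM(nn.Module):
    def __init__(self, n_notes=48, n_chords=24, hid=64):
        super().__init__()
        self.note_emb = nn.Embedding(n_notes, 32)
        self.chord_emb = nn.Embedding(n_chords, 16)
        self.lstm = nn.LSTM(32 + 16, hid, batch_first=True)
        self.out = nn.Linear(hid, n_notes)

    def forward(self, prev_notes, chords, state=None):
        # prev_notes, chords: (batch, time) integer indices
        x = torch.cat([self.note_emb(prev_notes),
                       self.chord_emb(chords)], dim=-1)
        h, state = self.lstm(x, state)
        return self.out(h), state   # per-step logits over the next note

# Generation under a new harmonization: feed the new chord sequence and
# sample notes autoregressively from the trained model.
```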


2019 · Vol 239 · pp. 181-191
Author(s): Shuang Han, Yan-hui Qiao, Jie Yan, Yong-qian Liu, Li Li, ...

2021 · Vol 11 (20) · pp. 9708
Author(s): Xiaole Cheng, Te Han, Peilin Yang, Xugang Zhang

As an important input to fatigue analysis and life prediction, the load spectrum is widely used in various engineering fields, and the extrapolation of load samples is a key step in compiling it, so selecting an appropriate extrapolation method is of great significance. This paper proposes a load extrapolation method based on the long short-term memory (LSTM) network, introduces the basic principle of the method, and applies it to a data set collected from a 5 MN metal extruder in its working state. A comparison between the extrapolated load data and the actual load shows that the trend of the extrapolated data is consistent with that of the original data. In addition, the method is compared with rainflow extrapolation based on statistical distributions. A comparison of the short-term load spectra compiled by the two extrapolation methods shows that the LSTM-based method better realizes load prediction and improves the compilation of the load spectrum.
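The extrapolation step can be pictured as a next-sample predictor rolled forward on its own outputs. The sketch below is an assumed formulation, not the paper's code; the window length, model size, and function names are illustrative.

```python
# Sketch: LSTM-based load extrapolation as autoregressive next-sample prediction.
import numpy as np
import torch
import torch.nn as nn

class LoadExtrapolator(nn.Module):
    def __init__(self, hid=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hid, batch_first=True)
        self.head = nn.Linear(hid, 1)

    def forward(self, x):            # x: (batch, window, 1)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])   # predicted next load sample, (batch, 1)

def extrapolate(model, history, n_steps, window=64):
    """Append n_steps predicted samples to a 1-D load history."""
    seq = list(history)
    model.eval()
    with torch.no_grad():
        for _ in range(n_steps):
            x = torch.tensor(seq[-window:], dtype=torch.float32).view(1, -1, 1)
            seq.append(model(x).item())   # feed predictions back in
    return np.asarray(seq)
```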


Author(s): Xiangyang Li, Shuqiang Jiang, Jungong Han

Dense captioning is a challenging task that not only detects visual elements in images but also generates natural language sentences to describe them. Previous approaches do not leverage object information in images for this task, yet objects provide valuable cues for predicting the locations of caption regions, since caption regions often highly overlap with objects (i.e., caption regions are usually parts of objects or combinations of them). Objects also provide important information for describing a target caption region, as the corresponding description not only depicts the region's properties but also involves its interactions with objects in the image. In this work, we propose a novel scheme with an object context encoding Long Short-Term Memory (LSTM) network that automatically learns complementary object context for each caption region, transferring knowledge from objects to caption regions. All contextual objects are arranged as a sequence and progressively fed into the context encoding module to obtain context features. Both the learned object context features and the region features are then used to predict bounding box offsets and generate descriptions. The context learning procedure is carried out jointly with the optimization of both location prediction and caption generation, enabling the object context encoding LSTM to capture and aggregate useful object context. Experiments on benchmark datasets demonstrate the superiority of our approach over state-of-the-art methods.
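The object-context encoding can be sketched as follows. Feature dimensions, module names, and the fusion layers are assumptions, but the flow matches the description: contextual objects are fed as a sequence into an LSTM, and the aggregated context joins the region feature for box refinement and captioning.

```python
# Sketch of object-context encoding for dense captioning (assumed shapes/names).
import torch
import torch.nn as nn

class ObjectContextEncoder(nn.Module):
    def __init__(self, feat_dim=512, hid=512):
        super().__init__()
        self.ctx_lstm = nn.LSTM(feat_dim, hid, batch_first=True)
        self.bbox_head = nn.Linear(feat_dim + hid, 4)   # bounding box offsets
        self.fuse = nn.Linear(feat_dim + hid, hid)      # input to the captioner

    def forward(self, region_feat, object_feats):
        # region_feat: (B, feat_dim); object_feats: (B, N, feat_dim),
        # the N contextual objects arranged as a sequence.
        _, (h, _) = self.ctx_lstm(object_feats)
        ctx = h[-1]                                     # (B, hid) aggregated context
        fused = torch.cat([region_feat, ctx], dim=-1)
        # Both heads consume the fused feature, so context learning is
        # optimized jointly with location prediction and captioning.
        return self.bbox_head(fused), torch.tanh(self.fuse(fused))
```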


Author(s): Sen Su, Ningning Jia, Xiang Cheng, Shuguang Zhu, Ruiping Li

In this paper, we present an encoder-decoder model for distant supervised relation extraction. Given an entity pair and its sentence bag as input, the encoder component employs a convolutional neural network to extract features from the sentences in the bag and merge them into a bag representation. The decoder component uses a long short-term memory network to model relation dependencies and predict the target relations sequentially. In particular, to enable sequential prediction, we introduce a measure that quantifies the amount of information each relation carries in its sentence bag, and we use it to determine the order in which a bag's relations are predicted during model training. Moreover, we incorporate an attention mechanism to dynamically adjust the bag representation, reducing the impact of sentences whose corresponding relations have already been predicted. Extensive experiments on a popular dataset show that our model achieves significant improvements over state-of-the-art methods.
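A rough sketch of the encoder-decoder pattern described here, with assumed names and shapes and without the attention mechanism over sentences: a CNN encodes and pools the bag, and an LSTM cell predicts relations one at a time.

```python
# Sketch: CNN bag encoder + sequential LSTM relation decoder (assumed design).
import torch
import torch.nn as nn

class BagEncoderRelationDecoder(nn.Module):
    def __init__(self, vocab, n_rels, emb=100, filters=230, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
        self.proj = nn.Linear(filters, hid)
        self.dec = nn.LSTMCell(n_rels, hid)
        self.out = nn.Linear(hid, n_rels)

    def encode_bag(self, bag_ids):                        # (n_sents, sent_len)
        x = self.embed(bag_ids).transpose(1, 2)           # (n, emb, len)
        feats = torch.relu(self.conv(x)).max(dim=2).values  # (n, filters)
        return feats.mean(dim=0)          # simple mean pooling into a bag vector

    def decode(self, bag_vec, n_steps):
        h = self.proj(bag_vec).unsqueeze(0)   # init hidden state from the bag
        c = torch.zeros_like(h)
        prev = torch.zeros(1, self.out.out_features)  # "no relation yet" input
        preds = []
        for _ in range(n_steps):              # predict relations sequentially
            h, c = self.dec(prev, (h, c))
            logits = self.out(h)
            preds.append(logits.argmax(dim=-1))
            prev = torch.softmax(logits, dim=-1)  # feed prediction back in
        return preds
```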


2020 · Vol 10 (11) · pp. 3984
Author(s): Khaula Qadeer, Wajih Ur Rehman, Ahmad Muqeem Sheri, Inyoung Park, Hong Kook Kim, ...

Air pollution not only damages the environment but also leads to illnesses such as respiratory tract and cardiovascular diseases. Estimating pollutant concentrations in advance is therefore becoming important so that people can prepare for the hazardous impact of air pollution beforehand. Various deterministic models have been used to forecast air pollution. In this study, along with various pollutants and meteorological parameters, we also use the pollutant concentrations predicted by the community multiscale air quality (CMAQ) model, which are strongly related to the PM2.5 concentration. After combining these parameters, we implement various machine learning models to produce hourly forecasts of the PM2.5 concentration in two big cities of South Korea and compare their results. The results show that the Long Short-Term Memory network outperforms well-known gradient tree boosting models as well as other recurrent and convolutional neural networks.
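As a sketch of the forecasting setup (the feature layout, layer sizes, and names are assumptions): each hourly step concatenates observed pollutant and meteorological values with the CMAQ-predicted concentrations, and an LSTM regresses the next hour's PM2.5.

```python
# Sketch of hourly PM2.5 forecasting from combined observed + CMAQ features.
import torch
import torch.nn as nn

class PM25Forecaster(nn.Module):
    def __init__(self, n_features, hid=64):
        super().__init__()
        # n_features = pollutant + meteorological + CMAQ-predicted channels
        self.lstm = nn.LSTM(n_features, hid, num_layers=2, batch_first=True)
        self.head = nn.Linear(hid, 1)

    def forward(self, x):              # x: (batch, hours, n_features)
        h, _ = self.lstm(x)
        return self.head(h[:, -1]).squeeze(-1)   # next-hour PM2.5 estimate

# Training would minimize e.g. nn.MSELoss() between the predictions and the
# observed next-hour concentrations.
```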

