Exploiting Positional Information for Session-Based Recommendation

2022 ◽  
Vol 40 (2) ◽  
pp. 1-24
Author(s):  
Ruihong Qiu ◽  
Zi Huang ◽  
Tong Chen ◽  
Hongzhi Yin

For present e-commerce platforms, it is important to accurately predict users’ preferences for a timely next-item recommendation. To achieve this goal, session-based recommender systems are developed, which rely on the sequence of the most recent user-item interactions to avoid the influence of outdated historical records. Although a session can usually reflect a user’s current preference, a local shift of the user’s intention within the session may still exist. Specifically, the interactions that take place in the early positions within a session generally indicate the user’s initial intention, while later interactions are more likely to represent the latest intention. Such positional information has rarely been considered in existing methods, which restricts their ability to capture the significance of interactions at different positions. To thoroughly exploit the positional information within a session, this paper develops a theoretical framework that provides an in-depth analysis of the positional information. We formally define the properties of forward-awareness and backward-awareness to evaluate the ability of positional encoding schemes to capture the initial and the latest intention. According to our analysis, existing positional encoding schemes are generally forward-aware only, which can hardly represent the dynamics of the intention in a session. To enhance the positional encoding scheme for session-based recommendation, a dual positional encoding (DPE) is proposed to account for both forward-awareness and backward-awareness. Based on DPE, we propose a novel Positional Recommender (PosRec) model with a well-designed Position-aware Gated Graph Neural Network module to fully exploit the positional information for session-based recommendation tasks. Extensive experiments are conducted on two e-commerce benchmark datasets, Yoochoose and Diginetica, and the experimental results show the superiority of PosRec over state-of-the-art session-based recommender models.
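To make the forward/backward idea concrete, the sketch below combines a forward position table (indexed from the start of the session) with a backward table (indexed from the end) on top of item embeddings. This is a minimal PyTorch illustration under assumed dimensions, not the authors' exact DPE formulation.

```python
# Minimal sketch of a dual positional encoding; names and sizes are assumptions.
import torch
import torch.nn as nn

class DualPositionalEncoding(nn.Module):
    def __init__(self, max_len: int, dim: int):
        super().__init__()
        # Forward table indexes positions from the start of the session,
        # backward table indexes positions from its end.
        self.forward_pos = nn.Embedding(max_len, dim)
        self.backward_pos = nn.Embedding(max_len, dim)

    def forward(self, item_emb: torch.Tensor) -> torch.Tensor:
        # item_emb: (batch, seq_len, dim)
        seq_len = item_emb.size(1)
        fwd_idx = torch.arange(seq_len, device=item_emb.device)               # 0 .. L-1
        bwd_idx = torch.arange(seq_len - 1, -1, -1, device=item_emb.device)   # L-1 .. 0
        return item_emb + self.forward_pos(fwd_idx) + self.backward_pos(bwd_idx)

# Usage: encode a batch of 32 sessions of length 10 with 64-d item embeddings.
enc = DualPositionalEncoding(max_len=50, dim=64)
out = enc(torch.randn(32, 10, 64))
print(out.shape)  # torch.Size([32, 10, 64])
```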

2021 ◽  
Vol 70 ◽  
pp. 545-566
Author(s):  
Yongjing Yin ◽  
Shaopeng Lai ◽  
Linfeng Song ◽  
Chulun Zhou ◽  
Xianpei Han ◽  
...  

As an important text coherence modeling task, sentence ordering aims to coherently organize a given set of unordered sentences. To achieve this goal, the most important step is to effectively capture and exploit global dependencies among these sentences. In this paper, we propose a novel and flexible external knowledge enhanced graph-based neural network for sentence ordering. Specifically, we first represent the input sentences as a graph, where various kinds of relations (i.e., entity-entity, sentence-sentence and entity-sentence) are exploited to make the graph representation more expressive and less noisy. Then, we introduce a graph recurrent network to learn semantic representations of the sentences. To demonstrate the effectiveness of our model, we conduct experiments on several benchmark datasets. The experimental results and in-depth analysis show that our model significantly outperforms the existing state-of-the-art models.
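As a rough illustration of the kind of graph described above, the sketch below builds the three relation types (sentence-sentence, entity-sentence, entity-entity) from a toy set of sentences. The capitalized-word entity heuristic is purely an assumption for demonstration, not the paper's extraction method.

```python
# Build a heterogeneous graph over sentence and entity nodes (illustrative sketch).
from itertools import combinations

def build_sentence_graph(sentences):
    # Naive entity heuristic: capitalized words, punctuation stripped.
    entities = [set(w.strip('.,;:') for w in s.split() if w[:1].isupper())
                for s in sentences]
    edges = {"sent-sent": [], "ent-sent": [], "ent-ent": []}
    # Sentence-sentence edges: fully connect sentence nodes (order is unknown).
    for i, j in combinations(range(len(sentences)), 2):
        edges["sent-sent"].append((i, j))
    vocab = sorted(set().union(*entities))
    ent_id = {e: k for k, e in enumerate(vocab)}
    # Entity-sentence edges: connect each sentence to the entities it mentions.
    for i, ents in enumerate(entities):
        for e in ents:
            edges["ent-sent"].append((ent_id[e], i))
    # Entity-entity edges: connect entities co-occurring in the same sentence.
    for ents in entities:
        for a, b in combinations(sorted(ents), 2):
            edges["ent-ent"].append((ent_id[a], ent_id[b]))
    return vocab, edges

vocab, edges = build_sentence_graph([
    "Alice met Bob in Paris.",
    "Bob later moved to Berlin.",
    "Alice stayed in Paris.",
])
print(vocab)
print(edges["ent-sent"])
```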


2020 ◽  
Vol 10 (15) ◽  
pp. 5326
Author(s):  
Xiaolei Diao ◽  
Xiaoqiang Li ◽  
Chen Huang

The same action can take a different amount of time in different cases, and this difference affects the accuracy of action recognition to a certain extent. We propose an end-to-end deep neural network called “Multi-Term Attention Networks” (MTANs), which addresses this problem by extracting temporal features at different time scales. The network consists of a Multi-Term Attention Recurrent Neural Network (MTA-RNN) and a Spatio-Temporal Convolutional Neural Network (ST-CNN). In MTA-RNN, a method for fusing multi-term temporal features is proposed to capture temporal dependencies at different time scales, and the fused temporal feature is recalibrated by an attention mechanism. Ablation studies show that this network has powerful spatio-temporal dynamic modeling capabilities for actions with different time scales. We perform extensive experiments on four challenging benchmark datasets: the NTU RGB+D, UT-Kinect, Northwestern-UCLA, and UWA3DII datasets. Our method achieves better results than state-of-the-art methods, which demonstrates the effectiveness of MTANs.
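The sketch below shows one generic way to fuse temporal features computed at several time scales and recalibrate them with attention, in PyTorch. The subsampling scales, GRU encoder, and fusion details are illustrative assumptions, not the MTA-RNN definition.

```python
# Multi-scale temporal feature fusion with attention (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTermFusion(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.rnns = nn.ModuleList([nn.GRU(in_dim, hid_dim, batch_first=True)
                                   for _ in scales])
        self.attn = nn.Linear(hid_dim, 1)  # scores each scale's summary

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, in_dim); subsample the sequence at each scale.
        summaries = []
        for scale, rnn in zip(self.scales, self.rnns):
            _, h = rnn(x[:, ::scale, :])           # h: (1, batch, hid_dim)
            summaries.append(h.squeeze(0))
        stacked = torch.stack(summaries, dim=1)     # (batch, n_scales, hid_dim)
        weights = F.softmax(self.attn(stacked), dim=1)
        return (weights * stacked).sum(dim=1)       # weighted fusion over scales

fused = MultiTermFusion(in_dim=75, hid_dim=128)(torch.randn(8, 60, 75))
print(fused.shape)  # torch.Size([8, 128])
```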


2020 ◽  
Author(s):  
Muhammad Nabeel Asim ◽  
Andreas Dengel ◽  
Sheraz Ahmed

MicroRNAs are special RNA sequences containing 22 nucleotides and are capable of regulating almost 60% of the highly complex mammalian transcriptome. Presently, there exist very few approaches capable of visualizing miRNA locations inside the cell to reveal the hidden pathways and mechanisms behind miRNA functionality, transport, and biogenesis. The state-of-the-art miRNA sub-cellular location prediction approach, MirLocator, makes use of a sequence-to-sequence model along with pre-trained k-mer embeddings. Existing pre-trained k-mer embedding generation methodologies focus on extracting the semantics of k-mers. In RNA sequences, rather than semantics, the positional information of nucleotides is more important, because the distinct positions of the four basic nucleotides define the functionality of RNA molecules. Considering the dynamicity and importance of nucleotide positions, instead of learning representations on the basis of k-mer semantics, we propose a novel kmerRP2vec feature representation approach that fuses the positional information of k-mers into randomly initialized neural k-mer embeddings. The effectiveness of the proposed feature representation approach is evaluated with two deep learning methodologies, a convolutional neural network (CNN) and a recurrent neural network (RNN), using 8 evaluation measures. Experimental results on the public benchmark miRNAsubloc dataset show that the proposed kmerRP2vec approach, combined with a simple CNN model, outperforms the state-of-the-art MirLocator approach by a significant margin of 18% in precision and 19% in recall.
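As a small sketch of the general idea of fusing k-mer positions into randomly initialized k-mer embeddings, the NumPy snippet below adds a sinusoidal positional term to random k-mer vectors. The k=3 setting, dimensions, and sinusoidal form are assumptions for illustration, not the kmerRP2vec definition.

```python
# Fuse k-mer positional information into random k-mer embeddings (sketch).
import numpy as np

def kmers(seq: str, k: int = 3):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def positional_kmer_embeddings(seq: str, k: int = 3, dim: int = 16, seed: int = 0):
    rng = np.random.default_rng(seed)
    toks = kmers(seq, k)
    vocab = {t: i for i, t in enumerate(sorted(set(toks)))}
    table = rng.normal(size=(len(vocab), dim))             # random k-mer embeddings
    pos = np.arange(len(toks))[:, None]
    div = np.exp(np.arange(0, dim, 2) * (-np.log(10000.0) / dim))
    pe = np.zeros((len(toks), dim))
    pe[:, 0::2] = np.sin(pos * div)                          # even dimensions
    pe[:, 1::2] = np.cos(pos * div)                          # odd dimensions
    return np.stack([table[vocab[t]] for t in toks]) + pe    # fuse position info

emb = positional_kmer_embeddings("AUGGCUACGUAGCUA")
print(emb.shape)  # (13, 16)
```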


2020 ◽  
Vol 32 (23) ◽  
pp. 17309-17320
Author(s):  
Rolandos Alexandros Potamias ◽  
Georgios Siolas ◽  
Andreas - Georgios Stafylopatis

Figurative language (FL) seems ubiquitous in social media discussion forums and chats, posing extra challenges to sentiment analysis endeavors. The identification of FL schemas in short texts remains largely an unresolved issue in the broader field of natural language processing, mainly due to their contradictory and metaphorical content. The main FL expression forms are sarcasm, irony, and metaphor. In the present paper, we employ advanced deep learning methodologies to tackle the problem of identifying the aforementioned FL forms. Significantly extending our previous work (Potamias et al., in: International conference on engineering applications of neural networks, Springer, Berlin, pp 164–175, 2019), we propose a neural network methodology that builds on a recently proposed pre-trained transformer-based network architecture, further enhanced with a recurrent convolutional neural network. With this setup, data preprocessing is kept to a minimum. The performance of the devised hybrid neural architecture is tested on four benchmark datasets and contrasted with other relevant state-of-the-art methodologies and systems. The results demonstrate that the proposed methodology achieves state-of-the-art performance on all benchmark datasets, outperforming all other methodologies and published studies, often by a large margin.
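To illustrate the hybrid design at a high level, the sketch below places a recurrent convolutional head on top of token representations that would come from a pre-trained transformer (simulated here with a random tensor). The layer sizes, LSTM/Conv1d choices, and pooling are assumptions, not the authors' exact architecture.

```python
# Recurrent convolutional classification head over transformer features (sketch).
import torch
import torch.nn as nn

class RCNNHead(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hid_dim, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hid_dim, hid_dim, kernel_size=3, padding=1)
        self.cls = nn.Linear(hid_dim, n_classes)

    def forward(self, token_repr: torch.Tensor) -> torch.Tensor:
        # token_repr: (batch, seq_len, in_dim), e.g. transformer last hidden states
        h, _ = self.rnn(token_repr)                   # (batch, seq_len, 2*hid_dim)
        c = torch.relu(self.conv(h.transpose(1, 2)))  # (batch, hid_dim, seq_len)
        pooled = c.max(dim=-1).values                 # global max pooling over time
        return self.cls(pooled)                       # logits, e.g. sarcastic / not

logits = RCNNHead(in_dim=768, hid_dim=128, n_classes=2)(torch.randn(4, 32, 768))
print(logits.shape)  # torch.Size([4, 2])
```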


Author(s):  
Sandareka Wickramanayake ◽  
Wynne Hsu ◽  
Mong Li Lee

Explaining the decisions of a Deep Learning Network is imperative to safeguard end-user trust. Such explanations must be intuitive, descriptive, and faithfully explain why a model makes its decisions. In this work, we propose a framework called FLEX (Faithful Linguistic EXplanations) that generates post-hoc linguistic justifications to rationalize the decision of a Convolutional Neural Network. FLEX explains a model’s decision in terms of features that are responsible for the decision. We derive a novel way to associate such features with words, and introduce a new decision-relevance metric that measures the faithfulness of an explanation to a model’s reasoning. Experimental results on two benchmark datasets demonstrate that the proposed framework generates more discriminative and faithful explanations than state-of-the-art explanation generators. We also show how FLEX can generate explanations for images of unseen classes as well as automatically annotate objects in images.
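The abstract does not spell out the decision-relevance metric; the snippet below is only a generic faithfulness probe of the same flavor: mask the features an explanation points to and measure the drop in the predicted class score. The function names, the toy model, and the masking scheme are hypothetical.

```python
# Generic faithfulness probe: score drop after masking explained features (sketch).
import torch
import torch.nn as nn

def relevance_drop(model, image, feature_mask, target_class):
    """feature_mask: callable that zeroes the explained features in the input."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class]
        masked = torch.softmax(model(feature_mask(image)), dim=1)[0, target_class]
    # Larger drop -> the explained features mattered more to the decision.
    return (base - masked).item()

# Toy usage: random CNN, mask that zeroes the left half of the image width.
cnn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(), nn.Linear(8, 10))
img = torch.randn(1, 3, 32, 32)
print(relevance_drop(cnn, img, lambda x: x * (torch.arange(32) >= 16).float(), 0))
```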


Author(s):  
Yuqing Ma ◽  
Shihao Bai ◽  
Shan An ◽  
Wei Liu ◽  
Aishan Liu ◽  
...  

Few-shot learning, aiming to learn novel concepts from few labeled examples, is an interesting and very challenging problem with many practical advantages. To accomplish this task, one should concentrate on accurately revealing the relations between support-query pairs. We propose a transductive relation-propagation graph neural network (TRPN) to explicitly model and propagate such relations across support-query pairs. Our TRPN treats the relation of each support-query pair as a graph node, named a relational node, and resorts to the known relations between support samples, including both intra-class commonality and inter-class uniqueness, to guide the relation propagation in the graph, generating discriminative relation embeddings for support-query pairs. A pseudo relational node is further introduced to propagate the query characteristics, and a fast yet effective transductive learning strategy is devised to fully exploit the relation information among different queries. To the best of our knowledge, this is the first work that explicitly takes the relations of support-query pairs into consideration in few-shot learning, which might offer a new way to solve the few-shot learning problem. Extensive experiments conducted on several benchmark datasets demonstrate that our method can significantly outperform a variety of state-of-the-art few-shot learning methods.
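As a rough sketch of the relational-node construction described above: every support-query pair becomes a node, and edges between relational nodes follow whether their support samples share a class. The pair feature (absolute difference of embeddings) and the edge rule are illustrative assumptions, not the TRPN definition.

```python
# Build relational nodes and their adjacency for few-shot pairs (sketch).
import torch

def build_relational_nodes(support, support_labels, query):
    # support: (Ns, d), query: (Nq, d)
    Ns, Nq = support.size(0), query.size(0)
    # One node per support-query pair; feature = |support - query|.
    nodes = (support[:, None, :] - query[None, :, :]).abs().reshape(Ns * Nq, -1)
    # Connect relational nodes whose support samples share the same class.
    same_class = (support_labels[:, None] == support_labels[None, :]).float()  # (Ns, Ns)
    adj = same_class.repeat_interleave(Nq, dim=0).repeat_interleave(Nq, dim=1)
    return nodes, adj

nodes, adj = build_relational_nodes(torch.randn(5, 64),
                                    torch.tensor([0, 0, 1, 1, 2]),
                                    torch.randn(3, 64))
print(nodes.shape, adj.shape)  # torch.Size([15, 64]) torch.Size([15, 15])
```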


2021 ◽  
Author(s):  
Omar Nada

Session-based recommendation is the task of predicting user actions during short online sessions. Previous work considers the user to be anonymous in this setting, with no past behavior history available. In reality, this is often not the case, and none of the existing approaches are flexible enough to seamlessly integrate user history when available. In this thesis, we propose a novel hybrid session-based recommender system to perform next-click prediction, which is able to take advantage of historical user preferences when accessible. Specifically, we propose SessNet, a deep profiling session-based recommender system with a two-stage design. First, we use bidirectional transformers to model local and global session intent. Second, we concatenate any user information with the current session representation and feed it to a feed-forward neural network to identify the next click. Historical user preferences are computed using the sequence-aware embeddings obtained from the first step, allowing us to better understand the users. We evaluate the efficacy of the proposed method using two benchmark datasets, YooChoose1/64 and Diginetica. Our experimental results show that SessNet outperforms state-of-the-art session-based recommenders on P@20 for both datasets.
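A minimal sketch of the second stage described above: concatenate a user profile vector (when available) with the current session representation and score candidate items with a feed-forward network. The dimensions and the zero-vector fallback for anonymous sessions are assumptions, not details from the thesis.

```python
# Hybrid session + user-profile scoring head (illustrative sketch).
import torch
import torch.nn as nn

class HybridScorer(nn.Module):
    def __init__(self, sess_dim: int, user_dim: int, n_items: int, hid: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(sess_dim + user_dim, hid),
                                 nn.ReLU(),
                                 nn.Linear(hid, n_items))

    def forward(self, sess_repr, user_repr=None):
        if user_repr is None:
            # Anonymous session: fall back to a zero user-profile vector.
            user_dim = self.mlp[0].in_features - sess_repr.size(1)
            user_repr = torch.zeros(sess_repr.size(0), user_dim,
                                    device=sess_repr.device)
        return self.mlp(torch.cat([sess_repr, user_repr], dim=-1))  # next-click logits

scorer = HybridScorer(sess_dim=128, user_dim=64, n_items=1000)
print(scorer(torch.randn(16, 128), torch.randn(16, 64)).shape)  # (16, 1000)
```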


2020 ◽  
Vol 1 (6) ◽  
Author(s):  
Pablo Barros ◽  
Nikhil Churamani ◽  
Alessandra Sciutti

Current state-of-the-art models for automatic facial expression recognition (FER) are based on very deep neural networks that are effective but rather expensive to train. Given the dynamic conditions of FER, this characteristic hinders such models from being used for general affect recognition. In this paper, we address this problem by formalizing the FaceChannel, a light-weight neural network that has far fewer parameters than common deep neural networks. We introduce an inhibitory layer that helps to shape the learning of facial features in the last layer of the network, thus improving performance while reducing the number of trainable parameters. To evaluate our model, we perform a series of experiments on different benchmark datasets and demonstrate how the FaceChannel achieves performance comparable to, if not better than, the current state-of-the-art in FER. Our experiments include cross-dataset analysis to estimate how our model behaves under different affective recognition conditions. We conclude our paper with an analysis of how the FaceChannel learns and adapts the learned facial features towards the different datasets.
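As a hedged sketch of what an inhibitory convolutional layer can look like, the snippet below divides an excitatory feature branch by a parallel inhibitory branch (a shunting-style formulation). The exact FaceChannel layer may differ; the kernel sizes and epsilon are assumptions.

```python
# Shunting-style inhibitory convolution (illustrative sketch).
import torch
import torch.nn as nn

class InhibitoryConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, eps: float = 1.0):
        super().__init__()
        self.feat = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.inhib = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.eps = eps  # keeps the divisor positive

    def forward(self, x):
        # The inhibitory branch rescales (divides) the excitatory feature maps.
        return torch.relu(self.feat(x)) / (torch.relu(self.inhib(x)) + self.eps)

y = InhibitoryConv(64, 128)(torch.randn(2, 64, 14, 14))
print(y.shape)  # torch.Size([2, 128, 14, 14])
```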


Author(s):  
Seonhoon Kim ◽  
Inho Kang ◽  
Nojun Kwak

Sentence matching is widely used in various natural language tasks such as natural language inference, paraphrase identification, and question answering. For these tasks, understanding the logical and semantic relationship between two sentences is required, but this remains challenging. Although the attention mechanism is useful for capturing the semantic relationship and properly aligning the elements of two sentences, previous attention mechanisms simply use a summation operation, which does not sufficiently retain the original features. Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers. It preserves the original and the co-attentive feature information from the bottommost word embedding layer to the uppermost recurrent layer. To alleviate the problem of an ever-increasing size of feature vectors due to dense concatenation operations, we also propose to use an autoencoder after dense concatenation. We evaluate our proposed architecture on highly competitive benchmark datasets related to sentence matching. Experimental results show that our architecture, which retains recurrent and attentive features, achieves state-of-the-art performance on most of the tasks.
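A rough sketch of one densely-connected co-attentive layer with an autoencoder-style bottleneck to keep the concatenated feature size bounded; the dot-product co-attention and all sizes are illustrative assumptions, not the paper's exact layer.

```python
# One densely-connected co-attentive layer with a bottleneck (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseCoAttentiveLayer(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, bottleneck: int):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hid_dim, batch_first=True, bidirectional=True)
        # Autoencoder-style compression of [input, hidden, co-attended] features.
        self.compress = nn.Linear(in_dim + 4 * hid_dim, bottleneck)

    def forward(self, a, b):
        ha, _ = self.rnn(a)                                 # (batch, la, 2*hid)
        hb, _ = self.rnn(b)                                 # (batch, lb, 2*hid)
        attn = F.softmax(ha @ hb.transpose(1, 2), dim=-1)   # co-attention over b
        co = attn @ hb                                      # (batch, la, 2*hid)
        dense = torch.cat([a, ha, co], dim=-1)              # dense concatenation
        return torch.relu(self.compress(dense))             # bottleneck bounds the size

layer = DenseCoAttentiveLayer(in_dim=300, hid_dim=128, bottleneck=300)
out = layer(torch.randn(4, 20, 300), torch.randn(4, 25, 300))
print(out.shape)  # torch.Size([4, 20, 300])
```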


Author(s):  
Pieter Van Molle ◽  
Tim Verbelen ◽  
Bert Vankeirsbilck ◽  
Jonas De Vylder ◽  
Bart Diricx ◽  
...  

Modern deep learning models achieve state-of-the-art results for many tasks in computer vision, such as image classification and segmentation. However, their adoption in high-risk applications, e.g. automated medical diagnosis systems, happens at a slow pace. One of the main reasons for this is that regular neural networks do not capture uncertainty. To assess uncertainty in classification, several techniques have been proposed that cast neural network approaches in a Bayesian setting. Amongst these techniques, Monte Carlo dropout is by far the most popular. This particular technique estimates the moments of the output distribution through sampling with different dropout masks. The output uncertainty of a neural network is then approximated as the sample variance. In this paper, we highlight the limitations of such a variance-based uncertainty metric and propose a novel approach. Our approach is based on the overlap between the output distributions of different classes. We show that our technique leads to a better approximation of the inter-class output confusion. We illustrate the advantages of our method using benchmark datasets. In addition, we apply our metric to a real-world use case, skin lesion classification, and show that this yields promising results.
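The sketch below shows Monte Carlo dropout sampling and two ways to read the samples: the usual per-class sample variance, and a simple overlap-style score (how often the runner-up class beats the predicted class across samples). The overlap score is an illustrative proxy, not the paper's exact metric; the toy model and sample count are assumptions.

```python
# Monte Carlo dropout: variance vs. a simple class-overlap proxy (sketch).
import torch
import torch.nn as nn

def mc_dropout_samples(model, x, n_samples=50):
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        return torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 3))
probs = mc_dropout_samples(model, torch.randn(1, 20))    # (n_samples, 1, 3)

variance = probs.var(dim=0)                               # classic variance-based metric
mean = probs.mean(dim=0)
top2 = mean.topk(2, dim=-1).indices[0]                    # predicted class and runner-up
overlap = (probs[:, 0, top2[1]] > probs[:, 0, top2[0]]).float().mean()
print(variance, overlap)                                   # overlap near 0 = confident
```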

