TextGTL: Graph-based Transductive Learning for Semi-supervised Text Classification via Structure-Sensitive Interpolation

Author(s):  
Chen Li ◽  
Xutan Peng ◽  
Hao Peng ◽  
Jianxin Li ◽  
Lihong Wang

Compared with traditional sequential learning models, graph-based neural networks exhibit excellent properties when encoding text, such as the capacity to capture global and local information simultaneously. In the semi-supervised scenario especially, propagating information along graph edges can effectively alleviate the sparsity of labeled data. In this paper, going beyond the existing architecture of heterogeneous word-document graphs, we investigate for the first time how to construct lightweight non-heterogeneous graphs from different kinds of linguistic information to better serve free-text representation learning. We then propose Text-oriented Graph-based Transductive Learning (TextGTL), a novel semi-supervised framework for text classification that refines graph topology under theoretical guidance and shares information across different text graphs. TextGTL also performs attribute-space interpolation over dense substructures in the graphs to predict low-entropy labels for high-quality feature nodes, which serve as data augmentation. To verify the effectiveness of TextGTL, we conduct extensive experiments on various benchmark datasets, observing significant performance gains over methods based on conventional heterogeneous graphs. We also design ablation studies to examine the validity of each component of TextGTL.
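As a minimal sketch of the attribute-space interpolation idea, the snippet below blends the attribute vectors and soft labels of node pairs drawn from a dense substructure, mixup-style, to produce augmented pseudo-labeled nodes. The function name, the fixed mixing coefficient, and the pair-selection input are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def interpolate_nodes(features, labels, pairs, lam=0.5):
    """Blend attribute vectors of node pairs drawn from a dense
    substructure and assign the correspondingly mixed (low-entropy)
    soft labels. `pairs` is a list of (i, j) node-index tuples.
    Hypothetical helper, not the TextGTL code."""
    new_feats, new_labels = [], []
    for i, j in pairs:
        new_feats.append(lam * features[i] + (1 - lam) * features[j])
        new_labels.append(lam * labels[i] + (1 - lam) * labels[j])
    return np.stack(new_feats), np.stack(new_labels)

# toy usage: 4 nodes, 3-dim attributes, 2 classes (one-hot soft labels)
X = np.random.rand(4, 3)
Y = np.eye(2)[[0, 0, 1, 1]].astype(float)
Xa, Ya = interpolate_nodes(X, Y, pairs=[(0, 1), (2, 3)], lam=0.7)
print(Xa.shape, Ya.shape)  # (2, 3) (2, 2)
```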

2019 ◽  
Vol 9 (6) ◽  
pp. 1239 ◽  
Author(s):  
Hua Gao ◽  
Shengyong Chen ◽  
Zhaosheng Zhang

Person re-identification is a typical computer vision problem that aims at matching pedestrians across disjoint camera views. It is challenging due to the misalignment of body parts caused by pose variations, background clutter, detection errors, camera viewpoint variations, different accessories, and occlusion. In this paper, we propose a person re-identification network that fuses global and local features to deal with the part misalignment problem. The network is a four-branch convolutional neural network (CNN) that learns the global person appearance and the local features of three human body parts, respectively. The local patches, comprising the head, torso, and lower body, are segmented using a U-Net semantic segmentation CNN architecture. All four feature maps are then concatenated and fused to represent a person image. We also propose a DropParts method to handle the missing-parts problem, in which the local features are weighted according to the number of parts found by semantic segmentation. Since the three body parts are well aligned, the approach significantly improves person re-identification. Experiments on the standard benchmark datasets Market1501, CUHK03, and DukeMTMC-reID show the effectiveness of the proposed pipeline.
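The fusion and part-weighting step might look like the sketch below: features of missing parts are zeroed out and the surviving part features are scaled up in proportion to how many parts the segmentation found, before concatenation with the global descriptor. The scaling rule and all tensor shapes are assumptions for illustration, not the authors' exact DropParts formulation.

```python
import torch

def fuse_with_dropparts(global_feat, part_feats, part_mask):
    """Concatenate a global descriptor with per-part descriptors,
    zeroing missing parts and re-weighting survivors by
    n_parts_total / n_parts_found (an assumed DropParts-style rule).

    global_feat: (B, Dg)  part_feats: (B, P, Dp)  part_mask: (B, P) in {0,1}
    """
    B, P, Dp = part_feats.shape
    found = part_mask.sum(dim=1, keepdim=True).clamp(min=1)       # (B, 1)
    scale = P / found                                             # up-weight present parts
    weighted = part_feats * part_mask.unsqueeze(-1) * scale.unsqueeze(-1)
    return torch.cat([global_feat, weighted.reshape(B, P * Dp)], dim=1)

# toy usage: batch of 2, 3 parts (head, torso, lower body)
g = torch.randn(2, 128)
p = torch.randn(2, 3, 64)
m = torch.tensor([[1., 1., 1.], [1., 0., 1.]])  # second image misses the torso
print(fuse_with_dropparts(g, p, m).shape)       # torch.Size([2, 320])
```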


2021 ◽  
Vol 24 (4-5) ◽  
pp. 347-369
Author(s):  
Zaiqiao Meng ◽  
Richard McCreadie ◽  
Craig Macdonald ◽  
Iadh Ounis

Representation learning has been widely applied in real-world recommendation systems to capture the features of both users and items. Existing grocery recommendation methods represent each user and item by a single deterministic point in a low-dimensional continuous space, which limits the expressive ability of their embeddings and creates recommendation performance bottlenecks. In addition, existing representation learning methods for grocery recommendation consider items (products) only as independent entities, neglecting their valuable side information, such as textual descriptions and categorical attributes. In this paper, we propose the Variational Bayesian Context-Aware Representation (VBCAR) model for grocery recommendation. VBCAR is a novel variational Bayesian model that learns distributional representations of users and items by leveraging basket-context information from historical interactions. The VBCAR model is also extensible to side information by encoding contextual features into the representations via the inference encoder. We conduct extensive experiments on three real-world grocery datasets to assess the effectiveness of our model as well as the impact of different construction strategies for item side information. Our results show that VBCAR outperforms the current state-of-the-art grocery recommendation models, while integrating item side information (especially the categorical features combined with the textual information of items) yields further significant performance gains. Furthermore, we demonstrate through analysis that our model effectively encodes similarities between product types, which we argue is the primary reason for the observed effectiveness gains.
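A distributional (rather than point) embedding can be sketched with the standard variational reparameterization trick: each entity carries a mean and a log-variance, from which a representation is sampled and a KL penalty against a standard normal prior is computed. This is a generic variational building block under assumed names and dimensions, not the VBCAR architecture itself.

```python
import torch
import torch.nn as nn

class GaussianEmbedding(nn.Module):
    """Represent each entity by a diagonal Gaussian rather than a point,
    in the spirit of variational Bayesian representation learning
    (a sketch, not the VBCAR model)."""
    def __init__(self, n_entities, dim):
        super().__init__()
        self.mu = nn.Embedding(n_entities, dim)
        self.log_var = nn.Embedding(n_entities, dim)

    def forward(self, ids):
        mu, log_var = self.mu(ids), self.log_var(ids)
        eps = torch.randn_like(mu)                  # reparameterization trick
        z = mu + eps * torch.exp(0.5 * log_var)     # sampled representation
        # KL divergence to a standard normal prior, per entity
        kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=-1)
        return z, kl

emb = GaussianEmbedding(n_entities=1000, dim=32)
z, kl = emb(torch.tensor([3, 7]))
print(z.shape, kl.shape)  # torch.Size([2, 32]) torch.Size([2])
```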


2020 ◽  
Vol 24 ◽  
pp. 15-41
Author(s):  
Margarita Bugueño ◽  
Marcelo Mendoza

Text classification is a well-explored task that has been applied to a wide range of problems. However, one of its main difficulties is learning from data with class imbalance, i.e., datasets with only a few examples in some classes, which often represent the most interesting cases for the task. In this setting, text classifiers overfit particular classes and show poor performance. To address this problem, we propose a scheme that combines the outputs of different classifiers, encoding them with a transformer encoder. Fed also with a BERT encoding of each example, the encoder learns a joint representation of the text and the classifier outputs. These encodings are used to train a new text classifier. Since the transformer is a highly complex model, we introduce a data augmentation technique that allows the representation learning task to proceed without overfitting the encoding to any particular class. The data augmentation technique also produces a balanced dataset. The combination of the two methods, representation learning and data augmentation, improves the performance of the trained classifiers. Results on benchmark data for two text classification tasks (stance classification and online harassment detection) show that the proposed scheme outperforms all of its direct competitors.
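One plausible way to wire this up is to treat each base classifier's output distribution as one input token alongside the projected BERT [CLS] vector and encode the resulting sequence jointly, as sketched below. All layer sizes, names, and the choice of classifying from the text token are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class EnsembleEncoder(nn.Module):
    """Treat each base classifier's output distribution as one token,
    prepend the text's BERT encoding, and learn a joint representation
    with a transformer encoder (an illustrative sketch)."""
    def __init__(self, n_classes, bert_dim=768, d_model=128, n_heads=4):
        super().__init__()
        self.proj_text = nn.Linear(bert_dim, d_model)
        self.proj_clf = nn.Linear(n_classes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, bert_cls, clf_probs):
        # bert_cls: (B, bert_dim); clf_probs: (B, K, n_classes) for K classifiers
        tokens = torch.cat([self.proj_text(bert_cls).unsqueeze(1),
                            self.proj_clf(clf_probs)], dim=1)  # (B, K+1, d_model)
        enc = self.encoder(tokens)
        return self.head(enc[:, 0])   # classify from the text token

model = EnsembleEncoder(n_classes=3)
logits = model(torch.randn(2, 768), torch.softmax(torch.randn(2, 5, 3), dim=-1))
print(logits.shape)  # torch.Size([2, 3])
```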


Author(s):  
Honglun Zhang ◽  
Liqiang Xiao ◽  
Yongkun Wang ◽  
Yaohui Jin

Multi-task learning leverages potential correlations among related tasks to extract common features and yield performance gains. However, most previous works consider only simple or weak interactions and therefore fail to model complex correlations among three or more tasks. In this paper, we propose a multi-task learning architecture with four types of recurrent neural layers to fuse information across multiple related tasks. The architecture is structurally flexible and accommodates various interactions among tasks, so it can be regarded as a generalization of many previous works. Extensive experiments on five benchmark datasets for text classification show that our model significantly improves the performance of related tasks by exploiting additional information from the others.
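A minimal instance of coupling recurrent layers across tasks is a shared LSTM whose states feed task-specific private LSTMs. The sketch below shows only this shared-private pattern under assumed layer sizes; it does not reproduce the paper's four layer types.

```python
import torch
import torch.nn as nn

class SharedPrivateRNN(nn.Module):
    """One shared recurrent layer whose states are read by per-task
    private layers -- a minimal sketch of fusing information across
    tasks with recurrent layers (illustrative only)."""
    def __init__(self, vocab, emb_dim, hid, n_tasks, n_classes):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)
        self.shared = nn.LSTM(emb_dim, hid, batch_first=True)
        self.private = nn.ModuleList(
            [nn.LSTM(emb_dim + hid, hid, batch_first=True) for _ in range(n_tasks)])
        self.heads = nn.ModuleList(
            [nn.Linear(hid, n_classes) for _ in range(n_tasks)])

    def forward(self, tokens, task):
        x = self.emb(tokens)
        s, _ = self.shared(x)                       # features shared by all tasks
        h, _ = self.private[task](torch.cat([x, s], dim=-1))
        return self.heads[task](h[:, -1])           # classify from the last state

model = SharedPrivateRNN(vocab=5000, emb_dim=64, hid=64, n_tasks=3, n_classes=2)
print(model(torch.randint(0, 5000, (2, 12)), task=1).shape)  # torch.Size([2, 2])
```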


Author(s):  
Wanshan Zheng ◽  
Zibin Zheng ◽  
Hai Wan ◽  
Chuan Chen

Representation learning and feature aggregation are usually the two key intermediate steps in natural language processing. Although deep neural networks have shown strong performance in the text classification task, they are unable to learn adaptive structure features automatically and lack a method for fully utilizing the extracted features. In this paper, we propose a novel architecture, named HAC, that dynamically routes hierarchical structure features to attentive capsules. Specifically, we first use the intermediate outputs of a well-designed deep dilated CNN to form hierarchical structure features; the different levels of structure representations correspond to linguistic units such as words, phrases, and clauses, respectively. Furthermore, we design a capsule module that uses dynamic routing and equip it with an attention mechanism. The attentive capsule implements an effective aggregation strategy for feature clustering and selection. Extensive experiments on eleven benchmark datasets demonstrate that the proposed model obtains competitive performance against several state-of-the-art baselines. Our code is available at https://github.com/zhengwsh/HAC.
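The step that routes lower-level features to capsules can be sketched with the standard routing-by-agreement procedure; this is the generic capsule aggregation mechanism, not the released HAC code, and the tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def squash(v, dim=-1):
    """Capsule non-linearity: keep direction, squash length into [0, 1)."""
    n2 = (v ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1 + n2)) * v / torch.sqrt(n2 + 1e-8)

def dynamic_routing(u_hat, n_iters=3):
    """Routing-by-agreement over prediction vectors
    u_hat: (B, n_in, n_out, D) -- a generic sketch of the capsule
    aggregation step."""
    B, n_in, n_out, D = u_hat.shape
    b = torch.zeros(B, n_in, n_out, device=u_hat.device)
    for _ in range(n_iters):
        c = F.softmax(b, dim=2)                          # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)         # (B, n_out, D)
        v = squash(s)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)     # agreement update
    return v

v = dynamic_routing(torch.randn(2, 10, 4, 16))
print(v.shape)  # torch.Size([2, 4, 16])
```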


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4666
Author(s):  
Zhiqiang Pan ◽  
Honghui Chen

Collaborative filtering (CF) aims to make recommendations for users by detecting their preferences from historical user–item interactions. Existing methods based on graph neural networks (GNNs) achieve satisfactory performance by exploiting the high-order connectivity between users and items; however, they suffer from poor training efficiency and easily introduce bias into information propagation. Moreover, the widely applied Bayesian personalized ranking (BPR) loss provides insufficient supervision signals for training because the observed interactions are extremely sparse. To deal with these issues, we propose the Efficient Graph Collaborative Filtering (EGCF) method. Specifically, EGCF adopts merely one layer of graph convolution to model the collaborative signal for users and items from the first-order neighbors in the user–item interactions. Moreover, we introduce contrastive learning to enhance the representation learning of users and items by deriving self-supervision signals, which are trained jointly with the supervised objective. Extensive experiments are conducted on two benchmark datasets, i.e., Yelp2018 and Amazon-book, and the results demonstrate that EGCF achieves state-of-the-art performance in terms of Recall and normalized discounted cumulative gain (NDCG), especially in ranking the target items at the right positions. In addition, EGCF shows clear advantages in training efficiency compared with the competitive baselines, making it practical for real applications.
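The self-supervised component can be sketched with a standard InfoNCE objective over two perturbed views of node representations produced by a single normalized-adjacency convolution. The view construction, temperature, and trivial demo graph are illustrative assumptions, not EGCF's exact losses.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """InfoNCE contrastive loss between two views of the same nodes:
    row i of z1 and row i of z2 are positives, all other rows are
    negatives (a generic self-supervised objective, as a sketch)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))            # positives on the diagonal
    return F.cross_entropy(logits, targets)

def one_layer_gcn(adj_norm, x):
    """One-layer graph convolution: neighbor aggregation with a
    normalized adjacency matrix (no weights, for simplicity)."""
    return adj_norm @ x                            # (N, N) @ (N, D)

x = torch.randn(6, 8)
adj = torch.eye(6)                                 # trivial graph for the demo
z1 = one_layer_gcn(adj, x + 0.1 * torch.randn_like(x))   # perturbed view 1
z2 = one_layer_gcn(adj, x + 0.1 * torch.randn_like(x))   # perturbed view 2
print(info_nce(z1, z2).item())
```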


2017 ◽  
Vol 2017 ◽  
pp. 1-12 ◽  
Author(s):  
Mengyu Xu ◽  
Zhenmin Tang ◽  
Yazhou Yao ◽  
Lingxiang Yao ◽  
Huafeng Liu ◽  
...  

Due to variations of viewpoint, pose, and illumination, a given individual may appear considerably different across camera views. Tracking individuals across camera networks with non-overlapping fields of view remains a challenging problem. Previous works mainly address feature representation and metric learning separately, which tends to yield suboptimal solutions. To address this issue, we propose a novel framework that performs feature representation learning and metric learning jointly. Different from previous works, we represent each pair of pedestrian images as a single resized input and use a linear Support Vector Machine in place of the softmax activation function for similarity learning. Dropout and data augmentation techniques are also employed in this model to prevent the network from overfitting. Extensive experiments on two publicly available datasets, VIPeR and CUHK01, demonstrate the effectiveness of the proposed approach.
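The pair-input plus linear-SVM idea can be sketched as follows: two three-channel images are stacked into one six-channel input, a small CNN produces a single margin score, and a hinge loss replaces softmax. The network depth and layer sizes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PairSimilarityNet(nn.Module):
    """Stack two RGB images into a six-channel pair input and emit one
    raw margin score for similarity (a sketch, not the paper's net)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.score = nn.Linear(16 * 4 * 4, 1)    # linear SVM-style output

    def forward(self, img_a, img_b):
        pair = torch.cat([img_a, img_b], dim=1)  # (B, 6, H, W) pair input
        f = self.features(pair).flatten(1)
        return self.score(f).squeeze(1)          # raw margin score

def hinge_loss(scores, y):
    """SVM hinge loss; y in {-1, +1} marks different/same person."""
    return torch.clamp(1 - y * scores, min=0).mean()

net = PairSimilarityNet()
a, b = torch.randn(4, 3, 64, 32), torch.randn(4, 3, 64, 32)
loss = hinge_loss(net(a, b), torch.tensor([1., -1., 1., -1.]))
loss.backward()
```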


Author(s):  
Cunxiao Du ◽  
Zhaozheng Chen ◽  
Fuli Feng ◽  
Lei Zhu ◽  
Tian Gan ◽  
...  

Text classification is one of the fundamental tasks in natural language processing. Recently, deep neural networks have achieved promising performance on the text classification task compared to shallow models. Despite their significance, deep models ignore fine-grained classification clues, i.e., matching signals between words and classes, since their predictions rely mainly on text-level representations. To address this problem, we introduce an interaction mechanism that incorporates word-level matching signals into the text classification task. In particular, we design a novel framework, the EXplicit interAction Model (dubbed EXAM), equipped with this interaction mechanism. We evaluate the proposed approach on several benchmark datasets covering both multi-label and multi-class text classification tasks. Extensive experimental results demonstrate the superiority of the proposed method. As a byproduct, we have released the code and parameter settings to facilitate further research.
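A minimal sketch of an explicit word-class interaction is to score every word representation against a learned embedding per class and aggregate the resulting matching signals into logits. The mean aggregation and all dimensions below are illustrative assumptions, not the released EXAM implementation.

```python
import torch
import torch.nn as nn

class WordClassInteraction(nn.Module):
    """Score each word against each class embedding and aggregate the
    word-level matching signals into class logits (a sketch of an
    explicit interaction mechanism)."""
    def __init__(self, vocab, dim, n_classes):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, dim)
        self.class_emb = nn.Parameter(torch.randn(n_classes, dim))

    def forward(self, tokens):                       # tokens: (B, T)
        w = self.word_emb(tokens)                    # (B, T, dim)
        signals = w @ self.class_emb.t()             # (B, T, n_classes) matches
        return signals.mean(dim=1)                   # aggregate into logits

model = WordClassInteraction(vocab=5000, dim=64, n_classes=4)
print(model(torch.randint(0, 5000, (2, 10))).shape)  # torch.Size([2, 4])
```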

