Fine-grained citation count prediction via a transformer-based model with among-attention mechanism

2022, Vol 59 (2), pp. 102799
Author(s): Shengzhi Huang, Yong Huang, Yi Bu, Wei Lu, Jiajia Qian, ...

Complexity, 2021, Vol 2021, pp. 1-10
Author(s): Yongyi Li, Shiqi Wang, Shuang Dong, Xueling Lv, Changzhi Lv, ...

At present, person reidentification (Re-ID) based on the attention mechanism has attracted wide interest from scholars. Although an attention module can improve the representation ability and reidentification accuracy of a Re-ID model to a certain extent, its effect depends on the coupling between the attention module and the original network. In this paper, a person reidentification model that combines multiple attentions and multiscale residuals is proposed. The model introduces a combined attention fusion module and a multiscale residual fusion module into the ResNet-50 backbone to enhance the feature flow between residual blocks and better fuse multiscale features. Furthermore, a global branch and a local branch are designed: the dual ensemble attention module enhances the channel aggregation and position perception abilities of the network, while fine-grained feature expressions are obtained through multiproportion blocking and reorganization. Thus, both the global and local features are enhanced. Experimental results on the Market-1501 and DukeMTMC-reID datasets show that the presented model reaches a Rank-1 accuracy of 96.20% and 89.59%, respectively, which can be considered a step forward in Re-ID.
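
The fusion design is not spelled out in the abstract; as a rough illustration, the sketch below shows one common way such a block is assembled, with channel attention and spatial attention applied in sequence and fused onto the residual stream. The module names, the reduction ratio, and the fuse-by-addition choice are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    """2D attention map built from pooled channel statistics (illustrative)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class CombinedAttentionFusion(nn.Module):
    """Channel then spatial attention, fused back via a residual connection
    so attended features enhance rather than replace the original flow."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return x + self.sa(self.ca(x))
```

A block like this would be slotted between ResNet-50 residual stages, e.g. `CombinedAttentionFusion(512)` after a stage whose output has 512 channels.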


Author(s): Pau Rodríguez, Josep M. Gonfaus, Guillem Cucurull, F. Xavier Roca, Jordi Gonzàlez

Sensors, 2020, Vol 20 (18), pp. 5279
Author(s): Yang Li, Huahu Xu, Junsheng Xiao

Language-based person search retrieves images of a target person from a natural-language description and is a challenging fine-grained cross-modal retrieval task. A novel hybrid attention network is proposed for the task, comprising three components. First, a cubic attention mechanism for the person image combines cross-layer spatial attention and channel attention; it can fully excavate both important mid-level details and key high-level semantics to obtain a more discriminative fine-grained feature representation of the person image. Second, a text attention network for the language description is built on a bidirectional LSTM (BiLSTM) and a self-attention mechanism; it learns bidirectional semantic dependencies and captures the key words of sentences, so as to extract the contextual information and key semantic features of the description more effectively and accurately. Third, a cross-modal attention mechanism and a joint loss function for cross-modal learning focus attention on the relevant parts of the text and image features; they better exploit both cross-modal and intra-modal correlations and alleviate the problem of cross-modal heterogeneity. Extensive experiments have been conducted on the CUHK-PEDES dataset. Our approach outperforms state-of-the-art approaches, demonstrating its advantage.
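
As a hedged sketch of the second component only: a BiLSTM followed by additive self-attention pooling is one standard way to realize "bidirectional semantic dependency plus key-word weighting". The dimensions, masking scheme, and class name below are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextAttentionEncoder(nn.Module):
    """BiLSTM over word embeddings, then additive self-attention pooling
    that upweights the key words of the sentence (illustrative sketch)."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)  # per-token attention score

    def forward(self, tokens, mask):
        # tokens: (B, T) word indices; mask: (B, T), 1 for real tokens.
        h, _ = self.bilstm(self.embed(tokens))           # (B, T, 2H)
        scores = self.attn(h).squeeze(-1)                # (B, T)
        scores = scores.masked_fill(mask == 0, -1e9)     # ignore padding
        alpha = F.softmax(scores, dim=-1).unsqueeze(-1)  # (B, T, 1)
        return (alpha * h).sum(dim=1)                    # (B, 2H) sentence feature
```

The pooled sentence feature would then be matched against the image-side feature under the cross-modal attention and joint loss the abstract describes.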


Author(s): Bowen Xing, Lejian Liao, Dandan Song, Jingang Wang, Fuzheng Zhang, ...

Aspect-based sentiment analysis (ABSA) aims to predict the fine-grained sentiment of a comment with respect to given aspect terms or categories. Previous ABSA methods have recognized and verified the importance of the aspect. Most existing LSTM-based models take the aspect into account via an attention mechanism, where the attention weights are calculated after the context has been modeled into contextual vectors. However, during context modeling, classic LSTM cells may already discard aspect-related information and retain aspect-irrelevant information, which leaves room to generate more effective context representations. This paper proposes a novel variant of the LSTM, termed aspect-aware LSTM (AA-LSTM), which incorporates aspect information into the LSTM cells during context modeling, before the attention mechanism is applied. Our AA-LSTM can therefore dynamically produce aspect-aware contextual representations. We experiment with several representative LSTM-based models by replacing their classic LSTM cells with AA-LSTM cells. Experimental results on the SemEval-2014 datasets demonstrate the effectiveness of AA-LSTM.
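
One minimal reading of "incorporating aspect information into LSTM cells" is to feed the aspect embedding as a third input to every gate, so the cell can keep aspect-relevant context and discard aspect-irrelevant context while the sequence is being modeled. The sketch below follows that reading; the actual AA-LSTM gating may differ.

```python
import torch
import torch.nn as nn

class AALSTMCell(nn.Module):
    """LSTM cell whose gates also condition on a fixed aspect vector
    (a sketch of one plausible aspect-aware gating, not the paper's exact cell)."""
    def __init__(self, input_dim, hidden_dim, aspect_dim):
        super().__init__()
        gates = 4 * hidden_dim  # input, forget, output, candidate stacked
        self.W_x = nn.Linear(input_dim, gates)
        self.W_h = nn.Linear(hidden_dim, gates, bias=False)
        self.W_a = nn.Linear(aspect_dim, gates, bias=False)  # aspect pathway

    def forward(self, x, a, state):
        # x: (B, input_dim) token input; a: (B, aspect_dim) aspect embedding.
        h, c = state
        z = self.W_x(x) + self.W_h(h) + self.W_a(a)  # aspect enters every gate
        i, f, o, g = z.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)
```

Since the cell keeps the `(h, c)` interface of `nn.LSTMCell`, it can be swapped into existing LSTM-based ABSA models in the drop-in manner the experiments describe, with the aspect vector supplied at every time step.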


Author(s): Qiuxia Lai, Yu Li, Ailing Zeng, Minhao Liu, Hanqiu Sun, ...

The selective visual attention mechanism in the human visual system (HVS) restricts the amount of information that reaches visual awareness when perceiving natural scenes, allowing near real-time information processing with limited computational capacity. This selectivity acts as an 'Information Bottleneck (IB)', which seeks a trade-off between information compression and predictive accuracy. However, such information constraints are rarely explored in the attention mechanisms of deep neural networks (DNNs). In this paper, we propose an IB-inspired spatial attention module for DNN structures built for visual recognition. The module takes an intermediate representation of the input image and outputs a variational 2D attention map that minimizes the mutual information (MI) between the attention-modulated representation and the input, while maximizing the MI between the attention-modulated representation and the task label. To further restrict the information bypassed by the attention map, we quantize the continuous attention scores to a set of learnable anchor values during training. Extensive experiments show that the proposed IB-inspired spatial attention mechanism yields attention maps that neatly highlight the regions of interest while suppressing backgrounds, and bootstraps standard DNN structures for visual recognition tasks (e.g., image classification, fine-grained recognition, cross-domain classification). The attention maps are interpretable for the decision making of the DNNs, as verified in the experiments. Our code is available at this https URL.
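
The anchor quantization step can be made concrete: snap each continuous attention score to its nearest learnable anchor, and route gradients so both the attention generator and the anchors keep training. The module below is a sketch under those assumptions; the anchor count, initialization, and straight-through trick are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

class AnchorQuantizedAttention(nn.Module):
    """Quantizes a continuous 2D attention map to learnable anchor values
    (illustrative sketch of the quantization step only)."""
    def __init__(self, num_anchors=4):
        super().__init__()
        # Anchors initialized evenly over [0, 1]; learned during training.
        self.anchors = nn.Parameter(torch.linspace(0.0, 1.0, num_anchors))

    def forward(self, attn):
        # attn: (B, 1, H, W) continuous attention scores in [0, 1].
        dist = (attn.unsqueeze(-1) - self.anchors).abs()  # (B, 1, H, W, K)
        idx = dist.argmin(dim=-1)                         # nearest anchor index
        q = self.anchors[idx]                             # quantized map; grads
                                                          # reach the anchors here
        # Straight-through: forward value is q, but the identity term lets
        # gradients also flow back to the continuous scores.
        return q + (attn - attn.detach())
```

Restricting scores to a few anchor values caps how many distinct attention levels the map can express, which is one way to limit the information it passes through.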


2020, Vol 11
Author(s): Guofeng Yang, Yong He, Yong Yang, Beibei Xu

Fine-grained image classification is challenging because discriminative features are difficult to identify: the subtle features that fully represent the object are not easy to find. In the fine-grained classification of crop diseases, visual disturbances such as light, fog, overlap, and jitter are frequently encountered. To exploit the informative features of crop leaf images, a classification model should focus on the more discriminative regions of the image while maintaining classification accuracy in complex scenes. This paper proposes a novel attention mechanism that effectively utilizes the informative regions of an image, and describes the use of transfer learning to quickly construct several fine-grained crop-disease classification models based on this attention mechanism. This study uses 58,200 crop leaf images as the dataset, covering 14 different crops and 37 categories of healthy/diseased crops, where different diseases of the same crop are strongly similar. The NASNetLarge fine-grained classification model based on the proposed attention mechanism achieves the best classification performance, with an F1 score of up to 93.05%. The results show that the proposed attention mechanism effectively improves the fine-grained classification of crop disease images.
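
A typical transfer-learning setup matching this description freezes a pretrained backbone and trains only an attention head and classifier on the 37 crop-disease classes. The sketch below substitutes torchvision's ResNet-50 for the NASNetLarge backbone the paper actually uses, purely for brevity, and the 1x1-convolution attention head is an illustrative stand-in, not the paper's mechanism.

```python
import torch
import torch.nn as nn
from torchvision import models

class AttentiveTransferClassifier(nn.Module):
    """Frozen pretrained backbone + trainable spatial-attention pooling head
    for fine-grained crop-disease classes (illustrative sketch)."""
    def __init__(self, num_classes=37):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Keep everything up to the last conv stage; drop avgpool and fc.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        for p in self.features.parameters():
            p.requires_grad = False  # transfer learning: backbone frozen
        self.attn = nn.Conv2d(2048, 1, kernel_size=1)  # 2D attention logits
        self.fc = nn.Linear(2048, num_classes)

    def forward(self, x):
        f = self.features(x)                                # (B, 2048, h, w)
        w = torch.softmax(self.attn(f).flatten(2), dim=-1)  # (B, 1, h*w)
        pooled = (f.flatten(2) * w).sum(dim=-1)             # attention-weighted pool
        return self.fc(pooled)
```

Because only the attention head and classifier are trained, models for new crops can be constructed quickly, which is the practical point of pairing the attention mechanism with transfer learning.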


2020, Vol 22 (2), pp. 502-514
Author(s): Pau Rodriguez, Diego Velazquez, Guillem Cucurull, Josep M. Gonfaus, F. Xavier Roca, ...

2019, Vol 1284, pp. 012031
Author(s): Zhixiao Wang, Wenyao Yan, Wendong Wang, Ang Gao, Lei Yu, ...

2021, pp. 1-12
Author(s): Zuoxi Yang, Shoubin Dong

Modeling a user's fine-grained preferences and dynamic preference evolution from their chronological behaviors is challenging and crucial for sequential recommendation. In this paper, we develop a Hierarchical Self-Attention Incorporating Knowledge Graph for Sequential Recommendation (HSRec). HSRec models not only the user's intrinsic preferences but also the user's external potential interests to capture fine-grained preferences. Specifically, an intrinsic interest module and a potential interest module are designed to capture these two kinds of preference. In the intrinsic interest module, the user's sequential patterns are characterized from their behaviors via a self-attention mechanism. In the potential interest module, high-order paths are generated with the help of the knowledge graph, and a hierarchical self-attention mechanism aggregates the semantic information of user interactions along these paths: an entity-level self-attention mechanism captures the sequential patterns within the high-order paths, while an interaction-level self-attention mechanism further captures the semantic information across user interactions. Moreover, based on high-order semantic relevance, HSRec can explore the user's dynamic preferences at each time step, thus describing the user's dynamic preference evolution. Finally, experiments conducted on three real-world datasets demonstrate the state-of-the-art performance of HSRec.
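
The two attention levels can be sketched as stacked self-attention: entity-level attention within each knowledge-graph path, pooled to one vector per path, then interaction-level attention across the pooled path vectors. The dimensions, mean-pooling choices, and class name below are assumptions, not HSRec's exact formulation.

```python
import torch
import torch.nn as nn

class HierarchicalPathAttention(nn.Module):
    """Two-level self-attention over knowledge-graph paths (illustrative).

    Entity level: self-attention within each path's entity sequence,
    mean-pooled to one vector per path. Interaction level: self-attention
    across path vectors, pooled to a single potential-interest vector."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.entity_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.path_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, paths):
        # paths: (B, P, L, D) = batch, paths per user, entities per path, dim.
        b, p, l, d = paths.shape
        e = paths.reshape(b * p, l, d)
        e, _ = self.entity_attn(e, e, e)            # entity-level self-attention
        path_vecs = e.mean(dim=1).reshape(b, p, d)  # one vector per path
        u, _ = self.path_attn(path_vecs, path_vecs, path_vecs)
        return u.mean(dim=1)                        # (B, D) potential interest
```

This potential-interest vector would then be combined with the intrinsic-interest representation from the behavior sequence to score candidate items.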

