Current Political News Translation Model Based on Attention Mechanism

Author(s):  
Xixi Luo ◽  
Jiaqi Yan ◽  
Xinyu Chen ◽  
Yingjiang Wu ◽  
Ke Wu ◽  
...  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Wenxia Pan

English machine translation is a natural language processing research direction with important scientific and practical value in the current artificial intelligence boom. The variability of language, the limited ability to express semantic information, and the scarcity of parallel corpus resources all limit the usefulness and popularity of English machine translation in practical applications. The self-attention mechanism has received considerable attention in English machine translation tasks because its highly parallelizable computation reduces the model's training time and allows it to capture the semantic relevance of all words in the context. Unlike recurrent neural networks, however, the self-attention mechanism ignores the position and structure information between context words. To give the model access to positional information, English machine translation models based on self-attention use sine and cosine position coding to represent the absolute position of each word. This encoding can reflect relative distance, but it does not convey direction. A new English machine translation model is therefore proposed, based on a logarithmic position representation combined with the self-attention mechanism; it retains both the distance and the direction information between words while preserving the efficiency of self-attention. Experiments show that the nonstrict phrase extraction method can effectively extract phrase translation pairs from the n-best word alignment results and that the extraction constraint strategy can further improve translation quality. Compared with traditional phrase extraction methods based on a single alignment, nonstrict phrase extraction with n-best alignment results significantly improves translation quality.
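The abstract does not give the exact form of the logarithmic position representation, so the following NumPy sketch only illustrates the contrast it draws: the standard sine/cosine absolute encoding on one hand, and an assumed signed-logarithm relative bias added to the self-attention logits on the other, which keeps both the distance and the direction between words. The function names and the scale parameter are hypothetical.

import numpy as np

def sinusoidal_encoding(seq_len, d_model):
    # Standard Transformer absolute position encoding (sin on even dims, cos on odd dims).
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def signed_log_bias(seq_len, scale=1.0):
    # Assumed relative-position bias: the sign keeps direction, the logarithm
    # compresses distance (an interpretation, not the paper's exact formula).
    idx = np.arange(seq_len)
    rel = idx[None, :] - idx[:, None]              # rel[i, j] = j - i
    return scale * np.sign(rel) * np.log1p(np.abs(rel))

def self_attention(x, bias):
    # Single-head scaled dot-product self-attention with an additive position bias.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d) + bias
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

seq_len, d_model = 8, 16
x = np.random.randn(seq_len, d_model) + sinusoidal_encoding(seq_len, d_model)
out = self_attention(x, signed_log_bias(seq_len))  # (8, 16) context-aware representations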


2021 ◽  
pp. 1-12
Author(s):  
Lv YE ◽  
Yue Yang ◽  
Jian-Xu Zeng

Existing recommender systems provide personalized recommendation services for users in online shopping, entertainment, and other activities. To increase the probability that users accept the system's recommendations, an interpretable recommender system, unlike a traditional one, gives the reasons for a recommendation together with the result. In this paper, an interpretable recommendation model based on XGBoost trees is proposed to obtain comprehensible and effective cross features from side information. The results are fed into an embedding model based on an attention mechanism to capture the latent interactions among user IDs, item IDs, and cross features. The captured interactions are used to predict the match score between the user and the recommended item, and the cross-feature attention scores are used to generate different recommendation reasons for different user-item pairs. Experimental results show that the proposed algorithm can guarantee recommendation quality. The transparency and readability of the recommendation process are improved by providing reference reasons. This method can help users better understand the system's recommendation behavior and offers useful guidance for making recommender systems more personalized and intelligent.
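As a rough sketch of the pipeline described above (an interpretation, not the authors' code), the example below treats XGBoost leaf indices as cross features, embeds them, and applies a toy softmax attention to produce a match score and to pick the most influential cross feature as a recommendation "reason". The data, dimensions, and the stand-in query vector are all illustrative assumptions.

import numpy as np
import xgboost as xgb

# Toy side information for 200 user-item pairs: 10 features and a binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)

# Each tree maps a sample to one leaf; the leaf index acts as a learned cross feature.
trees = xgb.XGBClassifier(n_estimators=8, max_depth=3).fit(X, y)
leaves = trees.apply(X).astype(int)              # shape (200, 8): one leaf index per tree

n_trees = leaves.shape[1]
n_leaves = int(leaves.max()) + 1
d = 16
emb = rng.normal(size=(n_trees, n_leaves, d))    # embedding table per tree (would be learned)
query = rng.normal(size=d)                       # stand-in for a learned user/item query

# Attention over the cross features of one user-item pair (sample 0).
feats = np.stack([emb[t, leaves[0, t]] for t in range(n_trees)])
scores = feats @ query / np.sqrt(d)
attn = np.exp(scores - scores.max())
attn /= attn.sum()
match_score = attn @ feats @ query               # predicted user-item match score
reason_tree = int(attn.argmax())                 # cross feature that most drove the score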


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yongyi Li ◽  
Shiqi Wang ◽  
Shuang Dong ◽  
Xueling Lv ◽  
Changzhi Lv ◽  
...  

At present, person reidentification based on attention mechanisms has attracted many scholars' interest. Although an attention module can improve the representation ability and reidentification accuracy of a Re-ID model to a certain extent, its effect depends on the coupling between the attention module and the original network. In this paper, a person reidentification model that combines multiple attentions and multiscale residuals is proposed. The model introduces a combined attention fusion module and a multiscale residual fusion module into the ResNet-50 backbone to enhance the feature flow between residual blocks and to better fuse multiscale features. Furthermore, a global branch and a local branch are designed: the dual ensemble attention module enhances the channel aggregation and position perception ability of the network, while multiproportion blocking and reorganization yield fine-grained feature expression, so that both global and local features are strengthened. The experimental results on the Market-1501 and DukeMTMC-reID datasets show that the indexes of the presented model, in particular the Rank-1 accuracy, reach 96.20% and 89.59%, respectively, which can be considered progress in Re-ID.
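One possible reading of the two modules named above (a PyTorch sketch, not the authors' implementation): a combined channel-plus-spatial attention block and a simple multiscale residual fusion that could be inserted between ResNet-50 stages. The module names, the reduction ratio, and the 3x3/5x5 branch choice are assumptions.

import torch
import torch.nn as nn

class CombinedAttention(nn.Module):
    # Channel attention (squeeze-and-excite style) followed by spatial attention.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                              # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.max(1, keepdim=True).values], dim=1)
        return x * self.spatial(pooled)                      # reweight spatial positions

class MultiscaleResidualFusion(nn.Module):
    # Fuse 3x3 and 5x5 convolution branches with the identity path.
    def __init__(self, channels):
        super().__init__()
        self.b3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.b5 = nn.Conv2d(channels, channels, 5, padding=2)

    def forward(self, x):
        return torch.relu(x + self.b3(x) + self.b5(x))

feat = torch.randn(2, 256, 64, 32)     # e.g. a feature map from a ResNet-50 stage
out = CombinedAttention(256)(MultiscaleResidualFusion(256)(feat))  # same shape as feat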


Author(s):  
Haitao Zhang ◽  
Jianmin Bao ◽  
Fei Ding ◽  
Guanyu Mi
