Korean Text Summarization using MASS with Relative Position Representation

2020 ◽ Vol 47 (9) ◽ pp. 873-878
Author(s): Youngjun Jung, Hyunsun Hwang, Changki Lee
2021 ◽ Vol 40 (5) ◽ pp. 10003-10015
Author(s): Zibang Gan, Biqing Zeng, Lianglun Cheng, Shuai Liu, Heng Yang, ...

In multi-turn dialogue generation, dialogue contexts have been shown to have an important influence on the reasoning of the next round of dialogue. A multi-turn dialogue between two people should produce a reasonable response according to the relevant context. However, the widely used hierarchical recurrent encoder-decoder model and the latest models that detect relevant contexts with self-attention face the same problem: the response they generate does not match the identity of the current speaker, which we call role ambiguity. In this paper, we propose a new model, named RoRePo, to tackle this problem by detecting role information and relative position information. Firstly, as a part of the decoder input, we add a role embedding to identify different speakers. Secondly, we incorporate a self-attention mechanism with relative position representations into dialogue context understanding. In addition, the design of our model architecture considers the influence of latent variables in generating more diverse responses. Experimental results on the DailyDialog and DSTC7_AVSD datasets show that our proposed model advances multi-turn dialogue generation.
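The two mechanisms the abstract names, a role embedding added to the decoder input and self-attention with relative position representations, can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: all shapes, names, and the clipping distance are assumptions, and the relative-position scoring follows the common formulation in which a learned embedding per clipped relative distance is added to the keys before scoring.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_attention(x, w_q, w_k, w_v, rel_k, max_dist):
    """Single-head self-attention with relative position representations:
    each clipped relative distance j - i has its own learned key embedding,
    whose dot product with the query is added to the attention score."""
    n, d = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # idx[i, j] selects the embedding for the clipped distance j - i.
    idx = np.clip(np.arange(n)[None, :] - np.arange(n)[:, None],
                  -max_dist, max_dist) + max_dist
    scores = (q @ k.T + np.einsum('id,ijd->ij', q, rel_k[idx])) / np.sqrt(d)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
n, d, max_dist = 6, 8, 2
tokens = rng.normal(size=(n, d))

# Role embedding (hypothetical shapes): added to the token embeddings so
# the model can identify which of the two speakers produced each position.
role_emb = rng.normal(size=(2, d)) * 0.1
speaker_ids = np.array([0, 1, 0, 1, 0, 1])
x = tokens + role_emb[speaker_ids]

w = lambda: rng.normal(size=(d, d)) * 0.1
rel_k = rng.normal(size=(2 * max_dist + 1, d)) * 0.1  # one vector per distance
out = relative_attention(x, w(), w(), w(), rel_k, max_dist)
print(out.shape)  # (6, 8)
```

Because the position information enters through relative distances rather than absolute indices, the same `rel_k` table generalizes across dialogue turns of different lengths; clipping at `max_dist` bounds the table size.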


2021 ◽ Vol 48 (6) ◽ pp. 688-695
Author(s): Su-Hwan Yoon, A-Yeong Kim, Seong-Bae Park

2020 ◽ Vol 47 (11) ◽ pp. 1038-1043
Author(s): Youngjun Jung, Cheoneum Park, Changki Lee, Junseok Kim
