A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems

Author(s): San Kim, Jin Yea Jang, Minyoung Jung, Saim Shin

2019
Author(s): Jiahua Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun

2019
Author(s): Kristijan Gjoreski, Aleksandar Gjoreski, Ivan Kraljevski, Diane Hirschfeld

2020
Vol 34 (05), pp. 9169-9176
Author(s): Jian Wang, Junhao Liu, Wei Bi, Xiaojiang Liu, Kejing He, ...

Neural network models typically struggle to incorporate commonsense knowledge into open-domain dialogue systems. In this paper, we propose a novel knowledge-aware dialogue generation model (called TransDG), which transfers question-representation and knowledge-matching abilities from the knowledge base question answering (KBQA) task to facilitate utterance understanding and factual knowledge selection for dialogue generation. In addition, we propose a response guiding attention and a multi-step decoding strategy to steer our model toward relevant features for response generation. Experiments on two benchmark datasets demonstrate that our model consistently outperforms the compared methods in generating informative and fluent dialogues. Our code is available at https://github.com/siat-nlp/TransDG.
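The knowledge-selection idea behind response guiding attention can be illustrated with a minimal dot-product attention sketch; the function names, embedding shapes, and scoring here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def guided_attention(guide_vec, fact_vecs):
    """Weight candidate knowledge-fact embeddings by their dot-product
    relevance to a guiding representation; return the attended
    (weighted-average) fact vector and the attention weights."""
    scores = fact_vecs @ guide_vec   # one relevance score per fact
    weights = softmax(scores)        # normalize into a distribution
    return weights @ fact_vecs, weights

# Toy example: two 2-D "fact" embeddings; the guide vector is aligned
# with the first fact, so that fact receives the larger weight.
facts = np.array([[1.0, 0.0], [0.0, 1.0]])
guide = np.array([2.0, 0.0])
attended, w = guided_attention(guide, facts)
```

In the full model this attended vector would feed the decoder, letting generation condition on the facts most relevant to the current utterance.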


2020
Author(s): Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, Xiaodan Liang

2021
Vol 9, pp. 1389-1406
Author(s): Shayne Longpre, Yi Lu, Joachim Daiber

Progress in cross-lingual modeling depends on challenging, realistic, and diverse evaluation sets. We introduce Multilingual Knowledge Questions and Answers (MKQA), an open-domain question answering evaluation set comprising 10k question-answer pairs aligned across 26 typologically diverse languages (260k question-answer pairs in total). Answers are based on a heavily curated, language-independent data representation, making results comparable across languages and independent of language-specific passages. With 26 languages, this dataset supplies the widest range of languages to date for evaluating question answering. We benchmark a variety of state-of-the-art methods and baselines for generative and extractive question answering, trained on Natural Questions, in zero-shot and translation settings. Results indicate this dataset is challenging even in English, but especially in low-resource languages.
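Because the answers are aligned across languages, per-language scoring reduces to running the same loop over each language's slice. A generic exact-match sketch, where the record layout and normalization are assumptions and not MKQA's official evaluation script:

```python
from collections import defaultdict

def exact_match_by_language(examples):
    """examples: iterable of (language, prediction, gold_answers) triples.
    Returns {language: exact-match accuracy}, keeping scores comparable
    across an aligned multilingual evaluation set."""
    def norm(s):
        return s.strip().lower()
    hits = defaultdict(int)
    totals = defaultdict(int)
    for lang, pred, golds in examples:
        hits[lang] += int(norm(pred) in {norm(g) for g in golds})
        totals[lang] += 1
    return {lang: hits[lang] / totals[lang] for lang in totals}

# Toy run: two English predictions (one correct) and one Japanese.
scores = exact_match_by_language([
    ("en", "Paris", ["Paris", "City of Paris"]),
    ("en", "Lyon", ["Paris"]),
    ("ja", "パリ", ["パリ"]),
])
```

Aligning the questions across languages is what makes these per-language numbers directly comparable, since every language answers the same underlying question set.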


2019
Author(s): Zhufeng Pan, Kun Bai, Yan Wang, Lianqiang Zhou, Xiaojiang Liu

2016
Vol 31 (1), pp. DSF-F_1-9
Author(s): Michimasa Inaba, Yuka Yoshino, Kenichi Takahashi

2019
Author(s): Zihan Liu, Jamin Shin, Yan Xu, Genta Indra Winata, Peng Xu, ...
