Deep context modeling for multi-turn response selection in dialogue systems

2021 ◽  
Vol 58 (1) ◽  
pp. 102415
Author(s):  
Lu Li ◽  
Chenliang Li ◽  
Donghong Ji
2019 ◽  
Author(s):  
Jiazhan Feng ◽  
Chongyang Tao ◽  
Wei Wu ◽  
Yansong Feng ◽  
Dongyan Zhao ◽  
...  

Author(s):  
Mingzhi Yu ◽  
Diane Litman

Retrieval-based dialogue systems select the best response from many candidates. Although many state-of-the-art models have shown promising performance in dialogue response selection tasks, there is still quite a gap between R@1 and R@10 performance. To address this, we propose to leverage linguistic coordination (a phenomenon that individuals tend to develop similar linguistic behaviors in conversation) to rerank the N-best candidates produced by BERT, a state-of-the-art pre-trained language model. Our results show an improvement in R@1 compared to BERT baselines, demonstrating the utility of repairing machine-generated outputs by leveraging a linguistic theory.
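The reranking idea above can be sketched minimally: combine the ranker's score for each N-best candidate with a linguistic-coordination score between the context and the candidate. The coordination measure here (Jaccard overlap of a small function-word set, a common proxy for linguistic style matching) and the mixing weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: rerank an N-best list from a neural ranker (e.g., BERT)
# using a simple linguistic-coordination signal. FUNCTION_WORDS, the
# Jaccard measure, and alpha are illustrative choices.

FUNCTION_WORDS = {"i", "you", "we", "the", "a", "an", "of", "to", "and", "but", "so"}

def coordination_score(context: str, candidate: str) -> float:
    """Overlap of function-word usage between context and candidate (0..1)."""
    ctx = {w for w in context.lower().split() if w in FUNCTION_WORDS}
    cand = {w for w in candidate.lower().split() if w in FUNCTION_WORDS}
    if not ctx or not cand:
        return 0.0
    return len(ctx & cand) / len(ctx | cand)  # Jaccard overlap of markers

def rerank(context: str, nbest: list[tuple[str, float]], alpha: float = 0.7):
    """Mix the ranker's score with the coordination score and re-sort."""
    rescored = [
        (resp, alpha * score + (1 - alpha) * coordination_score(context, resp))
        for resp, score in nbest
    ]
    return sorted(rescored, key=lambda x: x[1], reverse=True)

# A candidate that mirrors the context's function words can overtake one
# the base ranker scored slightly higher.
candidates = [("Yeah, we should go, I think.", 0.62), ("The weather is nice.", 0.65)]
top = rerank("Do you think we should go?", candidates)[0][0]
```

The key design point is that the base ranker's ordering is only perturbed, not replaced: `alpha` controls how much the linguistic prior is allowed to repair the N-best list.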


2018 ◽  
Author(s):  
Debanjan Chaudhuri ◽  
Agustinus Kristiadi ◽  
Jens Lehmann ◽  
Asja Fischer

2019 ◽  
Author(s):  
Matthew Henderson ◽  
Ivan Vulić ◽  
Daniela Gerz ◽  
Iñigo Casanueva ◽  
Paweł Budzianowski ◽  
...  

2021 ◽  
Vol 39 (4) ◽  
pp. 1-28
Author(s):  
Ruijian Xu ◽  
Chongyang Tao ◽  
Jiazhan Feng ◽  
Wei Wu ◽  
Rui Yan ◽  
...  

Building an intelligent dialogue system with the ability to select a proper response according to a multi-turn context is challenging in three aspects: (1) the meaning of a context–response pair is built upon language units of multiple granularities (e.g., words, phrases, and sub-sentences); (2) both local dependencies (e.g., within a small window around a word) and long-range dependencies (e.g., between words across the context and the response) may exist in dialogue data; and (3) the relationship between the context and the response candidate may rest on multiple semantic clues, some of which are only implicitly expressed in real cases. However, existing approaches usually encode the dialogue with a single type of representation, and the interaction between the context and the response candidate is executed in a rather shallow manner, which may lead to an inadequate understanding of dialogue content and hinder recognition of the semantic relevance between context and response. To tackle these challenges, we propose a representation[K]-interaction[L]-matching framework that explores multiple types of deep interactive representations to build context-response matching models for response selection. In particular, we construct different types of representations for utterance–response pairs and deepen them via alternate encoding and interaction. In this way, the model can capture relations between neighboring elements, phrasal patterns, and long-range dependencies during representation, and make more accurate predictions through multiple layers of interaction between the context–response pair. Experimental results on three public benchmarks indicate that the proposed model significantly outperforms previous conventional context-response matching models and achieves slightly better results than the BERT model for multi-turn response selection in retrieval-based dialogue systems.
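The alternation of encoding and interaction described above can be sketched with plain scaled dot-product attention: a self-attention pass over each side (encoding) followed by a cross-attention pass between the sides (interaction), repeated for several layers, then a pooled matching score. This is a minimal NumPy illustration of the alternation pattern, not the paper's architecture; `layers`, the residual cross-attention, and mean-pooling are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    """Scaled dot-product attention: rows of q attend over rows of k/v."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def encode_interact(context, response, layers=2):
    """Alternate self-attention (encoding) and cross-attention (interaction)."""
    c, r = context, response
    for _ in range(layers):
        c, r = attend(c, c, c), attend(r, r, r)            # encoding step
        c, r = c + attend(c, r, r), r + attend(r, c, c)    # interaction step
    return c, r

def match_score(context, response, layers=2):
    """Pool each side and score the pair with a dot product."""
    c, r = encode_interact(context, response, layers)
    return float(c.mean(axis=0) @ r.mean(axis=0))
```

Self-attention lets each side model local and long-range dependencies within itself, while the interleaved cross-attention injects context-response evidence at every layer rather than only at a final shallow matching step, which is the gap the abstract points to.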

