A Survey on Spoken Language Understanding: Recent Advances and New Frontiers

Author(s):  
Libo Qin ◽  
Tianbao Xie ◽  
Wanxiang Che ◽  
Ting Liu

Spoken Language Understanding (SLU) aims to extract the semantic frame of a user query and is a core component of task-oriented dialog systems. With the rise of deep neural networks and the evolution of pre-trained language models, SLU research has achieved significant breakthroughs. However, there is still no comprehensive survey summarizing existing approaches and recent trends, which motivated the work presented in this article. In this paper, we survey recent advances and new frontiers in SLU. Specifically, we give a thorough review of this research field, covering (1) a new taxonomy: we provide a new perspective on the SLU field, distinguishing single models vs. joint models, implicit vs. explicit joint modeling within joint models, and non-pre-trained vs. pre-trained paradigms; (2) new frontiers: emerging areas in complex SLU and their corresponding challenges; (3) abundant open-source resources: to help the community, we have collected and organized the related papers, baseline projects, and leaderboards on a public website where SLU researchers can directly access recent progress. We hope that this survey can shed light on future research in the SLU field.
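For readers unfamiliar with the task, the minimal sketch below shows what such a semantic frame might look like: an intent label plus slot values extracted from a query. The query, intent name, and slot names are hypothetical examples, not drawn from the survey.

```python
# Illustrative only: a hypothetical semantic frame an SLU system might
# extract from a user query (query text and labels are invented).
query = "book a flight from Boston to Seattle on Friday"
semantic_frame = {
    "intent": "BookFlight",          # intent determination output
    "slots": {                       # slot filling output
        "from_city": "Boston",
        "to_city": "Seattle",
        "date": "Friday",
    },
}
```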

Author(s):  
Natalia Tomashenko ◽  
Antoine Caubrière ◽  
Yannick Estève ◽  
Antoine Laurent ◽  
Emmanuel Morin

2019 ◽  
Author(s):  
Ryo Masumura ◽  
Tomohiro Tanaka ◽  
Atsushi Ando ◽  
Hosana Kamiyama ◽  
Takanobu Oba ◽  
...  

2020 ◽  
Vol 12 (4) ◽  
pp. 32-43
Author(s):  
Xin Liu ◽  
RuiHua Qi ◽  
Lin Shao

Intent determination (ID) and slot filling (SF) are two critical steps in the spoken language understanding (SLU) task. Conventionally, most previous work has handled each subtask separately. To exploit the dependencies between the intent label and the slot sequence, and to handle both tasks simultaneously, this paper proposes a joint model (ABLCJ) trained with a united loss function. To utilize both past and future input features efficiently, a Bi-LSTM with contextual information learns a representation for each time step, and these representations are shared by the two tasks. The paper also uses sentence-level tag information learned by a CRF layer to predict the tag of each slot. Meanwhile, a submodule-based attention mechanism captures global features of the sentence for intent classification. The experimental results demonstrate that ABLCJ achieves competitive performance on Shared Task 4 of NLPCC 2018.
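As a rough illustration of this kind of architecture (a sketch, not the authors' released code), the snippet below wires a shared Bi-LSTM encoder to an attention-pooled intent classifier and a per-token slot classifier, trained with a single united loss. The CRF layer described in the abstract is omitted for brevity, and all layer sizes are assumed.

```python
# Minimal PyTorch sketch of joint intent detection + slot filling with a
# shared Bi-LSTM encoder; dimensions are hypothetical and the paper's CRF
# layer over slot emissions is omitted.
import torch
import torch.nn as nn


class JointBiLSTM(nn.Module):
    def __init__(self, vocab_size, num_intents, num_slots,
                 emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bi-LSTM sees both past and future context at every time step.
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        # Attention over time steps yields a sentence-level intent feature.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)
        # Per-token slot emissions (a CRF would normally sit on top of these).
        self.slot_head = nn.Linear(2 * hidden_dim, num_slots)

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))        # (B, T, 2H), shared by both tasks
        weights = torch.softmax(self.attn(h), dim=1)   # (B, T, 1) attention weights
        sentence = (weights * h).sum(dim=1)            # attention-pooled sentence vector
        return self.intent_head(sentence), self.slot_head(h)


def joint_loss(intent_logits, slot_logits, intent_gold, slot_gold):
    # United loss: intent and slot objectives are summed and optimized jointly.
    ce = nn.CrossEntropyLoss()
    slot_loss = ce(slot_logits.reshape(-1, slot_logits.size(-1)),
                   slot_gold.reshape(-1))
    return ce(intent_logits, intent_gold) + slot_loss
```

A forward pass on a batch of padded token-id tensors returns one intent logit vector per sentence and one slot logit vector per token, so both predictions come from the same shared encoder states.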


2013 ◽  
Vol 21 (11) ◽  
pp. 2451-2464 ◽  
Author(s):  
Donghyeon Lee ◽  
Minwoo Jeong ◽  
Kyungduk Kim ◽  
Seonghan Ryu ◽  
Gary Geunbae Lee

1991 ◽  
Author(s):  
Lynette Hirschman ◽  
Stephanie Seneff ◽  
David Goodine ◽  
Michael Phillips

2020 ◽  
Author(s):  
Saad Ghojaria ◽  
Rahul Kotian ◽  
Yash Sawant ◽  
Suresh Mestry
