Overview of the NLPCC 2018 Shared Task: Spoken Language Understanding in Task-Oriented Dialog Systems

Author(s): Xuemin Zhao, Yunbo Cao

2020, Vol 12 (4), pp. 32-43
Author(s): Xin Liu, RuiHua Qi, Lin Shao

Intent determination (ID) and slot filling (SF) are two critical steps in the spoken language understanding (SLU) task. Conventionally, most previous work has treated each subtask separately. To exploit the dependencies between the intent label and the slot sequence, and to handle both tasks simultaneously, this paper proposes a joint model (ABLCJ) trained with a unified loss function. To utilize both past and future input features efficiently, a Bi-LSTM with contextual information learns a representation at each step, which is shared by the two tasks. The model also uses sentence-level tag information learned by a CRF layer to predict the tag of each slot, while a submodule-based attention mechanism captures global features of the sentence for intent classification. The experimental results demonstrate that ABLCJ achieves competitive performance on Shared Task 4 of NLPCC 2018.
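The joint architecture described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: random vectors stand in for the Bi-LSTM hidden states, the slot head is a plain per-token softmax (the paper's CRF layer is omitted for brevity), attention pooling builds the sentence vector for intent classification, and the "united" loss is simply the sum of the two cross-entropies. All parameter names (`W_slot`, `W_intent`, `w_att`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

T, H = 6, 8              # tokens in the utterance, hidden size
n_slots, n_intents = 5, 3

# Stand-ins for Bi-LSTM hidden states (one vector per token).
hidden = rng.normal(size=(T, H))

# Hypothetical parameters for the two task heads and the attention scorer.
W_slot = rng.normal(size=(H, n_slots))
W_intent = rng.normal(size=(H, n_intents))
w_att = rng.normal(size=(H,))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Slot filling: per-token logits over slot tags
# (in the paper a CRF layer sits on top of these scores).
slot_logits = hidden @ W_slot                    # shape (T, n_slots)

# Intent classification: attention-pooled sentence representation.
att = softmax(hidden @ w_att)                    # (T,) attention weights
sent = att @ hidden                              # (H,) weighted sum of states
intent_logits = sent @ W_intent                  # (n_intents,)

# Joint (united) loss: sum of slot and intent cross-entropies.
slot_gold = rng.integers(0, n_slots, size=T)     # dummy gold slot tags
intent_gold = 1                                  # dummy gold intent label
slot_loss = -np.log(softmax(slot_logits)[np.arange(T), slot_gold]).mean()
intent_loss = -np.log(softmax(intent_logits)[intent_gold])
joint_loss = slot_loss + intent_loss
```

Because both heads read the same `hidden` states and a single `joint_loss` is minimized, gradients from intent classification and slot filling both shape the shared encoder, which is the mechanism the joint training exploits.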


