Zero-shot Cross-lingual Dialogue Systems with Transferable Latent Variables

Author(s):  
Zihan Liu ◽  
Jamin Shin ◽  
Yan Xu ◽  
Genta Indra Winata ◽  
Peng Xu ◽  
...  
2019 ◽  
Author(s):  
Kristijan Gjoreski ◽  
Aleksandar Gjoreski ◽  
Ivan Kraljevski ◽  
Diane Hirschfeld

2020 ◽  
Vol 34 (05) ◽  
pp. 8433-8440
Author(s):  
Zihan Liu ◽  
Genta Indra Winata ◽  
Zhaojiang Lin ◽  
Peng Xu ◽  
Pascale Fung

Recently, data-driven task-oriented dialogue systems have achieved promising performance in English. However, developing dialogue systems that support low-resource languages remains a long-standing challenge due to the absence of high-quality data. To avoid expensive and time-consuming data collection, we introduce Attention-Informed Mixed-Language Training (MLT), a novel zero-shot adaptation method for cross-lingual task-oriented dialogue systems. It leverages very few task-related parallel word pairs to generate code-switching sentences for learning the inter-lingual semantics across languages. Instead of manually selecting the word pairs, we propose to extract source words based on the scores computed by the attention layer of a trained English task-related model, and then generate word pairs using existing bilingual dictionaries. Furthermore, extensive experiments with different cross-lingual embeddings demonstrate the effectiveness of our approach. Finally, with very few word pairs, our model achieves significant zero-shot adaptation performance improvements in both cross-lingual dialogue state tracking and natural language understanding (i.e., intent detection and slot filling) tasks, compared to the current state-of-the-art approaches, which utilize a much larger amount of bilingual data.
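The attention-informed selection step can be illustrated with a rough sketch: rank source tokens by attention score and swap the top-scoring ones using a bilingual dictionary. The scores, dictionary entries, and function names below are illustrative stand-ins, not the paper's actual model.

```python
def code_switch(sentence, attention_scores, bilingual_dict, top_k=2):
    """Replace the top-k most attended source words with their
    dictionary translations to build a code-switching sentence."""
    tokens = sentence.split()
    # rank token indices by attention score, highest first
    ranked = sorted(range(len(tokens)),
                    key=lambda i: attention_scores[i], reverse=True)
    switched = list(tokens)
    replaced = 0
    for i in ranked:
        if replaced >= top_k:
            break
        target = bilingual_dict.get(tokens[i])
        if target is not None:
            switched[i] = target
            replaced += 1
    return " ".join(switched)

# toy example: attention scores and English-Spanish pairs are illustrative
sent = "set an alarm for seven"
scores = [0.30, 0.05, 0.40, 0.05, 0.20]
en_es = {"alarm": "alarma", "set": "pon", "seven": "siete"}
print(code_switch(sent, scores, en_es))  # -> "pon an alarma for seven"
```

Because only the most attended (i.e., most task-relevant) words are switched, very few dictionary pairs suffice to expose the model to cross-lingual signal.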


Author(s):  
Koji Inoue ◽  
Divesh Lala ◽  
Katsuya Takanashi ◽  
Tatsuya Kawahara

Engagement represents how much a user is interested in and willing to continue the current dialogue. Engagement recognition will provide an important clue for dialogue systems to generate adaptive behaviors for the user. This paper addresses engagement recognition based on multimodal listener behaviors of backchannels, laughing, head nodding, and eye gaze. In the annotation of engagement, the ground-truth data often differs from one annotator to another due to the subjectivity of the perception of engagement. To deal with this, we assume that each annotator has a latent character that affects his/her perception of engagement. We propose a hierarchical Bayesian model that estimates both engagement and the character of each annotator as latent variables. Furthermore, we integrate the engagement recognition model with automatic detection of the listener behaviors to realize online engagement recognition. Experimental results show that the proposed model improves recognition accuracy compared with methods that do not consider annotator character, such as majority voting. We also achieve online engagement recognition without degrading accuracy.
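The annotator-character idea can be conveyed by a much-simplified, non-Bayesian sketch: reduce each annotator's latent character to a scalar rating bias, estimate it from the data, and subtract it before aggregating. The simulated ratings, bias values, and estimator are assumptions for illustration only, not the paper's hierarchical Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg, n_ann = 300, 3
true_engagement = rng.uniform(0, 1, size=n_seg)
bias = np.array([0.3, -0.3, 0.0])  # latent annotator "character" (toy)

# each segment is rated by only 2 of the 3 annotators (round-robin gaps),
# so per-segment averages inherit a subset-dependent bias
mask = np.ones((n_ann, n_seg), dtype=bool)
mask[np.arange(n_seg) % n_ann, np.arange(n_seg)] = False

ratings = (true_engagement[None, :] + bias[:, None]
           + rng.normal(0, 0.03, size=(n_ann, n_seg)))
ratings = np.where(mask, ratings, np.nan)

# naive aggregate: mean of available ratings (majority-vote analogue)
naive = np.nanmean(ratings, axis=0)

# character-aware aggregate: estimate each annotator's bias as the
# deviation of their mean rating from the grand mean, then subtract it
est_bias = np.nanmean(ratings, axis=1) - np.nanmean(ratings)
debiased = np.nanmean(ratings - est_bias[:, None], axis=0)

print("naive MAE:", np.abs(naive - true_engagement).mean())
print("debiased MAE:", np.abs(debiased - true_engagement).mean())
```

Even this crude correction recovers the true engagement more accurately than the plain average, which is the intuition behind modeling annotator character as a latent variable.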


2021 ◽  
Vol 9 ◽  
pp. 410-428
Author(s):  
Edoardo M. Ponti ◽  
Ivan Vulić ◽  
Ryan Cotterell ◽  
Marinela Parovic ◽  
Roi Reichart ◽  
...  

Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task–language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task–language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods. Our code is available at github.com/cambridgeltl/parameter-factorization.
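A toy sketch of the factorization idea: each task and each language gets a latent vector, and a shared generator maps a (task, language) pair to model parameters, so an unseen combination can reuse latents learned from seen ones. The latent vectors, dimensions, and fixed linear generator below are illustrative stand-ins, not the paper's variational model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # latent dimension (illustrative)

# latent factors that would be inferred from seen task-language pairs;
# here they are random stand-ins
task_z = {"ner": rng.normal(size=d), "pos": rng.normal(size=d)}
lang_z = {"vi": rng.normal(size=d), "wo": rng.normal(size=d)}

def generate_params(task, lang, out_dim=8):
    """Toy parameter generator: a fixed linear map applied to the
    concatenated task and language latents (not the paper's network)."""
    z = np.concatenate([task_z[task], lang_z[lang]])
    proj = np.ones((out_dim, 2 * d)) / (2 * d)
    return proj @ z

# seen combinations: (ner, vi) and (pos, wo);
# zero-shot: combine the "ner" task latent with the "wo" language latent
theta_zero_shot = generate_params("ner", "wo")
print(theta_zero_shot.shape)  # (8,)
```

The key property is that nothing in `generate_params` requires the (task, language) pair to have been observed jointly; only each factor must have appeared in some seen combination.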


Author(s):  
Haiqin Yang ◽  
Xiaoyuan Yao ◽  
Yiqun Duan ◽  
Jianping Shen ◽  
Jie Zhong ◽  
...  

It is desirable to include more controllable attributes to enhance the diversity of generated responses in open-domain dialogue systems. However, existing methods can generate responses with only one controllable attribute or lack a flexible way to generate them with multiple controllable attributes. In this paper, we propose a Progressively trained Hierarchical Encoder-Decoder (PHED) to tackle this task. More specifically, PHED deploys a Conditional Variational AutoEncoder (CVAE) on a Transformer to include one aspect of the attributes at each stage. A vital characteristic of the CVAE is to separate the latent variables at each stage into two types: a global variable capturing the common semantic features and a specific variable absorbing the attribute information at that stage. PHED then couples the CVAE latent variables with the Transformer encoder and is trained by minimizing a newly derived ELBO and controlled losses to generate the next stage's input and produce responses as required. Finally, we conduct extensive evaluations to show that PHED significantly outperforms state-of-the-art neural generation models and produces more diverse responses as expected.
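The two-type latent split can be sketched as follows: at each stage the decoder conditions on a global latent (common semantics) plus a stage-specific attribute latent, both drawn with the standard reparameterization trick. The posterior statistics and dimensions below are invented for illustration; PHED itself learns them with a Transformer-based CVAE.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    # z = mu + sigma * eps, the standard reparameterization trick
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

# illustrative posterior statistics for one decoding stage
mu_global, logvar_global = np.zeros(8), np.zeros(8)       # common semantics
mu_attr, logvar_attr = np.full(4, 0.5), np.full(4, -1.0)  # one attribute, e.g. sentiment

z_global = reparameterize(mu_global, logvar_global)
z_attr = reparameterize(mu_attr, logvar_attr)

# the decoder at this stage conditions on both latents jointly
z_stage = np.concatenate([z_global, z_attr])
print(z_stage.shape)  # (12,)
```

Stacking one such stage per attribute is what lets the progressive training add controllable attributes one at a time while the global latent carries the shared semantics forward.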


Author(s):  
Lu Xiang ◽  
Junnan Zhu ◽  
Yang Zhao ◽  
Yu Zhou ◽  
Chengqing Zong

Cross-lingual dialogue systems are increasingly important in e-commerce and customer service due to the rapid progress of globalization. In real-world system deployment, machine translation (MT) services are often used before and after the dialogue system to bridge different languages. However, noises and errors introduced in the MT process degrade the dialogue system's robustness, leaving its performance far from satisfactory. In this article, we propose a novel MT-oriented noise enhanced framework that exploits multi-granularity MT noises and injects such noises into the dialogue system to improve the dialogue system's robustness. Specifically, we first design a method to automatically construct multi-granularity MT-oriented noises and multi-granularity adversarial examples, which contain abundant noise knowledge oriented to MT. Then, we propose two strategies to incorporate the noise knowledge: (i) utterance-level adversarial learning and (ii) a knowledge-level guided method. The former adopts adversarial learning to learn a perturbation-invariant encoder, guiding the dialogue system to learn noise-independent hidden representations. The latter explicitly incorporates the multi-granularity noises, which contain the noise tokens and their possible correct forms, into the training and inference process, thus improving the dialogue system's robustness. Experimental results on three dialogue models, two dialogue datasets, and two language pairs show that the proposed framework significantly improves the performance of the cross-lingual dialogue system.
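A minimal sketch of constructing multi-granularity noise of this kind: character-level swaps combined with word-level substitutions from a confusion table, applied to a clean utterance to produce a noisy training variant. The confusion pairs, probabilities, and function names here are illustrative assumptions; the paper derives such noise from actual MT output rather than a hand-written table.

```python
import random

random.seed(0)

def char_noise(word, p=0.3):
    """Character-level noise: swap two adjacent characters with prob p."""
    if len(word) > 2 and random.random() < p:
        i = random.randrange(len(word) - 1)
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word

def word_noise(tokens, confusion, p=0.3):
    """Word-level noise: replace a token with a plausible MT confusion."""
    return [confusion.get(t, t) if random.random() < p else t
            for t in tokens]

def make_adversarial(sentence, confusion):
    """Build a multi-granularity noisy variant of a clean utterance,
    to be paired with the original for noise-aware training."""
    tokens = word_noise(sentence.split(), confusion)
    return " ".join(char_noise(t) for t in tokens)

# toy confusion table standing in for mined MT error pairs
confusion = {"book": "reserve", "flight": "fly"}
print(make_adversarial("please book a flight to Boston", confusion))
```

Pairing each clean utterance with such a perturbed variant is what the utterance-level adversarial strategy needs: the encoder is then pushed to map both to the same hidden representation.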


Methodology ◽  
2011 ◽  
Vol 7 (4) ◽  
pp. 157-164
Author(s):  
Karl Schweizer

Probability-based and measurement-related hypotheses for confirmatory factor analysis of repeated-measures data are investigated. Such hypotheses comprise precise assumptions concerning the relationships among the true components associated with the levels of the design or the items of the measure. Measurement-related hypotheses concentrate on the assumed processes, for example, transformation and memory processes, and represent treatment-dependent differences in processing. In contrast, probability-based hypotheses provide the opportunity to consider probabilities as outcome predictions that summarize the effects of various influences. The prediction of performance guided by inexact cues serves as an example. In the empirical part of this paper, probability-based and measurement-related hypotheses are applied to working-memory data. Latent variables according to both hypotheses contribute to a good model fit. The best model fit is achieved for the model including latent variables that represented serial cognitive processing and performance according to inexact cues, in combination with a latent variable for subsidiary processes.


2019 ◽  
Vol 50 (1) ◽  
pp. 24-37
Author(s):  
Ben Porter ◽  
Camilla S. Øverup ◽  
Julie A. Brunson ◽  
Paras D. Mehta

Meta-accuracy and perceptions of reciprocity can be measured by covariances between latent variables in two social relations models examining perception and meta-perception. We propose a single unified model called the Perception-Meta-Perception Social Relations Model (PM-SRM). This model simultaneously estimates all possible parameters to provide a more complete understanding of the relationships between perception and meta-perception. We describe the components of the PM-SRM and present two pedagogical examples with code, openly available at https://osf.io/4ag5m. Using a new package in R (xxM), we estimated the model using multilevel structural equation modeling, which provides an approachable and flexible framework for evaluating the PM-SRM. Further, we discuss possible expansions to the PM-SRM which can explore novel and exciting hypotheses.


2002 ◽  
Author(s):  
Gustavo Mazcorro-Tellez ◽  
Servio T. Guillen Burguete
