Data-Driven Estimation of Backward Reachable and Invariant Sets for Unmodeled Systems via Active Learning

Author(s): Ankush Chakrabarty, Arvind Raghunathan, Stefano Di Cairano, Claus Danielson
2019, Vol 35 (5), pp. 1071-1083
Author(s): Ian Abraham, Todd D. Murphey
Keyword(s):
2022, Vol 163, pp. 108106
Author(s): Jingwen Song, Pengfei Wei, Marcos A. Valdebenito, Matthias Faes, Michael Beer
2020, Vol 34 (09), pp. 13622-13623
Author(s): Zhaojiang Lin, Peng Xu, Genta Indra Winata, Farhad Bin Siddique, Zihan Liu, ...

We present CAiRE, an end-to-end generative empathetic chatbot designed to recognize user emotions and respond in an empathetic manner. Our system adapts the Generative Pre-trained Transformer (GPT) to the empathetic response generation task via transfer learning. CAiRE is built primarily to focus on empathy integration in fully data-driven generative dialogue systems. We create a web-based user interface that allows multiple users to chat with CAiRE asynchronously. CAiRE also collects user feedback and continually improves its response quality by discarding undesirable generations via active learning and negative training.
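Below is a minimal sketch of this kind of adaptation, assuming the Hugging Face Transformers library and the public gpt2 checkpoint; the dialogue pairs, hyperparameters, and decoding settings are illustrative assumptions, not the authors' CAiRE implementation.

```python
# Sketch: adapting a pre-trained GPT-style model to empathetic response
# generation via transfer learning (illustrative only, not CAiRE itself).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical empathetic dialogue pairs (user context -> empathetic reply).
dialogues = [
    ("I failed my driving test today.",
     "I'm so sorry to hear that. It must feel really discouraging."),
    ("I just got accepted into my dream school!",
     "That's wonderful news, congratulations! You must be thrilled."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for context, response in dialogues:
    # Concatenate context and reply; fine-tune with the standard
    # language-modeling loss, starting from the general-domain GPT weights.
    text = context + tokenizer.eos_token + response + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Generate a reply for a new user utterance with the adapted model.
model.eval()
prompt = "My cat has been sick all week." + tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True,
                           top_p=0.9, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```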


2021, Vol 125, pp. 101360
Author(s): Jorge Chang, Jiseob Kim, Byoung-Tak Zhang, Mark A. Pitt, Jay I. Myung

2022
Author(s): Venkata Vaishnav Tadiparthi, Raktim Bhattacharya
Keyword(s):

Author(s): Guirong Bai, Shizhu He, Kang Liu, Jun Zhao

Active learning is an effective method for substantially reducing the annotation cost of data-driven models. Recently, pre-trained language models have been shown to be powerful for learning language representations. In this article, we demonstrate that a pre-trained language model can also use its learned textual characteristics to enrich the criteria of active learning. Specifically, we use the pre-trained language model to provide extra textual criteria for measuring instances, namely noise, coverage, and diversity. With these extra textual criteria, we can select more informative instances for annotation and obtain better results. We conduct experiments on both English and Chinese sentence matching datasets. The experimental results show that the proposed active learning approach is enhanced by the pre-trained language model and achieves better performance.
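Below is a minimal sketch of one such textual criterion (diversity), assuming Hugging Face Transformers with a bert-base-uncased encoder; the candidate pool, mean-pooled sentence embeddings, and greedy farthest-point selection are illustrative assumptions rather than the article's exact criteria.

```python
# Sketch: using a pre-trained language model to add a diversity criterion
# to active-learning instance selection (illustrative assumptions only).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    # Mean-pooled token embeddings serve as sentence representations.
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

# Hypothetical unlabeled pool for a sentence-matching task.
pool = [
    "How do I reset my password?",
    "What is the procedure to change my password?",
    "Where can I find the pricing page?",
    "How much does the premium plan cost?",
    "Is there a student discount available?",
]
embeddings = embed(pool)

def select_diverse(embeddings, budget):
    # Greedy farthest-point selection: each new instance maximizes its
    # minimum distance to the instances already chosen for annotation.
    chosen = [0]
    while len(chosen) < budget:
        dists = torch.cdist(embeddings, embeddings[chosen]).min(dim=1).values
        dists[chosen] = -1.0  # never re-select an already-chosen instance
        chosen.append(int(dists.argmax()))
    return chosen

for idx in select_diverse(embeddings, budget=3):
    print(pool[idx])
```

In practice such a diversity score would be combined with the other criteria named in the abstract (noise and coverage) and with a task-model uncertainty signal before choosing the annotation batch.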

