A Case Study of Edutainment Robot: Applying Voice Question Answering to Intelligent Robot

Author(s):  
Hyo-Jung Oh ◽  
Chung-Hee Lee ◽  
Yi-Gyu Hwang ◽  
Myung-Gil Jang ◽  
Jeon Gue Park ◽  
...  


2021 ◽  
Author(s):  
Paulo Bala ◽  
Valentina Nisi ◽  
Mara Dionisio ◽  
Nuno Jardim Nunes ◽  
Stuart James

AI Magazine ◽  
2016 ◽  
Vol 37 (1) ◽  
pp. 63-72 ◽  
Author(s):  
C. Lawrence Zitnick ◽  
Aishwarya Agrawal ◽  
Stanislaw Antol ◽  
Margaret Mitchell ◽  
Dhruv Batra ◽  
...  

As machines have become more intelligent, there has been a renewed interest in methods for measuring their intelligence. A common approach is to propose tasks at which humans excel but machines find difficult. However, an ideal task should also be easy to evaluate and not easily gameable. We begin with a case study exploring the recently popular task of image captioning and its limitations as a task for measuring machine intelligence. An alternative and more promising task is Visual Question Answering, which tests a machine’s ability to reason about language and vision. We describe a dataset of unprecedented size created for the task, containing over 760,000 human-generated questions about images. Using around 10 million human-generated answers, machines may be easily evaluated.
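
The evaluation enabled by multiple human answers per question can be sketched as a consensus metric; below is a minimal illustration using the min(matches/3, 1) convention associated with the VQA benchmark, where the function name and toy data are illustrative, not the paper's exact protocol.

```python
def vqa_accuracy(machine_answer: str, human_answers: list[str]) -> float:
    """Consensus accuracy: full credit once at least three human
    annotators gave the same answer, partial credit below that
    (the min(matches/3, 1) convention used by the VQA benchmark)."""
    norm = machine_answer.strip().lower()
    matches = sum(1 for a in human_answers if a.strip().lower() == norm)
    return min(matches / 3.0, 1.0)

# Toy question with 10 human answers, as in the dataset described above.
humans = ["yes"] * 7 + ["no"] * 2 + ["maybe"]
print(vqa_accuracy("yes", humans))  # 1.0
print(vqa_accuracy("no", humans))   # 0.67 (rounded)
```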


2020 ◽  
Vol 8 ◽  
pp. 759-775
Author(s):  
Edwin Simpson ◽  
Yang Gao ◽  
Iryna Gurevych

For many NLP applications, such as question answering and summarization, the goal is to select the best solution from a large space of candidates to meet a particular user’s needs. To address the lack of user- or task-specific training data, we propose an interactive text ranking approach that actively selects pairs of candidates, from which the user selects the best. Unlike previous strategies, which attempt to learn a ranking across the whole candidate space, our method uses Bayesian optimization to focus the user’s labeling effort on high-quality candidates and to integrate prior knowledge, coping better with small-data scenarios. We apply our method to community question answering (cQA) and extractive multi-document summarization, finding that it significantly outperforms existing interactive approaches. We also show that the ranking function learned by our method is an effective reward function for reinforcement learning, which improves the state of the art for interactive summarization.
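
The pair-selection step at the heart of such an interactive ranker can be sketched as follows; this is a schematic with a UCB-style acquisition standing in for the acquisition functions typically paired with a Bayesian preference model, so the function, parameters, and toy posterior are assumptions rather than the authors' exact method.

```python
import numpy as np

def select_pair(mean: np.ndarray, std: np.ndarray, beta: float = 2.0):
    """Pick the next pair of candidates to show the user: the current
    best under the posterior mean (exploitation) versus the candidate
    with the highest upper confidence bound (exploration)."""
    best = int(np.argmax(mean))
    ucb = mean + beta * std
    ucb[best] = -np.inf          # do not pair the incumbent with itself
    challenger = int(np.argmax(ucb))
    return best, challenger

# Toy posterior over five candidate answers or summaries.
mean = np.array([0.2, 0.8, 0.5, 0.4, 0.7])
std = np.array([0.1, 0.05, 0.3, 0.2, 0.1])
print(select_pair(mean, std))  # (1, 2): incumbent vs. uncertain challenger
```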


2019 ◽  
Vol 56 (1) ◽  
pp. 58-67
Author(s):  
Anubrata Das ◽  
Samreen Anjum ◽  
Danna Gurari

Author(s):  
José Antonio Robles-Flores ◽  
Gregory Schymik ◽  
Julie Smith-David ◽  
Robert St. Louis

Web search engines typically retrieve a large number of web pages and overload business analysts with irrelevant information. One approach that has been proposed for overcoming some of these problems is automated Question Answering (QA). This paper describes a case study designed to determine the efficacy of QA systems for generating answers to original, fusion, list questions (questions that have not previously been asked and answered, questions for which the answer cannot be found on a single web site, and questions for which the answer is a list of items). Results indicate that QA algorithms are not very good at producing complete answer lists and that searchers are not very good at constructing answer lists from snippets. These findings indicate a need for QA research to focus on crowdsourcing answer lists and improving output formats.
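
To make the notion of a fusion list question concrete, here is a naive frequency-based fusion heuristic over list items extracted from several sites; it is an illustrative assumption, since the study evaluates searchers and off-the-shelf QA systems rather than prescribing this algorithm.

```python
from collections import Counter

def fuse_list_answers(per_page_items: list[list[str]],
                      min_sources: int = 2) -> list[str]:
    """Fuse candidate list items found on different pages: keep items
    mentioned by at least `min_sources` independent sources."""
    counts = Counter()
    for items in per_page_items:
        # Deduplicate within a page so one page counts only once per item.
        counts.update(set(i.strip().lower() for i in items))
    return [item for item, c in counts.most_common() if c >= min_sources]

# Items pulled from three pages for a hypothetical fusion list question.
pages = [["Peru", "Chile", "Bolivia"], ["Chile", "Peru"], ["Peru", "Ecuador"]]
print(fuse_list_answers(pages))  # ['peru', 'chile']
```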


2001 ◽  
Vol 7 (4) ◽  
pp. 301-323 ◽  
Author(s):  
S. BUCHHOLZ ◽  
W. DAELEMANS

We investigate the problem of complex answers in question answering. Complex answers consist of several simple answers. We describe the online question answering system SHAPAQA and, using data from this system, show that the problem of complex answers is quite common. We define nine types of complex questions and suggest two approaches, based on answer frequencies, that allow question answering systems to tackle the problem.
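
One way to realize a frequency-based approach to complex answers is to return every candidate whose frequency is close to that of the most frequent candidate; the sketch below is a minimal illustration in which the threshold and normalization are assumptions, not SHAPAQA's exact method.

```python
from collections import Counter

def complex_answer(candidates: list[str], ratio: float = 0.5) -> list[str]:
    """Frequency-based heuristic for complex (multi-part) answers:
    keep every candidate occurring at least `ratio` times as often
    as the top candidate, so near-ties become a multi-part answer."""
    counts = Counter(a.strip().lower() for a in candidates)
    if not counts:
        return []
    top = counts.most_common(1)[0][1]
    return [a for a, c in counts.items() if c >= ratio * top]

# Candidate answers extracted for "Who wrote the Federalist Papers?"
cands = ["Hamilton", "Madison", "Hamilton", "Jay", "Madison", "Hamilton"]
print(complex_answer(cands))  # ['hamilton', 'madison']
```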


2017 ◽  
Vol 112 ◽  
pp. 622-631 ◽  
Author(s):  
Dorra Attiaoui ◽  
Arnaud Martin ◽  
Boutheina Ben Yaghlane

Author(s):  
Yining Hong ◽  
Jialu Wang ◽  
Yuting Jia ◽  
Weinan Zhang ◽  
Xinbing Wang

We present Academic Reader, a system that reads academic literature and answers relevant questions for researchers. Academic Reader leverages machine reading comprehension, a technique that has been applied successfully in many fields but not yet to academic literature reading. An interactive platform is established to demonstrate the functions of Academic Reader: pieces of academic literature and relevant questions are input to the system, which then outputs answers. The system can also gather users’ revised answers and perform active learning to continuously improve its performance. A case study presents the performance of our system on all papers accepted at KDD 2018, demonstrating how our system facilitates reading academic literature at scale.
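
The extractive reading-comprehension step such a system builds on can be illustrated with a generic pretrained model; Academic Reader's own models and API are not public here, so the Hugging Face transformers question-answering pipeline below serves purely as a stand-in for the technique.

```python
from transformers import pipeline

# Generic extractive QA model; a stand-in, not Academic Reader's own.
reader = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

passage = ("We present Academic Reader, a system that reads academic "
           "literature and answers relevant questions for researchers.")
result = reader(question="What does Academic Reader do?", context=passage)
print(result["answer"], round(result["score"], 3))
```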

