Dataset bias: A case study for visual question answering

Anubrata Das, Samreen Anjum, Danna Gurari (2019). Vol 56 (1), pp. 58–67.
Paulo Bala, Valentina Nisi, Mara Dionisio, Nuno Jardim Nunes, Stuart James (2021).

C. Lawrence Zitnick, Aishwarya Agrawal, Stanislaw Antol, Margaret Mitchell, Dhruv Batra, et al. (2016). AI Magazine, Vol 37 (1), pp. 63–72.

As machines have become more intelligent, there has been a renewed interest in methods for measuring their intelligence. A common approach is to propose tasks at which humans excel but which machines find difficult. However, an ideal task should also be easy to evaluate and not easily gameable. We begin with a case study exploring the recently popular task of image captioning and its limitations as a measure of machine intelligence. An alternative and more promising task is Visual Question Answering, which tests a machine’s ability to reason about language and vision. We describe a dataset of unprecedented size created for this task, containing over 760,000 human-generated questions about images. Using around 10 million human-generated answers, machines may be easily evaluated.
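The evaluation the abstract alludes to scores a predicted answer by its agreement with the multiple human answers collected per question. A minimal sketch of such a consensus metric is below, using the min(matches/3, 1) rule popularized by the VQA benchmark; the function name and the simple lowercasing normalization are illustrative assumptions, not the benchmark's exact preprocessing.

```python
from collections import Counter

def vqa_accuracy(predicted, human_answers):
    """Consensus accuracy for one question: an answer counts as fully
    correct if at least three human annotators gave it, and receives
    partial credit (matches / 3) otherwise."""
    counts = Counter(a.strip().lower() for a in human_answers)
    matches = counts[predicted.strip().lower()]
    return min(matches / 3.0, 1.0)

# Example: 8 of 10 annotators answered "2", so "2" scores full credit.
score = vqa_accuracy("2", ["2", "2", "two", "2", "3", "2", "2", "2", "2", "2"])
```

Averaging this score over all questions yields a single dataset-level accuracy, which is what makes large-scale automatic evaluation straightforward once many human answers per question are available.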


Dezhi Han, Shuli Zhou, Kuan Ching Li, Rodrigo Fernandes de Mello (2021).
