Belief Measure of Expertise for Experts Detection in Question Answering Communities: Case Study Stack Overflow

2017 ◽ Vol 112 ◽ pp. 622-631
Author(s): Dorra Attiaoui ◽ Arnaud Martin ◽ Boutheina Ben Yaghlane

2021
Author(s): Paulo Bala ◽ Valentina Nisi ◽ Mara Dionisio ◽ Nuno Jardim Nunes ◽ Stuart James

2020 ◽ Vol 46 (9) ◽ pp. 1024-1038
Author(s): Shaowei Wang ◽ Tse-Hsun Chen ◽ Ahmed E. Hassan

AI Magazine ◽ 2016 ◽ Vol 37 (1) ◽ pp. 63-72
Author(s): C. Lawrence Zitnick ◽ Aishwarya Agrawal ◽ Stanislaw Antol ◽ Margaret Mitchell ◽ Dhruv Batra ◽ ...

As machines have become more intelligent, there has been renewed interest in methods for measuring their intelligence. A common approach is to propose tasks at which humans excel but machines find difficult. However, an ideal task should also be easy to evaluate and not easily gameable. We begin with a case study exploring the recently popular task of image captioning and its limitations as a measure of machine intelligence. An alternative and more promising task is Visual Question Answering, which tests a machine's ability to reason about language and vision. We describe a dataset of unprecedented size created for the task, containing over 760,000 human-generated questions about images. Using around 10 million human-generated answers, machines can be evaluated easily.
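The abstract does not spell out how the roughly ten human answers per question are used for scoring, but a common convention for this style of dataset is consensus-based accuracy, where a machine answer earns full credit once enough annotators agree with it. The sketch below illustrates that idea; the min(matches / 3, 1) rule and the string normalization are illustrative assumptions, not a claim about this paper's exact protocol.

```python
# Sketch only: consensus-based scoring of an open-ended answer against the
# multiple human answers collected per question. The min(matches / 3, 1) rule
# is the accuracy metric popularized for the VQA dataset and is shown here as
# an illustrative assumption.
def consensus_accuracy(machine_answer: str, human_answers: list[str]) -> float:
    matches = sum(machine_answer.strip().lower() == h.strip().lower()
                  for h in human_answers)
    return min(matches / 3.0, 1.0)

# Hypothetical example: ten human answers for one question about an image.
humans = ["red", "red", "red", "dark red", "red", "maroon",
          "red", "red", "crimson", "red"]
print(consensus_accuracy("red", humans))      # 1.0  (at least three humans agree)
print(consensus_accuracy("maroon", humans))   # 0.33 (only one human agrees)
```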


2020 ◽ Vol 8 ◽ pp. 759-775
Author(s): Edwin Simpson ◽ Yang Gao ◽ Iryna Gurevych

For many NLP applications, such as question answering and summarization, the goal is to select the best solution from a large space of candidates to meet a particular user's needs. To address the lack of user- or task-specific training data, we propose an interactive text ranking approach that actively selects pairs of candidates, from which the user selects the best. Unlike previous strategies, which attempt to learn a ranking across the whole candidate space, our method uses Bayesian optimization to focus the user's labeling effort on high-quality candidates and to integrate prior knowledge to cope better with small-data scenarios. We apply our method to community question answering (cQA) and extractive multi-document summarization, finding that it significantly outperforms existing interactive approaches. We also show that the ranking function learned by our method is an effective reward function for reinforcement learning, which improves the state of the art for interactive summarization.
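As a rough illustration of the interactive loop described above, the sketch below pits a current best candidate against an optimistically scored challenger and updates a simple Bradley-Terry utility model from each simulated preference label. This is a sketch under stated assumptions, not the authors' implementation: their method relies on Gaussian-process preference learning with Bayesian-optimization acquisition functions, whereas the UCB-style rule, learning rate, and simulated user here are hypothetical stand-ins.

```python
# Sketch only: an interactive pairwise-ranking loop with a Bradley-Terry
# utility model and a UCB-style acquisition rule. NOT the paper's method;
# all names and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_candidates = 50
true_quality = rng.normal(size=n_candidates)   # hidden "gold" utilities
utilities = np.zeros(n_candidates)             # model's utility estimates
n_labels = np.zeros(n_candidates)              # comparisons seen per candidate

def simulated_user(i, j):
    """Stand-in for the human: noisily prefers the higher-quality candidate."""
    p_i_wins = 1.0 / (1.0 + np.exp(-(true_quality[i] - true_quality[j])))
    return i if rng.random() < p_i_wins else j

def update(winner, loser, lr=0.5):
    """One Bradley-Terry gradient step on the observed preference."""
    p_correct = 1.0 / (1.0 + np.exp(-(utilities[winner] - utilities[loser])))
    utilities[winner] += lr * (1.0 - p_correct)
    utilities[loser] -= lr * (1.0 - p_correct)

for _ in range(100):
    # Acquisition: pit the current incumbent against the candidate whose
    # optimistic (upper-confidence) utility is highest, so labeling effort
    # concentrates on plausibly high-quality candidates.
    ucb = utilities + 1.0 / np.sqrt(1.0 + n_labels)
    incumbent = int(np.argmax(utilities))
    mask = np.arange(n_candidates) == incumbent
    challenger = int(np.argmax(np.where(mask, -np.inf, ucb)))
    winner = simulated_user(incumbent, challenger)
    loser = challenger if winner == incumbent else incumbent
    update(winner, loser)
    n_labels[[incumbent, challenger]] += 1

print("selected:", int(np.argmax(utilities)), "true best:", int(np.argmax(true_quality)))
```

The point of the acquisition rule is that comparisons concentrate on candidates that could plausibly be the best, rather than being spread uniformly across the whole candidate space.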


2021
Author(s): Markus Nivala ◽ Alena Seredko ◽ Tanya Osborne ◽ Thomas Hillman

The purpose of this study is to examine whether, and to what extent, online Community Question Answering platforms expand the opportunities for professional development in programming. Longitudinal and cross-sectional analyses of Stack Overflow Developer Surveys were used to examine users' geographical distribution, gender, experience, professional status, platform usage, and education. To study differences between the countries with the largest numbers of respondents, the developer survey data were combined with indicators of human development, gender equality, and educational attainment. The results show that the Stack Overflow community has expanded to some extent, both in geographical distribution and in the programming expertise of its users. However, the community reflects, and fails to mitigate, the apparent gender disparity in the field of programming. Furthermore, participation appears to be conditioned by formal education, especially in developing countries. In general, participation patterns on Stack Overflow seem to be heavily influenced by local conditions in different countries.
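A minimal sketch of the kind of country-level join described above, combining survey responses with external indicators such as the Human Development Index. All column names and values are illustrative assumptions; the study's actual variables, data sources, and statistics are not reproduced here.

```python
# Sketch only: joining developer-survey rows with country-level indicators.
# Column names and indicator values are illustrative, not the study's data.
import pandas as pd

survey = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "country": ["India", "Germany", "Brazil", "Germany"],
    "gender": ["man", "woman", "man", "woman"],
    "formal_cs_education": [True, True, False, True],
})

indicators = pd.DataFrame({
    "country": ["India", "Germany", "Brazil"],
    "hdi": [0.63, 0.94, 0.75],                     # approximate HDI values
    "gender_inequality_index": [0.49, 0.07, 0.39],
})

merged = survey.merge(indicators, on="country", how="left")

# Example question: does the share of formally educated respondents differ
# between higher- and lower-HDI countries?
print(merged.groupby(merged["hdi"] > 0.8)["formal_cs_education"].mean())
```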


2019 ◽ Vol 56 (1) ◽ pp. 58-67
Author(s): Anubrata Das ◽ Samreen Anjum ◽ Danna Gurari

Author(s): José Antonio Robles-Flores ◽ Gregory Schymik ◽ Julie Smith-David ◽ Robert St. Louis

Web search engines typically retrieve a large number of web pages and overload business analysts with irrelevant information. One approach that has been proposed for overcoming some of these problems is automated Question Answering (QA). This paper describes a case study designed to determine the efficacy of QA systems for generating answers to original fusion list questions: questions that have not previously been asked and answered, whose answers cannot be found on a single web site, and whose answers are lists of items. Results indicate that QA algorithms are not very good at producing complete answer lists and that searchers are not very good at constructing answer lists from snippets. These findings indicate a need for QA research to focus on crowdsourcing answer lists and improving output format.
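One way to make "complete answer lists" concrete is to score a system's list against a gold list by recall over matched items, with precision as a companion score, as sketched below. The metric and the exact string matching are illustrative assumptions rather than the instrument used in the study.

```python
# Sketch only: measuring the completeness of a list answer as recall of gold
# items. Matching by lowercased exact string is an illustrative simplification.
def list_answer_scores(system_list: list[str], gold_list: list[str]) -> dict[str, float]:
    sys_items = {s.strip().lower() for s in system_list}
    gold_items = {g.strip().lower() for g in gold_list}
    found = sys_items & gold_items
    return {
        "recall": len(found) / len(gold_items) if gold_items else 0.0,
        "precision": len(found) / len(sys_items) if sys_items else 0.0,
    }

# Hypothetical fusion list question: "Which countries border Switzerland?"
gold = ["France", "Germany", "Italy", "Austria", "Liechtenstein"]
system = ["France", "Germany", "Italy"]
print(list_answer_scores(system, gold))   # incomplete list: recall = 0.6, precision = 1.0
```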

