Enhancing Performance with a Learnable Strategy for Multiple Question Answering Modules

ETRI Journal ◽  
2009 ◽  
Vol 31 (4) ◽  
pp. 419-428 ◽  
Author(s):  
Hyo-Jung Oh ◽  
Sung Hyon Myaeng ◽  
Myung-Gil Jang


2018 ◽
Vol 7 (2) ◽  
pp. 275-308 ◽  
Author(s):  
Kristen Olson ◽  
Jolene D Smyth ◽  
Amanda Ganshert

Abstract In a standardized telephone interview, respondents ideally are able to provide an answer that easily fits the response task. Deviations from this ideal question answering behavior are behavioral manifestations of breakdowns in the cognitive response process and partially reveal mechanisms underlying measurement error, but little is known about which question characteristics or types of respondents are associated with which types of deviations. Evaluations of question problems tend to look at one question characteristic at a time, yet questions are composed of multiple characteristics, some of which are easier to manipulate experimentally (e.g., the presence of a definition) than others (e.g., attitude versus behavior). All of these characteristics can affect how respondents answer questions. Using data from a landline telephone interview, we estimate cross-classified random effects logistic regression models to simultaneously evaluate the effects of multiple question and respondent characteristics on six different respondent behaviors. We find that most of the variability in these respondent answering behaviors is associated with the questions rather than with the respondents themselves. Question characteristics that affect the comprehension and mapping stages of the cognitive response process are consistently associated with answering behaviors, whereas attitude questions do not consistently differ from behavioral questions. We also find that sensitive questions are more likely to yield adequate answers, and fewer reporting problems or clarification requests, than nonsensitive questions. Additionally, older respondents are less likely to answer adequately. Our findings suggest that survey designers should focus on questionnaire features related to comprehension and mapping to minimize interactional and data quality problems in surveys, and should train interviewers on how to resolve reporting problems.
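The cross-classified random effects structure named in the abstract can be sketched in generic form; the abstract does not give the covariates or exact specification, so the following is an illustrative model of that class, not the fitted model:

$$\operatorname{logit}\Pr(y_{ij}=1) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_i + v_j, \qquad u_i \sim N(0,\sigma_u^2), \quad v_j \sim N(0,\sigma_v^2),$$

where $y_{ij}$ indicates whether a given behavior occurs when respondent $j$ answers question $i$, $u_i$ is a question-level random effect, and $v_j$ is a respondent-level random effect; questions and respondents are crossed rather than nested, hence "cross-classified." The reported finding that most of the variability lies with the questions corresponds to $\sigma_u^2$ dominating $\sigma_v^2$.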


AI Magazine ◽  
2019 ◽  
Vol 40 (3) ◽  
pp. 67-78
Author(s):  
Guy Barash ◽  
Mauricio Castillo-Effen ◽  
Niyati Chhaya ◽  
Peter Clark ◽  
Huáscar Espinoza ◽  
...  

The workshop program of the Association for the Advancement of Artificial Intelligence's 33rd Conference on Artificial Intelligence (AAAI-19) was held in Honolulu, Hawaii, on Sunday and Monday, January 27–28, 2019. There were sixteen workshops in the program: Affective Content Analysis: Modeling Affect-in-Action; Agile Robotics for Industrial Automation Competition; Artificial Intelligence for Cyber Security; Artificial Intelligence Safety; Dialog System Technology Challenge; Engineering Dependable and Secure Machine Learning Systems; Games and Simulations for Artificial Intelligence; Health Intelligence; Knowledge Extraction from Games; Network Interpretability for Deep Learning; Plan, Activity, and Intent Recognition; Reasoning and Learning for Human-Machine Dialogues; Reasoning for Complex Question Answering; Recommender Systems Meet Natural Language Processing; Reinforcement Learning in Games; and Reproducible AI. This report contains brief summaries of all the workshops that were held.


Author(s):  
Ulf Hermjakob ◽  
Eduard Hovy ◽  
Chin-Yew Lin

2018 ◽  
Vol 10 (1) ◽  
pp. 57-64 ◽  
Author(s):  
Rizqa Raaiqa Bintana ◽  
Chastine Fatichah ◽  
Diana Purwitasari

Community-based question answering (CQA) helps people find the information they need through a community. When users cannot find that information in the archive, they post a new question; over time this inflates the CQA archive with duplicated questions. It therefore becomes an important problem to find questions in the CQA archive that are semantically similar to a new question. In this study, we use a convolutional neural network for semantic sentence modeling, obtaining representations that capture the content of the archived documents and of the new question. Retrieving semantically matching questions for a new question (query) from the question-answer archive with the convolutional neural network yields a mean average precision of 0.422, compared with 0.282 for a vector space model used as a baseline. Index Terms: community-based question answering, convolutional neural network, question retrieval
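As a concrete illustration, a convolutional sentence encoder of the kind described can map questions to fixed-length vectors and rank the archive by cosine similarity. This is a minimal sketch, not the authors' implementation: the architecture, vocabulary, and every hyperparameter below (embedding size, filter widths, number of filters) are illustrative assumptions, and training (e.g., a pairwise ranking loss over known duplicate pairs) is omitted.

```python
# Sketch of CNN-based question retrieval for a CQA archive (PyTorch).
# All dimensions and names are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvSentenceEncoder(nn.Module):
    """Encode a tokenized question into a fixed-length semantic vector."""
    def __init__(self, vocab_size=10000, embed_dim=100,
                 num_filters=64, widths=(2, 3, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One 1-D convolution per filter width, sliding over word positions.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, w) for w in widths
        )

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        x = self.embed(token_ids)             # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                 # (batch, embed_dim, seq_len)
        # Max-over-time pooling per filter width, then concatenate.
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.cat(pooled, dim=1)       # (batch, num_filters * len(widths))

def rank_archive(encoder, query_ids, archive_ids):
    """Return archive indices sorted by cosine similarity to the query."""
    with torch.no_grad():
        q = encoder(query_ids)                # (1, dim)
        a = encoder(archive_ids)              # (n, dim)
        sims = F.cosine_similarity(q, a)      # (n,) via broadcasting
    return sims.argsort(descending=True)
```

Mean average precision would then be computed over such ranked lists for a set of queries whose true duplicates are known, which is how the 0.422 (CNN) versus 0.282 (vector space model) comparison is scored.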

