Evaluating active learning methods for annotating semantic predications

JAMIA Open ◽  
2018 ◽  
Vol 1 (2) ◽  
pp. 275-282 ◽  
Author(s):  
Jake Vasilakes ◽  
Rubina Rizvi ◽  
Genevieve B Melton ◽  
Serguei Pakhomov ◽  
Rui Zhang

Abstract
Objectives: This study evaluated and compared a variety of active learning strategies, including a novel strategy we proposed, as applied to the task of filtering incorrect semantic predications in SemMedDB.
Materials and Methods: We evaluated 8 active learning strategies covering 3 types (uncertainty, representative, and combined) on 2 datasets totaling 6,000 semantic predications from SemMedDB, covering the domains of substance interactions and clinical medicine, respectively. We also designed a novel combined strategy, called dynamic β, that does not use hand-tuned hyperparameters. Each strategy was assessed by the area under the learning curve (ALC) and by the number of training examples required to achieve a target area under the ROC curve (AUC). We also visualized and compared the query patterns of the strategies.
Results: All types of active learning (AL) methods beat the baseline on both datasets. Combined strategies outperformed all other methods in terms of ALC, exceeding the baseline by over 0.05 ALC on both datasets and reducing annotation effort by 58% in the best case. Representative strategies performed well, but their performance was matched or exceeded by the combined methods. Our proposed AL method, dynamic β, shows a promising ability to achieve near-optimal performance across the 2 datasets.
Discussion: Our visual analysis of query patterns indicates that strategies that efficiently obtain a representative subsample perform better on this task.
Conclusion: Active learning is shown to be effective at reducing annotation costs for filtering incorrect semantic predications from SemMedDB. Our proposed AL method demonstrated promising performance.
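The abstract gives no implementation details, so the following is only a rough sketch of the kind of pool-based uncertainty sampling evaluated here, scored by per-round test AUC with the ALC taken over the resulting learning curve. It is not the authors' code: the logistic-regression classifier, seed-set size, and batch size are arbitrary assumptions, and the representative, combined, and dynamic β strategies are omitted.

    # Hypothetical sketch (not the authors' code): one pool-based
    # uncertainty-sampling run, scored by test AUC each round, with the
    # area under the learning curve (ALC) computed at the end.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def uncertainty_query(model, X_pool, batch_size=10):
        """Pick the pool examples whose predicted P(correct) is nearest 0.5."""
        probs = model.predict_proba(X_pool)[:, 1]
        return np.argsort(np.abs(probs - 0.5))[:batch_size]

    def active_learning_run(X_pool, y_pool, X_test, y_test, n_rounds=50, seed=0):
        """Run AL rounds; return per-round test AUCs and the ALC."""
        rng = np.random.default_rng(seed)
        labeled = list(rng.choice(len(X_pool), size=10, replace=False))
        aucs = []
        for _ in range(n_rounds):
            model = LogisticRegression(max_iter=1000)
            model.fit(X_pool[labeled], y_pool[labeled])
            aucs.append(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
            unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
            picked = uncertainty_query(model, X_pool[unlabeled])
            labeled.extend(unlabeled[picked].tolist())
        # ALC: area under the AUC-vs-round curve, normalized to [0, 1];
        # with equally spaced rounds this reduces to the mean per-round AUC.
        return aucs, float(np.mean(aucs))

A representative or combined strategy would change only uncertainty_query, for example by scoring pool examples on their density in feature space or on a weighted mix of uncertainty and density.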

Author(s):  
Zehong Hu ◽  
Jie Zhang

Active learning strategies are often used in crowd labeling to improve task assignment. However, these strategies incur prohibitive computation time and still fall short of the best possible assignment, because they simply evaluate each candidate assignment and greedily select the apparently optimal one. In this paper, we first derive an efficient algorithm for assignment evaluation. Then, to overcome the uncertainty of labels, we develop a novel strategy that modulates the scope of the greedy task assignment according to posterior uncertainty and keeps the evaluation optimistic. Experiments on two popular worker models and four MTurk datasets show that our strategy achieves the best performance and the highest computational efficiency.
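The abstract describes the strategy only at a high level. As a loose, hypothetical illustration (not the authors' algorithm), one way to modulate the scope of a greedy assignment with posterior uncertainty is sketched below for binary labels under a simple one-coin worker model, using expected entropy reduction as the value of an assignment; the optimistic-evaluation component and the efficient evaluation algorithm are omitted.

    # Hypothetical sketch, not the authors' method: a greedy assignment
    # step whose candidate scope widens as posterior uncertainty grows.
    import numpy as np

    def entropy(p):
        """Entropy (bits) of a Bernoulli posterior P(y = 1) = p."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def assign_task(posteriors, worker_acc):
        """Choose a task for one worker with known (one-coin) accuracy."""
        h = entropy(posteriors)
        # Scope grows with mean uncertainty: instead of committing to the
        # single greedy argmax, compare the k most uncertain tasks.
        k = max(1, int(np.ceil(h.mean() * len(posteriors))))
        candidates = np.argsort(h)[-k:]
        gains = []
        for t in candidates:
            p = posteriors[t]
            p1 = p * worker_acc + (1 - p) * (1 - worker_acc)  # P(label = 1)
            post1 = p * worker_acc / p1                   # P(y = 1 | label = 1)
            post0 = p * (1 - worker_acc) / (1 - p1)       # P(y = 1 | label = 0)
            expected_h = p1 * entropy(post1) + (1 - p1) * entropy(post0)
            gains.append(h[t] - expected_h)               # expected info gain
        return int(candidates[np.argmax(gains)])

Widening the candidate set when the label posteriors are uncertain trades a little greediness for robustness: the single apparently best assignment is least trustworthy exactly when uncertainty is high.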


2019 ◽  
Vol 049 (01) ◽  
Author(s):  
Linda Strubbe ◽  
Jared Stang ◽  
Tara Holland ◽  
Sarah Bean Sherman ◽  
Warren Code

2019 ◽  
Author(s):  
Kalyca N. Spinler ◽  
René A. Shroat-Lewis ◽  
Michael T. DeAngelis

2000 ◽  
Vol 24 (1) ◽  
pp. 30-37 ◽  
Author(s):  
J R Moy ◽  
D W Rodenbaugh ◽  
H L Collins ◽  
S E DiCarlo

Traditional review sessions typically focus on instructor-based learning. However, experts in higher education have long recommended teaching modalities that incorporate student-based active-learning strategies. Given this, we developed an educational game in pulmonary physiology for first-year medical students, based loosely on the popular television game show Who Wants To Be A Millionaire. The purpose of our game, Who Wants To Be A Physician, was to give students an educational tool with which to review material previously presented in class. Our goal in designing this game was to encourage students to be active participants in their own learning process. The game was constructed as a manual containing a bank of questions in various areas of pulmonary physiology: basic concepts, pulmonary mechanics, ventilation, pulmonary blood flow, pulmonary gas exchange, gas transport, and control of ventilation. Detailed answers are included in the manual to assist the instructor or player in comprehending the material. In addition, an evaluation instrument was used to assess the effectiveness of this instructional tool in an academic setting. Specifically, the evaluation instrument addressed five major components: goals and objectives, participation, content, components and organization, and summary and recommendations. Students responded positively to our game and to the concept of active learning. Moreover, we are confident that this educational tool has enhanced the students' learning process and their ability to understand and retain information.

