Incorrect Answer in CME Online Quiz Questions

2021 ◽  
Vol 181 (4) ◽  
pp. 570
Author(s):  
Kelly Cline ◽  
Holly Zullo ◽  
David A Huckaby

Abstract: Common student errors and misconceptions can be addressed through the method of classroom voting, in which the instructor presents a multiple-choice question to the class, and after a few minutes for consideration and small-group discussion, each student votes on the correct answer, using a clicker or a phone. If a large number of students have voted for one particular incorrect answer, the instructor can recognize and address the issue. In order to identify multiple-choice questions that are especially effective at provoking common errors and misconceptions, we recorded the percentages of students voting for each option on each question used in 25 sections of integral calculus, taught by 7 instructors, at 4 institutions, over the course of 12 years, on a collection of 172 questions. We restricted our analysis to the 115 questions which were voted on by at least 5 different classes. We present the six questions that caused the largest percentages of students to vote for a particular incorrect answer, discuss how we used these questions in the classroom, and examine the common features of these questions. Further, we look for correlations between question characteristics and the mean percentage of students voting for common errors on these questions, and we find that questions based on general cases have higher percentages of students voting for common errors.


Author(s):  
N. D. Setyani ◽  
Cari Cari ◽  
Suparmi Suparmi ◽  
J. Handhika

Newton’s laws are fundamental concepts that need to be studied and understood correctly. Presenting a concept in different representations helps students understand the concept being learned, and a student’s ability to express Newton’s laws in different representations indicates the quality of his or her conceptual understanding. This research aims to describe students’ conceptual understanding of Newton’s laws based on their verbal and visual (pictorial and graphical) problem solving. The method is qualitative, with a sample of 71 physics education students from IKIP PGRI Madiun (14 students) and Sebelas Maret University (57 students). The instruments used were a conceptual test and interviews. The results showed that most students gave incorrect answers to the conceptual physics problems: 69% answered the first-law problem incorrectly, 71% the second-law problem, and 76% the third-law problem. The students do not understand the language of physics correctly and hold incorrect physics concepts, so they grasp only a few of the physics concepts underlying Newton’s laws.


1988 ◽  
Vol 16 (3) ◽  
pp. 299-301 ◽  
Author(s):  
J. J. O'Leary ◽  
B. J. Pollard ◽  
M. J. Ryan

A method of testing the location of an endotracheal tube, in the trachea or oesophagus, was subjected to trial. The test involves drawing back on the plunger of a 50 ml syringe connected with airtight fittings to the endotracheal tube connector, with the endotracheal tube cuff deflated. The ability to withdraw 30 ml of air confirms tracheal intubation. When marked resistance to withdrawal of the plunger occurs, and on release the plunger rebounds to its original position, the oesophagus has been intubated. The method was 100% accurate in fifty intubations, 25 tracheal and 25 oesophageal. The technique has been in routine use by one author for several years without giving an incorrect answer, and enthusiastic use by the other authors is producing the same result.


2008 ◽  
Vol 12 (1) ◽  
pp. 55-69 ◽  
Author(s):  
Soile Loukusa ◽  
Eeva Leinonen

Development of comprehension of ironic utterances in 3- to 9-year-old Finnish-speaking children

This study explores the comprehension of simple ironic utterances in 210 Finnish children aged 3 to 9 years. If a child answered a question correctly, he or she was asked to explain the answer. The results indicated large individual variation within age groups, in both answers and explanations. In terms of correct answers there was a significant difference between 6- and 7-year-olds, and in correct explanations between the 3-4, 6-7, and 7-8 age groups. Analysis of incorrect answers showed that literal interpretation of an utterance was the most common incorrect answer type in all age groups. Totally irrelevant answers occurred only in children aged 3 and 4. Among incorrect explanations, the "turn-taking" and "incorrect focus" categories were the most common types. Contrary to previous studies, in this study some of the 3- and 4-year-olds already showed an emerging ability to comprehend irony.


2007 ◽  
Vol 12 (5) ◽  
pp. 238-242
Author(s):  
Kathy Hawes

In many mathematics classrooms, students are responsible for correcting their own homework. Placing a red X next to an incorrect answer is often the extent of homework checking because students often lack the ability to find and correct mistakes in their own work. Without this skill, students lose a valuable opportunity to discuss homework in a meaningful way.


2012 ◽  
Vol 586 ◽  
pp. 241-246
Author(s):  
Li Min Li ◽  
Zhong Sheng Wang

When diagnosing sudden mechanical failures, careful feature selection can make the classification result more accurate. This article describes an affinity propagation clustering algorithm for feature selection in sudden machinery failure diagnosis. General feature selection methods reduce the dimension of the feature set, but in doing so they transform the data in the feature space, which can lead to incorrect diagnoses. The affinity propagation method instead measures the similarity between features, removes the redundancy among them, and selects an exemplar subset of the features, without changing the data in the feature space. After testing the clustering, and feeding the results of PCA and of affinity propagation clustering into the same SVM classifier, we conclude that the latter has a lower error rate than the former.
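The exemplar-selection step described in this abstract can be sketched in a few lines; this is a minimal illustration assuming scikit-learn's AffinityPropagation, with absolute feature correlation standing in for the similarity measure (the paper does not specify its similarity function):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def select_exemplar_features(X):
    """Pick one representative (exemplar) feature per cluster of similar features."""
    # Similarity between features (columns): absolute Pearson correlation.
    similarity = np.abs(np.corrcoef(X, rowvar=False))
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    ap.fit(similarity)
    # Exemplar indices: redundant near-duplicate features collapse onto their
    # cluster exemplar, while the data itself is never transformed.
    return np.sort(ap.cluster_centers_indices_)

# Usage: keep only the exemplar columns, then train any classifier (e.g. an SVM).
rng = np.random.default_rng(0)
base = rng.standard_normal((200, 4))
X = np.hstack([base, base[:, :2] + 0.01 * rng.standard_normal((200, 2))])  # 2 redundant columns
kept = select_exemplar_features(X)
X_reduced = X[:, kept]
```

Unlike PCA, the reduced matrix here is just a column subset of the original data, which is the property the abstract emphasizes.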


Author(s):  
Peter M. Ostafichuk ◽  
Masoud Malakoutian ◽  
Mahsa Khalili

This study uses two-stage team quizzes to assess differences in team decision-making by gender and nationality.  Over 200 teams in two different engineering design courses delivered using Team-Based Learning across five years were considered.  In the two-stage quizzes, individuals first committed to their own answers, and then the team discussed the same questions and answered as a group.  Cases where an individual was incorrect and the team adopted that same incorrect answer were used as a measure of influence of that individual on team decision-making (i.e., “pushing” behaviour by the individual).  Similarly, cases where an individual was correct but the team adopted a different (incorrect) answer were used as a measure of lack of influence (i.e., “switching” behaviour by the individual).  Overall, no significant gender or nationality differences were found in pushing behaviours.  Male students and international students were found to be more likely to engage in switching behaviours.  The overall differences in switching were modest (0.3-0.4% difference per question), but this translates to between 5 and 15 more male/international students engaging in switching behaviours in a typical 75- to 150-student course.
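The pushing/switching tallies defined in this abstract can be computed directly from paired individual and team answers; the data layout below (per-member answer lists, a team answer list, and an answer key) is hypothetical, for illustration only:

```python
def pushing_switching_rates(individual, team, key):
    """Rate of 'pushing' (team adopts a member's wrong answer) and
    'switching' (team abandons a member's right answer for a wrong one)."""
    push = switch = total = 0
    for answers in individual.values():
        for q, ans in enumerate(answers):
            total += 1
            if ans != key[q] and team[q] == ans:
                push += 1    # member was wrong; team adopted that same wrong answer
            elif ans == key[q] and team[q] != key[q]:
                switch += 1  # member was right; team chose a different, wrong answer
    return push / total, switch / total

# Two members, two questions (hypothetical data).
individual = {"alice": ["X", "Y"], "bob": ["Z", "Y"]}
team = ["X", "Y"]   # the team's submitted answers
key = ["Z", "Y"]    # the correct answers
rates = pushing_switching_rates(individual, team, key)  # (0.25, 0.25)
```

Here alice's wrong answer on question 1 was adopted by the team (one pushing event), while bob's correct answer on question 1 was overridden (one switching event), out of four member-question pairs.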


Author(s):  
Arijit Ray ◽  
Michael Cogswell ◽  
Xiao Lin ◽  
Kamran Alipour ◽  
Ajay Divakaran ◽  
...  

Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error maps can indicate when a correctly attended region may be processed incorrectly leading to an incorrect answer, and hence, improve users’ understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users’ interpretation of explanations to evaluate their potential helpfulness to understand model correctness. We finally conduct user studies to see that our new explanations help users understand model correctness better than baselines by an expected 30% and that our proxy helpfulness metrics correlate strongly (rho>0.97) with how well users can predict model correctness.


2021 ◽  
Vol 45 (1) ◽  
pp. 182-190
Author(s):  
Justin W. Merry ◽  
Mary Kate Elenchin ◽  
Renee N. Surma

Multiple choice exams are ubiquitous, but advice on test-taking strategies varies and is not always well informed by research. This study evaluated the question of whether students benefit or are harmed when they change their initial answers on multiple choice questions in the context of physiology and biology courses. Previously marked examinations were reviewed for eraser marks that indicated answer changes, and the impact of these changes on exam grades was tabulated. In addition, faculty and students were surveyed for their opinions about changing answers. A plurality of faculty (36%) reported a belief that answer changes usually harm student grades, whereas a slim majority of students (51%) believed that answer changing helped their scores (χ2 = 60.52, P < 0.0001). Empirically, across two exams, students changed their answer from an incorrect answer to a correct one 2.8 times (SD 2.2) compared with 1.0 time (SD 1.4) changing in the negative direction. Therefore, on average, students benefited ( V =  123.5, P < 0.0001) from answer changing. Furthermore, comparing across two exams in the same course, some students were consistently more likely to change their answers than others (adjusted R2= 0.23, P < 0.0001), but the impact of changing answers on the first exam provided no prediction of how much a student would benefit from answer changing on the second exam (adjusted R2= −0.004, P = 0.42). These data support the argument that students should be advised to review and revise responses to exam questions before submitting them.
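The paired comparison reported above (the V statistic indicates a Wilcoxon signed-rank test) can be illustrated as follows; the per-student counts below are invented for the sketch and are not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-student counts of answer changes on one exam.
wrong_to_right = np.array([3, 2, 5, 1, 4, 2, 3, 0, 2, 4])  # beneficial changes
right_to_wrong = np.array([1, 0, 2, 1, 1, 0, 1, 1, 0, 2])  # harmful changes

# Paired, non-parametric test of whether students gain more than they lose
# by changing answers (zero differences are dropped by default).
stat, p = wilcoxon(wrong_to_right, right_to_wrong)
```

A signed-rank test is the natural choice here because the two counts are paired within each student and the differences are small integers with no reason to assume normality.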


2020 ◽  
Vol 2 (2) ◽  
pp. 152-161
Author(s):  
Mutakhirani Mustafa

The research aimed to analyze students’ reading comprehension in the second year of SMA Negeri 4 Bulukumba, Kabupaten Bulukumba. The findings showed that the second-year students’ reading comprehension was at a “low” level. On the reading comprehension test, 60% of students answered the literal items correctly and 40% incorrectly; 40% answered the interpretation items correctly and 60% incorrectly; 30% answered the critical items correctly and 70% incorrectly; and 35% answered the creative items correctly and 65% incorrectly. These results indicate that the second-year students of SMA Negeri 4 Bulukumba faced difficulties at every comprehension level tested: literal, interpretation, critical, and creative. Based on the student questionnaire, the researcher proposes some solutions to the students’ reading problems: students should practice reading whatever they like every day, for example articles, essays, books, newspapers, and novels, and should practice interpreting what they have read in order to improve their reading comprehension. Keywords: Student, Reading Comprehension, Level, Analyzing

