Assessment Question Characteristics Predict Medical Student Performance in General Pathology

Author(s):  
Tahyna Hernandez ◽  
Margret S. Magid ◽  
Alexandros D. Polydorides

Context.— Evaluation of medical curricula includes appraisal of student assessments in order to encourage deeper learning approaches. General pathology is our institution's 4-week, first-year course covering universal disease concepts (inflammation, neoplasia, etc).

Objective.— To compare types of assessment questions and determine which characteristics may predict student scores, degree of difficulty, and item discrimination.

Design.— Item-level analysis was employed to categorize questions along the following variables: type (multiple-choice question or matching answer), presence of a clinical vignette (and, if so, whether simple or complex), presence of a specimen image, information depth (simple recall or interpretation), knowledge density (first or second order), Bloom taxonomy level (1–3), and, for the final exam, subject familiarity (repeated concept and, if so, whether verbatim).

Results.— Assessments comprised 3 quizzes and 1 final exam (125 questions in total), scored during a 3-year period (417 students in total), for a total of 52 125 graded attempts. Overall, 44 890 attempts (86.1%) were correct. In multivariate analysis, question type emerged as the most significant predictor of student performance, degree of difficulty, and item discrimination, with multiple-choice questions significantly associated with lower mean scores (P = .004) and a higher degree of difficulty (P = .02), but also, paradoxically, poorer discrimination (P = .002). The presence of a specimen image was significantly associated with better discrimination (P = .04), and questions requiring data interpretation (versus simple recall) were significantly associated with lower mean scores (P = .003) and a higher degree of difficulty (P = .046).

Conclusions.— Assessments in medical education should comprise combinations of questions with various characteristics in order to encourage better student performance while also obtaining optimal degrees of difficulty and levels of item discrimination.
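The item-level analysis described here rests on two classical psychometric quantities: an item's difficulty index (the proportion of correct attempts) and its discrimination (how well the item separates strong from weak students). A minimal sketch of that computation, on synthetic 0/1 response data rather than the authors' dataset, might look like this:

```python
# Minimal item-analysis sketch (not the study's code): difficulty index
# (proportion correct) and discrimination (point-biserial correlation
# between each item and the rest-of-test score) from a 0/1 response matrix.
import numpy as np

def item_analysis(responses: np.ndarray):
    """responses: students x items matrix of 0/1 scores."""
    totals = responses.sum(axis=1)
    difficulty = responses.mean(axis=0)           # higher = easier item
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = totals - responses[:, j]           # exclude item from total
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

rng = np.random.default_rng(0)
demo = (rng.random((417, 125)) < 0.86).astype(int)  # synthetic data sized like the study
diff, disc = item_analysis(demo)
print(diff[:3], disc[:3])
```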

2017 ◽  
Vol 32 (4) ◽  
pp. 1-17 ◽  
Author(s):  
Dianne Massoudi ◽  
SzeKee Koh ◽  
Phillip J. Hancock ◽  
Lucia Fung

ABSTRACT In this paper we investigate the effectiveness of an online learning resource for introductory financial accounting students, using a suite of online multiple-choice questions (MCQs) for both summative and formative purposes. We found that the availability and use of the online resource resulted in improved examination performance for those students who actively used it. Further, we found a positive relationship between formative MCQs and unit content related to challenging financial accounting concepts. However, better examination performance was also linked to other factors, such as prior academic performance, tutorial participation, and demographics, including gender and attending university as an international student. JEL Classifications: I20; M41.
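The kind of analysis described, relating examination performance to MCQ use while controlling for prior performance, participation, and demographics, is typically run as a multiple regression. The sketch below is an assumption about the model form, not the authors' code, and all column names and the synthetic data are hypothetical:

```python
# Hedged sketch of an exam-performance regression with the covariates the
# paper names; the variable names and synthetic data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({                      # synthetic stand-in data
    "mcq_attempts": rng.integers(0, 40, n),
    "prior_gpa": rng.normal(5.0, 1.0, n),
    "tutorials_attended": rng.integers(0, 12, n),
    "gender": rng.choice(["F", "M"], n),
    "international": rng.choice([0, 1], n),
})
df["exam_score"] = (40 + 0.5 * df["mcq_attempts"] + 4 * df["prior_gpa"]
                    + rng.normal(0, 8, n))
model = smf.ols(
    "exam_score ~ mcq_attempts + prior_gpa + tutorials_attended"
    " + C(gender) + C(international)", data=df).fit()
print(model.params)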


Author(s):  
Pilar Gandía Herrero ◽  
Agustín Romero Medina

The quality of academic performance and learning outcomes depends on various psychological and contextual factors. The academic context includes training activities and the type of evaluation or examination, which in turn influence cognitive and motivational factors such as learning and study approaches and self-regulation. In our university context, the predominant exam format is the multiple-choice question, whose cognitive demand can vary. Following Bloom's well-known taxonomy, questions range, from lower to higher cognitive demand, over factual, conceptual, and application knowledge. Teachers do not normally take these classifications into account when preparing such exams. We propose an adapted model for classifying multiple-choice questions by cognitive requirement (associative memorization, comprehension, application), and we test it by analyzing an examination from a Psychology degree course, relating the results to measures of learning approaches (ASSIST and R-SPQ-2F questionnaires) and self-regulation in a sample of 87 students. The results show differential academic performance according to the "cognitive" type of question, as well as differences in learning approaches and self-regulation. We underline the value of taking these cognitive-requirement factors into account when writing multiple-choice questions.
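Testing for "differential academic performance according to cognitive type of question" amounts to comparing per-item success rates across the three proposed categories. One plausible form of that comparison, on hypothetical per-item data rather than the study's exam, is a one-way ANOVA:

```python
# Illustrative sketch (hypothetical data): comparing per-question success
# rates across the three cognitive-demand categories the authors propose.
import pandas as pd
from scipy import stats

df = pd.DataFrame({  # hypothetical per-item results
    "cognitive_type": ["memorization"] * 4 + ["comprehension"] * 4 + ["application"] * 4,
    "pct_correct": [88, 91, 85, 90, 76, 72, 80, 74, 65, 60, 70, 62],
})
groups = [g["pct_correct"].to_numpy() for _, g in df.groupby("cognitive_type")]
f, p = stats.f_oneway(*groups)
print(f"F = {f:.2f}, p = {p:.3f}")
```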


2020 ◽  
Author(s):  
Zainal Abidin

The National Examination and the Cambridge Checkpoint are instruments for evaluating the standard competence of students at the secondary level. The National Examination's questions are based on the national curriculum of Indonesia, whereas the Cambridge Checkpoint's questions are based on the Cambridge curriculum. The aim of this research is to analyze the type of each question and the distribution of content strands in the 2015 National Mathematics Examination and the 2015 Cambridge Checkpoint Mathematics for the secondary level. This is a descriptive study with a qualitative approach. The 2015 National Mathematics Examination has only one paper, while the 2015 Cambridge Checkpoint Mathematics has two. It can be concluded that all questions on the 2015 National Mathematics Examination are multiple-choice. On the Cambridge Checkpoint, paper 1 comprises 11.43% short-answer, 68.57% analysis, 8.57% completion, and 11.43% matching questions, while paper 2 comprises 22.22% short-answer, 58.33% analysis, 11.11% completion, 2.78% matching, 2.78% multiple-choice, and 2.78% yes/no questions. By strand, the National Mathematics Examination contains 22.25% number, 27.5% algebra, 40% geometry and measurement, and 10% statistics and probability. The Cambridge Checkpoint contains 45.72% number, 20% algebra, 17.14% geometry and measurement, and 17.14% statistics and probability in paper 1, and 33.33% number, 19.45% algebra, 25% geometry and measurement, and 22.22% statistics and probability in paper 2.
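The tallying step behind these percentages is straightforward once each item is coded with a question type and a strand. A minimal sketch of that step, with a hypothetical coding of a few items rather than the actual exams, could be:

```python
# Minimal sketch of the tallying step (hypothetical coded items):
# per-paper percentages of each question type.
from collections import Counter

items = [  # hypothetical coded items: (paper, question_type, strand)
    ("paper1", "short-answer", "number"),
    ("paper1", "analysis", "algebra"),
    ("paper1", "analysis", "geometry"),
    ("paper2", "multiple-choice", "number"),
]
for paper in sorted({p for p, _, _ in items}):
    types = Counter(t for p, t, _ in items if p == paper)
    total = sum(types.values())
    for t, n in sorted(types.items()):
        print(f"{paper}: {t} = {100 * n / total:.2f}%")
```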


2019 ◽  
Vol 8 (1) ◽  
pp. 1
Author(s):  
Sherry Fukuzawa ◽  
Michael DeBraga

Graded Response Method (GRM) is an alternative to multiple-choice testing in which students rank options according to their relevance to the question. GRM requires discrimination and inference between statements and is a cost-effective critical thinking assessment in large courses where open-ended answers are not feasible. This study examined critical thinking assessment in GRM versus open-ended and multiple-choice questions composed from Bloom's taxonomy in an introductory undergraduate course in anthropology and archaeology (N = 53 students). Critical thinking was operationalized as the ability to assess a question with evidence to support or evaluate arguments (Ennis, 1993). We predicted that students who performed well on multiple-choice questions from Bloom's taxonomy levels 4-6 and on open-ended questions would perform well on GRM questions involving similar concepts. High-performing students on GRM were predicted to have higher course grades. The null hypothesis was that question type would have no effect on critical thinking assessment. In two quizzes, there was weak correlation between GRM and open-ended questions (R² = 0.15); however, there was strong correlation in the exam (R² = 0.56). Correlations were consistently higher between GRM and multiple-choice questions from Bloom's taxonomy levels 4-6 (R² = 0.23, 0.31, 0.21) than levels 1-3 (R² = 0.13, 0.29, 0.18). GRM is a viable alternative to multiple-choice testing for critical thinking assessment without added resources or grading effort.
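The reported R² values are squared Pearson correlations between students' scores on the two question formats. A sketch of that check, on hypothetical score vectors rather than the study's data:

```python
# Sketch of the correlation check (hypothetical data): squared Pearson
# correlation between GRM scores and open-ended scores on similar concepts.
import numpy as np

grm = np.array([7.5, 6.0, 8.5, 5.0, 9.0])         # hypothetical GRM scores
open_ended = np.array([6.0, 5.5, 8.0, 4.5, 8.5])   # hypothetical open-ended scores
r = np.corrcoef(grm, open_ended)[0, 1]
print(f"R^2 = {r**2:.2f}")
```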


2015 ◽  
Vol 39 (4) ◽  
pp. 327-334 ◽  
Author(s):  
Brandon M. Franklin ◽  
Lin Xiang ◽  
Jason A. Collett ◽  
Megan K. Rhoads ◽  
Jeffrey L. Osborn

Student populations are diverse such that different types of learners struggle with traditional didactic instruction. Problem-based learning has existed for several decades, but there is still controversy regarding the optimal mode of instruction to ensure success at all levels of students' past achievement. The present study addressed this problem by dividing students into the following three instructional groups for an upper-level course in animal physiology: traditional lecture-style instruction (LI), guided problem-based instruction (GPBI), and open problem-based instruction (OPBI). Student performance was measured by three summative assessments consisting of 50% multiple-choice questions and 50% short-answer questions as well as a final overall course assessment. The present study also examined how students of different academic achievement histories performed under each instructional method. When student achievement levels were not considered, the effects of instructional methods on student outcomes were modest; OPBI students performed moderately better on short-answer exam questions than both LI and GPBI groups. High-achieving students showed no difference in performance for any of the instructional methods on any metric examined. In students with low-achieving academic histories, OPBI students largely outperformed LI students on all metrics (short-answer exam: P < 0.05, d = 1.865; multiple-choice question exam: P < 0.05, d = 1.166; and final score: P < 0.05, d = 1.265). They also outperformed GPBI students on short-answer exam questions (P < 0.05, d = 1.109) but not multiple-choice exam questions (P = 0.071, d = 0.716) or final course outcome (P = 0.328, d = 0.513). These findings strongly suggest that typically low-achieving students perform at a higher level under OPBI as long as the proper support systems (formative assessment and scaffolding) are provided to encourage student success.
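The reported P values and Cohen's d effect sizes come from pairwise group comparisons. A hedged sketch of one such comparison, an independent-samples t-test plus Cohen's d on hypothetical score vectors (not the authors' analysis script):

```python
# Hedged sketch of a two-group comparison: Welch t-test plus Cohen's d
# (pooled-SD version) on hypothetical OPBI vs. LI scores.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

opbi = np.array([82.0, 78.0, 90.0, 85.0, 76.0])  # hypothetical scores
li = np.array([70.0, 65.0, 72.0, 68.0, 74.0])
t, p = stats.ttest_ind(opbi, li, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(opbi, li):.2f}")
```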


Author(s):  
Yesim Ozer Ozkan ◽  
Nesrin Ozaslan

The aim of this study is to determine the achievement levels of students participating in the Programme for International Student Assessment (PISA) 2003 and PISA 2012 tests in Turkey according to the question types in the mathematical literacy test. This study is a descriptive survey. Within the scope of the study, the mathematical literacy test items were classified as multiple-choice, complex multiple-choice, and constructed-response items. The ratios of correct, partially correct, and incorrect responses given to each question type were determined. Findings show that student achievement differs across question types. While the question type with the highest success average in the PISA 2003 test was multiple-choice, students got the highest scores on complex multiple-choice questions in the PISA 2012 test. The question type with the lowest success average was complex multiple-choice in the PISA 2003 test, while students got the lowest scores on constructed-response items in the PISA 2012 test. Given the constructivist education approach introduced in the 2005-2006 academic year, a rise in performance on constructed-response questions would be expected; however, the findings reveal that success on constructed-response questions decreased across the administration years.
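The core tabulation here is a per-question-type breakdown of correct, partially correct, and incorrect response ratios. One way to produce such a table, on hypothetical response-level records rather than the PISA data, is a normalized cross-tabulation:

```python
# Illustrative sketch (hypothetical records): per-question-type ratios of
# correct / partially correct / incorrect responses, as the study tabulates.
import pandas as pd

df = pd.DataFrame({
    "question_type": ["multiple-choice", "constructed", "complex-mc",
                      "multiple-choice", "constructed"],
    "outcome": ["correct", "partial", "incorrect", "correct", "incorrect"],
})
print(pd.crosstab(df["question_type"], df["outcome"], normalize="index"))
```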


2006 ◽  
Vol 5 (3) ◽  
pp. 270-280 ◽  
Author(s):  
Elizabeth Kitchen ◽  
Summer H. King ◽  
Diane F. Robison ◽  
Richard R. Sudweeks ◽  
William S. Bradshaw ◽  
...  

In this article we report a 3-yr study of a large-enrollment Cell Biology course focused on developing student skill in scientific reasoning and data interpretation. Specifically, the study tested the hypothesis that converting the role of exams from summative grading devices to formative tools would increase student success in acquiring those skills. Traditional midterm examinations were replaced by weekly assessments administered under test-like conditions and followed immediately by extensive self, peer, and instructor feedback. Course grades were criterion based and derived using data from the final exam. To alleviate anxiety associated with a single grading instrument, students were given the option of informing the grading process with evidence from weekly assessments. A comparative analysis was conducted to determine the impact of these design changes on both performance and measures of student affect. Results at the end of each year were used to inform modifications to the course in subsequent years. Significant improvements in student performance and attitudes were observed as refinements were implemented. The findings from this study emphasized the importance of prolonging student opportunity and motivation to improve by delaying grade decisions, providing frequent and immediate performance feedback, and designing that feedback to be maximally formative and minimally punitive.
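The grading design described, criterion-based final grades that students may optionally inform with weekly-assessment evidence, can be illustrated with a small sketch. The cutoffs and the blending rule below are assumptions for illustration, not the authors' actual scheme:

```python
# Hedged sketch of criterion-based grading optionally informed by weekly
# evidence; the cutoffs and 70/30 blend are assumed, not the authors' rule.
def course_grade(final_pct: float, weekly_pcts: list[float] | None = None) -> str:
    score = final_pct
    if weekly_pcts:  # weekly evidence can only help, never hurt
        blended = 0.7 * final_pct + 0.3 * sum(weekly_pcts) / len(weekly_pcts)
        score = max(final_pct, blended)
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]  # assumed criteria
    return next((g for c, g in cutoffs if score >= c), "E")

print(course_grade(78.0, weekly_pcts=[88.0, 92.0, 85.0]))
```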

