The self-assessment dilemma: an open-source, ethical method using Matlab to formulate multiple-choice quiz questions for online reinforcement

2018 ◽  
Vol 42 (4) ◽  
pp. 697-703
Author(s):  
Harry J. Witchel ◽  
Joseph H. Guppy ◽  
Claire F. Smith

Student self-assessment using computer-based quizzes has been shown to increase subject memory and engagement. Some types of self-assessment quizzes can be associated with a dilemma between 1) medical students, who want the self-assessment quiz to be clearly related to upcoming summative assessments or curated by the exam-setters, and 2) university administrators and ethics committees, who want clear guarantees that the self-assessment quizzes are not based on the summative assessments or made by instructors familiar with the exam bank of items. An algorithm in Matlab was developed to formulate multiple-choice questions for both ion transport proteins and pharmacology. A resulting question/item subset was uploaded to the Synap online self-quiz web platform, and 48 year 1 medical students engaged with it for 3 wk. Anonymized engagement statistics for students were provided by the Synap platform, and a paper-based exit questionnaire with an 80% response rate (n = 44) measured satisfaction. Four times as many students accessed the quiz system via laptop compared with phone/tablet. Across the 391 questions/items, over 11,749 attempts were made. Greater than 80% of respondents agreed with each of the positive statements (ease of use, enjoyed, engaged more, learned more, and wanted it to be extended to other modules). Despite simplistic questions and rote memorization, the questions developed by this system were engaged with and were received positively. Students strongly supported extending the system.
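The abstract does not detail the Matlab algorithm itself; one common approach to formulating such items automatically is template-based generation, where each fact in a curated table becomes a question stem and the answers to the other facts serve as distractors. The sketch below (in Python rather than Matlab, with an invented fact table) is purely illustrative of that idea, not the authors' implementation:

```python
import random

# Illustrative sketch of template-based MCQ generation: one item per fact,
# with distractors drawn from the other facts' answers. The fact table,
# function names, and option count are assumptions for the example.
facts = {
    "Which transporter exchanges 3 Na+ for 1 Ca2+?": "NCX",
    "Which pump moves 3 Na+ out and 2 K+ in per ATP?": "Na+/K+-ATPase",
    "Which channel carries the cardiac pacemaker 'funny' current?": "HCN",
    "Which transporter reabsorbs glucose with Na+ in the gut?": "SGLT1",
}

def make_items(fact_table, n_options=4, seed=0):
    """Build one multiple-choice item per fact in fact_table."""
    rng = random.Random(seed)  # fixed seed makes the item bank reproducible
    items = []
    for stem, answer in fact_table.items():
        distractors = [a for a in fact_table.values() if a != answer]
        options = rng.sample(distractors, n_options - 1) + [answer]
        rng.shuffle(options)  # hide the position of the correct answer
        items.append({"stem": stem, "options": options, "answer": answer})
    return items

for item in make_items(facts):
    print(item["stem"], item["options"])
```

Because generation is driven by a fact table rather than by the summative exam bank, a scheme like this also addresses the ethical constraint described above: no exam-setter needs to see the generated items.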

Author(s):  
Sri G. Thrumurthy ◽  
Tania Samantha De Silva ◽  
Zia Moinuddin ◽  
Stuart Enoch

Specifically designed to help candidates revise for the MRCS exam, this book features 350 Single Best Answer multiple choice questions, covering the whole syllabus. Containing everything candidates need to pass the MRCS Part A SBA section of the exam, it focuses intensively on the application of basic sciences (applied surgical anatomy, physiology, and pathology) to the management of surgical patients. The high level of detail included within the questions and their explanations allows effective self-assessment of knowledge and quick identification of key areas requiring further attention. Varying approaches to Single Best Answer multiple choice questions are used, giving effective exam practice and guidance through revision and exam technique. This includes clinical case questions, 'positively-worded' questions, requiring selection of the most appropriate of relatively correct answers; 'two-step' or 'double-jump' questions, requiring several cognitive steps to arrive at the correct answer; as well as 'factual recall' questions, prompting basic recall of facts.


Best of Five MCQs for the Acute Medicine SCE is a new revision resource designed specifically for this high-stakes exam. Containing over 350 Best of Five multiple choice questions, this dedicated guide will help candidates to prepare successfully. The content mirrors the SCE in Acute Medicine Blueprint to ensure candidates are fully prepared for all the topics that may appear in the exam. Topics range from how to manage acute problems in cardiology or neurology to managing acute conditions such as poisoning. All answers have full explanations and further reading to ensure high quality self-assessment and quick recognition of areas that require further study.


2021 ◽  
pp. 9-10
Author(s):  
Bhoomika R. Chauhan ◽  
Jayesh Vaza ◽  
Girish R. Chauhan ◽  
Pradip R. Chauhan

Multiple choice questions are nowadays used in competitive examinations and formative assessment to assess students' eligibility and for certification. Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. The goal of the study was to identify the relationship between the item difficulty index and the item discrimination index in medical student assessment. 400 final-year medical students from various medical colleges responded to 200 items constructed for the study. The responses were assessed and analysed for item difficulty index and item discriminating power, and the two were analysed by statistical methods to identify any correlation. The discriminating power of the items with a difficulty index of 40%-50% was the highest. Summary and Conclusion: Items with a good difficulty index, in the range of 30%-70%, are good discriminators.
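The two statistics compared in this study follow standard classical-test-theory definitions: the difficulty index is the proportion of students answering an item correctly, and the discrimination index is the difference in that proportion between high-scoring and low-scoring groups. A minimal sketch of both computations (the 27% upper/lower split is a common convention, assumed here rather than taken from the paper):

```python
# Item analysis on a matrix of student responses (1 = correct, 0 = wrong).
# Difficulty index p = fraction of all students answering the item correctly.
# Discrimination index D = p(upper group) - p(lower group), with groups
# formed from the top and bottom fraction of students by total score.

def item_analysis(responses, group_fraction=0.27):
    """responses: list of per-student lists of 0/1 item scores."""
    n_students = len(responses)
    n_items = len(responses[0])
    ranked = sorted(responses, key=sum, reverse=True)  # highest total first
    k = max(1, int(n_students * group_fraction))
    upper, lower = ranked[:k], ranked[-k:]
    results = []
    for i in range(n_items):
        p = sum(s[i] for s in responses) / n_students              # difficulty
        d = (sum(s[i] for s in upper) - sum(s[i] for s in lower)) / k  # discrimination
        results.append((p, d))
    return results

# Tiny example: 4 students, 2 items; a 50/50 split for clarity.
print(item_analysis([[1, 1], [1, 0], [0, 1], [0, 0]], group_fraction=0.5))
```

An item answered correctly by everyone (p near 1) or no one (p near 0) cannot discriminate, which is why the paper finds the best discrimination at intermediate difficulty.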


PEDIATRICS ◽  
1975 ◽  
Vol 56 (4) ◽  
pp. 623-624

PERSONAL ASSESSMENT FOR CONTINUING EDUCATION (PACE): Presented by the American Academy of Pediatrics, PACE is a series of six three-hour written, self-scored self-assessment examinations designed to keep physicians abreast of advances in the field of pediatrics. Each PACE packet contains multiple-choice questions and patient management problems along with answer keys, normative data, and bibliographic references. PACE packets will be mailed at three-month intervals over the next 18 months. The cost for the entire six-part series is $50 for nonmembers.


2021 ◽  
Author(s):  
Joachim Neumann ◽  
Stephanie Simmrodt ◽  
Beatrice Bader ◽  
Bertram Opitz ◽  
Ulrich Gergs

BACKGROUND There remain doubts about whether single-choice (SC) multiple choice answer formats offer the best option to encourage deep learning, or whether SC formats simply lead to superficial learning or cramming. Moreover, cueing is always a drawback of the SC format. Another way to assess knowledge is true multiple-choice questions in which one or more answers can be true and the student does not know how many true answers to anticipate (the K′ or Kprime question format). OBJECTIVE Here, we compared single-choice answers (one true answer out of four, SC) with Kprime answers (one to four true answers out of four, Kprime) for the very same learning objectives in a study of pharmacology in medical students. METHODS Two groups of medical students were randomly subjected to a formative online test: group A was first given 15 SC questions (#1-15) followed by 15 different Kprime questions (#16-30). The opposite design was used for group B. RESULTS The mean number of right answers was higher for SC than for Kprime questions in group A (10.02 vs. 8.63, p < 0.05) and group B (9.98 vs. 6.66, p < 0.05). The number of right answers was higher for SC than for Kprime on nine questions in group A and on eight questions in group B (pairwise t-test, p < 0.05). Thus, SC questions are easier to answer than the same learning objectives in pharmacology given as Kprime questions. One year later, four groups were formed from the previous two groups and were again given the same online test, but in a different order: the main result was that all students fared better in the second test than in the initial test; however, the gain in points was highest if mode B had been given initially. CONCLUSIONS Kprime is less popular with students because it is more demanding, but it could improve memory of the subject matter and thus might be used more often by medical educators.
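The key structural difference between the two formats is that an SC item is a single choice among four options, whereas a Kprime item asks for an independent true/false judgement on each of four statements. The abstract does not state the scoring rule used; the sketch below assumes the common all-or-nothing convention for Kprime, under which credit is given only when all four judgements match the key:

```python
# Scoring sketch for the two formats compared in the study.
# The all-or-nothing Kprime rule is a common convention and an
# assumption here, not taken from the paper.

def score_sc(key_index, chosen_index):
    """Single choice: one true answer out of four options."""
    return 1 if key_index == chosen_index else 0

def score_kprime(key, answer):
    """Kprime: four independent true/false judgements; all must match."""
    return 1 if key == answer else 0

# A Kprime item with two true statements: a single wrong judgement
# about the last statement forfeits the whole point.
full_credit = score_kprime([True, False, True, False], [True, False, True, False])
no_credit = score_kprime([True, False, True, False], [True, False, True, True])
```

Under this rule the probability of guessing correctly drops from 1 in 4 (SC) to 1 in 16 (Kprime), which is consistent with the lower Kprime scores reported above.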


2019 ◽  
Vol 5 (1) ◽  
pp. e000495
Author(s):  
Danielle L Cummings ◽  
Matthew Smith ◽  
Brian Merrigan ◽  
Jeffrey Leggit

Background: Musculoskeletal (MSK) complaints comprise a large proportion of outpatient visits. However, multiple studies show that medical school curricula often fail to adequately prepare graduates to diagnose and manage common MSK problems. Current standardised exams inadequately assess trainees' MSK knowledge, and other MSK-specific exams, such as Freedman and Bernstein's (1998) exam, have limitations in implementation. We propose a new 30-question multiple choice exam for graduating medical students and primary care residents. Results highlight individual deficiencies and identify areas for curriculum improvement. Methods/Results: We developed a bank of multiple choice questions based on 10 critical topics in MSK medicine. The questions were validated with subject-matter experts (SMEs) using a modified Delphi method to obtain consensus on the importance of each question. Based on the SME input, we compiled 30 questions into the assessment. In a large-scale pilot test (167 post-clerkship medical students), the average score was 74% (range 53%-90%, SD 7.8%). In addition, detailed explanations and references were created for each question to allow an individual or group to review and enhance learning. Summary: The proposed MSK30 exam evaluates clinically important topics and offers an assessment tool for the clinical MSK knowledge of medical students and residents. It fills a gap in current curricula and improves on previous MSK-specific assessments through better clinical relevance and consistent grading. Educators can use the results of the exam to guide curriculum development and individual education.


1992 ◽  
Vol 36 (4) ◽  
pp. 356-360 ◽  
Author(s):  
Cortney G. Vargo ◽  
Clifford E. Brown ◽  
Sarah J. Swierenga

This study was designed to investigate whether computer-supported backtracking tools reduced navigation time over manual backtracking and to compare navigation times among a subset of four backtracking tools. Each tool was evaluated in the context of an experimental, hierarchical, direct-manipulation database. Trials consisted of an information retrieval task requiring subjects to answer multiple-choice questions about the contents of the database. The independent variables included the backtracking tool and the backtrack navigation Task Length. The dependent measures included navigation time, the frequency with which the computer tool was selected and used over manual backtracking (a Table of Contents), and questionnaire responses. Backtracking with any of the four computer-supported tools resulted in a significantly reduced navigation time over manual backtracking using the Table of Contents. When provided with a history list, subjects had significantly smaller navigation times when backtracking at the higher of two levels in the database hierarchy. There were no differences between computer tools in rated efficiency, ease of use, or objective or subjective preference measures.

