A Diagnostic Assessment for Introductory Molecular and Cell Biology

2010 ◽  
Vol 9 (4) ◽  
pp. 453-461 ◽  
Author(s):  
Jia Shi ◽  
William B. Wood ◽  
Jennifer M. Martin ◽  
Nancy A. Guild ◽  
Quentin Vicens ◽  
...  

We have developed and validated a tool for assessing understanding of a selection of fundamental concepts and basic knowledge in undergraduate introductory molecular and cell biology, focusing on areas in which students often have misconceptions. This multiple-choice Introductory Molecular and Cell Biology Assessment (IMCA) instrument is designed for use as a pre- and posttest to measure student learning gains. To develop the assessment, we first worked with faculty to create a set of learning goals that targeted important concepts in the field and seemed likely to be emphasized by most instructors teaching these subjects. We interviewed students using open-ended questions to identify commonly held misconceptions, formulated multiple-choice questions that included these ideas as distracters, and reinterviewed students to establish validity of the instrument. The assessment was then evaluated by 25 biology experts and modified based on their suggestions. The complete revised assessment was administered to more than 1300 students at three institutions. Analysis of statistical parameters including item difficulty, item discrimination, and reliability provides evidence that the IMCA is a valid and reliable instrument with several potential uses in gauging student learning of key concepts in molecular and cell biology.

Author(s):  
Sri G. Thrumurthy ◽  
Tania Samantha De Silva ◽  
Zia Moinuddin ◽  
Stuart Enoch

Specifically designed to help candidates revise for the MRCS exam, this book features 350 Single Best Answer multiple choice questions covering the whole syllabus. Containing everything candidates need to pass the MRCS Part A SBA section of the exam, it focuses intensively on the application of basic sciences (applied surgical anatomy, physiology, and pathology) to the management of surgical patients. The high level of detail included within the questions and their explanations allows effective self-assessment of knowledge and quick identification of key areas requiring further attention. Varying approaches to Single Best Answer multiple choice questions are used, giving effective exam practice and guidance through revision and exam technique. These include clinical case questions; 'positively-worded' questions, requiring selection of the most appropriate of several relatively correct answers; 'two-step' or 'double-jump' questions, requiring several cognitive steps to arrive at the correct answer; and 'factual recall' questions, prompting basic recall of facts.


2021 ◽  
pp. 9-10
Author(s):  
Bhoomika R. Chauhan ◽  
Jayesh Vaza ◽  
Girish R. Chauhan ◽  
Pradip R. Chauhan

Multiple choice questions are nowadays used in competitive examinations and formative assessment to assess students' eligibility and certification. Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. The goal of the study was to identify the relationship between the item difficulty index and the item discrimination index in medical student assessment. 400 final-year medical students from various medical colleges responded to 200 items constructed for the study. The responses were assessed and analysed for item difficulty index and item discriminating power, and the two measures were analysed statistically to identify their correlation. The discriminating power of the items with a difficulty index of 40%-50% was the highest. Summary and Conclusion: Items with a difficulty index in the range of 30%-70% are good discriminators.
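The difficulty and discrimination indices discussed in this study follow classical test theory. A minimal sketch of how they are typically computed (the 27% upper/lower group split is a common convention, not a detail taken from this study):

```python
# Classical test theory item analysis (illustrative sketch).
# `correct` is a per-student 0/1 list for one item; `total_scores`
# holds each student's total test score.

def item_difficulty(correct):
    """Difficulty index p: proportion of examinees answering correctly."""
    return sum(correct) / len(correct)

def item_discrimination(correct, total_scores, group_frac=0.27):
    """Discrimination index D = p(upper group) - p(lower group).

    Groups are the top and bottom `group_frac` of examinees ranked
    by total score (27% is a widely used convention).
    """
    n = len(correct)
    k = max(1, int(n * group_frac))
    order = sorted(range(n), key=lambda i: total_scores[i])
    lower, upper = order[:k], order[-k:]
    return (sum(correct[i] for i in upper) - sum(correct[i] for i in lower)) / k
```

An item answered correctly by half the cohort has p = 0.5, and D at or above roughly 0.3 is usually read as good discrimination, which is consistent with the 30%-70% difficulty band the study reports as discriminating best.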


Author(s):  
Eva Ratihwulan

<p>This research aimed to improve students' motivation and learning outcomes through a learning process using the STAD technique. The study used three stages: Pre-Cycle, Cycle I, and Cycle II, with each cycle comprising two meetings. Research data were obtained from a student questionnaire and a test with multiple choice questions: the questionnaire measured the development of learning motivation, while the test measured the development of learning outcomes. The data were analyzed descriptively and qualitatively. The results show that implementing learning with the STAD technique can improve students' motivation and achievement. Learning motivation in the Pre-Cycle averaged 23.47 (medium category); in Cycle I it increased to 26.57 (medium category), with 4 students (13%) achieving a high-category score. In Cycle II average learning motivation reached 33.87 (high category), with 22 students (73.3%) achieving a high or very high category score. Thus, by the end of Cycle II, learning motivation had increased. Learning outcomes in the Pre-Cycle averaged 61.93 (sufficient category), increased to 69.53 (sufficient category) in Cycle I, and reached 78.77 (good category) in Cycle II. The percentage of learning mastery rose from 3% in the Pre-Cycle to 31% in Cycle I and 86.66% in Cycle II. Thus, by the end of Cycle II, student learning outcomes had also increased. Based on these data, it can be concluded that implementing the STAD technique can improve students' learning motivation and learning outcomes.</p>


2014 ◽  
Vol 2 (4) ◽  
pp. 148 ◽  
Author(s):  
Hamza Mohammad Abdulghani ◽  
Farah Ahmad ◽  
Abdulmajeed Aldrees ◽  
Mahmoud S. Khalil ◽  
Gominda G. Ponnamperuma

2008 ◽  
Vol 7 (4) ◽  
pp. 422-430 ◽  
Author(s):  
Michelle K. Smith ◽  
William B. Wood ◽  
Jennifer K. Knight

We have designed, developed, and validated a 25-question Genetics Concept Assessment (GCA) to test achievement of nine broad learning goals in majors and nonmajors undergraduate genetics courses. Written in everyday language with minimal jargon, the GCA is intended for use as a pre- and posttest to measure student learning gains. The assessment was reviewed by genetics experts, validated by student interviews, and taken by >600 students at three institutions. Normalized learning gains on the GCA were positively correlated with averaged exam scores, suggesting that the GCA measures understanding of topics relevant to instructors. Statistical analysis of our results shows that differences in the item difficulty and item discrimination index values between different questions on pre- and posttests can be used to distinguish between concepts that are well or poorly learned during a course.
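The normalized learning gain mentioned above is conventionally Hake's gain, g = (post - pre) / (100 - pre), the fraction of the available improvement actually achieved. A sketch assuming scores expressed in percent (the specific score values below are illustrative, not from the study):

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: achieved improvement as a fraction of
    the maximum possible improvement, for scores given in percent."""
    if pre_pct >= 100:
        return 0.0  # no headroom left to improve
    return (post_pct - pre_pct) / (100.0 - pre_pct)
```

A class moving from 40% on the pretest to 70% on the posttest achieves g = 0.5, half the possible improvement; normalizing by the pretest score is what makes gains comparable across cohorts with different starting points.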


Author(s):  
Ng Wen Lee ◽  
Wan Noor Farah Wan Shamsuddin ◽  
Lim Chia Wei ◽  
Muhammad Nur Adilin Mohd Anuardi ◽  
Chan Swee Heng ◽  
...  

Criticisms of multiple choice questions (MCQs) include the possibility of students answering MCQs correctly by guessing, and MCQs are generally said to fall short in cultivating independent learning skills, such as taking charge of one's learning goals. Countering these common concerns, this research used online MCQ exercises with multiple attempts to investigate the experiences that drove students to become self-directed learners. Sixty students completed two sets of online MCQ exercises with multiple attempts outside of classroom time for six consecutive weeks. Both focus group interviews and an online survey were conducted to investigate the experience of using online MCQ exercises with multiple attempts in relation to the development of self-directed learning (SDL). The findings of the study suggest that the criticisms may be unfounded. The data lead to the conclusion that the majority of students do not simply try to guess the correct answers; rather, many attempted the online MCQ exercises more than once to improve themselves, indicating an interest in self-learning. Students also reported using search and inquiry skills, showing motivated initiative in planning how to overcome their weaknesses by independently looking for relevant resources, determining their own learning goals, and evaluating their own learning performance, a firm indicator of SDL development. Based on these findings, this study is able to refute the claim that MCQs cannot cultivate independent learning skills.


2021 ◽  
Vol 20 (2) ◽  
Author(s):  
Siti Khadijah Adam ◽  
Faridah Idris ◽  
Puteri Shanaz Jahn Kassim ◽  
Nor Fadhlina Zakaria ◽  
Rafidah Hod

Background: Multiple-choice questions (MCQs) are used to measure students' progress, and they should be analyzed properly to guarantee item appropriateness. The analysis usually determines three indices of an item: the difficulty or passing index (PI), the discrimination index (DI), and distractor efficiency (DE). Objectives: This study aimed to analyze the multiple-choice questions, with different numbers of options, in the preclinical and clinical examinations of the medical program of Universiti Putra Malaysia. Methods: This is a cross-sectional study. Forty MCQs with four options from the preclinical examinations and 80 MCQs with five options from the clinical examinations in 2017 and 2018 were analyzed using an optical mark recognition machine and MS Excel. The parameters included PI, DI, and DE. Results: The average difficulty levels of the MCQs in the preclinical and clinical phase examinations were similar in 2017 and 2018, ranging from 0.55 to 0.60 and considered 'acceptable' and 'ideal'. The average DIs were similar in all examinations and considered 'good' (ranging from 0.25 to 0.31), except in the 2018 clinical phase examination, which showed 'poor' items (DI = 0.20 ± 0.11). The preclinical phase questions showed an increase in the number of 'excellent' and 'good' items in 2018, from 37.5% to 70.0%. The number of items with no non-functioning distractors increased in 2018 by 10.0% for the preclinical phase and 6.25% for the clinical phase. Among all examinations, the 2018 preclinical MCQs showed the highest mean DE (71.67%). Conclusions: Our findings suggest that the preclinical phase questions improved, while more training on question preparation and continuous feedback should be given to clinical phase teachers. A higher number of options did not affect the difficulty level of a question; however, discrimination power and distractor efficiency might differ.
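Distractor efficiency (DE) is usually defined in terms of non-functional distractors, options chosen by fewer than some small fraction of examinees. A minimal sketch (the 5% cutoff and the option counts in the usage example are common conventions and illustrations, not data from this study):

```python
def distractor_efficiency(option_counts, key, threshold=0.05):
    """DE: percentage of an item's distractors that are functional,
    i.e. chosen by at least `threshold` (commonly 5%) of examinees.

    `option_counts` maps option label -> number of examinees choosing it;
    `key` is the label of the correct option.
    """
    total = sum(option_counts.values())
    distractors = [o for o in option_counts if o != key]
    functional = [o for o in distractors if option_counts[o] / total >= threshold]
    return 100.0 * len(functional) / len(distractors)
```

For a four-option item where one distractor attracts almost no responses, two of three distractors are functional and DE = 66.67%, close to the 71.67% mean DE reported above for the 2018 preclinical paper.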


2021 ◽  
Vol 6 (1) ◽  
pp. 32-44
Author(s):  
Rizky Prasetya

Teaching English as a Foreign Language (TEFL) has become more complex for developing countries, especially Indonesia, as pandemic conditions constrained conventional face-to-face teaching; the virtual classroom is an alternative way of implementing the learning and teaching process. The purpose of this study is to investigate and describe the selection of English teaching strategies based on the LMS Moodle and Google Classroom, particularly their testing and feedback features. The qualitative research followed a grounded theory approach, with all data collected by questionnaire using purposeful sampling. Among testing strategies, Moodle offered multiple-choice, short answer, essay, true/false statement, and missing word questions, whereas Google Classroom offered multiple-choice, short answer, and essay questions. The selection of assessments is one benchmark of English lecturers' readiness to teach using Moodle and Google Classroom as media, and a variety of testing formats can encourage students to perform well in e-learning. Three feedback classifications were identified: formal, formative, and summative. Formal feedback is designed and regularly scheduled as part of the process; formative feedback observes student learning to provide continuous feedback; and summative testing evaluates student learning at the end of an instructional unit by comparing it against a standard.


Author(s):  
David Metcalfe ◽  
Harveer Dev

SJTs are commonly used by organizations for personnel selection. They aim to provide realistic, but hypothetical, scenarios with possible answers that are either selected or ranked by the candidate. One such test will contribute half, or significantly more than half, of the score used by applicants to the UK Foundation Programme. The test involves a single paper lasting two hours and twenty minutes in which candidates answer 70 questions, equating to approximately two minutes per question. Your responses to 60 questions will be included in your final score, while ten questions embedded throughout the test are pilot questions, included for validation and not counted in your final score. You will not be able to differentiate pilot from genuine test questions and should answer every question as if it 'counts'. In one SJT pilot, 96% of candidates finished the test within two hours, which provides some indication of the time pressure. It is important to answer all questions and not simply 'guess' those left at the end. Although the SJT is not negatively marked, random guesses are not allocated points: the scoring software identifies guesses by looking for unusual or sporadic answer patterns. The SJT will be held locally by individual medical schools under invigilated conditions, so your medical school should be in touch about specific local arrangements. Each SJT paper will include a selection of questions, each mapped to a specific professional attribute. Questions should be evenly distributed between attributes and between scenario types, i.e. 'patient', 'colleague', or 'personal'. The SJT will include two types of question: ● multiple choice questions (approximately one-third) ● ranking questions (approximately two-thirds). The multiple choice questions begin with a scenario and provide eight possible answers, three of which are correct and should be selected; the remaining five are incorrect. 
The example in Box 2.1 provides an illustrative medical school scenario. For questions based around Foundation Programme scenarios, over 100 examples are provided for practice.

