Peer instruction enhanced student performance on qualitative problem-solving questions

2006 ◽  
Vol 30 (4) ◽  
pp. 168-173 ◽  
Author(s):  
Mauricio J. Giuliodori ◽  
Heidi L. Lujan ◽  
Stephen E. DiCarlo

We tested the hypothesis that peer instruction enhances student performance on qualitative problem-solving questions. To test this hypothesis, qualitative problems were included in a peer instruction format during our Physiology course. Each class of 90 min was divided into four to six short segments of 15 to 20 min each. Each short segment was followed by a qualitative problem-solving scenario that could be answered with a multiple-choice quiz. All students were allowed 1 min to think and to record their answers. Subsequently, students were allowed 1 min to discuss their answers with classmates. Students were then allowed to change their first answer if desired, and both answers were recorded. Finally, the instructor and students discussed the answer. Peer instruction significantly improved student performance on qualitative problem-solving questions (59.3 ± 0.5% vs. 80.3 ± 0.4%). Furthermore, after peer instruction, only 6.5% of the students changed their correct response to an incorrect response; however, 56.8% of students changed their incorrect response to a correct response. Therefore, students with incorrect responses changed their answers more often than students with correct responses. In conclusion, pausing four to six times during a 90-min class to allow peer instruction enhanced student performance on qualitative problem-solving questions.
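The reported switch rates (56.8% incorrect-to-correct vs. 6.5% correct-to-incorrect) come from tabulating each student's first and second answers; a minimal sketch of that tabulation, with hypothetical data and column names rather than the study's records, might look like this:

```python
import pandas as pd

# One row per student per question: correctness before and after peer discussion (hypothetical).
responses = pd.DataFrame({
    "correct_before": [True, False, False, True, False],
    "correct_after":  [True, True,  False, False, True],
})

# Count transitions between the first and second answer.
switched_to_correct = ((~responses["correct_before"]) & responses["correct_after"]).sum()
switched_to_wrong   = (responses["correct_before"] & (~responses["correct_after"])).sum()

pct_incorrect_fixed = 100 * switched_to_correct / (~responses["correct_before"]).sum()
pct_correct_lost    = 100 * switched_to_wrong / responses["correct_before"].sum()

print(f"Incorrect -> correct: {pct_incorrect_fixed:.1f}%")
print(f"Correct -> incorrect: {pct_correct_lost:.1f}%")
```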

2017 ◽  
Vol 32 (4) ◽  
pp. 1-17 ◽  
Author(s):  
Dianne Massoudi ◽  
SzeKee Koh ◽  
Phillip J. Hancock ◽  
Lucia Fung

ABSTRACT: In this paper we investigate the effectiveness of an online learning resource for introductory financial accounting students using a suite of online multiple-choice questions (MCQs) for summative and formative purposes. We found that the availability and use of an online resource resulted in improved examination performance for those students who actively used the online learning resource. Further, we found a positive relationship between formative MCQs and unit content related to challenging financial accounting concepts. However, better examination performance was also linked to other factors, such as prior academic performance, tutorial participation, and demographics, including gender and attending university as an international student. JEL Classifications: I20; M41.
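The link between MCQ use and examination performance, after controlling for prior academic performance, tutorial participation, and demographics, is the kind of relationship usually estimated with a multiple regression. The sketch below is a hedged illustration only; the variable names and data are assumptions, not the authors' model or dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student records; names and values are illustrative assumptions.
df = pd.DataFrame({
    "exam_score":             [55, 72, 64, 80, 45, 68, 77, 59],
    "mcq_attempts":           [2, 10, 5, 12, 1, 7, 9, 3],
    "prior_gpa":              [2.8, 3.5, 3.0, 3.8, 2.5, 3.2, 3.6, 2.9],
    "tutorial_participation": [4, 9, 6, 10, 3, 7, 8, 5],
    "gender":                 ["F", "M", "F", "M", "F", "M", "F", "M"],
    "international":          [0, 1, 0, 0, 1, 0, 1, 0],
})

# Regress exam performance on MCQ use plus the covariates named in the abstract.
model = smf.ols(
    "exam_score ~ mcq_attempts + prior_gpa + tutorial_participation"
    " + C(gender) + C(international)",
    data=df,
).fit()
print(model.params)
```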


2017 ◽  
Vol 16 (1) ◽  
pp. ar7 ◽  
Author(s):  
Xiaoying Xu ◽  
Jennifer E. Lewis ◽  
Jennifer Loertscher ◽  
Vicky Minderhout ◽  
Heather L. Tienson

Multiple-choice assessments provide a straightforward way for instructors of large classes to collect data related to student understanding of key concepts at the beginning and end of a course. By tracking student performance over time, instructors receive formative feedback about their teaching and can assess the impact of instructional changes. The evidence of instructional effectiveness can in turn inform future instruction, and vice versa. In this study, we analyzed student responses on an optimized pretest and posttest administered during four different quarters in a large-enrollment biochemistry course. Student performance and the effect of instructional interventions related to three fundamental concepts—hydrogen bonding, bond energy, and pKa—were analyzed. After instructional interventions, a larger proportion of students demonstrated knowledge of these concepts compared with data collected before instructional interventions. Student responses trended from inconsistent to consistent and from incorrect to correct. The instructional effect was particularly remarkable for the latter three quarters with respect to hydrogen bonding and bond energy. This study supports the use of multiple-choice instruments to assess the effectiveness of instructional interventions, especially in large classes, by providing instructors with quick and reliable feedback on student knowledge of each specific fundamental concept.
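Tracking the share of students who answer a concept item correctly on the pretest versus the posttest amounts to a simple grouped summary. The sketch below uses hypothetical long-format data, not the study's responses:

```python
import pandas as pd

# Hypothetical data: one row per student per concept per testing occasion.
df = pd.DataFrame({
    "concept":  ["hydrogen bonding"] * 4 + ["bond energy"] * 4,
    "occasion": ["pre", "pre", "post", "post"] * 2,
    "correct":  [0, 1, 1, 1, 0, 0, 1, 1],
})

# Proportion correct per concept, before and after instruction.
summary = df.groupby(["concept", "occasion"])["correct"].mean().unstack()
print(summary[["pre", "post"]])
```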


2021 ◽  
pp. 016327872110469
Author(s):  
Peter Baldwin ◽  
Janet Mee ◽  
Victoria Yaneva ◽  
Miguel Paniagua ◽  
Jean D’Angelo ◽  
...  

One of the most challenging aspects of writing multiple-choice test questions is identifying plausible incorrect response options, i.e., distractors. To help with this task, a procedure is introduced that can mine existing item banks for potential distractors by considering the similarities between a new item's stem and answer and the stems and response options of items in the bank. This approach uses natural language processing to measure similarity and requires a substantial pool of items for constructing the generating model. The procedure is demonstrated with data from the United States Medical Licensing Examination (USMLE®). For about half the items in the study, at least one of the top three system-produced candidates matched a human-produced distractor exactly; and for about one quarter of the items, two of the top three candidates matched human-produced distractors. A study was conducted in which a sample of system-produced candidates was shown to 10 experienced item writers. Overall, participants thought about 81% of the candidates were on topic and 56% would help human item writers with the task of writing distractors.
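The USMLE system itself is not reproduced here, but the core idea, ranking bank items by how similar their stems are to a new item's stem and answer and pooling their options as candidate distractors, can be sketched with off-the-shelf sentence embeddings. The model choice, toy items, and pooling rule below are illustrative assumptions, not the paper's method:

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # any general-purpose embedder would do

# Hypothetical new item (stem plus keyed answer) and a tiny item bank.
new_item = ("A 54-year-old man presents with crushing chest pain. "
            "Diagnosis? Answer: myocardial infarction")
bank_stems = [
    "A 60-year-old woman has sudden chest pain radiating to the back. Diagnosis?",
    "A 25-year-old man has pleuritic chest pain after a long flight. Diagnosis?",
]
bank_options = [
    ["aortic dissection", "pulmonary embolism", "pericarditis"],
    ["pulmonary embolism", "pneumothorax", "costochondritis"],
]

# Rank bank items by similarity to the new item, then pool their options as candidates.
sims = cosine_similarity(model.encode([new_item]), model.encode(bank_stems))[0]
ranked = sorted(zip(sims, bank_options), reverse=True)
candidates = [opt for _, opts in ranked for opt in opts]
print(candidates[:3])
```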


2008 ◽  
Vol 72 (10) ◽  
pp. 1149-1159 ◽  
Author(s):  
Thomas J. Prihoda ◽  
R. Neal Pinckard ◽  
C. Alex McMahan ◽  
John H. Littlefield ◽  
Anne Cale Jones

2018 ◽  
Vol 42 (4) ◽  
pp. 661-667 ◽  
Author(s):  
Fabíola da Silva Albuquerque ◽  
Temilce Simões de Assis ◽  
Francisco Antônio de Oliveira Júnior ◽  
Maria Regina de Freitas ◽  
Rita de Cássia da Silveira e Sá ◽  
...  

A group of teachers from Northeast Brazil developed a model of membrane potentials and the action potential and tested the hypothesis that using the model in a peer-instruction format would yield better student performance than reading traditional texts and attending lectures. The results were obtained from 357 students from 20 different courses in 9 different undergraduate programs. All students attended two 100-min theoretical lectures and, at the end of the second lecture, were asked to answer a multiple-choice question (a pretest). In the following lecture, students were divided into three groups: control, text, and model. At the end of that lecture, everyone responded to a posttest. Student performance on the pretest did not differ significantly between groups. In the comparison between the pretest and the posttest, students in the model and text groups significantly improved their performance, but there was no improvement in the control group. On the posttest, the model group performed better than the control group. In the evaluation of the strategies used, 46% of the students indicated that the text would be very useful for reminding them about the subject in the future, whereas 80% of those who used the model indicated that it would be very or extremely useful. Although it was not possible to support the hypothesis conclusively, the better performance of the model group was due, at least in part, to the use of active methodologies, which constitute a differential in the teaching-learning process.


2020 ◽  
Vol 8 (3) ◽  
pp. 725-736
Author(s):  
Maria Dewati ◽  
A. Suparmi ◽  
Widha Sunarno ◽  
Sukarmin ◽  
C. Cari

Purpose of study: This study aims to measure the level of students' problem-solving skills, using assessment instruments in the form of multiple-choice tests based on the multiple-representation approach to DC electrical circuits. Methodology: This research is a quantitative descriptive study involving 46 physics education students. Students were asked to solve DC electrical circuit problems in 12 multiple-choice questions with open-ended reasoning, involving verbal, mathematical, and pictorial representations. Data were analyzed by determining means and standard deviations. Main findings: The results showed three levels of students' problem-solving skills: 7 (15%) students in the high category, 22 (48%) in the medium category, and 17 (37%) in the low category. Applications of this study: The implication of this research is to continuously develop assessment instruments based on multiple representations, in the form of various types of tests, to help students improve their conceptual understanding so they can solve physics problems correctly. The novelty of this study: The researchers describe an approach to solving physics problems in which 1) students are trained to focus on identifying the problem, 2) students become accustomed to planning solutions using a clear approach in order to build conceptual understanding, and 3) students are directed to solve problems in accordance with the concepts they have built.
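One common convention for splitting scores into high, medium, and low bands uses the mean and standard deviation as cut-offs; the sketch below assumes that convention with hypothetical scores, which may not match the exact banding used in the study:

```python
import numpy as np

scores = np.array([3, 5, 6, 7, 8, 8, 9, 10, 11, 12])  # hypothetical test scores
mean, sd = scores.mean(), scores.std(ddof=1)

def band(x):
    """Assign a skill band relative to mean +/- one standard deviation."""
    if x >= mean + sd:
        return "high"
    if x <= mean - sd:
        return "low"
    return "medium"

levels = [band(x) for x in scores]
print({lvl: levels.count(lvl) for lvl in ("high", "medium", "low")})
```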


Author(s):  
José Antonio González ◽  
Mónica Giuliano ◽  
Silvia N. Pérez

Abstract: Research on the impact of online homework systems on student achievement, compared with traditional methods, is ambivalent. Methodological issues in study design, besides technological diversity, can account for this uncertainty. This study aims to estimate the effect size of homework practice with exercises automatically provided by the ‘e-status’ platform among students from five Engineering programs. Instead of comparing students who used the platform with others who did not, we distributed the subject topics into two blocks and created nine probability problems for each block. The students were then randomly assigned to one block and could solve the related exercises through e-status. Teachers and evaluators were masked to the assignment. Five weeks after the assignment, all students answered a written test with questions covering all topics. The study outcome was the difference between the two blocks' scores obtained on the test. The two groups comprised 163 and 166 students; of these, 103 and 107, respectively, attended the test, while the remainder were imputed with 0. Those assigned to the first block obtained an average outcome of −1.85, while the average in the second block was −3.29 (95% confidence interval of the difference, −2.46 to −0.43). During the period in which they had access to the platform before the test, the average total time spent solving problems was less than three hours. Our findings provide evidence that a small amount of active online work can positively impact student performance.
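A minimal sketch of the outcome construction and interval estimate described above, using simulated numbers rather than the study data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each student's outcome: block-1 score minus block-2 score on the final test,
# with absentees already imputed as 0. Values below are simulated, not the study's.
outcome_block1_group = rng.normal(-1.8, 4.0, size=163)  # students assigned block 1
outcome_block2_group = rng.normal(-3.3, 4.0, size=166)  # students assigned block 2

# Between-group difference in mean outcome with a normal-approximation 95% CI.
diff = outcome_block1_group.mean() - outcome_block2_group.mean()
se = np.sqrt(outcome_block1_group.var(ddof=1) / len(outcome_block1_group)
             + outcome_block2_group.var(ddof=1) / len(outcome_block2_group))
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"difference = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```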


Author(s):  
Netravathi B. Angadi ◽  
Amitha Nagabhushana ◽  
Nayana K. Hashilkar

Background: Multiple-choice questions (MCQs) are a common method of assessment of medical students. The quality of MCQs is determined by three parameters: difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). Item analysis is a valuable yet relatively simple procedure, performed after the examination, that provides information regarding the reliability and validity of a test item. The objective of this study was to perform an item analysis of MCQs to test their validity parameters. Methods: 50 items consisting of 150 distractors were selected from the formative exams. A correct response to an item was awarded one mark, with no negative marking for incorrect responses. Each item was analysed for the three parameters DIF I, DI, and DE. Results: A total of 50 items consisting of 150 distractors were analysed. The DIF I of 31 (62%) items was in the acceptable range (DIF I = 30-70%), and 30 items had 'good to excellent' discrimination (DI >0.25). 10 (20%) items were too easy (DIF I >70%) and 9 (18%) items were too difficult (DIF I <30%). There were 4 items with 6 non-functional distractors (NFDs), while the remaining 46 items did not have any NFDs. Conclusions: Item analysis is a valuable tool as it helps us to retain the valuable MCQs and discard or modify the items that are not useful. It also helps in increasing our skills in test construction and identifies the specific areas of course content which need greater emphasis or clarity.
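The three indices can be computed directly from the response data. The sketch below uses the common upper/lower-group definition of DI and the usual "<5% of examinees" rule for non-functional distractors; the data and the exact cut-offs are assumptions that may differ from the study's conventions:

```python
import numpy as np

key = "A"
# Option picked by each of 10 examinees, ordered from highest to lowest total test score (hypothetical).
chosen = np.array(list("AABACADBDC"))
correct = (chosen == key).astype(float)

# Difficulty index (DIF I): percentage of examinees answering the item correctly.
dif_i = 100 * correct.mean()

# Discrimination index (DI): proportion correct in the upper group minus the lower
# group (here, the top and bottom thirds of the score-ranked examinees).
k = len(correct) // 3
di = correct[:k].mean() - correct[-k:].mean()

# Distractor efficiency (DE): a distractor picked by <5% of examinees is
# non-functional (NFD); DE is the share of distractors that remain functional.
distractors = [o for o in "ABCD" if o != key]
nfd = sum((chosen == o).mean() < 0.05 for o in distractors)
de = 100 * (len(distractors) - nfd) / len(distractors)

print(f"DIF I = {dif_i:.0f}%, DI = {di:.2f}, DE = {de:.0f}% ({nfd} NFD)")
```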

