Effect of Test Expectancy on Preferred Study Strategy Use and Test Performance

1989 ◽  
Vol 68 (3_suppl) ◽  
pp. 1157-1158 ◽  
Author(s):  
Ronald C. Feldt ◽  
Michelle Ray

Among undergraduates in a controlled design, the 9 students who took notes studied longer than the groups of 30 and 17 students who did not. No differences were observed in test scores, retention interval, comprehension scores, or reading rate. Study strategies were similar whether students expected a multiple-choice or a free-recall test, suggesting study oriented toward rote learning.

1979 ◽  
Vol 44 (3_suppl) ◽  
pp. 1051-1054
Author(s):  
Bruce R. Dunn

Past research has shown that grouping related multiple-choice test items together does not increase students' performance on power tests, even when the groupings are sequenced in the order of class presentation. The present research examined the hypothesis, derived from the cue-dependent forgetting hypothesis, that grouping related test items fails to improve test performance because grouping per se is not a sufficiently powerful retrieval cue. Two experiments were conducted to determine whether specific cueing (placing author headings and subheadings above related blocks of test items) increased students' test scores. Results of both experiments were negative: specific cueing did not significantly increase mean test scores. The ecological validity of the cue-dependent forgetting hypothesis was questioned.


1987 ◽  
Vol 60 (1) ◽  
pp. 145-146 ◽  
Author(s):  
Duane R. Kauffmann ◽  
Brenda Chupp ◽  
Kent Hershberger ◽  
Lisa Martin ◽  
Ken Eastman

This research explored relationships between Eison's LOGO instrument and several personality and academic measures in a credit/no-credit psychology course (N = 44). Learning orientation (LO) was correlated with dogmatism and marginally (p = .07) with multiple-choice test performance, but not with Machiavellianism, locus of control, or performance on course written assignments. Grade orientation (GO) was related to Machiavellianism and marginally (p = .07) to test scores. Self-rating of orientation was correlated with both written and test performance, but not with LO or GO.


2020 ◽  
Author(s):  
THOMAS PUTHIAPARAMPIL ◽  
Md Mizanur Rahman

Abstract Background Multiple-choice questions, used in medical school assessments for decades, have many drawbacks: they are hard to construct, allow guessing, encourage test-wiseness, promote rote learning, provide no opportunity for examinees to express ideas, and yield no information about candidates' strengths and weaknesses. Directly asked and answered questions such as Very Short Answer Questions (VSAQ) are considered a better alternative with several advantages. Objectives This study aims to substantiate the superiority of VSAQ through actual tests and feedback from the stakeholders. Methods We conducted multiple true-false, one-best-answer, and VSAQ tests in two batches of medical students, compared their scores and the psychometric indexes of the tests, and sought opinions from students and academics regarding these assessment methods. Results Multiple true-false and best-answer tests showed skewed score distributions and low psychometric performance, whereas VSAQ tests showed better psychometrics and more balanced student performance. The stakeholders' opinions were significantly in favour of VSAQ. Conclusion and recommendation This study concludes that VSAQ is a viable alternative to multiple-choice question tests and is widely accepted by medical students and academics in the medical faculty.
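The abstract reports comparing scores and "psychometric indexes" across test formats without naming the indexes used. As an illustration only, the sketch below computes two summaries commonly used for such comparisons: score-distribution skewness and KR-20 internal-consistency reliability for dichotomously scored (0/1) item data. The choice of indexes and all function names are assumptions, not the paper's actual analysis; partial-credit VSAQ scoring would call for Cronbach's alpha instead of KR-20.

```python
import numpy as np
from scipy.stats import skew

def kr20(scored):
    """Kuder-Richardson 20 reliability for a 0/1 item-score matrix
    of shape (n_examinees, n_items)."""
    scored = np.asarray(scored, dtype=float)
    k = scored.shape[1]                      # number of items
    p = scored.mean(axis=0)                  # proportion correct per item
    q = 1.0 - p
    var_total = scored.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - (p * q).sum() / var_total)

def compare_formats(format_scores):
    """Summarise each format's score distribution and reliability."""
    for name, scored in format_scores.items():
        totals = np.asarray(scored).sum(axis=1)
        print(f"{name}: mean={totals.mean():.1f}, "
              f"skew={skew(totals):.2f}, KR-20={kr20(scored):.2f}")
```

With real data, the hypothetical `format_scores` argument would map format labels to score matrices, e.g. `{"MTF": mtf, "OBA": oba, "VSAQ": vsaq}`; a strongly skewed total-score distribution and a low KR-20 would correspond to the weaker psychometric performance the abstract describes.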


1977 ◽  
Vol 8 (1) ◽  
pp. 5-14 ◽  
Author(s):  
David L. Ratusnik ◽  
Roy A. Koenigsknecht

Six speech and language clinicians, three black and three white, administered the Goodenough Drawing Test (1926) to 144 preschoolers. The four groups, lower-socioeconomic black and white and middle-socioeconomic black and white, were divided equally by sex. The biracial clinical setting was shown to influence test scores in black preschool-age children.


Author(s):  
David DiBattista ◽  
Laura Kurzawa

Because multiple-choice testing is so widespread in higher education, we assessed the quality of items used on classroom tests by carrying out a statistical item analysis. We examined undergraduates’ responses to 1198 multiple-choice items on sixteen classroom tests in various disciplines. The mean item discrimination coefficient was +0.25, with more than 30% of items having unsatisfactory coefficients less than +0.20. Of the 3819 distractors, 45% were flawed either because less than 5% of examinees selected them or because their selection was positively rather than negatively correlated with test scores. In three tests, more than 40% of the items had an unsatisfactory discrimination coefficient, and in six tests, more than half of the distractors were flawed. Discriminatory power suffered dramatically when the selection of one or more distractors was positively correlated with test scores, but it was only minimally affected by the presence of distractors that were selected by less than 5% of examinees. Our findings indicate that there is considerable room for improvement in the quality of many multiple-choice tests. We suggest that instructors consider improving the quality of their multiple-choice tests by conducting an item analysis and by modifying distractors that impair the discriminatory power of items.
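For instructors who want to run this kind of item analysis on their own tests, the following is a minimal sketch in Python. It applies the two distractor criteria from the abstract (selection by fewer than 5% of examinees, or selection positively correlated with total score) and uses a corrected point-biserial correlation as the discrimination coefficient; the article does not specify which discrimination index was computed, so that choice, along with all function and variable names, is an assumption.

```python
import numpy as np

def item_analysis(responses, key):
    """Flag weak items and flawed distractors in multiple-choice data.

    responses: (n_examinees, n_items) array of chosen options, e.g. 'A'-'D'
    key:       (n_items,) array of correct options
    """
    responses = np.asarray(responses)
    key = np.asarray(key)
    correct = (responses == key).astype(float)   # 0/1 score matrix
    total = correct.sum(axis=1)                  # each examinee's total score

    report = []
    for j in range(responses.shape[1]):
        # Corrected point-biserial discrimination: correlation between
        # getting item j right and the total score on the *other* items.
        # (Items everyone answers the same way yield nan.)
        rest = total - correct[:, j]
        disc = np.corrcoef(correct[:, j], rest)[0, 1]

        flawed = []
        for opt in np.unique(responses[:, j]):
            if opt == key[j]:
                continue
            chosen = (responses[:, j] == opt).astype(float)
            frac = chosen.mean()
            # Criteria from the abstract: a distractor is flawed if <5% of
            # examinees pick it, or if picking it is positively correlated
            # with the total test score.
            r = np.corrcoef(chosen, total)[0, 1] if 0 < frac < 1 else 0.0
            if frac < 0.05 or r > 0:
                flawed.append(opt)

        report.append({"item": j, "discrimination": disc, "flawed": flawed})
    return report
```

Under the thresholds the authors report, items with a discrimination below +0.20 and any options returned in `flawed` would be candidates for revision.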

