Constructed Response
Recently Published Documents


TOTAL DOCUMENTS: 242 (five years: 53)
H-INDEX: 20 (five years: 3)

2021, Vol. 6
Author(s): Xiaoming Zhai, Kevin C. Haudek, Christopher Wilson, Molly Stuhlsatz

Estimating and monitoring construct-irrelevant variance (CIV) is of significant importance to validity, especially for constructed response assessments with rich contextualized information. To examine CIV in contextualized constructed response assessments, we developed a framework that includes a model accounting for CIV and a measurement approach that can differentiate its sources. Specifically, the model attributes CIV to three factors: the variability of assessment item scenarios, judging severity, and rater scoring sensitivity to the scenarios in tasks. We propose using many-facet Rasch measurement (MFRM) to examine CIV because this measurement model can compare the different CIV factors on a shared scale. To demonstrate the framework, we applied it to a video-based science teacher pedagogical content knowledge (PCK) assessment consisting of two tasks, each with three scenarios. Results for task I, which assessed teachers' analysis of student thinking, indicate that the CIV due to the variability of the scenarios was substantial, while the CIV due to judging severity and rater scoring sensitivity to the scenarios was not. For task II, which assessed teachers' analysis of responsive teaching, the CIV due to all three proposed factors was substantial. We discuss the conceptual and methodological contributions and how the results inform item development.
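For readers unfamiliar with MFRM, a standard rating-scale formulation of a three-facet model with a rater-by-scenario interaction is sketched below. The notation is illustrative; the paper's exact parameterization may differ.

```latex
% Sketch of a many-facet Rasch model; all symbols are illustrative notation.
% P(X_{nij}=k) is the probability that teacher n receives score k from
% rater j on scenario i.
\[
\log\!\left(\frac{P(X_{nij}=k)}{P(X_{nij}=k-1)}\right)
  = \theta_n - \delta_i - \alpha_j - \phi_{ij} - \tau_k
\]
% \theta_n  : teacher PCK ability
% \delta_i  : scenario difficulty          (CIV source 1: scenario variability)
% \alpha_j  : rater severity               (CIV source 2: judging severity)
% \phi_{ij} : rater-scenario interaction   (CIV source 3: scoring sensitivity)
% \tau_k    : threshold between adjacent score categories
```

Because every facet enters the same logit scale, the magnitudes of the scenario, rater, and interaction terms can be compared directly, which is the property the framework exploits.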


2021, Vol. 8 (4), pp. 349-360
Author(s): Leiv Opstad

The discussion of whether multiple-choice questions can replace the traditional exam based on essays and constructed-response questions in introductory courses has only just started in Norway. There is no easy answer: the findings depend on the pattern of the questions, so one must be careful in drawing conclusions. This study explores a selected business course in which multiple-choice items make up 30 percent of the exam. There are clear similarities between the two test methods: students who perform well on essays also tend to achieve good results on multiple-choice questions. The results also reveal a gender gap, with the multiple-choice based exam appearing to favor male students. This study confirms that measuring the different dimensions of knowledge with the two formats remains challenging. It is therefore too early to conclude that a multiple-choice score is a good predictor of the outcome of an essay exam. This paper contributes to the debate in Norway, but it needs to be followed up with more research.
Keywords: multiple choice test, constructed response questions, business school, gender, regression model.
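The keywords point to a regression model relating the two score types. A minimal sketch of that kind of analysis follows; the file name, column names, and interaction term are assumptions for illustration, not the paper's actual specification.

```python
# Hypothetical sketch: does the multiple-choice score, together with
# gender, predict the essay score? Data and variable names are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exam_scores.csv")  # assumed columns: essay, mc, female (0/1)

# OLS with a gender interaction: a significant mc:female coefficient
# would indicate the MC score relates differently to essay performance
# for male and female students (the gender gap discussed above).
model = smf.ols("essay ~ mc * female", data=df).fit()
print(model.summary())
```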


2021, Vol. 31 (64), pp. 1-20
Author(s): Jéssica Harume Dias Muto, Lidia Maria Marson Postalli, Maria Manuela Pires Sanches Fernandes Ferreira

The Individualized Education Program (Programa de Ensino Individualizado, PEI) is a document that describes learning measures through guidelines and systematized instructional planning for students who need support in the teaching and learning process; its goals must address the student's needs and individual characteristics, thereby benefiting the inclusion process. This study aimed to analyze the PEI of a student with intellectual disability enrolled in a regular school in Portugal, selecting one of the goals proposed for him in the area of reading and writing and designing an intervention to meet his needs, with continuous assessment of his performance. The intervention taught recognition of 10 words beginning with the letter P, presented two at a time, using matching-to-sample (MTS) and constructed-response matching-to-sample (CRMTS) procedures. The results showed that the student made partial gains in the target skills, and performance was not maintained in a follow-up assessment one month later. The data indicate that investment in systematic, well-designed instructional planning, together with multidisciplinary work, is extremely relevant to the teaching and learning process.
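To make the two trial types concrete, a minimal sketch is given below. The words, prompts, and console interaction are invented for illustration and are not the study's materials.

```python
# Illustrative sketch of the two trial types described above.

def mts_trial(sample: str, comparisons: list[str]) -> bool:
    """Matching-to-sample (MTS): the learner picks the comparison
    stimulus identical to the sample word."""
    print(f"Sample: {sample}")
    for i, word in enumerate(comparisons, start=1):
        print(f"  {i}) {word}")
    choice = int(input("Which word matches the sample? "))
    return comparisons[choice - 1] == sample

def crmts_trial(sample: str, letter_pool: list[str]) -> bool:
    """Constructed-response matching-to-sample (CRMTS): the learner
    rebuilds the sample word from a pool of its letters."""
    print(f"Sample: {sample}   Letters: {' '.join(letter_pool)}")
    constructed = input("Build the word from the letters: ").strip().lower()
    return constructed == sample

# Words taught two at a time, as in the intervention (examples invented).
pair = ["pato", "pipa"]
print("MTS correct:", mts_trial(pair[0], ["pipa", "pato", "papa"]))
print("CRMTS correct:", crmts_trial(pair[0], sorted(pair[0])))
```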


Author(s): Stefan K. Schauber, Stefanie C. Hautz, Juliane E. Kämmer, Fabian Stroben, Wolf E. Hautz

The use of response formats in assessments of medical knowledge and clinical reasoning continues to be the focus of both research and debate. In this article, we report on an experimental study addressing the question of how strongly list-type selected response formats and short-essay-type constructed response formats are related to differences in how test takers approach clinical reasoning tasks. The design of this study was informed by a framework from cognitive psychology that stresses the interplay between two components of reasoning, self-monitoring and response inhibition, while solving a task or case. The results support the argument that different response formats are related to different processing behavior. Importantly, the pattern of how different factors relate to a correct response in both situations seems well in line with contemporary accounts of reasoning. Consequently, we argue that when designing assessments of clinical reasoning, it is crucial to tap into the different facets of this complex and important medical process.


Author(s): O. Fedorov, K. Verinchuk

This paper investigates the effect of constructed-response items in the Unified State Exam (USE) in History on the exam's validity and the threats to that validity. The Unified State Exam is the primary high-stakes examination for Russian students. Despite its vital role as both an achievement test and an admission test, the exam's validity has received little scrutiny. The evolution of the exam is distinctly marked by growth in the number and weight of constructed-response items, which may affect the validity of test results in many ways. The research centered on interviews with 36 history experts. Thematic analysis of the transcripts identified three main threats to validity: faulty criteria, task content, and expert bias. The paper presents these results along with recommendations for improving the test.


Author(s): Jenny Robins, Juna Snow

In 1998, the American Association of School Librarians (AASL) developed nine standards for information literacy skills. Students with these skills are equipped to recognize their learning objectives, identify their information needs, acquire information, evaluate it, and share the results of their effort. These skills are key to lifelong learning. Standard assessment tools, such as selected response, closed constructed response, and even open-ended constructed response questions, are not sufficiently dynamic to align with the real-world experiences of learners exercising information literacy skills. In this study, an information structure was designed for students to use in describing their learning activities. These written, student-generated items become part of a student's portfolio. It is proposed that this information structure can serve as an alternative, authentic tool for assessing students' information literacy skills. Two student portfolio items are presented in this report, along with a description of the process used to create the assessments.
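One way to picture such an information structure is a record whose fields mirror the skills listed above. The sketch below is hypothetical: the field names and example values are not the study's schema, only an illustration of the idea.

```python
# Hypothetical portfolio item; field names mirror the information
# literacy skills described above, not the study's actual structure.
from dataclasses import dataclass

@dataclass
class PortfolioItem:
    learning_objective: str      # what the student set out to learn
    information_need: str        # what information was required
    sources_acquired: list[str]  # where the information came from
    evaluation_notes: str        # how the information was judged
    sharing_outcome: str         # how the results were shared

item = PortfolioItem(
    learning_objective="Explain how volcanoes form",
    information_need="Plate tectonics basics",
    sources_acquired=["school library atlas", "USGS website"],
    evaluation_notes="Cross-checked two sources; both agree",
    sharing_outcome="Presented a poster to the class",
)
print(item.learning_objective)
```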


2021, Vol. 11
Author(s): H.-J. Choi, Seohyun Kim, Allan S. Cohen, Jonathan Templin, Yasemin Copur-Gencturk

Selected response items and constructed response (CR) items are often found in the same test. Conventional psychometric models for these two item types typically use only scores for the correctness of responses. Recent research suggests, however, that CR items carry more information than correctness scores alone. In this study, we describe an approach in which a statistical topic model, together with a diagnostic classification model (DCM), was applied to a mixed-format formative test of English Language Arts. The DCM was used to estimate students' mastery status on reading skills. These mastery statuses were then included in the topic model as covariates to predict students' use of each latent topic in their written answers to a CR item. This approach enabled investigation of the effects of mastery status of reading skills on writing patterns. Results indicated that mastery of one of the skills, Integration of Knowledge and Ideas, helped detect and explain students' writing patterns with respect to their use of individual topics.
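The paper's model includes the mastery covariates inside the topic model itself; the two-step sketch below only approximates the idea by fitting a plain LDA and then comparing topic usage across mastery groups. All data, labels, and the two-step shortcut are assumptions for illustration.

```python
# Two-step approximation of the approach above (the paper uses a joint
# model; this sketch is only illustrative). Fit LDA on CR answers, then
# compare topic usage across DCM mastery groups. Data are invented.
import numpy as np
from gensim import corpora, models

answers = [                       # tokenized constructed responses
    ["water", "cycle", "evaporation", "clouds"],
    ["sun", "heats", "water", "vapor"],
    ["rain", "falls", "rivers", "ocean"],
    ["clouds", "condense", "rain", "cycle"],
]
mastery = np.array([1, 1, 0, 0])  # hypothetical DCM mastery status per student

dictionary = corpora.Dictionary(answers)
corpus = [dictionary.doc2bow(doc) for doc in answers]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

# Per-document topic proportions.
theta = np.array([
    [p for _, p in lda.get_document_topics(bow, minimum_probability=0.0)]
    for bow in corpus
])

# Does mastery status go with different topic usage?
for k in range(theta.shape[1]):
    print(f"topic {k}: mean usage mastered={theta[mastery == 1, k].mean():.2f}"
          f" vs non-mastered={theta[mastery == 0, k].mean():.2f}")
```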

