On the Optimality of the Detection of Examinees With Aberrant Answer Changes

2017 ◽  
Vol 41 (5) ◽  
pp. 338-352 ◽  
Author(s):  
Dmitry I. Belov

In standardized multiple-choice testing, examinees may change their answers for various reasons. The statistical analysis of answer changes (ACs) has uncovered multiple testing irregularities on large-scale assessments and is now routinely performed at many testing organizations. This article exploits a recent approach where the information about all previous answers is used only to partition administered items into two disjoint subtests: items where an AC occurred and items where an AC did not occur. Two optimal statistics are described, each measuring a difference in performance between these subtests, where the performance is estimated from the final responses. Answer-changing behavior was simulated, where realistic distributions of wrong-to-right, wrong-to-wrong, and right-to-wrong ACs were achieved under various conditions controlled by the following independent variables: type of test, amount of aberrancy, and amount of uncertainty. Results of computer simulations confirmed the theoretical constructs on the optimal power of both statistics and provided several recommendations for practitioners.
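The core idea above — partitioning administered items into an AC subtest and a no-AC subtest and comparing final-response performance between them — can be illustrated with a simple two-proportion z-statistic. This is a hypothetical sketch of the general approach, not the two optimal statistics derived in the article.

```python
# Illustrative only: compare final-response accuracy on items where an
# answer change (AC) occurred vs. items where it did not. The paper's
# optimal statistics are more refined; this shows the partitioning idea.
import math

def ac_difference_z(final_correct, changed):
    """final_correct: list of 0/1 final-response scores, one per item.
    changed: list of booleans, True if an AC occurred on that item."""
    # Partition administered items into the two disjoint subtests.
    ac = [s for s, c in zip(final_correct, changed) if c]
    no_ac = [s for s, c in zip(final_correct, changed) if not c]
    if not ac or not no_ac:
        return 0.0
    # Proportion correct in each subtest, estimated from final responses.
    p1, p2 = sum(ac) / len(ac), sum(no_ac) / len(no_ac)
    # Pooled two-proportion z-statistic for the performance difference.
    p = (sum(ac) + sum(no_ac)) / (len(ac) + len(no_ac))
    se = math.sqrt(p * (1 - p) * (1 / len(ac) + 1 / len(no_ac)))
    return (p1 - p2) / se if se > 0 else 0.0
```

A large positive value (AC items answered markedly better than unchanged items) would flag a pattern such as an excess of wrong-to-right changes.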

Diagnostica ◽  
2020 ◽  
Vol 66 (3) ◽  
pp. 147-157
Author(s):  
Martin Senkbeil ◽  
Jan Marten Ihme

Abstract. ICT literacy lends itself to performance-based assessment with simulated, interactive test tasks. This article examines whether multiple-choice (MC) items measure a construct comparable to that measured by simulation-based tasks. To this end, the results of two instruments from current large-scale studies, administered to N = 2,075 adolescents, were compared: the MC-based ICT literacy test for Grade 9 of the National Educational Panel Study (NEPS) and the simulation-based competence test of the international school achievement study ICILS 2013 (International Computer and Information Literacy Study). The analyses support the validity of the construct interpretation of the MC-based test in NEPS. Consistent with convergent evidence, the MC items correlate substantially with the computer- and simulation-based tasks in ICILS 2013 (.68 ≤ r ≤ .90). Furthermore, both tests show positive and comparably high correlations with ICT-related student characteristics (e.g., self-efficacy). Additional analyses of the relationship with general cognitive abilities further show that ICT literacy and basic cognitive abilities represent distinct factors.


Author(s):  
Lisa K. Fazio ◽  
Elizabeth J. Marsh ◽  
Henry L. Roediger

2016 ◽  
Vol 4 (2) ◽  
Author(s):  
Imam Wibowo, M.Si ◽  
Rusma Patriansyah

The purpose of this research is to examine the relationship between the independent variables, training (X1) and motivation (X2), and the dependent variable, employee performance (Y), at PT. Bakrie Pipe Industries Bekasi, both simultaneously and partially. Data were collected from 78 randomly selected employees of the PT. Bakrie Pipe Industries Operating Division, with the sample size determined using the Slovin formula. The data were then analyzed with regression analysis in SPSS Ver. 22, using the F-statistic to test the simultaneous influence of the independent variables and the t-statistic to test their partial influence. The results show that: 1) simultaneously, training and motivation have a positive and significant influence on employee performance at PT. Bakrie Pipe Industries Bekasi; 2) partially, training has a positive and significant influence on employee performance; and 3) partially, motivation has a positive and significant influence on employee performance.
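The analysis described above (OLS regression with an F-test for the joint effect and t-tests for each predictor) can be sketched as follows. The data here are simulated and the variable names illustrative; the study itself used SPSS on n = 78 employees.

```python
# Hypothetical sketch: regress performance (Y) on training (X1) and
# motivation (X2); compute the F-statistic (simultaneous influence)
# and t-statistics (partial influence). Simulated data, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n = 78
x1 = rng.normal(size=n)                    # training
x2 = rng.normal(size=n)                    # motivation
y = 1.0 + 0.8 * x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n)  # performance

X = np.column_stack([np.ones(n), x1, x2])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df_model, df_resid = 2, n - 3

# F-statistic: joint significance of X1 and X2 together.
ss_res = resid @ resid
ss_tot = ((y - y.mean()) ** 2).sum()
F = ((ss_tot - ss_res) / df_model) / (ss_res / df_resid)

# t-statistics: significance of each coefficient individually.
sigma2 = ss_res / df_resid
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t = beta / se
```

Comparing F against the critical F(2, 75) value, and each t against the critical t(75) value, mirrors the simultaneous and partial tests reported in the abstract.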


Soft Matter ◽  
2021 ◽  
Author(s):  
Claudio Maggi ◽  
Matteo Paoluzzi ◽  
Andrea Crisanti ◽  
Emanuela Zaccarelli ◽  
Nicoletta Gnan

We perform large-scale computer simulations of an off-lattice two-dimensional model of active particles undergoing a motility-induced phase separation (MIPS) to investigate the system's critical behaviour close to the critical point...
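For readers unfamiliar with off-lattice active-particle dynamics, a single update step of a minimal active Brownian particle (ABP) model looks like the sketch below. The parameters and the omission of inter-particle interactions are simplifications for illustration; this is not the authors' model, which includes the interactions needed to produce MIPS.

```python
# Minimal (hypothetical) 2D active-Brownian-particle step: self-propulsion
# at speed v0 along each particle's orientation, plus rotational diffusion
# of the orientation. Interactions are omitted for brevity.
import numpy as np

def abp_step(pos, theta, v0=1.0, dt=0.01, D_r=1.0, rng=None):
    """Advance N particles by one time step.
    pos: (N, 2) positions; theta: (N,) orientation angles in radians."""
    if rng is None:
        rng = np.random.default_rng()
    # Self-propulsion along the current orientation.
    pos = pos + v0 * dt * np.column_stack([np.cos(theta), np.sin(theta)])
    # Rotational diffusion: Gaussian noise with variance 2*D_r*dt.
    theta = theta + np.sqrt(2 * D_r * dt) * rng.normal(size=theta.shape)
    return pos, theta
```

In full MIPS simulations, a repulsive pair force is added to the position update, and particles slow down in dense regions, which drives the phase separation.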


Author(s):  
Clemens M. Lechner ◽  
Nivedita Bhaktha ◽  
Katharina Groskurth ◽  
Matthias Bluemke

Abstract
Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques and about how to best incorporate the resulting skill measures in secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods are optimal under all criteria, methods that result in a single point estimate of each respondent’s ability (i.e., all types of “test scores”) are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error—especially PV methodology—stand out as the method of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
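The PV workflow mentioned above runs the secondary analysis once per plausible value and then pools the results; the standard way to pool is Rubin's combining rules. The sketch below is a hedged illustration with made-up inputs, not code from any specific LSAS.

```python
# Illustrative pooling of an analysis parameter (e.g., a regression
# coefficient) estimated separately on each plausible value (PV),
# using Rubin's combining rules. Inputs are hypothetical.
import math

def pool_pv_estimates(estimates, variances):
    """estimates: per-PV point estimates of the parameter of interest.
    variances: the corresponding per-PV sampling variances."""
    m = len(estimates)
    qbar = sum(estimates) / m                               # pooled estimate
    ubar = sum(variances) / m                               # within-PV variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)   # between-PV variance
    total_var = ubar + (1 + 1 / m) * b                      # Rubin's total variance
    return qbar, math.sqrt(total_var)                       # estimate, std. error
```

Because the between-PV variance enters the total, the pooled standard error reflects the measurement uncertainty that a single point estimate per respondent would hide — the fallibility advantage the paper attributes to PV methodology.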

