psychometric modeling
Recently Published Documents


TOTAL DOCUMENTS

33
(FIVE YEARS 10)

H-INDEX

9
(FIVE YEARS 0)

Psych ◽  
2021 ◽  
Vol 3 (4) ◽  
pp. 673-693
Author(s):  
Shenghai Dai

The presence of missing responses in assessment settings is inevitable and may yield biased parameter estimates in psychometric modeling if ignored or handled improperly. Many methods have been proposed to handle missing responses in assessment data, which are often dichotomous or polytomous. Their application remains limited in practice, however, partly because (1) the literature offers no consensus on an optimal method; (2) many practitioners and researchers are not familiar with these methods; and (3) these methods are usually not implemented in psychometric software, so missing responses must be handled separately. This article introduces and reviews the missing-response handling methods commonly used in psychometrics, along with the literature that examines and compares their performance. Further, the use of the TestDataImputation package in R is introduced and illustrated with an example data set and a simulation study. The corresponding R code is provided.
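As an illustration of one widely studied method from this literature, the sketch below implements two-way imputation (person mean plus item mean minus grand mean, rounded for dichotomous items) in Python with NumPy. This is a hypothetical standalone sketch, not the TestDataImputation package's API; the function name and toy data are invented for the example.

```python
import numpy as np

def two_way_imputation(resp):
    """Impute missing dichotomous responses via two-way imputation:
    person mean + item mean - grand mean, rounded to 0/1.
    `resp` is a persons-by-items array with np.nan marking missing cells."""
    resp = resp.astype(float)
    person_means = np.nanmean(resp, axis=1, keepdims=True)
    item_means = np.nanmean(resp, axis=0, keepdims=True)
    grand_mean = np.nanmean(resp)
    imputed = person_means + item_means - grand_mean
    # Replace only the missing cells; clip/round to keep responses dichotomous.
    return np.where(np.isnan(resp), np.clip(np.round(imputed), 0, 1), resp)

# Toy 3-person, 4-item response matrix with two missing responses.
data = np.array([
    [1, 0, 1, np.nan],
    [0, 0, np.nan, 1],
    [1, 1, 1, 1],
])
completed = two_way_imputation(data)
```

After imputation, `completed` has no missing cells and the observed responses are unchanged, so the matrix can be passed to a standard item response model.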


2021 ◽  
Author(s):  
Víthor Rosa Franco ◽  
Jacob Arie Laros ◽  
Marie Wiberg

The aim of the current study is to present three assumptions common to psychometric theory and practice, and to show how alternatives to traditional psychometric approaches can be used to improve psychological measurement. These alternatives are developed by adapting each of the three assumptions. The structural validity assumption relates to the implementation of mathematical models. The process assumption concerns which underlying process generates the observed data. The construct assumption implies that the observed data on their own do not constitute a measurement; rather, the latent variable that gives rise to the observed data does. Nonparametric item response modeling and cognitive psychometric modeling are presented as alternatives for relaxing the first two assumptions, respectively. Network psychometrics is the alternative for relaxing the third assumption. Final remarks sum up the most important conclusions of the study.
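A minimal sketch of the network-psychometrics idea mentioned above: instead of positing a latent variable, items are modeled as a network of pairwise conditional associations. The code assumes a simple unregularized partial-correlation network (practical analyses typically use regularized estimators such as the graphical lasso); the function name and simulated data are illustrative only.

```python
import numpy as np

def partial_correlation_network(data):
    """Estimate an unregularized partial-correlation network:
    invert the sample covariance matrix and standardize the
    precision matrix. `data` is an observations-by-variables array."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)   # sign flip gives partial correlations
    np.fill_diagonal(pcor, 0.0)     # no self-loops in the network
    return pcor

# Simulate three continuous indicators sharing a common cause.
rng = np.random.default_rng(0)
common = rng.normal(size=500)
items = np.column_stack([common + rng.normal(size=500) for _ in range(3)])
net = partial_correlation_network(items)
```

The resulting symmetric matrix can be read as an undirected weighted network: nonzero off-diagonal entries are edges, here all positive because the indicators share a common cause.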


2020 ◽  
Author(s):  
Rachel F. Sussman ◽  
Mercedes B. Villalonga ◽  
Robert Sekuler

It is important to understand the perceptual limits on vibrotactile information processing because of the increasing use of vibrotactile signals in common technologies like cell phones. To advance such an understanding, we examined vibrotactile temporal acuity and compared it to auditory and bimodal (synchronous vibrotactile and auditory) temporal acuity. In a pair of experiments, subjects experienced a series of empty intervals, demarcated by stimulus pulses from one of the three modalities. Each trial contained up to five intervals: the first intervals were isochronous at 400 ms, and the last interval deviated from 400 ms by ±1 to 80 ms. If the final interval was < 400 ms, the last pulse seemed “early”, and if the final interval was > 400 ms, the last pulse seemed “late”. In Experiment One, each trial contained four intervals, of which the first three were isochronous. Subjects judged the timing of the last interval by describing the final pulse as either “early” or “late”. In Experiment Two, the number of isochronous intervals in a trial varied from one to four. Psychometric modeling revealed that vibrotactile temporal processing was less acute than auditory or bimodal temporal processing, and that auditory inputs dominated bimodal perception. Additionally, varying the number of isochronous intervals did not affect temporal sensitivity in either modality, suggesting the formation of memory traces. Overall, these results suggest that vibrotactile temporal processing is worse than auditory or bimodal temporal processing, which are similar. Also, subjects need no more than one isochronous reminder per trial for optimal performance.
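Psychometric modeling of this kind can be sketched as fitting a psychometric function to the proportion of “late” responses at each final-interval offset. The sketch below uses a logistic function fit by a logit transform and ordinary least squares; the offsets, response proportions, and function names are hypothetical and do not reproduce the study's actual method or results (a full analysis would use maximum likelihood on trial counts).

```python
import numpy as np

def fit_logistic_psychometric(offsets, p_late):
    """Fit P('late') = 1 / (1 + exp(-(x - mu) / sigma)) by a logit
    transform followed by least squares. `mu` is the point of
    subjective equality (ms); `sigma` indexes temporal sensitivity
    (smaller = more acute)."""
    logits = np.log(p_late / (1.0 - p_late))
    slope, intercept = np.polyfit(offsets, logits, 1)
    sigma = 1.0 / slope
    mu = -intercept / slope
    return mu, sigma

# Hypothetical proportions of "late" responses at each offset from 400 ms.
offsets = np.array([-80, -40, -20, -10, 10, 20, 40, 80], dtype=float)
p_late = np.array([0.05, 0.15, 0.30, 0.42, 0.58, 0.70, 0.85, 0.95])
mu, sigma = fit_logistic_psychometric(offsets, p_late)
```

Comparing the fitted `sigma` across modalities (vibrotactile vs. auditory vs. bimodal) is the kind of contrast the abstract summarizes: a larger `sigma` means a shallower psychometric function and thus poorer temporal acuity.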


2019 ◽  
Vol 10 ◽  
Author(s):  
Rodrigo Schames Kreitchmann ◽  
Francisco J. Abad ◽  
Vicente Ponsoda ◽  
Maria Dolores Nieto ◽  
Daniel Morillo


2019 ◽  
Vol 44 (6) ◽  
pp. 648-670
Author(s):  
Andreas Oranje ◽  
Andrew Kolstad

The design and psychometric methodology of the National Assessment of Educational Progress (NAEP) is constantly evolving to meet the changing interests and demands stemming from a rapidly shifting educational landscape. NAEP has been built on strong research foundations that include conducting extensive evaluations and comparisons before new approaches are adopted. During those evaluations, many lessons are learned and discoveries surface that do not often find their way into widely accessible outlets. This article discusses a number of those insights with the goal of providing an integrated and accessible perspective on the strengths and limitations of NAEP’s psychometric methodology and statistical reporting practices. Drawing from a range of technical reports and memoranda, presentations, and published literature, the following topics are covered: calibration, estimation of proficiency, data reduction, standard error estimation, statistical inference, and standard setting.
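The calibration step mentioned above rests on item response theory. As a hedged illustration only (a sketch of the general technique, not NAEP's actual implementation), a three-parameter logistic item response function, commonly used to calibrate multiple-choice items, can be written as:

```python
import math

def three_pl(theta, a, b, c, D=1.7):
    """Three-parameter logistic item response function: probability of a
    correct response given ability `theta`, discrimination `a`,
    difficulty `b`, and lower asymptote (guessing) `c`. D = 1.7 is the
    conventional scaling constant approximating the normal ogive."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))
```

At `theta == b` the probability is midway between the guessing floor `c` and 1; calibration estimates `a`, `b`, and `c` for every item from the observed response patterns.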

