latent ability
Recently Published Documents

Total documents: 54 (five years: 16)
H-index: 15 (five years: 1)

Psych, 2021, Vol 3 (3), pp. 501-521. Author(s): Sebastian Gary, Wolfgang Lenhard, Alexandra Lenhard

In this article, we explain and demonstrate how to model norm scores with the cNORM package in R. This package is designed specifically to determine norm scores when the latent ability to be measured covaries with age or other explanatory variables such as grade level. The mathematical method used in this package draws on polynomial regression to model a three-dimensional hyperplane that smoothly and continuously captures the relation between raw scores, norm scores, and the explanatory variable. By doing so, it overcomes the typical problems of classical norming methods, such as overly large age intervals, missing norm scores, large sampling error within subsamples, and demanding sample-size requirements. After a brief introduction to the mathematics of the model, we describe the individual methods of the package. We close the article with a practical example using data from a real reading comprehension test.
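
As a rough illustration of the continuous-norming idea, the sketch below fits a Taylor polynomial raw ≈ f(norm score, age) to simulated data. It is not the cNORM API (cNORM itself is an R package); the data-generating process and all names and numbers are hypothetical.

```python
# Continuous-norming sketch: model raw scores as a polynomial surface in
# the norm score l and the explanatory variable age, the core idea behind
# cNORM.  Everything here (data, coefficients, degrees) is hypothetical.
import numpy as np
from scipy.special import ndtri  # inverse of the standard normal CDF

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(6.0, 12.0, n)        # explanatory variable
pct = rng.uniform(0.01, 0.99, n)       # person percentile within age group
l = 50.0 + 10.0 * ndtri(pct)           # norm score on a T-score-like metric

# Hypothetical data-generating process: raw scores grow with age and ability.
raw = 10.0 + 2.0 * age + 0.5 * l + 0.02 * age * l + rng.normal(0.0, 2.0, n)

def design(l, age, k=3):
    """Taylor-polynomial features l**s * age**t for s, t = 0..k."""
    return np.column_stack([(l ** s) * (age ** t)
                            for s in range(k + 1) for t in range(k + 1)])

# Fit the hyperplane raw = f(l, age) by least squares.
coef, *_ = np.linalg.lstsq(design(l, age), raw, rcond=None)

# Predicted raw score for a 9.5-year-old at norm score 60:
print(design(np.array([60.0]), np.array([9.5])) @ coef)
```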


2021, Vol 7 (30), pp. eabj1517. Author(s): Stuart K. Watson, Judith M. Burkart, Steven J. Schapiro, Susan P. Lambeth, Jutta L. Mueller, ...

Rawski et al. revisit our recent findings suggesting a latent ability to process nonadjacent dependencies (“Non-ADs”) in monkeys and apes. Specifically, the authors question the relevance of our findings for the evolution of human syntax. We argue that (i) these conclusions hinge on the assumption that language processing is necessarily hierarchical, which remains an open question, and (ii) our goal was to probe the foundational cognitive mechanisms facilitating the processing of syntactic Non-ADs, namely the ability to recognize predictive relationships in the input.


2021. Author(s): Alexandra B. Bosshard, Maël M. Leroux, Nicholas A. Lester, Balthasar Bickel, Sabine Stoll, ...

Emerging data from a range of non-human animal species have highlighted a latent ability to combine certain pre-existing calls into larger structures. Currently, however, there exists no objective method for quantifying call combinations. This is problematic because animal calls can co-occur with one another purely by chance. One common approach used in the language sciences to identify recurrent word combinations is collocation analysis. By comparing how often two words co-occur with how each word combines with other words within a corpus, collocation analysis can highlight above-chance two-word combinations. Here, we demonstrate how this approach can also be applied to non-human animal communication systems by implementing it on a pseudo dataset. We argue that collocation analysis represents a promising tool for identifying non-random, communicatively relevant call combinations in animals.
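
To make the approach concrete, here is a minimal collocation sketch on an invented call sequence. It uses pointwise mutual information, one common collocation statistic; the abstract does not specify which association measure the authors implement.

```python
# Collocation sketch on a toy "call" corpus: compare how often two calls
# co-occur with how often each call occurs overall.  Calls are invented.
from collections import Counter
import math

calls = ["hoo", "grunt", "hoo", "pant", "grunt", "pant", "hoo", "pant",
         "bark", "grunt", "hoo", "pant", "bark", "hoo", "pant"]

unigrams = Counter(calls)
bigrams = Counter(zip(calls, calls[1:]))
n_uni, n_bi = len(calls), len(calls) - 1

def pmi(a, b):
    """Pointwise mutual information: observed vs. chance co-occurrence."""
    p_ab = bigrams[(a, b)] / n_bi
    p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
    return math.log2(p_ab / (p_a * p_b))

# Positive values flag combinations occurring above chance.
for (a, b), count in bigrams.most_common():
    print(f"{a}-{b}: count={count}, PMI={pmi(a, b):+.2f}")
```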


2021. Author(s): John Protzko, Jan te Nijenhuis, Khaled Ziada, Hanaa Abdelazim Mohamed Metwaly, Salaheldin Bakhiet

The One-Group Pretest-Posttest Design, in which the same group of people is measured before and after some event, can be fraught with statistical problems and issues of causal inference. Still, these designs are common in fields from political science to developmental neuropsychology to economics. For cognitive data, it has long been known that a second test, with no treatment or an ineffective manipulation between testings, leads to increased scores at time 2 without any increase in the underlying latent ability. We investigate several analytic approaches involving both manifest and latent variable modeling to see which methods can accurately model manifest score changes when there is no latent change. Using data from 600 schoolchildren given an intelligence test twice, with no intervention in between, we show that using manifest test scores, either directly or through univariate latent change score analysis, falsely leads one to believe that an underlying increase has occurred. Latent change score models on latent data also show a spurious significant effect on the underlying latent ability. Multigroup Confirmatory Factor Analysis recovers the correct answer only when measurement invariance is tested and imposed (if viable) and the means of both time points are tested with the time 2 mean constrained to zero. Longitudinal structural equation modeling with measurement invariance correctly shows no change at the latent level when measurement invariance is tested, imposed, and model fit tested. When dealing with the One-Group Pretest-Posttest Design, analyses must occur at the latent level, measurement invariance must be tested, and change parameters must be explicitly tested. Otherwise, one may see change where none exists.
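
A small simulation makes the retest confound visible. The gain, error variances, and scores below are invented; this is not the schoolchildren data analyzed in the paper.

```python
# Retest confound sketch: manifest scores rise at time 2 even though the
# latent ability theta is identical at both time points.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 600
theta = rng.normal(0.0, 1.0, n)                   # latent ability, unchanged

score_t1 = theta + rng.normal(0.0, 0.5, n)        # pretest
score_t2 = theta + 0.3 + rng.normal(0.0, 0.5, n)  # posttest with retest gain

t, p = stats.ttest_rel(score_t2, score_t1)
print(f"manifest gain = {np.mean(score_t2 - score_t1):.2f}, t = {t:.1f}, p = {p:.2g}")
# A manifest-level analysis declares a significant "improvement" although
# theta never changed; only invariance-constrained latent models catch this.
```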


2021. Author(s): Rasmus Persson

In multiple-choice tests, guessing is a source of test error that can be suppressed if its expected score is made negative, either by penalizing wrong answers or by rewarding expressions of partial knowledge. We consider an arbitrary multiple-choice test taken by a rational test-taker who knows an arbitrary fraction of its keys and distractors. For this model, we compare the scores obtained under standard marking (where guessing is not penalized) and under marking schemes that suppress guessing, either through score penalties for incorrect answers or through rewards for partial knowledge. While the “best” scoring system (in the sense that latent ability and test score are linearly related) will depend on the underlying ability distribution, we find a superiority of the scoring rule of Zapechelnyuk (Economics Letters, 132, 2015); however, except for item-level discrimination among test-takers, a single penalty for wrong answers seems to yield results as good as or better than more intricate schemes with partial credit.
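
The core expected-score argument can be shown for a simplified rational guesser who either knows an item outright or guesses uniformly among k options; the paper's model is more general (the test-taker can also rule out individual distractors).

```python
# Why a wrong-answer penalty neutralizes guessing: with k options and a
# penalty of 1/(k-1), a blind guess has zero expected score.  This model
# is a simplification of the paper's.
def expected_score(known: float, k: int, penalty: float) -> float:
    """Expected per-item score: +1 for a correct answer, -penalty otherwise."""
    p_correct = known + (1.0 - known) / k
    return p_correct - (1.0 - p_correct) * penalty

for known in (0.0, 0.25, 0.5, 0.75, 1.0):
    standard = expected_score(known, k=4, penalty=0.0)       # no penalty
    formula = expected_score(known, k=4, penalty=1.0 / 3.0)  # formula scoring
    print(f"known={known:.2f}: standard={standard:.3f}, formula={formula:.3f}")
# Under formula scoring the expected score equals the known fraction, so
# test score and (this simple notion of) latent ability are linearly related.
```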


2020, pp. 107699862097280. Author(s): Shiyu Wang, Houping Xiao, Allan Cohen

An adaptive weight estimation approach is proposed to provide robust latent ability estimation in computerized adaptive testing (CAT) with response revision. This approach assigns a different weight to each distinct response to the same item when response revision is allowed in CAT. Two weight estimation procedures, nonfunctional and functional, are proposed to determine the weights adaptively based on the compatibility of each revised response with the assumed statistical model, relative to the remaining observations. Applying this estimation approach to a data set collected from a large-scale multistage adaptive test demonstrates the method's capability to reveal more information about the test taker's latent ability by using the full valid response path rather than only the final response. Limited simulation studies were conducted to evaluate the proposed ability estimation method and to compare it with several other estimation procedures in the literature. Results indicate that the proposed approach provides robust estimation results in two test-taking scenarios.
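
A bare-bones sketch of the weighting idea follows, using the 2PL model and hand-picked weights rather than the adaptively estimated nonfunctional or functional weights proposed in the paper.

```python
# Weighted ability estimation sketch: each (possibly revised) response
# enters the 2PL likelihood with its own weight.  Item parameters,
# responses, and weights are all invented.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])   # discriminations
b = np.array([-0.5, 0.0, 0.5, 1.0])  # difficulties
x = np.array([1, 0, 1, 1])           # scored responses, one of them revised
w = np.array([1.0, 1.0, 0.4, 1.0])   # down-weight the dubious revision

def neg_loglik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(w * (x * np.log(p) + (1 - x) * np.log(1.0 - p)))

res = minimize_scalar(neg_loglik, bounds=(-4.0, 4.0), method="bounded")
print(f"weighted ability estimate: {res.x:.3f}")
```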


2020, Vol 18 (4), pp. 215-241. Author(s): Tugba Karadavut, Allan S. Cohen, Seock-Ho Kim

2020. Author(s): Kimmo Sorjonen, Guy Madison, Bo Melin

It has been demonstrated that the worst performance rule (WPR) effect can arise from statistical dependencies in the data. Here, we examine whether this might also be the case for Spearman’s law of diminishing returns (SLODR). Two proposed SLODR criteria are the skewness of the estimated latent ability factor and the correlation between this latent ability and within-individual residual variance. Using four publicly available datasets covering quite different dimensions of behavior, we show that both criteria are affected by the correlation between within-individual average performance and variance on the test scores. However, this correlation influences the two criteria in opposite directions, which suggests that it may generally be difficult to obtain results that unambiguously support SLODR.
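
A crude simulation gestures at the dependency described above; within-person means and variances stand in for the factor-analytic quantities the authors use, so this is only a caricature of their analysis.

```python
# Induce a link between within-person mean performance and within-person
# variance, then watch the two SLODR criteria move.  Proxies: the
# within-person mean for the latent factor, the within-person variance
# for the residual variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def slodr_criteria(link):
    n_persons, n_trials = 5000, 20
    ability = rng.normal(0.0, 1.0, n_persons)
    sd = np.exp(0.2 * link * ability)   # mean-variance coupling
    trials = ability[:, None] + sd[:, None] * rng.normal(0.0, 1.0,
                                                         (n_persons, n_trials))
    est = trials.mean(axis=1)           # latent ability proxy
    resid_var = trials.var(axis=1, ddof=1)
    return stats.skew(est), np.corrcoef(est, resid_var)[0, 1]

for link in (-1.0, 0.0, 1.0):
    sk, r = slodr_criteria(link)
    print(f"link {link:+.0f}: skewness = {sk:+.3f}, mean-variance corr = {r:+.3f}")
```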


2020. Author(s): Peida Zhan, Xin Qiao

Process data are data recorded by computer-based assessments (CBA) that reflect respondents’ problem-solving processes, providing insight into how students solve problems rather than merely how well they solve them. Using the rich information contained in process data, this study proposed an item-specific psychometric method for analyzing process data in order to comprehensively understand respondents’ problem-solving competence. By incorporating diagnostic classification into process data analysis, the proposed method can not only estimate respondents’ problem-solving ability along a continuum but also classify respondents according to their problem-solving strategies. To illustrate the application and advantages of the proposed method, a Programme for International Student Assessment (PISA) problem-solving task was used. The results indicated that (a) the estimated latent classes provided more detailed diagnoses of respondents’ problem-solving strategies than the observed score classes; (b) although only one item was used, the estimated higher-order latent ability reflected respondents’ problem-solving ability more accurately than the estimated unidimensional latent ability taken from the outcome data; and (c) the interactions between problem-solving skills may follow the conjunctive condensation rule, which assumes that a specific action sequence can appear only when a respondent has mastered all the required problem-solving skills. Overall, the main conclusion of this study is that diagnostic classification is a feasible and promising method for analyzing process data.
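
The conjunctive condensation rule in point (c) can be written as the DINA-style ideal response eta = prod_k alpha_k ** q_k; below is a tiny sketch with invented skill profiles and skill requirements.

```python
# Conjunctive condensation rule: the action sequence can appear (eta = 1)
# only if every required skill is mastered.  Q-vector and profiles invented.
import numpy as np

q = np.array([1, 1, 0])           # skills required by this action sequence
profiles = np.array([[1, 1, 1],   # masters everything
                     [1, 1, 0],   # masters exactly the required skills
                     [1, 0, 1]])  # lacks one required skill

eta = np.prod(profiles ** q, axis=1)  # alpha_k ** q_k multiplied over skills
print(eta)  # [1 1 0]
```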

