An Alternative to Cohen's κ

2006 ◽  
Vol 11 (1) ◽  
pp. 12-24 ◽  
Author(s):  
Alexander von Eye

At the level of manifest categorical variables, a large number of coefficients and models for the examination of rater agreement have been proposed and used. The most popular of these is Cohen's κ. In this article, a new coefficient, κs, is proposed as an alternative measure of rater agreement. Both κ and κs allow researchers to determine whether agreement in groups of two or more raters is significantly beyond chance. Stouffer's z is used to test the null hypothesis that κs = 0. In addition to evaluating rater agreement in a fashion parallel to κ, the coefficient κs allows one to (1) examine subsets of cells in agreement tables, (2) examine cells that indicate disagreement, (3) consider alternative chance models, (4) take covariates into account, and (5) compare independent samples. Results from a simulation study are reported, which suggest that (a) the four measures of rater agreement, Cohen's κ, Brennan and Prediger's κn, raw agreement, and κs, are sensitive to the same data characteristics when evaluating rater agreement, and (b) both the z statistic for Cohen's κ and Stouffer's z for κs are unimodally and symmetrically distributed, but slightly heavy-tailed. Examples use data from verbal processing and applicant selection.
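The familiar κ contrasts observed diagonal agreement with the agreement expected by chance under rater independence. A minimal sketch of that baseline computation (standard Cohen's κ only; the article's κs and its Stouffer's z test are not reproduced here):

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square agreement table.

    table[i][j] = number of items rater 1 placed in category i
    and rater 2 placed in category j.
    """
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_obs = np.trace(t) / n                          # observed agreement
    p_exp = (t.sum(axis=1) @ t.sum(axis=0)) / n**2   # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Two raters, three categories; a strong diagonal means high agreement.
table = [[20, 5, 0],
         [3, 15, 2],
         [1, 4, 10]]
kappa = cohens_kappa(table)  # ≈ 0.615
```

Perfect agreement (all mass on the diagonal) gives κ = 1, and agreement at exactly the chance level gives κ = 0.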

2020 ◽  
pp. 107699862095742
Author(s):  
Sandip Sinharay ◽  
Matthew S. Johnson

Score differencing is one of the six categories of statistical methods used to detect test fraud (Wollack & Schoenig, 2018). It involves testing the null hypothesis that an examinee's performance is similar over two item sets against the alternative hypothesis that performance is better on one of the item sets. To perform score differencing, we suggest using the posterior probability of better performance on one item set compared to another. In a simulation study, the suggested approach performs satisfactorily compared to several existing approaches for score differencing. A real data example demonstrates how the suggested approach may be effective in detecting fraudulent examinees. The results in this article call for more attention to the use of posterior probabilities, and Bayesian approaches in general, in investigations of test fraud.
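The general idea of a posterior probability of better performance on one item set can be illustrated with a deliberately simple beta-binomial stand-in; the Beta(1, 1) priors and the per-set success probabilities below are illustrative assumptions, not the article's (IRT-based) model:

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_prob_better(k1, n1, k2, n2, draws=200_000):
    """Monte Carlo estimate of P(p2 > p1 | data) under independent
    Beta(1, 1) priors on the success probability for each item set."""
    p1 = rng.beta(1 + k1, 1 + n1 - k1, draws)
    p2 = rng.beta(1 + k2, 1 + n2 - k2, draws)
    return (p2 > p1).mean()

# An examinee scores 10/30 on the first item set but 27/30 on the second:
prob = posterior_prob_better(10, 30, 27, 30)
```

A probability near 1 flags performance that is markedly better on the second item set; for identical performance on both sets the posterior probability hovers around 0.5.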


2019 ◽  
Vol 11 (6) ◽  
pp. 65
Author(s):  
Jing Li ◽  
Xueyan Li

This paper considers the problem of testing for error serial correlation in the partially linear additive measurement error model. We propose a test statistic and show that it converges to the standard chi-square distribution under the null hypothesis. A simulation study is conducted to illustrate the performance of the proposed test.


2021 ◽  
Author(s):  
Stefan Bode ◽  
Daniel Feuerriegel ◽  
Elektra Schubert ◽  
Hinze Hogendoorn

Multivariate classification analysis for non-invasively acquired neuroimaging data is a powerful tool in cognitive neuroscience research. However, an important constraint of such pattern classifiers is that they are restricted to predicting categorical variables (i.e. assigning trials to classes). Here, we present an alternative approach, Support Vector Regression (SVR), which uses single-trial neuroimaging (e.g., EEG or MEG) data to predict a continuous variable of interest such as response time, response force, or any kind of subjective rating (e.g., emotional state, confidence, etc.). We describe how SVR can be used, how it is implemented in the Decision Decoding Toolbox (DDTBOX), and how it has been used in previous research. We then report results from two simulation studies, designed to closely resemble real EEG data, in which we predicted a continuous variable of interest across a range of analysis parameters. In Simulation Study 1, we observed that SVR was effective for analysis windows ranging from 2 to 100 ms, and that it was relatively unaffected by temporal averaging. In Simulation Study 2, we showed that prediction was still successful when only a small number of channels encoded information about the output variable, and that it was robust to temporal jitter in when that information was present in the EEG. Finally, we reanalysed a previously published dataset of similar size and observed highly comparable results in real EEG data. We conclude that linear SVR is a powerful tool for the investigation of single-trial EEG data in relation to continuous and more nuanced variables, which are not well-captured using classification approaches requiring distinct classes.
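DDTBOX itself is a MATLAB toolbox; as an illustrative stand-in for the decoding logic, a linear SVR can be fit to simulated trial-by-channel data with scikit-learn. The channel indices, signal strength, and cross-validation scheme below are assumptions for the sketch, not parameters from the article:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Simulate 200 trials x 32 "channels"; only 3 channels carry a weak
# linear signal about the continuous variable y (e.g. a response time).
n_trials, n_channels = 200, 32
y = rng.uniform(300, 800, n_trials)            # e.g. RT in ms
X = rng.normal(0.0, 1.0, (n_trials, n_channels))
X[:, [4, 11, 20]] += 0.02 * y[:, None]         # signal in 3 channels only

# Linear SVR with cross-validated predictions, as in a decoding analysis:
# each trial's prediction comes from a model that never saw that trial.
y_pred = cross_val_predict(SVR(kernel="linear", C=100.0), X, y, cv=5)
r = np.corrcoef(y, y_pred)[0, 1]
```

A substantial correlation between predicted and true values indicates that the continuous variable is decodable even though most channels are pure noise, mirroring the article's Simulation Study 2.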


2021 ◽  
Vol 55 (1) ◽  
pp. 15-28
Author(s):  
Amina Bari ◽  
Abdelaziz Rassoul ◽  
Hamid Ould Rouis

In the present paper, we define and study one of the most popular indices for measuring the inequality of capital incomes, the Gini index. We construct a semiparametric estimator of the Gini index for heavy-tailed income distributions, establish its asymptotic distribution, and derive confidence bounds. We explore the performance of the confidence bounds in a simulation study and draw conclusions about capital incomes for some income distributions.
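For reference, the plain empirical Gini estimator (not the article's semiparametric, tail-index-based estimator) can be computed directly from a sorted sample; for a classical Pareto distribution with tail index α the true value is 1/(2α − 1):

```python
import numpy as np

def gini(x):
    """Empirical Gini index of a sample of nonnegative incomes."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n,  i = 1..n
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * x.sum()) - (n + 1.0) / n

rng = np.random.default_rng(2)
# Classical Pareto(alpha=3, x_min=1): heavy right tail, true Gini = 0.2.
sample = 1.0 + rng.pareto(3.0, 100_000)
g = gini(sample)  # close to 0.2
```

Perfectly equal incomes give a Gini index of 0; the heavier the right tail (smaller α), the closer the index moves toward 1, which is why tail-aware estimators matter for capital incomes.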


2011 ◽  
Vol 35 (2) ◽  
pp. 180-190 ◽  
Author(s):  
Rens van de Schoot ◽  
Dagmar Strohmeier

In the present paper, a parametric bootstrap procedure, as described by van de Schoot, Hoijtink, and Deković (2010), is applied to demonstrate that a direct test of an informative hypothesis offers more informative results than testing traditional null hypotheses against catch-all rivals. More power can also be gained when informative hypotheses are tested directly. In this paper we (a) compare the results of traditional analyses with the results of this novel methodology; (b) introduce applied researchers to the parametric bootstrap procedure for the evaluation of informative hypotheses; and (c) provide the results of a simulation study to demonstrate power gains when using inequality constraints. We argue that researchers should directly evaluate inequality-constrained hypotheses if there is a strong theory about the ordering of relevant parameters. In this way, researchers can make use of all knowledge available from previous investigations, while also learning more from their data than with traditional null-hypothesis testing.
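The flavor of a parametric bootstrap test of an ordered (informative) hypothesis such as μ1 < μ2 < μ3 can be sketched as follows; the choice of test statistic (the smallest consecutive mean difference) and the normal data-generating model are simplifications for illustration, not the exact procedure of van de Schoot et al.:

```python
import numpy as np

rng = np.random.default_rng(3)

def ordered_means_pvalue(groups, n_boot=2000):
    """Parametric-bootstrap p-value for H0: equal means against the
    informative hypothesis mu1 < mu2 < ... (sketch: the statistic is
    the smallest consecutive difference of group means)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    stat = lambda gs: min(b.mean() - a.mean() for a, b in zip(gs, gs[1:]))
    t_obs = stat(groups)
    # Simulate under H0: a common mean, per-group SDs kept at estimates.
    pooled_mean = np.concatenate(groups).mean()
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        sim = [rng.normal(pooled_mean, g.std(ddof=1), g.size) for g in groups]
        t_boot[b] = stat(sim)
    return float(np.mean(t_boot >= t_obs))

# Three groups whose population means are clearly ordered:
g1 = rng.normal(0.0, 1.0, 50)
g2 = rng.normal(1.0, 1.0, 50)
g3 = rng.normal(2.0, 1.0, 50)
p = ordered_means_pvalue([g1, g2, g3])
```

Because the statistic rewards the specific predicted ordering rather than any departure from equality, the test concentrates its power exactly where the theory points, which is the power gain the abstract describes.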


2019 ◽  
Vol 18 (03) ◽  
pp. 1950016
Author(s):  
Ferdos Gorji ◽  
Mina Aminghafari

This study focuses on heavy-tailed noise reduction in multivariate signals, with no knowledge of their forms. We propose a non-parametric multivariate denoising technique which is robust to heavy-tailed noise. Using a univariate robust linear regression, we construct a multivariate non-parametric method. We design a robust matrix decomposition and, consequently, propose a robust procedure including this new decomposition. In addition, we develop a robust procedure for the imputation of the missing points of the signals. The key advantage of our methods over the previous tools is the robustness to the heavy-tailed observations. The results of our simulation study confirm the good performance of the proposed methods.


2020 ◽  
Vol 2020 ◽  
pp. 1-20
Author(s):  
Zubair Ahmad ◽  
Eisa Mahmoudi ◽  
Omid Kharazmi

Heavy-tailed distributions play an important role in modeling data in the actuarial and financial sciences. In this article, a new method is suggested for defining distributions suitable for modeling data with a heavy right tail. The proposed method may be called the Z-family of distributions. For illustrative purposes, a special submodel of the proposed family, called the Z-Weibull distribution, is considered in detail for modeling data with a heavy right tail. The method of maximum likelihood is adopted to estimate the model parameters, and a brief Monte Carlo simulation study evaluating the maximum likelihood estimators is conducted. Furthermore, some actuarial measures such as value at risk and tail value at risk are calculated, and a simulation study based on these actuarial measures is also conducted. An application of the Z-Weibull model to earthquake insurance data is presented. Based on the analyses, we observe that the proposed distribution can be used quite effectively to model heavy-tailed data in the insurance sciences and related fields. Finally, a Bayesian analysis and an assessment of the performance of Gibbs sampling for the earthquake data are also carried out.
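The two actuarial measures mentioned can be sketched for a plain two-parameter Weibull (not the article's Z-Weibull): the value at risk VaR_q is simply the q-quantile, and the tail value at risk TVaR_q averages VaR_u over u in (q, 1):

```python
import numpy as np

def weibull_var(q, shape, scale=1.0):
    """Value at Risk: the q-quantile of a two-parameter Weibull loss,
    VaR_q = scale * (-ln(1 - q))**(1/shape)."""
    return scale * (-np.log(1.0 - q)) ** (1.0 / shape)

def weibull_tvar(q, shape, scale=1.0, grid=100_000):
    """Tail Value at Risk: the average of VaR_u over u in (q, 1),
    approximated by a midpoint rule on a fine grid."""
    h = (1.0 - q) / grid
    u = np.linspace(q, 1.0, grid, endpoint=False) + h / 2.0
    return weibull_var(u, shape, scale).mean()

# Shape < 1 gives a right tail heavier than the exponential's:
v = weibull_var(0.95, shape=0.5)    # = (-ln 0.05)^2, about 8.97
tv = weibull_tvar(0.95, shape=0.5)  # about 16.97; TVaR always >= VaR
```

TVaR exceeds VaR because it averages the losses beyond the quantile, so it is the more conservative of the two measures for a heavy right tail.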


Author(s):  
Ben Dahmane Khanssa

Inspired by L. Peng's work on estimating the mean of a heavy-tailed distribution in the case of complete data, we propose an alternative estimator and study its asymptotic normality for right-truncated random variables. A simulation study is carried out to evaluate the finite-sample behavior of the proposed estimator.


2017 ◽  
Vol 40 (1) ◽  
pp. 45-64 ◽  
Author(s):  
Fatma Zehra Doğru ◽  
Olcay Arslan

In this study, we propose a robust mixture regression procedure based on the skew t distribution to model heavy-tailed and/or skewed errors in a mixture regression setting. Using the scale mixture representation of the skew t distribution, we give an Expectation-Maximization (EM) algorithm to compute the maximum likelihood (ML) estimates of the parameters of interest. The performance of the proposed estimators is demonstrated by a simulation study and a real data example.


2013 ◽  
Vol 40 (7) ◽  
pp. 1506-1519 ◽  
Author(s):  
Y. Sertdemir ◽  
H. R. Burgut ◽  
Z. N. Alparslan ◽  
I. Unal ◽  
S. Gunasti
