Simulation study of a “fission electron-collection” neutron detector

2015, Vol. 73, pp. 46-50
Author(s): Dong Wang, Chuanfei Zhang, Jianhua Zhang

2016, Vol. 33 (5), p. 052901
Author(s): Dong Wang, Chuan-Fei Zhang, Bo-Jun Li, Yi-Ping Cai, Xue-Bin Zhu, ...

2020, Vol. 158, p. 111717
Author(s): Dong Wang, Jianhua Zhang, Fenni Si, Xingyu Peng, Qingyuan Hu, ...

2006, Vol. 11 (1), pp. 12-24
Author(s): Alexander von Eye

At the level of manifest categorical variables, a large number of coefficients and models for the examination of rater agreement have been proposed and used, the most popular being Cohen's κ. In this article, a new coefficient, κ_s, is proposed as an alternative measure of rater agreement. Both κ and κ_s allow researchers to determine whether agreement in groups of two or more raters is significantly beyond chance. Stouffer's z is used to test the null hypothesis that κ_s = 0. In addition to evaluating rater agreement in a fashion parallel to κ, the coefficient κ_s allows one to (1) examine subsets of cells in agreement tables, (2) examine cells that indicate disagreement, (3) consider alternative chance models, (4) take covariates into account, and (5) compare independent samples. Results from a simulation study are reported, which suggest that (a) the four measures of rater agreement (Cohen's κ, Brennan and Prediger's κ_n, raw agreement, and κ_s) are sensitive to the same data characteristics when evaluating rater agreement, and (b) both the z-statistic for Cohen's κ and Stouffer's z for κ_s are unimodally and symmetrically distributed, but slightly heavy-tailed. Examples use data from verbal processing and applicant selection.
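
As a point of reference for the standard measures named in the abstract, the sketch below computes raw agreement, Cohen's κ, and Brennan and Prediger's κ_n from a two-rater agreement table, and combines independent z statistics with Stouffer's z. The function names, table layout, and example counts are illustrative assumptions; the article's κ_s coefficient and its alternative chance models are not reproduced here.

```python
# Hedged sketch of the standard agreement measures mentioned in the abstract.
# The kappa_s coefficient proposed in the article is NOT implemented here.
from math import sqrt

def agreement_measures(table):
    """table[i][j] = number of objects put in category i by rater 1 and j by rater 2."""
    k = len(table)                                   # number of categories
    n = sum(sum(row) for row in table)               # total number of rated objects
    p_obs = sum(table[i][i] for i in range(k)) / n   # raw agreement (diagonal proportion)

    row = [sum(table[i][j] for j in range(k)) for i in range(k)]   # rater-1 marginals
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]   # rater-2 marginals
    p_chance = sum(row[i] * col[i] for i in range(k)) / n ** 2     # chance agreement from marginals

    kappa = (p_obs - p_chance) / (1 - p_chance)      # Cohen's kappa
    kappa_n = (p_obs - 1 / k) / (1 - 1 / k)          # Brennan & Prediger's kappa_n (uniform chance model)
    return p_obs, kappa, kappa_n

def stouffer_z(z_values):
    """Combine m independent z statistics: z = sum(z_i) / sqrt(m)."""
    return sum(z_values) / sqrt(len(z_values))

# Illustrative 3x3 agreement table (counts are made up for demonstration only)
table = [[20, 5, 2],
         [4, 15, 3],
         [1, 2, 8]]
raw, kappa, kappa_n = agreement_measures(table)
print(f"raw agreement = {raw:.3f}, Cohen's kappa = {kappa:.3f}, kappa_n = {kappa_n:.3f}")
print(f"Stouffer's z of (1.8, 2.1, 1.5) = {stouffer_z([1.8, 2.1, 1.5]):.3f}")
```

Note that Cohen's κ derives its chance term from the observed marginals, whereas κ_n assumes a uniform chance model; the abstract's point is that κ_s extends this family by allowing alternative chance models, cell subsets, and covariates.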

