Validity of the American Board of Orthodontics Discrepancy Index and the Peer Assessment Rating Index for comprehensive evaluation of malocclusion severity

2017, Vol. 20(3), pp. 140-145
Author(s): S. Liu, H. Oh, D. W. Chambers, S. Baumrind, T. Xu

2017, Vol. 40(2), pp. 157-163
Author(s): Siqi Liu, Heesoo Oh, David William Chambers, Tianmin Xu, Sheldon Baumrind

2021, Vol. 10(8), pp. 1646
Author(s): Arwa Gera, Shadi Gera, Michel Dalstra, Paolo M. Cattaneo, Marie A. Cornelis

The aim of this study was to assess the validity and reproducibility of digital scoring of the Peer Assessment Rating (PAR) index and its components using software, compared with conventional manual scoring on printed model equivalents. The PAR index was scored on 15 cases at pre- and post-treatment stages by two operators using two methods: first, digitally, on direct digital models using Ortho Analyzer software; and second, manually, on printed model equivalents using a digital caliper. All measurements were repeated at a one-week interval. Paired-sample t-tests were used to compare the PAR score and its components between the two methods and between raters. Intra-class correlation coefficients (ICC) were used to compute intra- and inter-rater reproducibility, and the error of the method was calculated. Agreement between the two methods was analyzed using Bland-Altman plots. There were no significant differences in mean PAR scores between the two methods or between raters. ICCs for intra- and inter-rater reproducibility were excellent (≥0.95). All error-of-the-method values were smaller than the associated minimum standard deviation. Bland-Altman plots confirmed the validity of the measurements. PAR scoring on digital models showed excellent validity and reproducibility compared with manual scoring on printed model equivalents with a digital caliper.
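The statistical workflow summarized above (paired t-test, ICC, error of the method, Bland-Altman limits of agreement) can be illustrated with standard tooling. The sketch below is not the authors' code; it assumes the paired PAR scores from the two methods are already available as arrays (the names `digital` and `manual` and the example values are hypothetical), and it makes the underlying formulas explicit.

```python
# Minimal sketch of the analysis described in the abstract.
# Not the authors' code: array names and example values are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical paired PAR scores for the same cases, scored digitally
# (software on digital models) and manually (digital caliper on printed models).
digital = np.array([34.0, 28.0, 41.0, 19.0, 25.0, 37.0, 30.0, 22.0])
manual  = np.array([33.0, 29.0, 40.0, 20.0, 24.0, 38.0, 31.0, 21.0])

# 1) Paired-sample t-test: do the mean PAR scores of the two methods differ?
t_stat, p_value = stats.ttest_rel(digital, manual)

# 2) ICC(2,1): two-way random effects, absolute agreement, single rating.
def icc_2_1(ratings: np.ndarray) -> float:
    """ratings: (n subjects) x (k raters/methods) matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between methods/raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

icc = icc_2_1(np.column_stack([digital, manual]))

# 3) Dahlberg's error of the method for repeated measurements: sqrt(sum(d^2) / 2n).
diff = digital - manual
dahlberg = np.sqrt(np.sum(diff ** 2) / (2 * len(diff)))

# 4) Bland-Altman limits of agreement: mean difference +/- 1.96 SD of differences.
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")
print(f"ICC(2,1)={icc:.3f}, Dahlberg error={dahlberg:.2f}")
print(f"Bland-Altman bias={bias:.2f}, LoA={loa[0]:.2f} to {loa[1]:.2f}")
```

Dedicated packages (for example, pingouin's intraclass correlation routines) report ICCs with confidence intervals; the hand-rolled function above is only meant to show the ICC(2,1) formula used for agreement studies of this kind.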

