Interrater Reliability Coefficients: Recently Published Documents

2021 · Vol 19 (1) · Author(s): Alexandru Ioan Manea, Dragos Iliescu

In this paper we detail the construction and validation process for a new personality-oriented work analysis instrument, in the form of a standardized questionnaire, based on extant research showing that personality traits are good predictors of job performance. We present the process of item development, frame-of-reference training, rating scale creation, and the selection of subject matter experts. When the instrument was administered for three distinct positions, the interrater reliability coefficients ranged between .80 and .94. We also investigated the instrument's ability to discriminate between the rated positions, and the results for this indicator were quite low. The conclusions offer some possible explanations for this low discriminability. Practical and theoretical implications are discussed, as well as future research directions for the general improvement of data quality.
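
The abstract does not say which reliability coefficient was used. Below is a minimal sketch of one common choice for continuous ratings, the Shrout-Fleiss intraclass correlation ICC(2,1), computed over a hypothetical matrix of subject-matter-expert ratings (the data and the 1-5 scale are invented for illustration):

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = ratings.shape            # n targets rated by k raters
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-target
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical data: 6 work-analysis descriptors rated 1-5 by 3 experts.
ratings = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 4],
    [1, 2, 1],
    [4, 4, 5],
])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```

Coefficients in the reported .80-.94 range would, on this reading, indicate strong agreement among the subject matter experts.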


Pharmacy · 2019 · Vol 7 (4) · pp. 162 · Author(s): Nour Hamada, Patricia Quintana Bárcena, Karen Alexandra Maes, Olivier Bugnon, Jérôme Berger

Documentation of community pharmacists' clinical activities, such as the identification and management of drug-related problems (DRPs), is recommended. However, documentation is not systematic in Swiss community pharmacies, and relevant information about DRPs, such as their consequences or the partners involved, is frequently missing. This study aimed to evaluate the interrater and test-retest reliability, appropriateness, and acceptability of the Clinical Pharmacy Activities Documented (ClinPhADoc) tool. Ten community pharmacists participated in the study. Interrater reliability coefficients were computed using 24 standardized cases. One month later, test-retest reliability was assessed using 10 standardized cases. To assess appropriateness, pharmacists were asked to document clinical activities in their own practice using ClinPhADoc. Acceptability was assessed with an online satisfaction survey. Kappa coefficients showing a moderate level of agreement (>0.40) were observed for both interrater and test-retest reliability. Pharmacists were able to document 131 clinical activities. The good level of acceptability and the brief documentation time (fewer than seven minutes) indicate that ClinPhADoc is well suited to the community pharmacy setting. To optimize the tool, pharmacists proposed developing an electronic version. These results support the reliability and acceptance of the ClinPhADoc tool.
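
The abstract reports kappa coefficients without showing the computation. Below is a minimal sketch of Cohen's kappa for two raters coding the same standardized cases; the DRP category labels are invented for illustration and are not ClinPhADoc's actual scheme:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement from each rater's marginal frequencies.
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n)
              for lab in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical DRP codes from two pharmacists on 8 standardized cases.
a = ["interaction", "dose", "adherence", "dose",
     "interaction", "adherence", "dose", "dose"]
b = ["interaction", "dose", "dose", "dose",
     "interaction", "adherence", "dose", "adherence"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.60 here; >0.40 counts as moderate
```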


2004 · Vol 84 (1) · pp. 8-22 · Author(s): Mindy F. Levin, Johanne Desrosiers, Danielle Beauchemin, Nathalie Bergeron, Annie Rochette

Background and Purpose. Recent movement analysis studies have described compensatory movement strategies used by people with hemiparesis secondary to stroke during reaching and grasping tasks. The purpose of this article is to describe the development of a new scale, the Reaching Performance Scale (RPS), for assessing compensatory movements during upper-extremity reaching in people with hemiparesis secondary to stroke. Subjects. Twenty-eight individuals with hemiparesis, with a mean age of 54.9 years (SD=18.6), participated. Methods. The study design involved scale development with expert panels and criterion standards for validity. Participants were evaluated on the new scale as well as on other clinical tests for validity, and they were videotaped while performing reaching and grasping movements. Results. The RPS scores correlated with measurements of grip force and with Chedoke-McMaster Stroke Assessment and Upper Extremity Performance Test for the Elderly (TEMPA) scores. The RPS discriminated among patients with different impairment levels according to the Chedoke-McMaster Stroke Assessment. Preliminary intrarater and interrater reliability coefficients were acceptable for the scale as a whole. Kappa values on individual scale components for 3 raters corresponded to a mean agreement of 67% (SD=13.5%). Discussion and Conclusion. Although the RPS shows some types of validity, more rigorous tests of reliability are needed before meaningful conclusions can be drawn. This study is a first step in validating the scale for assessing the efficacy of interventions for motor recovery of the arm.
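
With three raters, one common summary (assumed here, since the abstract does not spell out the exact procedure) is the mean of the pairwise Cohen's kappas alongside raw all-rater agreement. A sketch with hypothetical 0-3 scores on a single RPS component:

```python
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical component scores (0-3) from 3 raters for 10 patients.
scores = np.array([
    [3, 3, 2], [1, 1, 1], [2, 2, 3], [0, 0, 0], [2, 2, 2],
    [3, 3, 3], [1, 2, 1], [0, 1, 0], [2, 2, 2], [3, 2, 3],
])

# Mean of the three pairwise kappas (raters 0-1, 0-2, 1-2).
kappas = [cohen_kappa_score(scores[:, i], scores[:, j])
          for i, j in combinations(range(scores.shape[1]), 2)]
print(f"mean pairwise kappa = {np.mean(kappas):.2f}")

# Raw agreement: fraction of patients scored identically by all 3 raters.
all_agree = (scores == scores[:, [0]]).all(axis=1).mean()
print(f"all-rater agreement = {all_agree:.0%}")
```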


Assessment · 2003 · Vol 10 (1) · pp. 66-70 · Author(s): Michael N. Lopez, Michael D. Lazar, Sindy Oh

The psychometric properties of the Hooper Visual Organization Test (VOT) have not been well investigated. Here the authors present internal consistency and interrater reliability coefficients, along with an item analysis, using data from a sample (N = 281) of "cognitively impaired" and "cognitively intact" patients and patients with undetermined cognitive status. Coefficient alpha for the VOT total sample was .882. The item analysis found that 26 of the 30 items were good at discriminating among patients. In addition, the interrater reliabilities for three raters (.992), two raters (.988), and one rater (.977) were excellent. Therefore, the judgmental scoring of the VOT does not interfere significantly with its clinical utility. The authors conclude that the VOT is a psychometrically sound test.
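
The stepwise pattern of the three reported coefficients is exactly what the Spearman-Brown prophecy formula predicts when the single-rater reliability of .977 is projected onto the average of two or three raters (whether the authors derived their figures this way is not stated in the abstract):

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of the mean of k raters, from single-rater reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

r1 = 0.977  # single-rater reliability reported for the VOT
for k in (1, 2, 3):
    print(f"{k} rater(s): {spearman_brown(r1, k):.3f}")
# -> 0.977, 0.988, 0.992, matching the reported coefficients
```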


2002 · Vol 18 (1) · pp. 52-62 · Author(s): Olga F. Voskuijl, Tjarda van Sliedregt

Summary: This paper presents a meta-analysis of published job analysis interrater reliability data in order to predict the expected levels of interrater reliability within specific combinations of moderators, such as rater source, experience of the rater, and type of job descriptive information. The overall mean interrater reliability of 91 reliability coefficients reported in the literature was .59. Experienced professionals (job analysts) showed the highest reliability coefficients (.76). The method of data collection (job contact versus job description) only affected the results of experienced job analysts: for this group, higher interrater reliability coefficients were obtained for analyses based on job contact (.87) than for those based on job descriptions (.71). For other rater categories (e.g., students, organization members), neither the method of data collection nor training had a significant effect on interrater reliability. Analyses based on scales with defined levels resulted in significantly higher interrater reliability coefficients than analyses based on scales with undefined levels. Behavior and job worth dimensions were rated more reliably (.62 and .60, respectively) than attributes and tasks (.49 and .29, respectively). Furthermore, the results indicated that if nonprofessional raters are used (e.g., incumbents or students), at least two to four raters are required to obtain a reliability coefficient of .80. These findings have implications for research and practice.
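
The "two to four raters" estimate can be reproduced, at least approximately, by inverting the Spearman-Brown prophecy formula to solve for the number of raters. A sketch using mean single-rater reliabilities reported above:

```python
import math

def raters_needed(r_single: float, r_target: float = 0.80) -> int:
    """Smallest k whose averaged ratings reach r_target, by the
    inverted Spearman-Brown prophecy formula."""
    k = r_target * (1 - r_single) / (r_single * (1 - r_target))
    return math.ceil(k)

# Mean single-rater reliabilities taken from the meta-analysis.
for label, r in [("overall mean", 0.59),
                 ("behavior dimensions", 0.62),
                 ("job worth dimensions", 0.60)]:
    print(f"{label} (r = {r}): {raters_needed(r)} raters needed for .80")
```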


1993 · Vol 77 (3, Suppl.) · pp. 1215-1218 · Author(s): Susan Dickerson Mayes, Edward O. Bixler

Agreement between raters using global impressions to assess methylphenidate response was analyzed for children with Attention-Deficit Hyperactivity Disorder (ADHD) undergoing double-blind, placebo-controlled, crossover methylphenidate trials. Caregivers were more likely to disagree than agree when asked to rate the children as "better, same, or worse" during each day of the trial. Overall agreement was 42.9%, only 9.6% above what would be expected based on chance alone. Further, none of the interrater reliability coefficients (Cohen's kappa) for the individual children were statistically significant.
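
Reading the reported figures back through Cohen's formula, and taking the chance level implied by "9.6% above chance" to be roughly 33.3%, gives a kappa near the bottom of the usual "slight agreement" band:

$$\kappa \;=\; \frac{p_o - p_e}{1 - p_e} \;\approx\; \frac{0.429 - 0.333}{1 - 0.333} \;\approx\; 0.14$$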


1992 · Vol 74 (2) · pp. 347-353 · Author(s): Elizabeth M. Mason

The purpose of this study was to investigate the interrater reliability of the visual-motor portion of the Copying subtest of the Stanford-Binet Intelligence Scale: Fourth Edition. Eight raters independently scored 11 protocols completed by children aged 5 through 10 years, using the scoring criteria and guidelines in the manual. The raters marked each of 10 items pass or fail and computed a total raw score for each protocol. Interrater reliability coefficients were obtained for each child's protocol, and a kappa coefficient was computed for each item. The significant interrater reliability coefficients ranged from .82 to .91, which is low in comparison with the test-retest reliability and Kuder-Richardson-20 coefficients reported for this and other subtests of the Stanford-Binet in the technical manual. Percent agreement among the 8 raters also indicated weak reliability. Although some of the obtained interrater reliability coefficients were within acceptable levels, questions were raised about the scoring criteria for individual items. Caution is warranted in the use of cognitive measures that require subjective judgement by the examiner in applying scoring criteria.
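
For reference, the Kuder-Richardson-20 coefficient against which the interrater figures were compared is the dichotomous-item special case of coefficient alpha. With $k = 10$ pass/fail items, $p_i$ the proportion of children passing item $i$, $q_i = 1 - p_i$, and $\sigma_X^2$ the variance of the total raw scores:

$$KR_{20} \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k} p_i\, q_i}{\sigma_X^2}\right)$$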

