Panel Ratings of Tape-Recorded Interview Responses: Interrater Reliability? Racial Differences?

2005 ◽  
Author(s):  
Patrick F. McKay ◽  
John R. Curtis ◽  
David J. Snyder ◽  
Robert C. Satterwhite

2020 ◽  
Vol 63 (12) ◽  
pp. 3974-3981
Author(s):  
Ashwini Joshi ◽  
Isha Baheti ◽  
Vrushali Angadi

Aim: The purpose of this study was to develop and assess the reliability of a Hindi version of the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V). Reliability was assessed by comparing Hindi CAPE-V ratings with English CAPE-V ratings and with ratings on the Grade, Roughness, Breathiness, Asthenia, and Strain (GRBAS) scale.

Method: Hindi sentences were created to match the phonemic load of the corresponding English CAPE-V sentences and were adapted for linguistic content. The original English and adapted Hindi CAPE-V and the GRBAS were completed for 33 bilingual individuals with normal voice quality. Additionally, the Hindi CAPE-V and GRBAS were completed for 13 Hindi speakers with disordered voice quality. Agreement of CAPE-V ratings was assessed between language versions, against GRBAS ratings, and between two rater pairs (three raters in total). Pearson product–moment correlations were computed for all comparisons.

Results: A strong correlation (r > .8, p < .01) was found between the Hindi CAPE-V scores and the English CAPE-V scores for most variables in participants with normal voice. A weak correlation was found for the variable of strain (r < .2, p = .400) in the normative group. A strong correlation (r > .6, p < .01) was found between the overall severity/grade, roughness, and breathiness scores on the GRBAS scale and the CAPE-V in both normal and disordered voice samples. Significant interrater reliability (r > .75) was present for overall severity and breathiness.

Conclusions: The Hindi version of the CAPE-V demonstrates good interrater reliability and concurrent validity with the English CAPE-V and the GRBAS. The Hindi CAPE-V can be used for the auditory-perceptual voice assessment of Hindi speakers.
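The abstract above relies on Pearson product–moment correlations to compare ratings across language versions and raters. As a minimal from-scratch sketch of that computation (the rating values below are hypothetical, not data from the study):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical overall-severity ratings (0-100 visual-analog scale)
# from the two CAPE-V versions for the same seven speakers
hindi   = [12, 35, 48, 20, 66, 5, 71]
english = [10, 38, 45, 25, 60, 8, 75]
r = pearson_r(hindi, english)
```

In practice the study would also report a p-value for each r; `scipy.stats.pearsonr` returns both in one call.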


1991 ◽  
Vol 34 (5) ◽  
pp. 989-999 ◽  
Author(s):  
Stephanie Shaw ◽  
Truman E. Coggins

This study examines whether observers reliably categorize selected speech production behaviors in hearing-impaired children. A group of experienced speech-language pathologists was trained to score the elicited imitations of 5 profoundly and 5 severely hearing-impaired subjects using the Phonetic Level Evaluation (Ling, 1976). Interrater reliability was calculated using intraclass correlation coefficients. Overall, the magnitude of the coefficients was found to be considerably below what would be accepted in published behavioral research. Failure to obtain acceptably high levels of reliability suggests that the Phonetic Level Evaluation may not yet be an accurate and objective speech assessment measure for hearing-impaired children.
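The Shaw and Coggins study quantified interrater reliability with intraclass correlation coefficients. The abstract does not state which ICC form was used, so the sketch below shows one common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater), computed from its ANOVA mean squares; all rating values are hypothetical:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: one row per subject, one column per rater."""
    n = len(ratings)         # subjects
    k = len(ratings[0])      # raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

ratings = [  # hypothetical scores: one row per child, one column per rater
    [78, 75, 80],
    [52, 50, 55],
    [64, 60, 62],
    [40, 45, 43],
]
icc = icc_2_1(ratings)
```

Other ICC variants (e.g. consistency rather than absolute agreement, or averaged raters) change only the final ratio of mean squares.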


2006 ◽  
Vol 175 (4S) ◽  
pp. 45-46
Author(s):  
Jacob H. Cohen ◽  
Victor J. Schoenbach ◽  
Jay S. Kaufman ◽  
James A. Talcott ◽  
Paul A. Godley

2006 ◽  
Vol 175 (4S) ◽  
pp. 68-69
Author(s):  
Nitya Abraham ◽  
Fei Wan ◽  
Chantal Montagnet ◽  
Yu-Ning Wong ◽  
Katrina Armstrong

GeroPsych ◽  
2014 ◽  
Vol 27 (1) ◽  
pp. 23-31 ◽  
Author(s):  
Anne Kuemmel (This author contributed equally) ◽  
Julia Haberstroh (This author contributed equally) ◽  
Johannes Pantel

Communication and communication behaviors in situational contexts are essential conditions for well-being and quality of life in people with dementia; measuring methods, however, are limited. The CODEM instrument, a standardized observational assessment tool for communication behavior, was developed and evaluated on the basis of the current state of research in dementia care and social-communicative behavior. Initially, interrater reliability was examined by means of video ratings (N = 10 people with dementia). Subsequently, six caregivers in six German nursing homes observed 69 residents with dementia and used CODEM to rate their communication behavior. The interrater reliability of CODEM was excellent (mean κ = .79; intraclass correlation = .91). Statistical analysis indicated that CODEM had excellent internal consistency (Cronbach’s α = .95). CODEM also showed excellent convergent validity (Pearson’s r = .88) as well as discriminant validity (Pearson’s r = .63). Confirmatory factor analysis verified the two-factor solution of verbal/content aspects and nonverbal/relationship aspects. With regard to the severity of the disease, the content and relational aspects of communication exhibited different trends. CODEM proved to be a reliable, valid, and sensitive assessment tool for examining communication behavior in the field of dementia, and it provides researchers with a feasible examination tool for measuring the effects of psychosocial intervention studies that aim to improve communication behavior and well-being in dementia.
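The CODEM abstract reports a mean κ of .79 for the video ratings. Cohen's κ for two raters corrects raw percent agreement for the agreement expected by chance from each rater's category frequencies; a minimal sketch (the category labels and codes are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical codes to the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # chance agreement: product of the raters' marginal proportions per category
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical behavior codes assigned by two observers to six episodes
a = ["verbal", "nonverbal", "verbal", "mixed", "verbal", "nonverbal"]
b = ["verbal", "nonverbal", "mixed", "mixed", "verbal", "nonverbal"]
kappa = cohens_kappa(a, b)
```

Averaging such pairwise κ values over rater pairs yields a mean κ like the one reported.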


2002 ◽  
Vol 18 (1) ◽  
pp. 52-62 ◽  
Author(s):  
Olga F. Voskuijl ◽  
Tjarda van Sliedregt

Summary: This paper presents a meta-analysis of published job analysis interrater reliability data in order to predict the expected levels of interrater reliability within specific combinations of moderators, such as rater source, experience of the rater, and type of job descriptive information. The overall mean interrater reliability of 91 reliability coefficients reported in the literature was .59. Experienced professionals (job analysts) showed the highest reliability coefficients (.76). The method of data collection (job contact versus job description) only affected the results of experienced job analysts: for this group, higher interrater reliability coefficients were obtained for analyses based on job contact (.87) than for those based on job descriptions (.71). For other rater categories (e.g., students, organization members), neither the method of data collection nor training had a significant effect on interrater reliability. Analyses based on scales with defined levels resulted in significantly higher interrater reliability coefficients than analyses based on scales with undefined levels. Behavior and job worth dimensions were rated more reliably (.62 and .60, respectively) than attributes and tasks (.49 and .29, respectively). Furthermore, the results indicated that if nonprofessional raters are used (e.g., incumbents or students), at least two to four raters are required to obtain a reliability coefficient of .80. These findings have implications for research and practice.
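The closing estimate, that two to four nonprofessional raters are needed to reach a reliability of .80, follows from the Spearman–Brown prophecy formula applied to a single-rater coefficient. A small sketch using the meta-analysis's overall mean reliability of .59 (assuming, as the formula does, parallel raters whose ratings are averaged):

```python
import math

def spearman_brown(r_single, k):
    """Reliability of the mean of k parallel raters, given single-rater reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

def raters_needed(r_single, r_target):
    """Smallest number of raters whose averaged ratings reach r_target."""
    k = r_target * (1 - r_single) / (r_single * (1 - r_target))
    return math.ceil(k)

k = raters_needed(0.59, 0.80)       # overall mean coefficient from the meta-analysis
pooled = spearman_brown(0.59, k)    # reliability actually achieved with k raters
```

Here `raters_needed(0.59, 0.80)` gives 3, consistent with the paper's two-to-four estimate; lower single-rater coefficients (e.g., .29 for tasks) push the required number of raters up sharply.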


2012 ◽  
Vol 28 (4) ◽  
pp. 262-269 ◽  
Author(s):  
Matthias Johannes Müller ◽  
Suzan Kamcili-Kubach ◽  
Songül Strassheim ◽  
Eckhardt Koch

A 10-item instrument for the assessment of probable migration-related stressors was developed based on previous work (MIGSTR10), and its interrater reliability was tested in a chart review study. The MIGSTR10 and nine nonspecific stressors of the DSM-IV Axis IV (DSMSTR9) were put into a questionnaire format with categorical and dimensional response options. Charts of 100 inpatients (50 Turkish migrants [MIG], 50 native German patients [CON]) with affective or anxiety disorder were reviewed by three independent raters, and MIGSTR10, DSMSTR9, and Global Assessment of Functioning scale (GAF) scores were obtained. Interrater reliability indices (ICC) of items and sum scores were calculated. The prevalence of single migration-related stressors in MIG ranged from 15% to 100% (CON 0–92%). All items of the MIGSTR10 (ICC 0.58–0.92) and the DSMSTR9 (ICC 0.56–0.96) reached high to very high interrater agreement (p < .0005). The item analysis of the MIGSTR10 revealed sufficient internal consistency (Cronbach’s α = 0.68/0.69) and only one item (“family conflicts”) without substantial correlation with the remaining scale. Correlation analyses showed a significant overlap of dimensional MIGSTR10 scores (r² = .25; p < .01) and DSMSTR9 scores (r² = .09; p < .05) with GAF scores in MIG, indicating functional relevance. MIGSTR10 is considered a feasible, economical, and reliable instrument for the assessment of stressors potentially related to migration.
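The internal-consistency figures quoted in this abstract (Cronbach's α = 0.68/0.69) come from the standard item-variance formula: α compares the sum of the individual item variances with the variance of the total score. A minimal from-scratch sketch (the item scores below are hypothetical, not MIGSTR10 data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha. items: one list of scores per item,
    each list covering the same respondents in the same order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item scale scored by five respondents
items = [
    [2, 3, 4, 4, 5],
    [1, 3, 3, 4, 5],
    [2, 2, 4, 5, 5],
]
alpha = cronbach_alpha(items)
```

An item that barely correlates with the rest of the scale (like the "family conflicts" item noted above) inflates the item-variance sum relative to the total-score variance and so drags α down.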


2008 ◽  
Author(s):  
Joanna L. Goplen ◽  
E. Ashby Plant ◽  
Joyce Ehrlinger ◽  
Jonathan W. Kunstman ◽  
Corey J. Columb ◽  
...  
