AN ASSESSMENT OF THE VALIDITY AND RELIABILITY OF TWO PERCEIVED EXERTION RATING SCALES AMONG HONG KONG CHILDREN

2002 ◽  
Vol 95 (3_suppl) ◽  
pp. 1047-1062 ◽  
Author(s):  
Mee-Lee Leung ◽  
Pak-Kwong Chung ◽  
Raymond W. Leung

This study evaluated the validity and reliability of the Chinese-translated (Cantonese) versions of the Borg 6–20 Rating of Perceived Exertion (RPE) scale and the Children's Effort Rating Table (CERT) during continuous incremental cycle ergometry with 10- to 11-yr.-old Hong Kong school children. A total of 69 children were randomly assigned, with the restriction that the groups be approximately equal, to two groups using the two scales, CERT ( n = 35) and RPE ( n = 34). Both groups performed two trials of identical incremental continuous cycling exercise (Trials 1 and 2) 1 wk. apart for the reliability test. Objective measures of exercise intensity (heart rate, absolute power output, and relative oxygen consumption) and the two subjective measures of effort were obtained during the exercise. For both groups, significant Pearson correlations were found between perceived effort ratings and heart rate ( rs ≥ .69), power output ( rs ≥ .75), and oxygen consumption ( rs ≥ .69). In addition, correlations for CERT were consistently higher than those for RPE. High test-retest intraclass correlations were found for both the CERT ( R = .96) and RPE ( R = .89) groups, indicating that the scales were reliable. In conclusion, the CERT and RPE scales, when translated into Cantonese, are valid and reliable measures of exercise intensity during controlled exercise by children, and the CERT may be used more validly and reliably than the RPE scale as a measure of perceived exertion with Hong Kong children.
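As a rough illustration of the two analyses reported above, the sketch below computes a Pearson correlation between effort ratings and heart rate (concurrent validity) and a test-retest intraclass correlation across two trials. The data are simulated and the use of the pingouin package is this sketch's own choice, not something taken from the study.

```python
# Illustrative sketch (not the authors' code): concurrent validity via Pearson r
# against heart rate, and test-retest reliability via an intraclass correlation
# across two trials. Simulated data stand in for the study's measurements.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import pingouin as pg  # assumed dependency, used here because it exposes ICC directly

rng = np.random.default_rng(0)
n = 35                                             # children in one scale group
heart_rate = rng.normal(140, 20, n)                # bpm at a given workload
rpe_trial1 = 6 + 0.08 * (heart_rate - 100) + rng.normal(0, 1, n)
rpe_trial2 = rpe_trial1 + rng.normal(0, 0.8, n)    # repeat trial one week later

# Concurrent validity: effort rating vs. an objective intensity marker
r, p = pearsonr(rpe_trial1, heart_rate)
print(f"Pearson r (rating vs. heart rate) = {r:.2f}, p = {p:.3f}")

# Test-retest reliability: intraclass correlation over Trials 1 and 2
long = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "trial":   np.repeat(["t1", "t2"], n),
    "rating":  np.concatenate([rpe_trial1, rpe_trial2]),
})
icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="trial", ratings="rating")
print(icc[["Type", "ICC"]])
```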


1984 ◽  
Vol 21 (3) ◽  
pp. 325-331 ◽  
Author(s):  
Bruno Neibecker

A computer-controlled facility that operationalizes magnitude scaling (psychophysics) directly on a CRT screen is tested. The author reports experimental findings comparing magnitude scaling with rating scales as attitude measures of advertisements and erotic pictures. Validity and reliability are also examined by means of the structural equation approach. On the basis of the level of reliability and the degree of convergent/discriminant validity, magnitude scaling appears to be a valid and reliable alternative to rating scales.


2021 ◽  
pp. JNM-D-21-00022
Author(s):  
Hui Lin Cheng ◽  
Man Chung Li ◽  
Doris Yin Ping Leung

Background and Purpose: Fear of cancer recurrence (FCR) is a frequent psychological adverse effect among cancer survivors. This study aimed to test the psychometric properties of the Traditional Chinese version of the 12-item Fear of Progression Questionnaire-Short Form (FoP-Q-SF). Methods: An online survey was conducted with 311 cancer survivors in Hong Kong. The factor structure, known-group validity, and internal consistency reliability were examined. Results: Validity was good, with acceptable goodness-of-fit indexes (RMSEA = 0.073, SRMR = 0.042, CFI = 0.954) and moderate to large correlations with unmet needs (0.339–0.816). Survivors who were female, younger, had completed treatment ≤ 2 years earlier, or had undergone chemotherapy/radiotherapy scored significantly higher on the FoP-Q-SF. The Cronbach's alpha of the scale was .922. Conclusions: High validity and reliability indicate the scale's value in assessing FCR in Hong Kong cancer survivors.
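For the internal consistency figure above, the sketch below shows how Cronbach's alpha is computed for a 12-item scale. The item responses are simulated; only the formula itself reflects the statistic reported in the abstract.

```python
# Illustrative sketch (not the authors' analysis): Cronbach's alpha for a
# 12-item questionnaire such as the FoP-Q-SF. With real data, `items` would be
# an (n_respondents x 12) array of answers.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
latent_fear = rng.normal(0, 1, size=(311, 1))             # shared construct
noise = rng.normal(0, 0.6, size=(311, 12))                # item-specific error
items = np.clip(np.rint(3 + latent_fear + noise), 1, 5)   # simulated 1-5 responses
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```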


2020 ◽  
pp. 073428292097071
Author(s):  
Michal Jabůrek ◽  
Adam Ťápal ◽  
Šárka Portešová ◽  
Steven I. Pfeiffer

The factor structure, concurrent validity, and test–retest reliability of the Czech translation of the Gifted Rating Scales–School Form [GRS-S; Pfeiffer, S. I., & Jarosewich, T. (2003). GRS (Gifted Rating Scales) manual. Pearson] were evaluated. Ten alternative models were tested, four of which were found to exhibit acceptable fit and interpretability. The factor structure was comparable for both parent ( n = 277) and teacher raters ( n = 137). High correlations between the factors suggest that raters might be subject to a halo effect. Ratings made by teachers showed a closer relationship with the criteria (WJ IE II COG, CFT 20-R, and TIM3–5) than ratings made by parents. Test–retest reliability of the teacher ratings (median interval of 93 days) was high for all GRS-S subscales ( r = .84–.87).


2020 ◽  
Vol 24 (2) ◽  
pp. 103-114 ◽  
Author(s):  
Luana L. Cabral ◽  
Fábio Y. Nakamura ◽  
Joice M. F. Stefanello ◽  
Luiz C. V. Pessoa ◽  
Bruno P. C. Smirmaul ◽  
...  

2020 ◽  
pp. 102490792097163
Author(s):  
Kai Yeung Cheung ◽  
Ling Pong Leung

Background: Older people (⩾65 years) present a unique challenge in emergency department triage. Hong Kong's Hospital Authority adopts a five-level emergency department triage system with no special considerations for older people. We evaluated the validity and reliability of this triage scale for older people in a regional Hong Kong emergency department. Methods: In total, 295 cases stratified by triage category were randomly selected for review from November 2016 to January 2017. Validity was established by comparing the real emergency department patients' triage category against (1) that of an expert panel and (2) the need for a life-saving intervention. Triage notes were extracted to create case scenarios for evaluating inter- and intra-rater reliability. Emergency department nurses (n = 8) were randomly selected and grouped into <5 and ⩾5 years of emergency department experience. All nurses independently rated all 295 scenarios, blinded to clinical outcomes. Results: The percentage agreement between the real emergency department patients' triage category and the expert panel's assignment was 68.5%, with 16.3% over-triage and 15.3% under-triage. The quadratically weighted kappa for agreement with the expert panel was 0.72 (95% confidence interval: 0.53–0.91). The sensitivity, specificity, and positive likelihood ratio for the need for life-saving interventions were 75.0% (95% confidence interval: 47.6%–92.7%), 97.1% (95% confidence interval: 94.4%–98.8%), and 26.2 (95% confidence interval: 12.5–54.8), respectively. The Fleiss kappa for inter-rater reliability was 0.50 (95% confidence interval: 0.47–0.54) across the junior and senior nurse groups. Conclusion: The current triage scale demonstrates reasonable validity and reliability for use with older people. Considerations highlighting the unique characteristics of older people's emergency department presentations are recommended.
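The agreement and accuracy statistics in this abstract can be reproduced on hypothetical data as in the sketch below; the simulated categories and the cut-off treating triage categories 1–2 as "urgent" are assumptions of this sketch, not details taken from the study.

```python
# Illustrative sketch (not the study's code): quadratically weighted kappa
# against an expert panel, plus sensitivity, specificity and positive
# likelihood ratio against the need for a life-saving intervention.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
n = 295
expert = rng.integers(1, 6, n)                           # expert panel category, 1-5
nurse = np.clip(expert + rng.integers(-1, 2, n), 1, 5)   # nurse triage, within +/-1 category

kappa_qw = cohen_kappa_score(expert, nurse, weights="quadratic")
print(f"Quadratic weighted kappa = {kappa_qw:.2f}")

# Assumed cut-off: categories 1-2 count as "urgent"; the expert-derived need
# for a life-saving intervention serves as the reference standard here.
needs_lsi = expert <= 2
triaged_urgent = nurse <= 2
tp = np.sum(triaged_urgent & needs_lsi)
fn = np.sum(~triaged_urgent & needs_lsi)
fp = np.sum(triaged_urgent & ~needs_lsi)
tn = np.sum(~triaged_urgent & ~needs_lsi)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_positive = sensitivity / (1 - specificity)
print(f"Sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}, "
      f"LR+ = {lr_positive:.1f}")
```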


2021 ◽  
Vol 50 (Supplement_1) ◽  
Author(s):  
Jessica Stanhope ◽  
Philip Weinstein

Abstract Background: Pain is often measured by asking people to rate their pain intensity at its worst, on average, and at its least over the last 7 days using numeric rating scales. The three ratings are summed to produce a composite measure. The validity and reliability of this composite measure have not been examined using modern psychometric methods in any population. We examined the validity and reliability of this pain intensity measure for use with professional musicians, university music and science students, and university staff, all of whom had reported experiencing musculoskeletal symptoms in the last 7 days. Methods: Data were collected using a questionnaire survey. The validity and reliability of the composite pain measure were examined using Rasch analysis. Differential item functioning was examined for age, gender, student status, musician status, and socioeconomic status. Results: Although the data fit one of the Rasch models after several response categories were collapsed, differential item functioning was present. No solution was found that fit one of the Rasch models without differential item functioning. Conclusions: Despite the recommendation that the three numeric rating scales for pain be combined, Rasch analysis showed that this was not a valid approach for our study population. Our findings highlight the importance of using Rasch analysis to examine the utility of measures. Key messages: Rasch analysis is a useful method for investigating the validity and reliability of scales. Combining pain ratings cannot be assumed to produce a valid and reliable measure.
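As a pointer to what a Rasch analysis of such ratings involves, the sketch below evaluates category probabilities under the Rasch partial credit model, the kind of polytomous Rasch model suited to 0–10 pain ratings. The thresholds and the five-category collapse are illustrative assumptions of this sketch, not values from the study.

```python
# Illustrative sketch (not the authors' analysis): response-category
# probabilities under the Rasch partial credit model. Disordered or
# rarely-used thresholds are a typical reason to collapse response
# categories, as the study describes.
import numpy as np

def pcm_category_probs(theta: float, thresholds: np.ndarray) -> np.ndarray:
    """P(X = k | theta) for k = 0..len(thresholds) under the partial credit model."""
    # Cumulative sums of (theta - threshold); category 0 uses an empty sum of 0.
    steps = np.concatenate(([0.0], np.cumsum(theta - thresholds)))
    expd = np.exp(steps - steps.max())          # stabilised softmax
    return expd / expd.sum()

# A collapsed 5-category item (4 made-up thresholds) rather than the original 0-10 scale
thresholds = np.array([-1.5, -0.5, 0.5, 1.5])
for theta in (-2.0, 0.0, 2.0):
    probs = pcm_category_probs(theta, thresholds)
    print(f"theta={theta:+.1f}: " + " ".join(f"{p:.2f}" for p in probs))
```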

