Development of the English Listening and Reading Computerized Revised Token Test Into Cantonese: Validity, Reliability, and Sensitivity/Specificity in People With Aphasia and Healthy Controls

2020, Vol 63 (11), pp. 3743-3759
Author(s): Mehdi Bakhtiar, Min Ney Wong, Emily Ka Yin Tsui, Malcolm R. McNeil

Purpose This study reports the psychometric development of the Cantonese versions of the English Computerized Revised Token Test (CRTT) for persons with aphasia (PWAs) and healthy controls (HCs). Method The English CRTT was translated into standard Chinese for the Reading–Word Fade version (CRTT-R-WF-Cantonese) and into formal Cantonese for the Listening version (CRTT-L-Cantonese). Thirty-two adult native Cantonese-speaking PWAs and 42 HCs were tested on both versions of the CRTT-Cantonese tests and on the Cantonese Aphasia Battery to measure the construct and concurrent validity of the CRTT-Cantonese tests. The HCs were retested on both versions of the CRTT-Cantonese tests, whereas the PWAs were randomly assigned for retesting on either version to measure test–retest reliability. Results A two-way, Group × Modality, repeated-measures analysis of variance revealed significantly lower scores for the PWA group than the HC group for both reading and listening. Other comparisons were not significant. A high and significant correlation was found between the CRTT-R-WF-Cantonese and the CRTT-L-Cantonese in PWAs, and 87% of the PWAs showed nonsignificantly different performance across the CRTT-Cantonese tests based on the Revised Standardized Difference Test. The CRTT-R-WF-Cantonese provided better aphasia diagnostic sensitivity (100%) and specificity (83.30%) values than the CRTT-L-Cantonese. Pearson correlation coefficients revealed significant moderate correlations between the Cantonese Aphasia Battery scores and the CRTT-Cantonese tests in PWAs, supporting adequate concurrent validity. Intraclass correlation coefficients showed high test–retest reliability (between .82 and .96, p < .001) for both CRTT-Cantonese tests in both groups.
Conclusions Results support that the validly translated CRTT-R-WF-Cantonese and CRTT-L-Cantonese tests significantly differentiate the reading and listening comprehension of PWAs from that of HCs, and both tests provide acceptable concurrent validity and high test–retest reliability. Furthermore, favorable PWA versus HC sensitivity and specificity cutoff scores are presented for both the CRTT-Cantonese listening and reading tests.
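The sensitivity and specificity values reported above come from applying a diagnostic cutoff to test scores. As a rough sketch of that computation (all scores and the cutoff below are invented for illustration, not the CRTT-Cantonese norms):

```python
# Hypothetical illustration of cutoff-based diagnostic sensitivity/specificity.
# Scores below `cutoff` are flagged as impaired.

def sensitivity_specificity(patient_scores, control_scores, cutoff):
    """Return (sensitivity, specificity) for a below-cutoff impairment rule."""
    true_pos = sum(s < cutoff for s in patient_scores)   # impaired correctly flagged
    true_neg = sum(s >= cutoff for s in control_scores)  # healthy correctly passed
    return true_pos / len(patient_scores), true_neg / len(control_scores)

# Invented example scores on a 15-point scale
pwa = [8.1, 9.4, 10.2, 7.5, 11.0]    # persons with aphasia
hc = [13.2, 12.8, 14.0, 11.5, 13.9]  # healthy controls
sens, spec = sensitivity_specificity(pwa, hc, cutoff=12.0)
```

Sensitivity is the proportion of patients correctly flagged; specificity is the proportion of controls correctly passed.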

2021, Vol 12
Author(s): Wei Xia, William Ho Cheung Li, Tingna Liang, Yuanhui Luo, Laurie Long Kwan Ho, ...

Objectives: This study conducted a linguistic and psychometric evaluation of the Chinese Counseling Competencies Scale-Revised (CCS-R). Methods: The Chinese CCS-R was created from the original English version using a standard forward-backward translation process. The psychometric properties of the Chinese CCS-R were examined in a cohort of 208 counselors-in-training by two independent raters. Fifty-three counselors-in-training underwent another counseling performance evaluation for the test-retest. Confirmatory factor analysis (CFA) was conducted for the Chinese CCS-R, followed by analyses of internal consistency, test-retest reliability, inter-rater reliability, convergent validity, and concurrent validity. Results: The results of the CFA supported the factorial validity of the Chinese CCS-R, with adequate construct replicability. The scale had a McDonald's omega of 0.876, and intraclass correlation coefficients of 0.63 and 0.90 for test-retest reliability and inter-rater reliability, respectively. Significantly positive correlations were observed between the Chinese CCS-R score and scores on a performance checklist (Pearson's r = 0.781), indicating large convergent validity, and on knowledge of drug abuse (Pearson's r = 0.833), indicating moderate concurrent validity. Conclusion: The results support the Chinese CCS-R as a valid and reliable measure of counseling competencies. Practice implications: The CCS-R provides trainers with a reliable tool to evaluate counseling students' competencies and to facilitate discussions with trainees about their areas for growth.
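Several of these abstracts report internal-consistency coefficients. The study above uses McDonald's omega, which requires a fitted factor model; as a minimal sketch, the closely related Cronbach's alpha can be computed directly from item scores (the scores below are invented, and alpha is shown only as the simpler classical analogue of omega):

```python
# Cronbach's alpha: internal consistency from per-item scores.

def cronbach_alpha(items):
    """`items` is a list of per-item score lists (one list per item, same respondents)."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance, used consistently for items and totals
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_var_sum = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Invented ratings: 3 items scored 1-5 for 5 respondents
items = [
    [3, 4, 5, 2, 4],
    [2, 4, 5, 3, 4],
    [3, 5, 4, 2, 5],
]
alpha = cronbach_alpha(items)
```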


2002, Vol 82 (4), pp. 364-371
Author(s): Douglas P Gross, Michele C Battié

Abstract Background and Purpose. Functional capacity evaluations (FCEs) are measurement tools used in predicting readiness to return to work following injury. The interrater and test-retest reliability of determinations of maximal safe lifting during kinesiophysical FCEs were examined in a sample of people who were off work and receiving workers' compensation. Subjects. Twenty-eight subjects with low back pain who had plateaued with treatment were enrolled. Five occupational therapists, trained and experienced in kinesiophysical methods, conducted testing. Methods. A repeated-measures design was used, with raters testing subjects simultaneously, yet independently. Subjects were rated on 2 occasions, separated by 2 to 4 days. Analyses included intraclass correlation coefficients (ICCs) and 95% confidence intervals. Results. The ICC values for interrater reliability ranged from .95 to .98. Test-retest values ranged from .78 to .94. Discussion and Conclusion. Inconsistencies in subjects' performance across sessions were the greatest source of FCE measurement variability. Overall, however, test-retest reliability was good and interrater reliability was excellent.
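The interrater and test-retest ICCs above are conventionally computed from a two-way ANOVA decomposition; the ICC(2,1) form (two-way random effects, absolute agreement, single rater, in Shrout and Fleiss's notation) is a common choice. The abstract does not state which form was used, so the following pure-Python sketch on invented ratings is illustrative only:

```python
# Two-way random-effects ICC(2,1) from an ANOVA decomposition.

def icc_2_1(data):
    """`data` is a list of rows (subjects); each row holds one score per rater."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)  # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)  # between raters
    ss_error = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)               # subjects mean square
    msc = ss_cols / (k - 1)               # raters mean square
    mse = ss_error / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = [  # 4 subjects rated by 2 raters (invented scores)
    [9, 10],
    [6, 7],
    [8, 8],
    [7, 9],
]
icc = icc_2_1(ratings)
```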


2014, Vol 40 (2), pp. 200-205
Author(s): J. K. Kim, H. M. Lim

The purpose of this study was to translate and culturally adapt the Carpal Tunnel Questionnaire to produce an equivalent Korean version. A total of 53 patients completed the Korean version of the Carpal Tunnel Questionnaire pre-operatively and 3 months after open carpal tunnel release. All 53 also completed the Korean version of the Disabilities of Arm, Shoulder, and Hand questionnaire pre-operatively and 3 months post-operatively. Reliability was measured by determining the test–retest reliability and internal consistency. Test–retest reliability was assessed using intraclass correlation coefficients and paired t-tests, and internal consistency using Cronbach’s alpha coefficients. Pearson correlation analysis was carried out on the Korean version of the Carpal Tunnel Questionnaire scores and the Korean version of the Disabilities of Arm, Shoulder, and Hand scores to assess construct validity. Responsiveness was evaluated using effect sizes and standardized response means. The reliability of the Korean version of the Carpal Tunnel Questionnaire was good. The scores in the Korean version of the Disabilities of Arm, Shoulder, and Hand strongly correlated with the scores in the Korean version of the Carpal Tunnel Questionnaire. Standardized response mean and effect size were both large for the Korean version of the Carpal Tunnel Questionnaire. The study shows that the Korean version of the Carpal Tunnel Questionnaire is a reliable, valid and responsive instrument for measuring outcomes in carpal tunnel syndrome.
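The responsiveness statistics mentioned above have simple definitions: the effect size divides the mean change by the standard deviation of the baseline scores, while the standardized response mean divides it by the standard deviation of the change scores. A sketch with invented pre/post scores (sample SD assumed, since the abstract does not specify):

```python
# Effect size (ES) and standardized response mean (SRM) for responsiveness.
from statistics import mean, stdev

def effect_size(pre, post):
    """Mean change divided by the SD of the baseline scores."""
    change = mean(p2 - p1 for p1, p2 in zip(pre, post))
    return change / stdev(pre)

def standardized_response_mean(pre, post):
    """Mean change divided by the SD of the change scores."""
    changes = [p2 - p1 for p1, p2 in zip(pre, post)]
    return mean(changes) / stdev(changes)

# Invented scores on a 0-100 disability scale (higher = worse), pre- and post-surgery
pre = [62, 55, 70, 48, 65]
post = [40, 45, 50, 38, 42]
es = effect_size(pre, post)
srm = standardized_response_mean(pre, post)
```

By the usual convention, absolute values above 0.8 count as large, consistent with the "large" ES and SRM reported above.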


2021, pp. 003151252110497
Author(s): Marco Tofani, Giulia Blasetti, Luca Lucibello, Anna Berardi, Giovanni Galeoto, ...

Limitations in hand function are common among children with cerebral palsy (CP), with almost 50% presenting an arm–hand dysfunction. However, no standardized assessment tool is available in Italian for evaluating bimanual performance in this population. Our objective in this study was to evaluate the psychometric properties of an Italian translation of the ABILHAND-Kids (ABILHAND-Kids-IT) among children with CP. We examined internal consistency using Cronbach's Alpha and Omega coefficients, and we investigated test-retest reliability with intraclass correlation coefficients (ICC). We performed exploratory factor analysis (EFA) to investigate structural validity. We calculated Pearson's correlation coefficients between the ABILHAND-Kids-IT and the Manual Ability Classification System (MACS) to assess criterion validity; and, to demonstrate the score variability of the ABILHAND-Kids-IT, we used analyses of variance (ANOVAs) to compare the children with CP in this sample across their levels on the MACS. We enrolled 181 children with CP in the study. EFA confirmed a unidimensional scale. We obtained internal consistency coefficients of 0.98 on both Cronbach's Alpha and Omega, and a one-week test-retest reliability analysis revealed an ICC of .992. The ANOVA revealed significant score variability (p < 0.01), and the Pearson correlation coefficient comparing the ABILHAND-Kids-IT score with the MACS was –0.929 (p < 0.01). We conclude that the ABILHAND-Kids-IT is valid and reliable for use with Italian children with CP.


2017, Vol 04 (01), pp. e1-e6
Author(s): Mark Burghart, Jordan Craig, Jeff Radel, Jessie Huisinga

Background Balance assessment is necessary when evaluating athletes after a concussion. We investigated a mobile device application (app) for providing valid, reliable, and objective measures of static balance. Objectives We hypothesized that the mobile device app would demonstrate test–retest reliability similar to that of force platform center of pressure (COP) sway variables and that SWAY scores and force platform COP sway variables would demonstrate good correlation coefficients. Methods Twenty-six healthy adults performed balance stances on a force platform while holding a mobile device equipped with SWAY (Sway Medical LLC) to measure postural sway based on acceleration changes detected by the mobile device's accelerometer. Participants completed four series of three 10-second stances (feet together, tandem, and single leg), twice with eyes open and twice with eyes closed. Test–retest reliability was assessed using intraclass correlation coefficients (ICC). Concurrent validity of SWAY scores and COP sway variables was determined with Pearson correlation coefficients. Results Reliability of SWAY scores was comparable to force platform results for the same test condition (ICC = 0.21–0.57). Validity analyses showed moderate associations between SWAY scores and COP sway variables during tandem stance (r = –0.430 to –0.493). Lower SWAY scores, indicating instability, were associated with greater COP sway. Discussion The SWAY app is a valid and reliable tool for measuring the balance of healthy individuals in tandem stance. Further study of clinical populations is needed prior to assessment use. Conclusion The SWAY app has potential for objective clinical and sideline evaluations of concussed athletes, although continued evaluation is needed.


Neurology, 2018, Vol 91 (23 Supplement 1), pp. S4.2-S4
Author(s): Tamara McLeod, R. Curtis Bay, Hannah Gray, Richelle Marie Williams

Objective The purpose of this study was to evaluate test-retest reliability and practice effects of the Dynavision D2 in active adolescents. Background Following sport-related concussion, assessment of oculomotor function and vision is important. While clinical tests are recommended, computerized devices, such as the Dynavision D2, are emerging as viable tools for vision assessment. As with all concussion assessments, understanding test-retest reliability and susceptibility to practice effects is important for appropriate interpretation of serial assessments post-injury. Design/methods Participants included 20 female adolescents (age = 16.6 ± 1.10 years, mass = 62.0 ± 5.9 kg, height = 169.2 ± 5.1 cm). Participants completed 2 test sessions 1 week apart using the Dynavision D2. The Dynavision D2 includes a one-minute drill task in which a single light illuminates and participants hit the light as quickly as possible, completing 3 drills per trial. Participants completed 3 trials during the first session and 2 during the second. Independent variables were day (day 1, day 2) and drills (15 drills). Dependent variables were the number of hits per minute (Hits/min) and average reaction time (AvgRT). Within-day and between-day test-retest reliabilities were analyzed using two-way random effects intraclass correlation coefficients for consistency. Practice effects were analyzed with repeated measures analysis of variance and Helmert contrasts (p = 0.05). Results Moderate-to-strong reliability was demonstrated for Hits/min (within-day 1 [ICC = 0.74; 95% CI: 0.53, 0.87]; within-day 2 [ICC = 0.91; 95% CI: 0.77, 0.97]; between-days [ICC = 0.86; 95% CI: 0.65, 0.95]). Moderate-to-strong reliability was demonstrated for AvgRT (within-day 1 [ICC = 0.70; 95% CI: 0.48, 0.86]; within-day 2 [ICC = 0.92; 95% CI: 0.78, 0.97]; between-days [ICC = 0.85; 95% CI: 0.64, 0.94]). Practice effects were noted for Hits/min (p = 0.001) and AvgRT (p < 0.001).
Helmert contrasts suggested that the practice effect plateaued at drill 11 for Hits/min and drill 12 for AvgRT. Conclusions Moderate-to-excellent test-retest reliability was found for the one-minute drill task, with better reliability noted on day 2 and between days compared with day 1. This task is susceptible to practice effects, highlighting the need for familiarization or practice trials prior to documenting patient scores.


2017, Vol 6
Author(s): Karen L. Rispin, Kara Huff, Joy Wee

Background: The Aspects of Wheelchair Mobility Test (AWMT) was developed for use in a repeated measures format to provide comparative effectiveness data on mobility facilitated by different wheelchair types. It has been used in preliminary studies to compare the mobility of wheelchairs designed for low-resource areas and is intended to be simple and flexible enough to be used in low-technology settings. However, to reliably compare the impact of different types of wheelchairs on the mobility of users, a measure must first be a reliable and valid measure of mobility. Methods: This study investigated the test–retest reliability and concurrent validity of the AWMT 2.0 as a measure of mobility. For reliability testing, participants in a low-resource setting completed the tests twice in their own wheelchairs, at least one week apart. For concurrent validity, participants also completed the Wheelchair Skills Test Questionnaire (WST-Q), a related but not identical validated assessment tool. Results: Concurrent validity was indicated by a significant positive correlation (r = 0.7) between the WST-Q capacity score and the AWMT 2.0 score. Test–retest reliability was confirmed by an intraclass correlation coefficient greater than 0.7 between the two trials. Conclusion: Results support the preliminary reliability and validity of the AWMT 2.0 and its effectiveness in comparing the mobility provided by different wheelchair types. This information can be used to enable effective use of limited funds for wheelchair selection at individual and organisational scales.
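The concurrent-validity check above is a Pearson correlation between WST-Q capacity scores and AWMT 2.0 scores. A self-contained sketch on invented score pairs (not the study's data):

```python
# Pearson correlation coefficient from first principles.

def pearson_r(x, y):
    """Correlation between paired score lists `x` and `y`."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

wstq = [55, 70, 62, 80, 45, 68]  # invented WST-Q capacity scores
awmt = [50, 72, 60, 78, 52, 70]  # invented AWMT 2.0 scores
r = pearson_r(wstq, awmt)
```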


2000, Vol 9 (2), pp. 117-123
Author(s): Michael D. Ross, Elizabeth G. Fontenot

Context: The standing heel-rise test has been recommended as a means of assessing calf-muscle performance. To the authors' knowledge, the reliability of the test using intraclass correlation coefficients (ICCs) has not been reported. Objective: To determine the test-retest reliability of the standing heel-rise test. Design: Single-group repeated measures. Participants: Seventeen healthy subjects. Setting and Intervention: Each subject was asked to perform as many standing heel raises as possible during 2 testing sessions separated by 7 days. Main Outcome Measures: Reliability data for the standing heel-rise test were studied through a repeated-measures analysis of variance, ICC(2,1), and SEMs. Results: The ICC(2,1) and SEM values for the standing heel-rise test were .96 and 2.07 repetitions, respectively. Conclusions: The standing heel-rise test offers clinicians a reliable assessment of calf-muscle performance. Further study is necessary to determine the ability of the standing heel-rise test to detect functional deficiencies in patients recovering from lower leg injury or surgery.
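The SEM reported above is conventionally derived from the between-subject SD and the reliability coefficient as SEM = SD × √(1 − ICC). With the reported ICC of .96, a between-subject SD of about 10.35 repetitions (an assumed value, not stated in the abstract) reproduces the reported SEM of about 2.07 repetitions:

```python
# Standard error of measurement (SEM) from SD and reliability.
import math

def sem(sd, icc):
    """SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1 - icc)

# Assumed between-subject SD of 10.35 repetitions; ICC of .96 from the abstract
value = sem(10.35, 0.96)
```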


2021, pp. 135245852110181
Author(s): KH Lam, P van Oirschot, B den Teuling, HE Hulst, BA de Jong, ...

Background: Early detection and monitoring of cognitive dysfunction in multiple sclerosis (MS) may be enabled with smartphone-adapted tests that allow frequent measurements in the everyday environment. Objectives: The aim of this study was to determine the reliability, construct validity, and concurrent validity of a smartphone-adapted Symbol Digit Modalities Test (sSDMT). Methods: During a 28-day follow-up, 102 patients with MS and 24 healthy controls (HC) used the MS sherpa® app to perform the sSDMT every 3 days on their own smartphone. Patients performed the Brief International Cognitive Assessment for MS at baseline. Test–retest reliability (intraclass correlation coefficients, ICC), construct validity (group differences between cognitively impaired (CI) patients, cognitively preserved (CP) patients, and HC), and concurrent validity (correlation coefficients) were assessed. Results: Patients with MS and HC completed an average of 23.2 (SD = 10.0) and 18.3 (SD = 10.2) sSDMT administrations, respectively. The sSDMT demonstrated high test–retest reliability (ICCs > 0.8) with a smallest detectable change of 7 points. sSDMT scores differed between CI patients, CP patients, and HC (all ps < 0.05). The sSDMT correlated modestly with the clinical SDMT (highest r = 0.690), verbal memory (highest r = 0.516), and visuospatial memory (highest r = 0.599). Conclusion: Self-administered smartphone-adapted SDMT scores were reliable, differed between CI patients, CP patients, and HC, and demonstrated concurrent validity in assessing information processing speed.
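The smallest detectable change quoted above is conventionally computed as SDC95 = 1.96 × √2 × SEM. The abstract does not report the underlying SEM, so the value below is assumed for illustration; an SEM of about 2.53 points yields an SDC close to the reported 7 points:

```python
# 95% smallest detectable change (SDC) from the standard error of measurement.
import math

def smallest_detectable_change(sem):
    """SDC95 = 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2) * sem

sdc = smallest_detectable_change(2.53)  # assumed SEM of 2.53 points
```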


2021, Vol 11 (1)
Author(s): Yanzhi Bi, Xin Hou, Jiahui Zhong, Li Hu

Abstract Pain perception is a subjective experience and highly variable across time. Brain responses evoked by nociceptive stimuli are highly associated with pain perception and also show considerable variability. To date, the test–retest reliability of laser-evoked pain perception and its associated brain responses across sessions remains unclear. Here, an experiment with a within-subject repeated-measures design was performed in 22 healthy volunteers. Radiant-heat laser stimuli were delivered to subjects' left-hand dorsum in two sessions separated by 1–5 days. We observed that laser-evoked pain perception declined significantly across sessions, coupled with decreased brain responses in the bilateral primary somatosensory cortex (S1), right primary motor cortex, supplementary motor area, and middle cingulate cortex. Intraclass correlation coefficients between the two sessions showed "fair" to "moderate" test–retest reliability for pain perception and brain responses. Additionally, we observed lower resting-state brain activity in the right S1 and lower resting-state functional connectivity between the right S1 and the dorsolateral prefrontal cortex in the second session than in the first session. Altogether, possibly influenced by changes in baseline mental state, laser-evoked pain perception and brain responses showed considerable across-session variability. This phenomenon should be considered when designing experiments for laboratory studies and when evaluating pain abnormalities in clinical practice.

