Comparative study of psychometric properties of three assessment tools for degenerative rotator cuff disease

2018 ◽  
Vol 33 (2) ◽  
pp. 277-284 ◽  
Author(s):  
Etienne James-Belin ◽  
Anne Laure Roy ◽  
Sandra Lasbleiz ◽  
Agnès Ostertag ◽  
Alain Yelnik ◽  
...  

Objective: To compare the psychometric properties of the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire, the Shoulder Pain and Disability Index (SPADI) and the Constant–Murley scale in patients with degenerative rotator cuff disease (DRCD). Design: Longitudinal cohort. Setting: One French university hospital. Methods: The scales were administered twice at a one-week interval before physiotherapy and once again two months later, after physiotherapy. Perceived improvement after treatment was self-assessed on a numerical scale (0–4). The test–retest reliability of the DASH, SPADI and Constant–Murley scales was assessed before treatment with the intraclass correlation coefficient (ICC). Responsiveness was assessed with the paired t-test (P < 0.05) and the standardized mean difference (SMD). The correlation between the percentage change in scale scores and the self-assessed improvement score after treatment was measured with the Spearman coefficient. Results: Fifty-three patients were included; only twenty-six were available for the reliability analysis. Test–retest reliability was very good for the DASH (ICC = 0.97), SPADI (0.95) and Constant–Murley (0.92). Scores improved after treatment on all three scales (P < 0.05). The SMD was moderate for the DASH (0.56) and SPADI (0.56) and small for the Constant–Murley (0.44). The correlation between the percentage change in scores and the self-assessed improvement score after treatment was high for the SPADI (0.59, P < 0.0001), moderate for the DASH (0.42, P < 0.01) and not significant for the Constant–Murley. Conclusion: The test–retest reliability of the DASH, SPADI and Constant–Murley scales is very good in patients with DRCD. The highest responsiveness was achieved with the SPADI.
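For readers who want to see how such reliability and responsiveness statistics are obtained, the sketch below computes a two-way random-effects, absolute-agreement ICC and a standardized mean difference on synthetic data. It is a minimal illustration, not the authors' analysis code; the variable names, the synthetic scores, and the choice of baseline SD as the SMD denominator are assumptions.

```python
# Minimal sketch (not the study's code): test-retest ICC(2,1) and an SMD
# for responsiveness, computed on synthetic (hypothetical) scores.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-measure ICC.

    `scores` has shape (n_subjects, k_sessions), e.g. test and retest columns.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def smd(before: np.ndarray, after: np.ndarray) -> float:
    """Standardized mean difference: mean change over baseline SD (one common choice)."""
    return (after.mean() - before.mean()) / before.std(ddof=1)

# Hypothetical example: 26 patients rated twice one week apart, then after treatment
rng = np.random.default_rng(0)
true_score = rng.normal(50, 15, size=26)
test = true_score + rng.normal(0, 3, size=26)
retest = true_score + rng.normal(0, 3, size=26)
post = test + rng.normal(8, 10, size=26)   # hypothetical post-treatment scores

print(f"ICC(2,1) = {icc_2_1(np.column_stack([test, retest])):.2f}")
print(f"SMD = {smd(test, post):.2f}")
```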

2016 ◽  
Vol 23 (3) ◽  
pp. 239 ◽  
Author(s):  
D. Rodin ◽  
B. Banihashemi ◽  
L. Wang ◽  
A. Lau ◽  
S. Harris ◽  
...  

Purpose: We evaluated the feasibility, reliability, and validity of the Brain Metastases Symptom Checklist (BMSC), a novel self-report measure of common symptoms experienced by patients with brain metastases. Methods: Patients with first-presentation symptomatic brain metastases (n = 137) referred for whole-brain radiotherapy (WBRT) completed the BMSC at time points before and after treatment. Their caregivers (n = 48) provided proxy ratings twice on the day of consultation to assess reliability, and at week 4 after WBRT to assess responsiveness to change. Correlations with 4 other validated assessment tools were evaluated. Results: The symptoms reported on the BMSC were largely mild to moderate, with tiredness (71%) and difficulties with balance (61%) reported most commonly at baseline. Test–retest reliability for individual symptoms had a median intraclass correlation of 0.59 (range: 0.23–0.85). Caregiver proxy and patient responses had a median intraclass correlation of 0.52. Correlation of absolute scores on the BMSC and other symptom assessment tools was low, but consistency in the direction of symptom change was observed. At week 4, change in symptoms was variable, with improvements in weight gain and sleep of 42% and 41% respectively, and worsening of tiredness and drowsiness of 62% and 59% respectively. Conclusions: The BMSC captures a wide range of symptoms experienced by patients with brain metastases, and it is sensitive to change. It demonstrated adequate test–retest reliability and face validity in terms of its responsiveness to change. Future research is needed to determine whether modifications to the BMSC itself or correlation with more symptom-specific measures will enhance validity.


BMJ Open ◽  
2018 ◽  
Vol 8 (10) ◽  
pp. e021734 ◽  
Author(s):  
Alison Griffiths ◽  
Rachel Toovey ◽  
Prue E Morgan ◽  
Alicia J Spittle

Objective: Gross motor assessment tools have a critical role in identifying, diagnosing and evaluating motor difficulties in childhood. The objective of this review was to systematically evaluate the psychometric properties and clinical utility of gross motor assessment tools for children aged 2–12 years. Method: A systematic search of MEDLINE, Embase, CINAHL and AMED was performed between May and July 2017. Methodological quality was assessed with the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist, and an outcome measures rating form was used to evaluate the reliability, validity and clinical utility of the assessment tools. Results: Seven assessment tools from 37 studies/manuals met the inclusion criteria: Bayley Scale of Infant and Toddler Development-III (Bayley-III), Bruininks-Oseretsky Test of Motor Proficiency-2 (BOT-2), Movement Assessment Battery for Children-2 (MABC-2), McCarron Assessment of Neuromuscular Development (MAND), Neurological Sensory Motor Developmental Assessment (NSMDA), Peabody Developmental Motor Scales-2 (PDMS-2) and Test of Gross Motor Development-2 (TGMD-2). Methodological quality varied from poor to excellent. Validity and internal consistency varied from fair to excellent (α = 0.5–0.99). The Bayley-III, NSMDA and MABC-2 have evidence of predictive validity. Test–retest reliability is excellent in the BOT-2 (intraclass correlation coefficient (ICC) = 0.80–0.99), PDMS-2 (ICC = 0.97), MABC-2 (ICC = 0.83–0.96) and TGMD-2 (ICC = 0.81–0.92). The TGMD-2 has the highest inter-rater (ICC = 0.88–0.93) and intrarater reliability (ICC = 0.92–0.99). Conclusions: The majority of gross motor assessments for children have good to excellent validity. Test–retest reliability is highest in the BOT-2, MABC-2, PDMS-2 and TGMD-2. The Bayley-III has the best predictive validity at 2 years of age for later motor outcome. None of the assessment tools demonstrates good evaluative validity. Further research on evaluative gross motor assessment tools is urgently needed.


2020 ◽  
Author(s):  
Julia Velten ◽  
Gerrit Hirschfeld ◽  
Milena Meyers ◽  
Jürgen Margraf

Background: The Sexual Interest and Desire Inventory Female (SIDI-F) is a clinician-administered scale that allows for a comprehensive assessment of symptoms related to Hypoactive Sexual Desire Dysfunction (HSDD). As self-report questionnaires may facilitate less socially desirable responding, and as time and resources are scarce in many clinical and research settings, a self-report version (SIDI-F-SR) was developed. Aim: To investigate the agreement between the SIDI-F and the SIDI-F-SR and to assess the psychometric properties of the SIDI-F-SR. Methods: A total of 170 women (mean age = 36.61 years, SD = 10.61, range = 20–69) with HSDD provided data on the SIDI-F, administered by a clinical psychologist via telephone, and on the SIDI-F-SR, delivered as an Internet-based questionnaire. A subset of 19 women answered the SIDI-F-SR twice over a period of 14 weeks. Outcomes: Intraclass correlation and predictors of absolute agreement between the SIDI-F and SIDI-F-SR were examined, as well as the internal consistency, test–retest reliability and criterion-related validity of the SIDI-F-SR. Results: There was high agreement between the SIDI-F and SIDI-F-SR (ICC = .86). On average, women scored about one point higher on the self-report than on the clinician-administered scale. Agreement was higher in younger women and in those with severe symptoms. Internal consistency of the SIDI-F-SR was acceptable (α = .76) and comparable to that of the SIDI-F (α = .74). When corrections for restriction of range were applied, the internal consistency of the SIDI-F-SR increased to .91. Test–retest reliability was good (r = .74). Criterion-related validity was low but comparable between the SIDI-F and SIDI-F-SR.


BMJ Open ◽  
2020 ◽  
Vol 10 (7) ◽  
pp. e037129
Author(s):  
Emma Säfström ◽  
Lena Nasstrom ◽  
Maria Liljeroos ◽  
Lena Nordgren ◽  
Kristofer Årestedt ◽  
...  

Objective: Even though continuity is essential after discharge, there is a lack of reliable questionnaires to measure and assess patients' perceptions of continuity of care. The Patient Continuity of Care Questionnaire (PCCQ) addresses the period before and after discharge from hospital. However, previous studies show that the factor structure needs to be confirmed and validated in larger samples, and the aim of this study was to evaluate the psychometric properties of the PCCQ with a focus on factor structure, internal consistency and stability. Design: A psychometric evaluation study. The questionnaire was translated into Swedish using a forward–backward technique, culturally adapted through cognitive interviews (n=12) and reviewed by researchers (n=8). Setting: Data were collected in four healthcare settings in two Swedish counties. Participants: A consecutive sampling procedure included 725 patients discharged after hospitalisation due to angina, acute myocardial infarction, heart failure or atrial fibrillation. Measurement: To evaluate the factor structure, confirmatory factor analyses based on polychoric correlations were performed (n=721). Internal consistency was evaluated by ordinal alpha. Test–retest reliability (n=289) was assessed with the intraclass correlation coefficient (ICC). Results: The original six-factor structure was overall confirmed, but minor refinements were required to reach satisfactory model fit. The standardised factor loadings ranged between 0.68 and 0.94, and ordinal alpha ranged between 0.82 and 0.95. All subscales demonstrated satisfactory test–retest reliability (ICC=0.76–0.94). Conclusion: The revised version of the PCCQ showed sound psychometric properties and is ready to be used to measure perceptions of continuity of care. High ordinal alpha in some subscales indicates that a shorter version of the questionnaire can be developed.
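As a point of reference for the ordinal alpha reported above: one common operationalisation (alpha computed from the polychoric inter-item correlation matrix) reduces, for a subscale of k items with mean off-diagonal polychoric correlation, to the standardized-alpha form below. This is a general formula offered for orientation, not a claim about the exact estimator used in this study.

\[
\alpha_{\mathrm{ordinal}} \;=\; \frac{k\,\bar{r}_{\mathrm{poly}}}{1 + (k-1)\,\bar{r}_{\mathrm{poly}}}
\]

where k is the number of items in the subscale and \(\bar{r}_{\mathrm{poly}}\) is the mean off-diagonal polychoric correlation between items.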


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Stefanie Bühn ◽  
Peggy Ober ◽  
Tim Mathes ◽  
Uta Wegewitz ◽  
Anja Jacobs ◽  
...  

Background: Systematic reviews (SRs) can build the groundwork for evidence-based health care decision-making. Sound methodological quality of SRs is therefore crucial. AMSTAR (A Measurement Tool to Assess Systematic Reviews) is a widely used tool developed to assess the methodological quality of SRs of randomized controlled trials (RCTs). Research shows that AMSTAR seems to be valid and reliable in terms of interrater reliability (IRR), but its test–retest reliability (TRR) has never been investigated. In our study we investigated the TRR of AMSTAR to evaluate the importance of measuring it and to contribute to the discussion of the measurement properties of AMSTAR and other quality assessment tools. Methods: Seven raters at three institutions independently assessed the methodological quality of SRs in the field of occupational health with AMSTAR. Approximately two years elapsed between the first and second ratings. Answers were dichotomized, and we calculated the TRR for all raters and AMSTAR items using Gwet's AC1 coefficient. To investigate the impact of variation in the ratings over time, we obtained summary scores for each review. Results: AMSTAR item 4 (Was the status of publication used as an inclusion criterion?) had the lowest median TRR, 0.53 (moderate agreement). All reviewers agreed perfectly on AMSTAR item 1 (Gwet's AC1 = 1). The median TRR of the single raters varied between 0.69 (substantial agreement) and 0.89 (almost perfect agreement). Variation of two or more points in yes-scored AMSTAR items was observed in 65% (73/112) of all assessments. Conclusions: The high variation between the first and second AMSTAR ratings suggests that the TRR should be considered when evaluating the psychometric properties of AMSTAR. However, more evidence is needed on this neglected aspect of measurement properties. Our results may initiate discussion of the importance of considering the TRR of assessment tools. Further examination of the TRR of AMSTAR, as well as of other recently established rating tools such as AMSTAR 2 and ROBIS (Risk Of Bias In Systematic reviews), would be useful.
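To make the agreement statistic concrete, the sketch below computes Gwet's AC1 for two dichotomous rating occasions (for example, an AMSTAR item scored yes/no at test and at retest). It is a minimal illustration on hypothetical ratings, not the authors' analysis code, and it covers only the two-rater, two-category case.

```python
# Minimal sketch (not the study's code): Gwet's AC1 for two dichotomous
# rating occasions, e.g. an AMSTAR item scored yes/no at test and retest.
import numpy as np

def gwet_ac1(r1: np.ndarray, r2: np.ndarray) -> float:
    """AC1 = (p_a - p_e) / (1 - p_e) for two raters/occasions, two categories.

    p_a is observed agreement; p_e = 2*pi*(1-pi) is chance agreement, where
    pi is the overall proportion of 'yes' ratings across both occasions.
    """
    r1, r2 = np.asarray(r1, dtype=float), np.asarray(r2, dtype=float)
    p_a = np.mean(r1 == r2)
    pi = (r1.mean() + r2.mean()) / 2
    p_e = 2 * pi * (1 - pi)
    return (p_a - p_e) / (1 - p_e)

# Hypothetical item ratings for 10 reviews (1 = yes, 0 = no)
test   = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
retest = np.array([1, 1, 0, 1, 1, 1, 1, 0, 1, 0])
print(f"Gwet's AC1 = {gwet_ac1(test, retest):.2f}")
```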


2018 ◽  
Author(s):  
Sarah Patrick ◽  
Peter Connick

Background: Depression affects approximately 25% of people with MS (pwMS) at any given time. It is, however, under-recognised in clinical practice, in part owing to a lack of uptake of brief assessment tools and uncertainty about their psychometric properties. The 9-item Patient Health Questionnaire (PHQ-9) is an attractive candidate for this role. Objective: To synthesise published findings on the psychometric properties of the PHQ-9 when applied to people with multiple sclerosis (pwMS). Data sources: PubMed, Medline and ISI Web of Science databases, supplemented by hand-searching of references from all eligible sources. Study eligibility criteria: Primary literature written in English and published following peer review, with a primary aim of evaluating the performance of the PHQ-9 in pwMS. Outcome measures: Psychometric performance with respect to appropriateness, reliability, validity, responsiveness, precision, interpretability, acceptability, and feasibility. Results: Seven relevant studies were identified; these were of high quality and included 5080 participants from all MS disease-course groups. Strong evidence was found supporting the validity of the PHQ-9 as a unidimensional measure of depression. Used as a screening tool for major depressive disorder (MDD) with a cut-point of 11, sensitivity was 95% and specificity 88.3% (PPV 51.4%, NPV 48.6%). Alternative scoring systems that may address the overlap between somatic features of depression and features of MS per se are being developed, although their utility remains unclear. However, data on reliability were limited, and no specific evidence was available on test–retest reliability, responsiveness, acceptability, or feasibility. Conclusions: The PHQ-9 represents a suitable tool to screen for MDD in pwMS. However, its use as a diagnostic tool cannot currently be recommended, and its potential value for monitoring depressive symptoms cannot be established without further evidence on test–retest reliability, responsiveness, acceptability, and feasibility. PROSPERO register ID: CRD42017067814
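For context on how screening accuracy figures relate to each other, the predictive values of a screening cut-point follow from sensitivity (Se), specificity (Sp) and the prevalence of MDD in the tested sample (p). The standard definitions are given below for reference; they are not a re-derivation of the numbers reported in this review.

\[
\mathrm{PPV} = \frac{\mathrm{Se}\,p}{\mathrm{Se}\,p + (1-\mathrm{Sp})(1-p)},\qquad
\mathrm{NPV} = \frac{\mathrm{Sp}\,(1-p)}{\mathrm{Sp}\,(1-p) + (1-\mathrm{Se})\,p}
\]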


2012 ◽  
Vol 201 (1) ◽  
pp. 65-70 ◽  
Author(s):  
Helen Killaspy ◽  
Sarah White ◽  
Tatiana L. Taylor ◽  
Michael King

Background: The Mental Health Recovery Star (MHRS) is a popular outcome measure rated collaboratively by staff and service users, but its psychometric properties are unknown. Aims: To assess the MHRS's acceptability, reliability and convergent validity. Method: A total of 172 service users and 120 staff from in-patient and community services participated. Interrater reliability of staff-only ratings and test–retest reliability of staff-only and collaborative ratings were assessed using intraclass correlation coefficients (ICCs). Convergent validity between MHRS ratings and standardised measures of social functioning and recovery was assessed using Pearson correlation. The influence of collaboration on ratings was assessed using descriptive statistics and ICCs. Results: The MHRS was relatively quick and easy to use and had good test–retest reliability, but interrater reliability was inadequate. Collaborative ratings were slightly higher than staff-only ratings. Convergent validity suggests it assesses social function more than recovery. Conclusions: The MHRS cannot be recommended as a routine clinical outcome tool but may facilitate collaborative care planning.


Author(s):  
Negar Nikbakht ◽  
Mehdi Rezaee ◽  
Seyed Mehdi Tabatabaee ◽  
Gholam-Ali Shahidi ◽  
...  

Introduction: There is a need for appropriate information about the ability of patients with Parkinson's disease (PD) to perform cognitive instrumental activities of daily living (IADL). The purpose of the present study was to assess the psychometric properties of the Persian version of the Penn Parkinson's Daily Activities Questionnaire-15 (PDAQ-15). Methods: A total of 165 knowledgeable informants of PD patients completed the PDAQ-15. The Clinical Dementia Rating Scale, Hoehn and Yahr staging, Hospital Anxiety and Depression Scale (HADS) and Lawton IADL scale were included in the study. Internal consistency and test-retest reliability were evaluated by Cronbach's alpha coefficient and the intraclass correlation coefficient (ICC), respectively. To examine the dimensionality of the questionnaire, exploratory factor analysis was used. Construct validity was assessed using the Spearman rank correlation test. To assess discriminative validity, PDAQ-15 scores were compared across cognitive stages. Results: The PDAQ-15 showed strong internal consistency (Cronbach's α = 0.99) and test-retest reliability (ICC = 0.99). Only one dimension was identified for the PDAQ-15 in the factor analysis. There were strong correlations between the PDAQ-15 and both the depression domain of the HADS and the Lawton IADL scale (|rs| = 0.71–0.95). The correlation of the PDAQ-15 with the anxiety domain of the HADS was moderate (rs = 0.66). Discriminative validity analysis showed that the PDAQ-15 has significant power to discriminate between PD patients across cognitive stages. Conclusion: These results suggest that the PDAQ-15 is a valid and reliable PD-specific instrument that can be useful in clinical and research settings.
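For reference, the internal-consistency coefficient reported here and in several of the studies above is Cronbach's alpha, whose textbook form for a scale of k items is given below; this is the standard definition, not a claim about any study-specific adjustment.

\[
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item i and \(\sigma^{2}_{X}\) is the variance of the total score \(X = \sum_i Y_i\).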


2015 ◽  
Vol 95 (9) ◽  
pp. 1274-1286 ◽  
Author(s):  
Deborah Antcliff ◽  
Malcolm Campbell ◽  
Steve Woby ◽  
Philip Keeley

Background Therapists frequently advise the use of activity pacing as a coping strategy to manage long-term conditions (eg, chronic low back pain, chronic widespread pain, chronic fatigue syndrome/myalgic encephalomyelitis). However, activity pacing has not been clearly operationalized, and there is a paucity of empirical evidence regarding pacing. This paucity of evidence may be partly due to the absence of a widely used pacing scale. To address the limitations of existing pacing scales, the 38-item Activity Pacing Questionnaire (APQ-38) was previously developed using the Delphi technique. Objective The aims of this study were: (1) to explore the psychometric properties of the APQ-38, (2) to identify underlying pacing themes, and (3) to assess the reliability and validity of the scale. Design This was a cross-sectional questionnaire study. Methods Three hundred eleven adult patients with chronic pain or fatigue participated, of whom 69 completed the test-retest analysis. Data obtained for the APQ-38 were analyzed using exploratory factor analysis, internal and test-retest reliability, and validity against 2 existing pacing subscales and validated measures of pain, fatigue, anxiety, depression, avoidance, and mental and physical function. Results Following factor analysis, 12 items were removed from the APQ-38, and 5 themes of pacing were identified in the resulting 26-item Activity Pacing Questionnaire (APQ-26): activity adjustment, activity consistency, activity progression, activity planning, and activity acceptance. These themes demonstrated satisfactory internal consistency (Cronbach α=.72–.92), test-retest reliability (intraclass correlation coefficient=.50–.78, P≤.001), and construct validity. Activity adjustment, activity progression, and activity acceptance correlated with worsened symptoms; activity consistency correlated with improved symptoms; and activity planning correlated with both improved and worsened symptoms. Limitations Data were collected from self-report questionnaires only. Conclusions Developed to be widely used across a heterogeneous group of patients with chronic pain or fatigue, the APQ-26 is multifaceted and demonstrates reliability and validity. Further study will explore the effects of pacing on patients' symptoms to guide therapists toward advising pacing themes with empirical benefits.


SAGE Open ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 215824402092287
Author(s):  
Bangyi Yan ◽  
Shiguang Ni ◽  
Xi Wang ◽  
Jin Liu ◽  
Qianjing Zhang ◽  
...  

The English version of the Independent Television Commission-Sense of Presence Inventory (ITC-SOPI), which was developed in 2001 to measure how involved or present participants are when experiencing different media, has substantial psychometric evidence. This study aimed to translate the ITC-SOPI and validate it in interactive virtual environments among a Chinese population. We used the forward-backward translation procedure, and an expert panel reviewed the translated ITC-SOPI until the Chinese version was finalized. A total of 210 participants (133 males and 77 females), with a mean age of 23.05 years (SD = 3.56, range = 17–47), completed the Chinese ITC-SOPI. The following psychometric properties were examined: factor structure, internal consistency, test–retest reliability, and convergent validity. Confirmatory factor analysis (CFA) showed a good fit (χ²/df = 1.70, Tucker–Lewis index [TLI] = 0.91, comparative fit index [CFI] = 0.92, root mean square error of approximation [RMSEA] = 0.058) of the four-factor model (spatial presence, engagement, ecological validity, and negative effects). For each factor, the Chinese ITC-SOPI had high internal consistency (Cronbach's α ranging from 0.75 to 0.87) and test–retest reliability (intraclass correlation coefficient ranging from 0.82 to 0.91). Significant correlations were identified between all factors and the Interpersonal Reactivity Index-C (IRI-C) and the Generalized Anxiety Disorder-7 (GAD-7). The Chinese ITC-SOPI had good psychometric properties, suggesting that it is a reliable and valid tool for evaluating media users' sense of presence in a Chinese-speaking context.
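As background for the CFA fit indices quoted above, their usual definitions are given below in terms of the fitted model M, the baseline (independence) model B, and the sample size N. Some software uses N rather than N − 1 in the RMSEA denominator, so treat this as a reference sketch rather than the exact estimator used in this study.

\[
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^{2}_{M} - df_{M},\,0)}{df_{M}\,(N-1)}},\qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^{2}_{M} - df_{M},\,0)}{\max(\chi^{2}_{B} - df_{B},\;\chi^{2}_{M} - df_{M},\,0)},\qquad
\mathrm{TLI} = \frac{\chi^{2}_{B}/df_{B} - \chi^{2}_{M}/df_{M}}{\chi^{2}_{B}/df_{B} - 1}
\]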

