Developing a Self-Report Tool to Measure Functional Limitation in Children Aged 7-12 Years with Physical Dysfunction

2021 ◽  
Vol 8 (8) ◽  
pp. 377-384
Author(s):  
Gita Jyoti Ojha ◽  
Ruchi Nagar Buckshee

Objective: To develop a self-report questionnaire to measure functional limitation in children aged 7-12 years with physical dysfunction. Study design: Methodological research design. Method: The study was conducted in phases: drafting of the questionnaire, content validation, pilot testing, revision of the questionnaire, field testing and test-retest reliability. A total of 66 items were generated through a review of the literature and interviews with twenty-five children, their parents and health-care professionals. Qualitative and quantitative content validation through expert review and item reduction resulted in a 59-item questionnaire, which was pilot tested on a sample of 10 children with physical dysfunction. The questionnaire was then revised with further inputs, yielding a final 60-item questionnaire in two versions (a child version and a caregiver version), in both Hindi and English. Results: Content validity was established for the Children's Functional Limitation Scale through qualitative expert review. The questionnaire demonstrated high internal consistency (Cronbach's alpha = 0.91), moderate agreement between parents and children (weighted kappa = 0.718) and good test-retest reliability (weighted kappa = 0.88). Conclusion: The "Children's Functional Limitation Scale" is a valid and reliable tool for documenting difficulties perceived by children with physical dysfunction. The study also demonstrates the ability of children to reliably report their limitations. Keywords: Functional limitation, Activities of daily living, Self-report, Questionnaire, Children with physical dysfunctions
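The internal-consistency figure reported above (Cronbach's alpha = 0.91) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal stdlib-only sketch on invented item responses (the function name and data are illustrative, not from the study):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of questionnaire items.

    `items` is a list of equal-length lists: items[i][j] is the score
    given by respondent j on item i.
    """
    k = len(items)
    # Sum of the variances of the individual items.
    item_var_sum = sum(pvariance(col) for col in items)
    # Variance of each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

With three perfectly parallel items (every respondent scores identically on each), the alpha is exactly 1.0; real scales like the one above land below that.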

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Adam Polnay ◽  
Helen Walker ◽  
Christopher Gallacher

Purpose: Relational dynamics between patients and staff in forensic settings can be complicated and demanding for both sides. Reflective practice groups (RPGs) bring clinicians together to reflect on these dynamics. To date, evaluation of RPGs has lacked quantitative focus and a suitable quantitative tool; therefore, a self-report tool was designed. This paper aims to pilot The Relational Aspects of CarE (TRACE) scale with clinicians in a high-secure hospital and investigate its psychometric properties. Design/methodology/approach: A multi-professional sample of 80 clinicians was recruited, completing the TRACE and the Attitudes to Personality Disorder Questionnaire (APDQ). Exploratory factor analysis (EFA) determined the factor structure and internal consistency of the TRACE. A subset was selected to measure test-retest reliability. The TRACE was cross-validated against the APDQ. Findings: EFA found five factors underlying the 20 TRACE items: "awareness of common responses," "discussing and normalising feelings," "utilising feelings," "wish to care" and "awareness of complicated affects." This factor structure is complex, but items clustered logically around the key areas originally used to generate them. Internal consistency (α = 0.66, 95% confidence interval (CI) = 0.55-0.76) was of borderline acceptability. The TRACE demonstrated good test-retest reliability (intra-class correlation = 0.94, 95% CI = 0.78-0.98) and face validity, and indicated a slight negative correlation with the APDQ. A larger data set is needed to substantiate these preliminary findings. Practical implications: Early indications suggested the TRACE was valid and reliable, and suitable for measuring the effectiveness of reflective practice. Originality/value: The TRACE is a distinctive measure that fills a methodological gap in the literature.
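The test-retest figure reported here is an intra-class correlation. One common variant, ICC(2,1) from the Shrout-Fleiss two-way ANOVA decomposition, can be sketched in plain Python (function name and data are illustrative; the paper does not state which ICC form was used):

```python
def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rating.

    `scores` is a list of per-subject lists, one score per session,
    e.g. [[time1, time2], ...] for test-retest data.
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    # Two-way ANOVA sums of squares: subjects, sessions, residual.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)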


2019 ◽  
Vol 23 (4) ◽  
pp. 388-390 ◽  
Author(s):  
Aditi Senthilnathan ◽  
Sree S. Kolli ◽  
Leah A. Cardwell ◽  
Irma Richardson ◽  
Steven R. Feldman ◽  
...  

Background: Hidradenitis suppurativa (HS) is a debilitating dermatologic condition presenting with recurrent abscesses. While there are multiple scales to determine HS severity, none are designed for self-administration. A validated severity self-assessment tool may facilitate survey research and improve communication by allowing patients to objectively report their HS severity between clinic visits. Objectives: The purpose of this study was to assess a self-administered HS measure. Methods: An HS self-assessment tool (HSSA) with 10 photographs of different Hurley stages was developed. The tool was administered to patients diagnosed with HS who visited the Wake Forest Baptist Health dermatology clinic over a span of 2 months. Physician-administered Hurley stage was recorded to determine criterion validity. To assess test-retest reliability of the measure, patients completed the HSSA again at least 30 minutes after the first completion. Results: Twenty-four patients completed the measure, and 20 of these patients completed it twice. Agreement between physician-determined Hurley stage and self-determined Hurley stage was 66.7% with a weighted kappa of 0.57 (95% confidence interval [CI]: 0.30-0.84). The weighted kappa for agreement between patients’ initial and second completion of the HSSA was 0.81 (95% CI: 0.64-0.99). Conclusions: The self-administered measure provides moderate agreement with physician-determined Hurley stage and good test-retest reliability.
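The agreement statistics above are weighted kappas, which discount disagreements by the distance between ordinal categories (here, Hurley stages). A stdlib-only sketch with linear weights (function name and data are illustrative; the study's exact weighting scheme is not stated in the abstract):

```python
def weighted_kappa(rater_a, rater_b, categories, weights="linear"):
    """Weighted Cohen's kappa for two raters over ordered categories."""
    n = len(rater_a)
    idx = {c: i for i, c in enumerate(categories)}
    k = len(categories)
    # Observed joint proportions and marginal proportions.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):  # disagreement weight by category distance
        d = abs(i - j)
        return d if weights == "linear" else d * d

    observed = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * row[i] * col[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected
```

Perfect agreement gives kappa = 1.0 and systematic reversal gives -1.0; the 0.57 and 0.81 reported above fall in the conventional "moderate" and "good/almost perfect" bands.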


2020 ◽  
Author(s):  
Alexandra C Pike ◽  
Jade Serfaty ◽  
Oliver Joe Robinson

Catastrophising is a cognitive process that can be defined as predicting the worst possible outcome. It has been shown to be related to psychiatric diagnoses such as depression and anxiety, yet there are no self-report questionnaires specifically measuring it outside the context of pain research. Here, we therefore develop a novel, comprehensive self-report measure of general catastrophising. We performed five online studies (total n = 734), in which we created and refined a Catastrophising Questionnaire, and used a factor analytic approach to understand its underlying structure. We also assessed convergent and discriminant validity, and analysed test-retest reliability. Furthermore, we tested the ability of Catastrophising Questionnaire scores to predict relevant clinical variables over and above other questionnaires. Finally, we also developed a four-item short version of this questionnaire. We found that our questionnaire is best fit by a single underlying factor, and shows convergent and discriminant validity. Exploratory factor analyses indicated that catastrophising is independent from other related constructs, including anxiety and worry. Moreover, we demonstrate incremental validity for this questionnaire in predicting diagnostic and medication status. Finally, we demonstrate that our Catastrophising Questionnaire has good test-retest reliability (intra-class correlation coefficient = 0.77, p < 0.001). Critically, we can now, for the first time, obtain detailed self-report data on catastrophising.


2021 ◽  
Vol 8 (1) ◽  
pp. 201362
Author(s):  
Alexandra C. Pike ◽  
Jade R. Serfaty ◽  
Oliver J. Robinson

Catastrophizing is a cognitive process that can be defined as predicting the worst possible outcome. It has been shown to be related to psychiatric diagnoses such as depression and anxiety, yet there are no self-report questionnaires specifically measuring it outside the context of pain research. Here, we therefore develop a novel, comprehensive self-report measure of general catastrophizing. We performed five online studies (total n = 734), in which we created and refined a Catastrophizing Questionnaire, and used a factor analytic approach to understand its underlying structure. We also assessed convergent and discriminant validity, and analysed test–retest reliability. Furthermore, we tested the ability of Catastrophizing Questionnaire scores to predict relevant clinical variables over and above other questionnaires. Finally, we also developed a four-item short version of this questionnaire. We found that our questionnaire is best fit by a single underlying factor, and shows convergent and discriminant validity. Exploratory factor analyses indicated that catastrophizing is independent from other related constructs, including anxiety and worry. Moreover, we demonstrate incremental validity for this questionnaire in predicting diagnostic and medication status. Finally, we demonstrate that our Catastrophizing Questionnaire has good test–retest reliability (intraclass correlation coefficient = 0.77, p < 0.001). Critically, we can now, for the first time, obtain detailed self-report data on catastrophizing.


2000 ◽  
Vol 16 (1) ◽  
pp. 53-58 ◽  
Author(s):  
Hans Ottosson ◽  
Martin Grann ◽  
Gunnar Kullgren

Summary: Short-term stability or test-retest reliability of self-reported personality traits is likely to be biased if the respondent is affected by a depressive or anxiety state. However, in some studies, DSM-oriented self-reported instruments have proved to be reasonably stable in the short term, regardless of co-occurring depressive or anxiety disorders. In the present study, we examined the short-term test-retest reliability of a new self-report questionnaire for personality disorder diagnosis (DIP-Q) in a clinical sample of 30 individuals having either a depressive disorder, an anxiety disorder, or no Axis I disorder. Test-retest scorings from subjects with depressive disorders were mostly unstable, with a significant change in fulfilled criteria between entry and retest for three out of ten personality disorders: borderline, avoidant and obsessive-compulsive personality disorder. Scorings from subjects with anxiety disorders were unstable only for cluster C and dependent personality disorder items. In the absence of co-morbid depressive or anxiety disorders, mean dimensional scores of the DIP-Q showed no significant differences between entry and retest. Overall, the effect of state on trait scorings was moderate, and it is concluded that the test-retest reliability of the DIP-Q is acceptable.


Author(s):  
Helmut Schröder ◽  
Isaac Subirana ◽  
Julia Wärnberg ◽  
María Medrano ◽  
Marcela González-Gross ◽  
...  

Abstract Background: Validation of self-reported tools, such as physical activity (PA) questionnaires, is crucial. The aim of this study was to determine the test-retest reliability, internal consistency, and concurrent, construct, and predictive validity of the short semi-quantitative Physical Activity Unit 7 item Screener (PAU-7S), using accelerometry as the reference measurement. The effect of linear calibration on PAU-7S validity was also tested. Methods: A randomized sample of 321 healthy children aged 8-16 years (149 boys, 172 girls) from the nationwide representative PASOS study completed the PAU-7S before and after wearing an accelerometer for at least 7 consecutive days. Weight, height, and waist circumference were measured. Cronbach alpha was calculated for internal consistency. Test-retest reliability was determined by intra-class correlation (ICC). Concurrent validity was assessed by ICC and Spearman correlation coefficient between moderate to vigorous PA (MVPA) derived by the PAU-7S and by accelerometer. Concordance between both methods was analyzed by absolute agreement, weighted kappa, and Bland-Altman statistics. Multiple linear regression models were fitted for construct validity, and predictive validity was determined by leave-one-out cross-validation. Results: The PAU-7S overestimated MVPA by 18% compared to accelerometers (106.5 ± 77.0 vs 95.2 ± 33.2 min/day, respectively). A Cronbach alpha of 0.76 showed acceptable internal consistency of the PAU-7S. Test-retest reliability was good (ICC 0.71, p < 0.001). Spearman correlation and ICC coefficients of MVPA derived by the PAU-7S and accelerometers increased from 0.31 to 0.62 and 0.20 to 0.62, respectively, after calibration of the PAU-7S. Between-methods concordance improved from a weighted kappa of 0.24 to 0.50 after calibration. A slight reduction in ICC, from 0.62 to 0.60, yielded good predictive validity. Multiple linear regression models showed an inverse association of MVPA with standardized body mass index (β = −0.162; p < 0.077) and waist-to-height ratio (β = −0.010; p < 0.014). All validity dimensions were somewhat stronger in boys than in girls. Conclusion: The PAU-7S shows good test-retest reliability and acceptable internal consistency. All dimensions of validity increased from poor/fair to moderate/good after calibration. The PAU-7S is a valid instrument for measuring MVPA in children and adolescents. Trial registration number: ISRCTN34251612.
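The linear calibration step described above maps questionnaire-derived MVPA onto the accelerometer reference; a natural way to do this is ordinary least squares. A minimal sketch (function name and minute values are invented for illustration, not the study's data or method details):

```python
def linear_calibration(questionnaire, accelerometer):
    """Least-squares slope and intercept mapping questionnaire MVPA
    (min/day) onto accelerometer-measured MVPA, the reference."""
    n = len(questionnaire)
    mx = sum(questionnaire) / n
    my = sum(accelerometer) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(questionnaire, accelerometer))
    sxx = sum((x - mx) ** 2 for x in questionnaire)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Calibrated score for a new questionnaire value x: slope * x + intercept
```

A slope below 1 with a positive intercept is what one would expect when, as here, the questionnaire systematically overestimates MVPA relative to accelerometry.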


2016 ◽  
Vol 24 (1) ◽  
pp. 54-68 ◽  
Author(s):  
Kathleen A. Calzone ◽  
Stacey Culp ◽  
Jean Jenkins ◽  
Sarah Caskey ◽  
Pamela B. Edwards ◽  
...  

Background and Purpose: Assessment of nursing genomic competency is critical given increasing genomic applications to health care. The study aims were to determine the test–retest reliability of the Genetics and Genomics in Nursing Practice Survey (GGNPS), which measures this competency, and to revise the survey accordingly. Methods: Registered nurses (n = 232) working at 2 Magnet-designated hospitals participating in a multi-institutional genomic competency study completed the GGNPS. Cohen’s kappa and weighted kappa were used to measure the agreement of item responses between Time 1 and Time 2. Survey items were revised based on the results. Results: Mean agreement for the instrument was 0.407 (range = 0.150–1.000). Moderate agreement or higher was achieved in 39% of the items. Conclusions: GGNPS test–retest reliability was not optimal, and the instrument was refined based on the study findings. Further testing of the revised instrument is planned to assess its performance.


2021 ◽  
Vol 75 (Supplement_2) ◽  
pp. 7512500023p1-7512500023p1
Author(s):  
Shu-Chun Lee ◽  
Yi-Ching Wu ◽  
David Leland Roberts ◽  
Kuang-Pei Tseng ◽  
Wen-Yin Chen

Abstract Date Presented 04/19/21 The Social Cognition Screening Questionnaire–Taiwan version (SCSQT) was designed to assess multiple domains of social cognition in people with schizophrenia in Taiwan. The SCSQT contains five subscales and provides estimates of the core domains of mentalizing and social perception, plus an overall social cognition score. Our validation indicated that the SCSQT had good test–retest reliability, acceptable random measurement error, and negligible practice effects. Primary Author and Speaker: Shu-Chun Lee Additional Authors and Speakers: Trudy Mallinson Contributing Authors: Alison M. Cogan, Ann Guernon, Katherine O'Brien, and Piper Hansen


2005 ◽  
Vol 11 (3) ◽  
pp. 338-342 ◽  
Author(s):  
Ruth Ann Marrie ◽  
Gary Cutter ◽  
Tuula Tyry ◽  
Olympia Hadjimichael ◽  
Timothy Vollmer

The North American Research Committee on Multiple Sclerosis (NARCOMS) Registry is a multiple sclerosis (MS) self-report registry with more than 24 000 participants. Participants report disability status upon enrolment, and semi-annually, using Performance Scales (PS), Patient Determined Disease Steps (PDDS) and a pain question. In November 2000 and 2001, we also collected the Pain Effects Scale (PES). Our aim was to validate the NARCOMS pain question using the PES as our criterion measure. We measured correlations between the pain question and age, disease duration, various PS subscales and PDDS to assess construct validity. We correlated pain question responses in participants who reported no change in PDDS or the PS subscales between questionnaires to determine test-retest reliability. We measured responsiveness in participants who reported a substantial change in the sensory and spasticity PS subscales. The correlation between the pain question and PES was r = 0.61 in November 2000, and r = 0.64 in November 2001 (both P < 0.0001). Correlations between the pain question and age, and disease duration, were low, indicating divergent validity. Correlations between the pain question and the spasticity and sensory PS subscales and PDDS were moderate, indicating convergent validity. Test-retest reliability was r = 0.84 (P < 0.0001). Responsiveness was 70.7%. The pain question is a valid self-report measure of pain in MS.
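The r values above are correlation coefficients between two sets of responses; for interval-scaled scores the usual choice is Pearson's product-moment correlation. A stdlib-only sketch (function name and data are illustrative):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    denom = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return cov / denom
```

Values near +1 indicate the two administrations rank and scale responses almost identically, which is how a test-retest r of 0.84 is read.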


2003 ◽  
Vol 182 (4) ◽  
pp. 347-353 ◽  
Author(s):  
Fiona M. Cuthill ◽  
Colin A. Espie ◽  
Sally-Anne Cooper

Background: There is no reliable and valid self-report measure of depressive symptoms for people with learning disabilities. Aims: To develop a scale for individuals with learning disability, and a supplementary scale for carers. Method: Items were generated from a range of assessment scales and through focus groups. A draft scale was piloted and field tested using matched groups of people with or without depression, and their carers. The scale was also administered to a group without learning disabilities for criterion validation. Results: The Glasgow Depression Scale for people with a Learning Disability (GDS-LD) differentiated depression and non-depression groups, correlated with the Beck Depression Inventory-II (r = 0.88), had good test-retest reliability (r = 0.97) and internal consistency (Cronbach's α = 0.90), and a cut-off score (13) yielded 96% sensitivity and 90% specificity. The Carer Supplement was also reliable (r = 0.98; α = 0.88), correlating with the GDS-LD (r = 0.93). Conclusions: Both scales appear useful for screening, monitoring progress and contributing to outcome appraisal.
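The cut-off figures above (96% sensitivity, 90% specificity at a score of 13) come from the usual 2x2 classification of scale score against diagnostic status. A stdlib-only sketch (function name and data invented for illustration):

```python
def sens_spec(scores, has_depression, cutoff):
    """Sensitivity and specificity of `scores >= cutoff` as a screen
    for the reference diagnosis in `has_depression` (booleans)."""
    pairs = list(zip(scores, has_depression))
    tp = sum(1 for s, d in pairs if d and s >= cutoff)       # true positives
    fn = sum(1 for s, d in pairs if d and s < cutoff)        # missed cases
    tn = sum(1 for s, d in pairs if not d and s < cutoff)    # true negatives
    fp = sum(1 for s, d in pairs if not d and s >= cutoff)   # false alarms
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cutoff and plotting the two rates against each other is how a threshold such as 13 is chosen to balance missed cases against false alarms.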

