Development and validation of a self-reported questionnaire to assess occupational balance in parents of preterm infants

PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0259648
Author(s):  
Mona Dür ◽  
Anna Röschel ◽  
Christiane Oberleitner-Leeb ◽  
Verena Herrmanns ◽  
Elisabeth Pichler-Stachl ◽  
...  

Background Parents’ meaningful activities (occupations) and occupational balance are relevant to neonatal care. Valid and reliable self-reported measurement instruments are needed to assess parents’ occupational balance and to evaluate occupational balance interventions in neonatal care. The aims of this study were to develop a self-reported questionnaire on occupational balance in informal caregivers (OBI-Care) and to examine its measurement properties, including construct validity and internal consistency. Methods and findings A mixed-method multicenter study design was employed. Items of the OBI-Care were created with parents of preterm infants based on qualitative research methods. Measurement properties were analyzed with quantitative data from parents of preterm infants. Construct validity was assessed by determining dimensionality, overall and item fit to a Rasch model, differential item functioning, and threshold ordering. Internal consistency was examined by determining inter-item and item-total correlations, Cronbach’s alpha, and Rasch’s person separation index. Fourteen parents participated in item creation. Measurement properties were explored in data from 304 parents. Twenty-two items, summarized in three subscales, were compiled into the OBI-Care. All items showed overall fit to the Rasch model, and all but one item showed individual item fit. There was no evidence of differential item functioning, and all items displayed ordered thresholds. Each subscale had good person separation index and Cronbach’s alpha values. Conclusions The OBI-Care demonstrates construct validity and internal consistency and is thus a suitable measurement instrument to assess the occupational balance of parents of preterm infants in neonatal care. The OBI-Care is generic and can be applied in various health care settings.
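As a concrete illustration of the internal-consistency statistics named above (inter-item and item-total correlations, Cronbach's alpha), the sketch below computes them from a respondents-by-items score matrix. It is a minimal example with simulated data, not the authors' analysis code; the variable names and the simulated 304 × 22 matrix are assumptions for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Simulated data only: 304 respondents x 22 four-point items (not the study data).
rng = np.random.default_rng(0)
scores = rng.integers(1, 5, size=(304, 22)).astype(float)
print(cronbach_alpha(scores))
print(corrected_item_total(scores)[:5])
```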

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261815
Author(s):  
Anna Röschel ◽  
Christina Wagner ◽  
Mona Dür

Objectives Informal caregivers often experience a restriction in occupational balance. The self-reported questionnaire on Occupational Balance in Informal Caregivers (OBI-Care) is a measurement instrument to assess occupational balance in informal caregivers. Measurement properties of the German version of the OBI-Care had previously been assessed in parents of preterm infants exclusively. Thus, the aim of this study was to examine the measurement properties of the questionnaire in a mixed population of informal caregivers. Methods A psychometric study was conducted, applying a multicenter cross-sectional design. Measurement properties (construct validity, internal consistency, and interpretability) of each subscale of the German version of the OBI-Care were examined. Construct validity was explored by assessing dimensionality, item fit and overall fit to the Rasch model, and threshold ordering. Internal consistency was examined with inter-item correlations, item-total correlations, Cronbach’s alpha, and the person separation index. Interpretability was assessed by inspecting floor and ceiling effects. Results A total of 196 informal caregivers, 171 (87.2%) female and 25 (12.8%) male, participated in this study. The mean age of participants was 52.27 (±12.6) years. Subscale 1 was multidimensional; subscales 2 and 3 were unidimensional. All items demonstrated item fit and overall fit to the Rasch model and displayed ordered thresholds. Cronbach’s alpha and person separation index values were excellent for each subscale. There was no evidence of ceiling or floor effects. Conclusions We identified satisfactory construct validity, internal consistency, and interpretability. Thus, the findings of this study support the application of the German version of the OBI-Care to assess occupational balance in informal caregivers.
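Interpretability was judged from floor and ceiling effects; a common rule of thumb flags an effect when more than 15% of respondents obtain the lowest or highest possible score. The sketch below illustrates that check; the threshold, score range, and data are assumptions for illustration, not values from the study.

```python
import numpy as np

def floor_ceiling(scores: np.ndarray, min_score: int, max_score: int,
                  threshold: float = 0.15) -> dict:
    """Proportion of respondents at the lowest/highest possible subscale score."""
    floor_pct = float(np.mean(scores == min_score))
    ceiling_pct = float(np.mean(scores == max_score))
    return {"floor": floor_pct,
            "ceiling": ceiling_pct,
            "floor_effect": floor_pct > threshold,
            "ceiling_effect": ceiling_pct > threshold}

# Simulated subscale sums for 196 respondents; an assumed range of 8-32 points.
rng = np.random.default_rng(1)
subscale_scores = rng.integers(8, 33, size=196)
print(floor_ceiling(subscale_scores, min_score=8, max_score=32))
```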


2021 ◽  
Vol 10 (2) ◽  
pp. 270-281
Author(s):  
P. Susongko ◽  
Y. Arfiani ◽  
M. Kusuma

The emergence of Differential Item Functioning (DIF) indicates an external bias in an item. This study aimed to identify items on the Scientific Literacy Skills with Integrated Science (SLiSIS) test that exhibit DIF by gender, to analyze the emergence of DIF in relation to the construct the test measures, and to determine how far the SLiSIS test satisfies construct validity of the consequential type. The study was conducted with a quantitative approach using a survey (non-experimental) method. The sample comprised SLiSIS test responses from 310 eleventh-grade high school students in the science program at SMA 2 and SMA 3 Tegal. DIF was analyzed with the Wald test under the Rasch model. Eight items showed DIF at the 95% confidence level; at the 99% confidence level, three items (items 1, 6, and 38, or 7%) showed DIF. The DIF was caused by differences in test-taker ability on the measured construct, so it does not constitute test bias. Thus, the emergence of DIF in SLiSIS test items does not threaten the construct validity of the consequential type.
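A Wald-type test for uniform DIF under the Rasch model compares an item's difficulty estimated separately in the two groups and divides the difference by its pooled standard error. The sketch below illustrates the computation; the difficulty estimates and standard errors are invented for illustration and are not taken from the SLiSIS data.

```python
from math import sqrt
from scipy.stats import norm

def wald_dif(b_group1: float, se_group1: float,
             b_group2: float, se_group2: float) -> tuple:
    """Wald statistic and two-sided p-value for a between-group difference
    in Rasch item difficulty (uniform DIF)."""
    z = (b_group1 - b_group2) / sqrt(se_group1 ** 2 + se_group2 ** 2)
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Invented example: item difficulty 0.45 logits (female) vs. -0.12 logits (male).
z, p = wald_dif(0.45, 0.15, -0.12, 0.17)
print(f"z = {z:.2f}, p = {p:.4f}")  # flag DIF at the 95% level if p < .05
```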


2013 ◽  
Vol 93 (11) ◽  
pp. 1507-1519 ◽  
Author(s):  
Clayon B. Hamilton ◽  
Bert M. Chesworth

Background The original 20-item Upper Extremity Functional Index (UEFI) has not undergone Rasch validation. Objective The purpose of this study was to determine whether Rasch analysis supports the UEFI as a measure of a single construct (ie, upper extremity function) and whether a Rasch-validated UEFI has adequate reproducibility for individual-level patient evaluation. Design This was a secondary analysis of data from a repeated-measures study designed to evaluate the measurement properties of the UEFI over a 3-week period. Methods Patients (n=239) with musculoskeletal upper extremity disorders were recruited from 17 physical therapy clinics across 4 Canadian provinces. Rasch analysis of the UEFI measurement properties was performed. If the UEFI did not fit the Rasch model, misfitting patients were deleted, items with poor response structure were corrected, and misfitting items and redundant items were deleted. The impact of differential item functioning on the ability estimate of patients was investigated. Results A 15-item modified UEFI was derived to achieve fit to the Rasch model where the total score was supported as a measure of upper extremity function only. The resultant UEFI-15 interval-level scale (0–100, worst to best state) demonstrated excellent internal consistency (person separation index=0.94) and test-retest reliability (intraclass correlation coefficient [2,1]=.95). The minimal detectable change at the 90% confidence interval was 8.1. Limitations Patients who were ambidextrous or bilaterally affected were excluded to allow for the analysis of differential item functioning due to limb involvement and arm dominance. Conclusion Rasch analysis did not support the validity of the 20-item UEFI. However, the UEFI-15 was a valid and reliable interval-level measure of a single dimension: upper extremity function. Rasch analysis supports using the UEFI-15 in physical therapist practice to quantify upper extremity function in patients with musculoskeletal disorders of the upper extremity.
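The minimal detectable change reported above follows from the standard error of measurement, SEM = SD × sqrt(1 − ICC), with MDC90 = 1.645 × sqrt(2) × SEM. The sketch below shows the arithmetic; the baseline standard deviation is an assumed, illustrative value, since it is not restated here, so the result only approximates the reported 8.1 points.

```python
from math import sqrt

def mdc90(baseline_sd: float, icc: float) -> float:
    """Minimal detectable change at 90% confidence from test-retest reliability."""
    sem = baseline_sd * sqrt(1 - icc)   # standard error of measurement
    return 1.645 * sqrt(2) * sem        # two measurements, 90% confidence

# ICC(2,1) = 0.95 as reported; SD = 15 points is an assumed, illustrative value.
print(round(mdc90(baseline_sd=15.0, icc=0.95), 1))
```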


Hand Therapy ◽  
2020 ◽  
Vol 25 (1) ◽  
pp. 3-10
Author(s):  
J Ikonen ◽  
S Hulkkonen ◽  
J Ryhänen ◽  
A Häkkinen ◽  
J Karppinen ◽  
...  

Introduction The construct validity of the Disabilities of the Arm, Shoulder and Hand questionnaire (DASH) has previously been questioned. The purpose of this study was to evaluate the measurement properties of the Finnish version of the DASH for assessing disability in patients with hand complaints using Rasch Measurement Theory. Methods A cohort of 193 patients with typical hand and wrist complaints was recruited at a surgery outpatient clinic. The DASH scores were analysed using the Rasch model for differential item functioning, unidimensionality, fit statistics, item residual correlation, coverage/targeting, and reliability. Results In the original DASH questionnaire, the item response thresholds were disordered for 2 of the 30 items. The item fit was poor for 9 of the 30 items. Unidimensionality was not supported. There was substantial residual correlation between 87 pairs of items. Item reduction (chi square 95, degrees of freedom 50, p < 0.001) and constructing two testlets led to unidimensionality (chi square 0.64, degrees of freedom 4, p = 0.96). The person separation index was 0.95. The testlets had good fit with no differential item functioning towards age or gender. Conclusion Unidimensionality of the original Finnish version of the DASH was not supported, meaning the questionnaire seems to gauge traits other than disability alone. Hence, the clinician must be careful when trying to measure change in patients’ scores. Neither item reduction nor the creation of testlets led to a good alternative to the original Finnish DASH. Differential item functioning showed that the original Finnish scale exhibits minor response bias by age in one item. The original Finnish DASH covers different levels of ability well among typical hand surgery patients.
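Forming a testlet means collapsing locally dependent items into a single polytomous super-item before re-fitting the Rasch model. The sketch below shows that mechanical step; the item groupings and scores are invented for illustration and do not reproduce the testlets used in the study.

```python
import pandas as pd

def make_testlets(items: pd.DataFrame, groups: dict) -> pd.DataFrame:
    """Sum locally dependent items into polytomous super-items (testlets)."""
    return pd.DataFrame({name: items[cols].sum(axis=1)
                         for name, cols in groups.items()})

# Illustrative 0-4 item scores and an invented grouping into two testlets.
scores = pd.DataFrame({"item1": [0, 1, 2], "item2": [1, 1, 3],
                       "item3": [2, 0, 1], "item4": [3, 2, 0]})
print(make_testlets(scores, {"testlet_A": ["item1", "item2"],
                             "testlet_B": ["item3", "item4"]}))
```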


Author(s):  
Marco Fabbri ◽  
Alessia Beracci ◽  
Monica Martoni ◽  
Debora Meneo ◽  
Lorenzo Tonetti ◽  
...  

Sleep quality is an important clinical construct since it is increasingly common for people to complain about poor sleep quality and its impact on daytime functioning. Moreover, poor sleep quality can be an important symptom of many sleep and medical disorders. However, objective measures of sleep quality, such as polysomnography, are not readily available to most clinicians in their daily routine, and are expensive, time-consuming, and impractical for epidemiological and research studies. Several self-report questionnaires have, however, been developed. The present review aims to address their psychometric properties, construct validity, and factorial structure while presenting, comparing, and discussing the measurement properties of these sleep quality questionnaires. A systematic literature search, from 2008 to 2020, was performed using the electronic databases PubMed and Scopus, with predefined search terms. In total, 49 articles were analyzed from the 5734 articles found. The psychometric properties and factor structure of the following are reported: Pittsburgh Sleep Quality Index (PSQI), Athens Insomnia Scale (AIS), Insomnia Severity Index (ISI), Mini-Sleep Questionnaire (MSQ), Jenkins Sleep Scale (JSS), Leeds Sleep Evaluation Questionnaire (LSEQ), SLEEP-50 Questionnaire, and Epworth Sleepiness Scale (ESS). As the most frequently used subjective measurement of sleep quality, the PSQI reported good internal reliability and validity; however, different factorial structures were found in a variety of samples, casting doubt on the usefulness of the total score in detecting poor and good sleepers. The sleep disorder scales (AIS, ISI, MSQ, JSS, LSEQ, and SLEEP-50) reported good psychometric properties; nevertheless, the AIS and ISI reported a variety of factorial models, whereas the LSEQ and SLEEP-50 appeared to be less useful for epidemiological and research settings due to the length of the questionnaires and their scoring. The MSQ and JSS seemed to be inexpensive and easy to administer, complete, and score, but further validation studies are needed. Finally, the ESS had good internal consistency and construct validity, while the main challenges were in its factorial structure, known-group differences, and the estimation of reliable cut-offs. Overall, the self-report questionnaires assessing sleep quality from different perspectives have good psychometric properties, with high internal consistency and test-retest reliability, as well as convergent/divergent validity with sleep, psychological, and socio-demographic variables. However, a clear definition of the factor model underlying the tools is recommended, and reliable cut-off values should be indicated in order for clinicians to discriminate poor and good sleepers.


2021 ◽  
pp. 003022282110162
Author(s):  
Adalberto Campo-Arias ◽  
Andrés Felipe Tirado-Otálvaro ◽  
Isabel Álvarez-Solorza ◽  
Carlos Arturo Cassiani-Miranda

The study aimed to examine the factor structure (confirmatory factor analysis), internal consistency, gender differential item functioning, and discriminant validity of the Fear of COVID-5 Scale in emerging adult students of a university in Mexico. Confirmatory factor analysis was performed, and internal consistency (Cronbach's alpha and McDonald's omega) and gender differential item functioning (Kendall's tau-b correlation) were estimated. The Fear of COVID-5 Scale showed a one-dimensional structure (RMSEA = 0.07, CFI = 0.98, TLI = 0.96, and SRMR = 0.02), high internal consistency (Cronbach's alpha of 0.78 and McDonald's omega of 0.81), no gender differential item functioning (Kendall's tau-b between 0.07 and 0.10), and significant discriminant validity (higher fear of COVID-19 scores were observed at high clinical anxiety levels). In conclusion, the Fear of COVID-5 Scale presents a clear one-dimensional structure similar to that of a previous study.
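The gender DIF screen described above correlates gender with each item score using Kendall's tau-b; small absolute correlations argue against uniform gender DIF. The sketch below illustrates that screen with simulated data; it is not the authors' analysis code, and the sample size and item scale are assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

# Simulated data: 200 respondents, five 0-3 items; not the study data.
rng = np.random.default_rng(2)
gender = rng.integers(0, 2, size=200)            # 0 = male, 1 = female
item_scores = rng.integers(0, 4, size=(200, 5))

# Small absolute tau-b values (the paper reports 0.07-0.10) suggest no gender DIF.
for j in range(item_scores.shape[1]):
    tau, p = kendalltau(gender, item_scores[:, j])
    print(f"item {j + 1}: tau-b = {tau:.3f}, p = {p:.3f}")
```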


2007 ◽  
Vol 10 (3) ◽  
pp. 309-324 ◽  
Author(s):  
John Brodersen ◽  
David Meads ◽  
Svend Kreiner ◽  
Hanne Thorsen ◽  
Lynda Doward ◽  
...  

2016 ◽  
Vol 4 (1) ◽  
pp. 62
Author(s):  
Jose Q. Pedrajita

This study looked into differentially functioning items in a Chemistry Achievement Test. It also examined the effect of eliminating differentially functioning items on the content validity, concurrent validity, and internal consistency reliability of the test. Test scores of two hundred junior high school students matched on school type were subjected to Differential Item Functioning (DIF) analysis. One hundred students came from a public school, while the other 100 were private school examinees. A descriptive-comparative research design utilizing differential item functioning analysis and validity and reliability analysis was employed. The Chi-Square, Distractor Response Analysis, Logistic Regression, and Mantel-Haenszel Statistic methods were used in the DIF analysis. A six-point scale ranging from inadequate to adequate was used to assess the content validity of the test. Pearson r was used in the concurrent validity analysis. The KR-20 formula was used for estimating the internal consistency reliability of the test. The findings revealed the presence of differentially functioning items between the public and private school examinees. The DIF methods differed in the number of differentially functioning items identified; however, there was a high degree of correspondence between Logistic Regression and the Mantel-Haenszel Statistic. After elimination of the differentially functioning items, the content validity, concurrent validity, and internal consistency reliability differed per DIF method used. Content validity ranged from slightly adequate to moderately adequate, depending on the number of items retained. Concurrent validity also differed, but all coefficients were positive and indicated a moderate relationship between the examinees’ test scores and their GPA in Science III. Likewise, the internal consistency reliability of the test differed. The more differentially functioning items were eliminated, the lower the content validity, concurrent validity, and internal consistency reliability of the test became. Eliminating differentially functioning items thus diminishes content and concurrent validity and internal consistency reliability, but the results could be used as a basis for enhancing these properties by replacing the eliminated DIF items.
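KR-20, used above for the dichotomously scored test, is Cronbach's alpha specialized to 0/1 items: KR-20 = [k/(k-1)] * (1 - sum of p_i*q_i / total-score variance). The sketch below computes it from a respondents-by-items matrix of 0/1 scores; the data are simulated for illustration and are not the study's responses.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 reliability for a respondents-by-items matrix of 0/1 scores."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                    # proportion correct per item
    q = 1 - p
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * q).sum() / total_variance)

# Simulated responses: 200 examinees, 40 dichotomous items (not the study data).
rng = np.random.default_rng(3)
answers = (rng.random((200, 40)) < 0.6).astype(int)
print(round(kr20(answers), 3))
```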

