Athlete Self-Report Measures in Research and Practice: Considerations for the Discerning Reader and Fastidious Practitioner

2017 · Vol 12 (s2) · pp. S2-127–S2-135
Author(s): Anna E. Saw, Michael Kellmann, Luana C. Main, Paul B. Gastin

Athlete self-report measures (ASRM) have the potential to provide valuable insight into the training response; however, there is a disconnect between research and practice that needs to be addressed: the measures or methods used in research are not always reflective of practice, or data obtained primarily from practice lack empirical quality. This commentary reviews existing empirical measures and the psychometric properties required for them to be considered acceptable for research and practice. This information will allow discerning readers to judge the quality of ASRM data reported in research papers. Fastidious practitioners and researchers are also provided with explicit guidelines for selecting and implementing an ASRM and for reporting these details in research papers.

2020 · Vol 32 (S1) · pp. 180-180
Author(s): Philippe Landreville, Alexandra Champagne, Patrick Gosselin

Background. The Geriatric Anxiety Inventory (GAI) is a widely used self-report measure of anxiety symptoms in older adults. Much research has been conducted on the psychometric properties of the GAI in various populations and with different language versions. Previous reviews of this literature have examined only a small proportion of studies relative to the body of research currently available and have not evaluated the methodological quality of this research. We conducted a systematic review of the psychometric properties of the GAI. Method. Relevant studies (N = 30) were retrieved through a search of electronic databases (PubMed, PsycINFO, CINAHL, EMBASE and Google Scholar) and a hand search. The methodological quality of the included studies was assessed by two independent reviewers using the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist. Results. Based on the COSMIN checklist, internal consistency and test-retest reliability were mostly rated as poorly assessed (62.1% and 70% of studies, respectively), and the quality of studies examining structural validity was mostly fair (60% of studies). The GAI showed adequate internal consistency and test-retest reliability. Convergent validity indices were highest with measures of generalized anxiety and lowest with instruments that include somatic symptoms. A substantial overlap with measures of depression was reported. While there was no consensus on the factorial structure of the GAI, several studies found it to be unidimensional. Conclusions. The GAI presents satisfactory psychometric properties. However, future efforts should aim to achieve a higher degree of methodological quality.
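The reliability statistics discussed in reviews like this one are typically Cronbach's alpha for internal consistency and a retest coefficient for stability. Purely as an illustration of the alpha computation (not the analysis performed in the study above, and with entirely simulated data and hypothetical item counts), a minimal sketch might look like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Simulated agree/disagree responses driven by a single latent anxiety score.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
scores = (latent[:, None] + rng.normal(scale=1.0, size=(200, 20)) > 0).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

A value of roughly .70-.80 or above is the usual rule of thumb for adequate internal consistency, which is the general sense in which "adequate" is used in abstracts like the one above.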


2019 · Vol 36 (4) · pp. 595-616
Author(s): Stefanie A. Wind

Differences in rater judgments that are systematically related to construct-irrelevant characteristics threaten the fairness of rater-mediated writing assessments. Accordingly, it is essential that researchers and practitioners examine the degree to which the psychometric quality of rater judgments is comparable across test-taker subgroups. Nonparametric procedures for exploring these differences are promising because they allow researchers and practitioners to examine important characteristics of ratings without potentially inappropriate parametric transformations or assumptions. This study illustrates a nonparametric method based on Mokken scale analysis (MSA) that researchers and practitioners can use to identify and explore differences in the quality of rater judgments between subgroups of test-takers. Overall, the results suggest that MSA provides insight into differences in rating quality across test-taker subgroups based on demographic characteristics. Differences in the degree to which raters adhere to basic measurement properties suggest that the interpretation of ratings may vary across subgroups. The implications of this study for research and practice are discussed.
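Mokken scale analysis centres on Loevinger's scalability coefficient H, the ratio of observed inter-item covariances to their maximum possible values given the item marginals; comparing H and related checks across subgroups is the kind of nonparametric evidence described above. The sketch below is only a minimal illustration for dichotomous items with simulated data, not the polytomous rater-mediated analysis used in the study; H ≥ .3 is the conventional lower bound for a weak Mokken scale.

```python
import numpy as np

def mokken_h(items: np.ndarray) -> float:
    """Overall Loevinger H scalability coefficient for dichotomous (0/1) items."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    p = items.mean(axis=0)                               # item popularities
    num, den = 0.0, 0.0
    for i in range(n_items):
        for j in range(i + 1, n_items):
            p_ij = (items[:, i] * items[:, j]).mean()
            cov = p_ij - p[i] * p[j]                     # observed covariance
            cov_max = min(p[i], p[j]) - p[i] * p[j]      # max covariance given marginals
            num += cov
            den += cov_max
    return num / den

# Simulated responses from a simple monotone item response model.
rng = np.random.default_rng(1)
theta = rng.normal(size=500)
difficulty = np.linspace(-1, 1, 8)
responses = (theta[:, None] - difficulty[None, :] + rng.logistic(size=(500, 8)) > 0).astype(int)
print(f"H = {mokken_h(responses):.2f}")
```

In a subgroup comparison of the kind discussed above, the same coefficient would be computed separately for each test-taker subgroup and the values compared.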


2020 · Vol 1 · pp. 263348952093664
Author(s): Kayne Mettert, Cara Lewis, Caitlin Dorsey, Heather Halko, Bryan Weiner

Background: Systematic reviews of measures can facilitate advances in implementation research and practice by locating reliable and valid measures and highlighting measurement gaps. Our team completed a systematic review of implementation outcome measures published in 2015 that indicated a severe measurement gap in the field. Now, we offer an update with this enhanced systematic review to identify and evaluate the psychometric properties of measures of eight implementation outcomes used in behavioral health care. Methods: The systematic review methodology is described in detail in a previously published protocol paper and summarized here. The review proceeded in three phases. Phase I, data collection, involved search string generation, title and abstract screening, full-text review, construct assignment, and measure forward searches. Phase II, data extraction, involved coding psychometric information. Phase III, data analysis, involved two trained specialists independently rating each measure using PAPERS (Psychometric And Pragmatic Evidence Rating Scales). Results: Searches identified 150 outcome measures, of which 48 were deemed unsuitable for rating and thus excluded, leaving 102 measures for review. We identified measures of acceptability (N = 32), adoption (N = 26), appropriateness (N = 6), cost (N = 31), feasibility (N = 18), fidelity (N = 18), penetration (N = 23), and sustainability (N = 14). Information about internal consistency and norms was available for most measures (59%). Information about other psychometric properties was often not available. Ratings for internal consistency and norms ranged from “adequate” to “excellent.” Ratings for other psychometric properties ranged mostly from “poor” to “good.” Conclusion: While measures of implementation outcomes used in behavioral health care (including mental health, substance use, and other addictive behaviors) are unevenly distributed and exhibit mostly unknown psychometric quality, the data reported in this article show an overall improvement in the availability of psychometric information. This review identified a few promising measures, but targeted efforts are needed to systematically develop and test measures that are useful for both research and practice. Plain language abstract: When implementing an evidence-based treatment into practice, it is important to assess several outcomes to gauge how effectively it is being implemented. Outcomes such as acceptability, feasibility, and appropriateness may offer insight into why providers do not adopt a new treatment. Similarly, outcomes such as fidelity and penetration may provide important context for why a new treatment did not achieve desired effects. It is important that methods to measure these outcomes are accurate and consistent. Without accurate and consistent measurement, high-quality evaluations cannot be conducted. This systematic review of published studies sought to identify questionnaires (referred to as measures) that ask staff at various levels (e.g., providers, supervisors) questions related to implementation outcomes, and to evaluate the quality of these measures. We identified 150 measures and rated the quality of their evidence with the goal of recommending the best measures for future use. Our findings suggest that a great deal of work is needed to generate evidence for existing measures or to build new measures to achieve confidence in our implementation evaluations.


2020 · Vol 232 (03) · pp. 136-142
Author(s): Johanne Katrin Luz, Julia Martini, Katharina Clever, Peter Herschbach, Holger Christiansen, ...

Abstract Background Recent research shows that parents of children with cancer suffer from fear of progression (FoP), the fear of further disease progression. It is quite possible that the children themselves also develop FoP, which could impair treatment and psychological health. The aim of this study was to adapt the adult version of the Fear of Progression Questionnaire – Short Form (FoP-Q-SF) for children and to examine its psychometric properties in pediatric cancer patients. Patients 32 pediatric cancer patients between 10 and 18 years of age, with different diagnoses and at different stages of treatment, participated in this study. Method In this cross-sectional study, participants completed the adapted Fear of Progression Questionnaire – Short Form for Children (FoP-Q-SF/C) and self-report measures assessing quality of life, depression, fear, and satisfaction with coping. Results The questionnaire (FoP-Q-SF/C) showed adequate psychometric properties (Cronbach's α = 0.86) and good results for construct validity. Significant medium to large correlations of children's FoP were observed with quality of life (r = −0.37), depression (r = 0.52), fear (r = 0.33–0.76), and satisfaction with coping (r = −0.44). One-fifth of the sample was classified as having high FoP, with scores above 37. Conclusions The FoP-Q-SF/C is a short, economical questionnaire that is applicable in children with cancer. Clinicians can use it to explore specific fears and the need for psychosocial support. Further research on specific treatment approaches for FoP in pediatric cancer patients is warranted.


Author(s): Hao Wu, Jonathan Corney, Michael Grant

Today there are numerous examples of collaborative online communities effectively creating innovative products (e.g., RepRap, Linux), but the potential of anonymous crowds to also engage in generative design, through the aggregation of many small contributions, is less clear. Although in recent years the “power of the crowd” has been repeatedly demonstrated in areas ranging from image labelling to linguistic translation, the application of crowdsourcing in the fields of design research and creative innovation has been much slower to emerge. As a result, although there have been reports of systems and researchers using Internet crowdsourcing to carry out generative design, there are still many gaps in knowledge about the capabilities and limitations of the technology. For example, on commercial crowdsourcing platforms the relationship between remuneration and the final quality of designs has not been established, so it is unclear how much payment should be offered in order to ensure a particular standard of result. Key to investigating the relationship between the crowd's remuneration and the value of its innovation is a robust method for quantifying the quality of the designs produced. This paper reports how payment for a design task (a 2D layout problem) was systematically varied and how the quality of the output was assessed through a separate crowdsourcing process. The work provides interesting and valuable insight into how crowdsourcing can be most effectively employed in design tasks.
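One way to probe the payment-quality relationship described above is to compare crowd-sourced quality ratings across payment tiers with nonparametric statistics. The sketch below is not the analysis from the paper; the payment levels, sample sizes, and ratings are invented purely to show the shape of such a comparison:

```python
import numpy as np
from scipy.stats import kruskal, spearmanr

# Hypothetical data: crowd quality ratings (1-10) for 2D layouts produced
# at three payment levels. All values are invented for illustration only.
rng = np.random.default_rng(2)
ratings = {
    0.25: rng.normal(5.0, 1.5, size=40),   # low payment per task
    0.50: rng.normal(5.4, 1.5, size=40),   # medium payment
    1.00: rng.normal(6.1, 1.5, size=40),   # high payment
}

# Nonparametric test of whether rated quality differs across payment tiers.
h_stat, p_value = kruskal(*ratings.values())

# Monotonic association between payment level and rated quality.
pay = np.concatenate([[p] * len(v) for p, v in ratings.items()])
qual = np.concatenate(list(ratings.values()))
rho, rho_p = spearmanr(pay, qual)

print(f"Kruskal-Wallis H = {h_stat:.2f} (p = {p_value:.3f}); Spearman rho = {rho:.2f}")
```

A Kruskal-Wallis test asks whether rated quality differs at all across tiers, while the Spearman coefficient captures whether quality rises monotonically with payment.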


2020 · Vol 4 (Supplement_1) · pp. 368-368
Author(s): Philippe Landreville, Alexandra Champagne, Patrick Gosselin

Abstract The Geriatric Anxiety Inventory (GAI) is a widely used self-report measure of anxiety symptoms in older adults. Although much research has been conducted on the psychometric properties of the GAI, previous reviews have examined only a small proportion of studies and have not evaluated the methodological quality of this work. In view of this, we conducted a systematic review of the psychometric properties of the GAI and its short form (GAI-SF). Relevant studies (N = 31) were retrieved through a search of electronic databases (PubMed, PsycINFO, CINAHL, EMBASE and Google Scholar) and a hand search. The methodological quality of the included studies was assessed by two independent reviewers using the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist. Based on the COSMIN checklist, internal consistency and test-retest reliability were mostly rated as poorly assessed (63% and 72.7% of studies, respectively), and the quality of studies examining structural validity was mostly fair (60% of studies). Both the GAI and GAI-SF showed adequate internal consistency and test-retest reliability. Convergent validity indices were highest with measures of generalized anxiety and lowest with instruments that include somatic symptoms. Substantial overlap with measures of depression was reported. While there is no consensus on the factorial structure of the GAI, the short version was found to be unidimensional. Our review therefore suggests that the GAI and GAI-SF have satisfactory psychometric properties, while indicating that future efforts should aim to achieve a higher degree of methodological quality.


2021 · Vol 19 (1)
Author(s): Habib Hadianfard, Behnaz Kiani, Mahla Azizzadeh Herozi, Fatemeh Mohajelin, John T. Mitchell

Abstract Background Research on the psychometric properties of the Persian self-report form of the Pediatric Quality of Life Inventory Version 4.0 (PedsQL 4.0) in adolescents has several gaps (e.g., convergent validity) that limit its clinical application and therefore the cross-cultural impact of this measure. This study aimed at investigating the psychometric properties of the PedsQL 4.0 and the effects of gender and age on quality of life in Iranian adolescents. Method The PedsQL 4.0 was administered to 326 adolescents (12–17 years). A subsample of 115 adolescents completed the scale two weeks after the first assessment. Confirmatory Factor Analysis (CFA), correlation of the PedsQL 4.0 with the Weiss Functional Impairment Rating Scale-Self-Report (WFIRS-S), and Item Response Theory (IRT) analysis were conducted to examine validity. Cronbach's alpha, McDonald's omega, and the intraclass correlation coefficient (ICC) were calculated to examine reliability. Gender and age effects were also evaluated. Results Internal consistency and test–retest reliability of the total PedsQL 4.0 scale were .92 and .87, respectively. The PedsQL 4.0 scores showed moderate to strong negative correlations with the WFIRS-S total scale. The four-factor model of the PedsQL 4.0 was not fully supported by the CFA: the root mean square error of approximation indicated mediocre fit and the comparative fit index indicated poor fit. IRT analysis indicated that all items of the PedsQL 4.0 fit the scale and most showed good discrimination. The items and the total scale provided more information at lower levels of the latent trait. Males showed significantly higher scores than females on physical and emotional functioning, psychosocial health, and the total scale. Younger adolescents reported better quality of life than older adolescents on all PedsQL 4.0 scores. Conclusion The PedsQL 4.0 showed good psychometric properties with regard to internal consistency, test–retest reliability, and convergent validity in Iranian adolescents, which supports its use in clinical settings among Persian-speaking adolescents. However, the factor structure observed in our CFA indicates that future work should address how to improve model fit. In addition, studies that use the PedsQL 4.0 should take the gender and age effects reported here into account.
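Test-retest reliability in studies of this kind is commonly reported as an intraclass correlation coefficient. The abstract reports only "ICC", so the specific two-way random-effects, absolute-agreement form shown here is an assumption; the scores are simulated, and this is a hedged illustration rather than the authors' code:

```python
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    `data` is an (n_subjects x k_occasions) matrix, e.g. test and retest scores.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between occasions
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical test-retest data: two administrations two weeks apart.
rng = np.random.default_rng(3)
true_score = rng.normal(70, 10, size=115)
test = true_score + rng.normal(0, 4, size=115)
retest = true_score + rng.normal(0, 4, size=115)
print(f"ICC(2,1) = {icc_2_1(np.column_stack([test, retest])):.2f}")
```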

