Psychometric Properties of Questionnaires on Functional Health Status in Oropharyngeal Dysphagia: A Systematic Literature Review

2014, Vol 2014, pp. 1-11
Author(s): Renée Speyer, Reinie Cordier, Berit Kertscher, Bas J Heijnen

Introduction. Questionnaires on Functional Health Status (FHS) are part of the assessment of oropharyngeal dysphagia. Objective. To conduct a systematic review of the literature on the psychometric properties of English-language FHS questionnaires in adults with oropharyngeal dysphagia. Methods. A systematic search was performed using the electronic databases PubMed and Embase. The psychometric properties of the questionnaires were determined based on the COSMIN taxonomy of measurement properties and definitions for health-related patient-reported outcomes, and on the COSMIN checklist using preset psychometric criteria. Results. Three questionnaires were included: the Eating Assessment Tool (EAT-10), the Swallowing Outcome after Laryngectomy (SOAL), and the Self-report Symptom Inventory. The Sydney Swallow Questionnaire (SSQ) proved to be identical to the Modified Self-report Symptom Inventory. All FHS questionnaires obtained poor overall methodological quality scores for most measurement properties. Conclusions. The retrieved FHS questionnaires need psychometric reevaluation; if the overall methodological quality shows satisfactory improvement on most measurement properties, their use in daily clinical practice and research can be justified. However, if validity and/or reliability scores remain insufficient, new FHS questionnaires need to be developed using, and reporting on, preestablished psychometric criteria as recommended in the literature.

Author(s): P. Orlandoni, N. Jukic Peladic

Background: Oropharyngeal dysphagia negatively affects patients' quality of life. It may lead to malnutrition, dehydration, aspiration pneumonia and death, especially in older people. Dysphagia and its severity have to be assessed accurately and in a timely fashion, because only early intervention can prevent the onset of complications. Numerous self-administered questionnaires exist to monitor both the severity of dysphagia and the effectiveness of therapeutic approaches. The objective of this article is to review the literature and describe the characteristics of the various self-assessment questionnaires for oropharyngeal dysphagia. Methods: A search of observational studies of adult populations with dysphagia, published from 1990 to June 2014, was performed in the electronic database PubMed. Results: A total of 23 self-assessment questionnaires, covering Health-related Quality of Life and Functional Health Status, were identified. Fourteen questionnaires were excluded from the analysis for the following reasons: written in a language other than English or Italian (n=3); specific to caregivers (n=1); not specific to oropharyngeal dysphagia (n=10). Nine questionnaires, validated in adult populations, were examined. Only two self-assessment questionnaires on quality of life, the DHI (Dysphagia Handicap Index) and the SWAL-QOL (Swallowing Quality of Life), were properly validated; the other questionnaires contained methodological errors. Conclusions: No self-assessment questionnaire specific to older adults was found. Almost all of the currently available questionnaires need methodological improvement. Furthermore, new questionnaires designed specifically for older people should be developed.


2005, Vol 9 (3), pp. 111-126
Author(s): Theresa H. Chisolm, Harvey B. Abrams, Rachel McArdle, Richard H. Wilson, Patrick J. Doyle

2002, Vol 13 (09), pp. 493-502
Author(s): Kenneth C. Pugh, Carl C. Crandell

This investigation examined the relations among hearing loss, handicap perception, and functional health status of 152 African American and Caucasian American seniors ranging in age from 60 to 89 years. Subjective measures were obtained from self-report scores on the Hearing Handicap Inventory for the Elderly (HHIE), the Medical Outcomes Study 36-Item Short Form Health Survey (SF-36), and demographic profiles. Results indicated the following: (1) both subject groups exhibited nearly identical degrees of sensorineural hearing loss consistent with presbyacusis; (2) African American seniors reported significantly lower levels of completed education than did Caucasian American seniors; (3) differences between groups in self-report scores of hearing handicap (HHIE) were not statistically significant; (4) differences across groups in self-report scores of functional health status (SF-36) were not statistically significant; and (5) increasing levels of hearing loss produced significantly higher HHIE scores and significantly lower SF-36 scores in each group. These findings are discussed.
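The associations in points (3) through (5) are of the kind usually quantified with simple correlation and group-comparison statistics. The sketch below is purely illustrative: the data are simulated, and the variable names (pta for pure-tone average, hhie, sf36) are placeholders rather than the study's actual variables or analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-participant values for 152 hypothetical seniors.
pta = rng.uniform(25, 70, size=152)                   # pure-tone average, dB HL
hhie = 0.9 * pta + rng.normal(0, 8, size=152)         # handicap perception score
sf36 = 100 - 0.6 * pta + rng.normal(0, 10, size=152)  # functional health status

# Association between degree of hearing loss and each self-report score.
r_hhie, p_hhie = stats.pearsonr(pta, hhie)
r_sf36, p_sf36 = stats.pearsonr(pta, sf36)
print(f"PTA vs HHIE:  r = {r_hhie:.2f} (p = {p_hhie:.3g})")
print(f"PTA vs SF-36: r = {r_sf36:.2f} (p = {p_sf36:.3g})")

# Between-group comparison of HHIE scores (two hypothetical groups).
group = rng.integers(0, 2, size=152).astype(bool)
t, p = stats.ttest_ind(hhie[group], hhie[~group])
print(f"Group difference in HHIE: t = {t:.2f}, p = {p:.3f}")
```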


Author(s): Marco Fabbri, Alessia Beracci, Monica Martoni, Debora Meneo, Lorenzo Tonetti, ...

Sleep quality is an important clinical construct, since it is increasingly common for people to complain about poor sleep quality and its impact on daytime functioning. Moreover, poor sleep quality can be an important symptom of many sleep and medical disorders. However, objective measures of sleep quality, such as polysomnography, are not readily available to most clinicians in their daily routine, and are expensive, time-consuming, and impractical for epidemiological and research studies. Several self-report questionnaires have, however, been developed. The present review aims to address their psychometric properties, construct validity, and factorial structure while presenting, comparing, and discussing the measurement properties of these sleep quality questionnaires. A systematic literature search, from 2008 to 2020, was performed using the electronic databases PubMed and Scopus, with predefined search terms. In total, 49 articles were analyzed from the 5734 articles found. The psychometric properties and factor structure of the following are reported: Pittsburgh Sleep Quality Index (PSQI), Athens Insomnia Scale (AIS), Insomnia Severity Index (ISI), Mini-Sleep Questionnaire (MSQ), Jenkins Sleep Scale (JSS), Leeds Sleep Evaluation Questionnaire (LSEQ), SLEEP-50 Questionnaire, and Epworth Sleepiness Scale (ESS). As the most frequently used subjective measurement of sleep quality, the PSQI showed good internal reliability and validity; however, different factorial structures were found across samples, casting doubt on the usefulness of the total score for distinguishing poor and good sleepers. The sleep disorder scales (AIS, ISI, MSQ, JSS, LSEQ and SLEEP-50) showed good psychometric properties; nevertheless, the AIS and ISI yielded a variety of factorial models, whereas the LSEQ and SLEEP-50 appeared less useful for epidemiological and research settings because of their length and scoring. The MSQ and JSS seemed inexpensive and easy to administer, complete, and score, but further validation studies are needed. Finally, the ESS had good internal consistency and construct validity, while its main challenges concerned its factorial structure, known-groups differences, and the estimation of reliable cut-offs. Overall, the self-report questionnaires assessing sleep quality from different perspectives have good psychometric properties, with high internal consistency and test-retest reliability, as well as convergent/divergent validity with sleep, psychological, and socio-demographic variables. However, a clear definition of the factor model underlying each tool is recommended, and reliable cut-off values should be indicated so that clinicians can discriminate between poor and good sleepers.
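Two psychometric indices recur throughout this summary: internal consistency (Cronbach's alpha) and test-retest reliability. The sketch below shows how they are commonly computed from raw item scores; it uses simulated data with a 7-component, 0-3 scoring layout loosely modelled on the PSQI, and is not code from any of the reviewed validation studies.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)
# Simulated responses of 200 people to a 7-component questionnaire (0-3 scale),
# administered twice; a shared latent factor makes items and sessions correlate.
latent = rng.normal(size=(200, 1))
items_t1 = np.clip(np.round(1.5 + latent + rng.normal(0, 0.7, size=(200, 7))), 0, 3)
items_t2 = np.clip(np.round(1.5 + latent + rng.normal(0, 0.7, size=(200, 7))), 0, 3)

alpha = cronbach_alpha(items_t1)
test_retest_r = np.corrcoef(items_t1.sum(axis=1), items_t2.sum(axis=1))[0, 1]

print(f"Cronbach's alpha: {alpha:.2f}")          # internal consistency at time 1
print(f"Test-retest r:    {test_retest_r:.2f}")  # Pearson r between total scores
```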


Author(s): Irena Boskovic, Thomas Merten, Harald Merckelbach

Some self-report symptom validity tests, such as the Self-Report Symptom Inventory (SRSI), rely on a detection strategy that uses bizarre, extreme, or very rare symptoms. Thus, items are constructed to invite respondents with an invalid response style to affirm pseudosymptoms that are usually not experienced by genuine patients. However, these pseudosymptoms should not be easily recognizable, because otherwise sophisticated over-reporters could strategically avoid them and go undetected. Therefore, we tested how well future psychology professionals were able to differentiate between genuine complaints and pseudosymptoms in terms of their plausibility and prevalence. Psychology students (N = 87) received the items of the SRSI online and were given the task to rate each item as to its plausibility and prevalence in the community. Students evaluated genuine symptoms as significantly more plausible and more prevalent than pseudosymptoms. However, 56% of students rated pseudosymptoms as moderately plausible, whereas 17% rated them as moderately prevalent in the general public. Overall, it appears that psychology students are successful in distinguishing bizarre, unusual, or rare symptoms from genuine complaints. Yet, the majority of students still attributed relatively high prima facie plausibility to pseudosymptoms. We contend that if such a trusting attitude is true for psychology students, it may also be the case for young psychology practitioners, which, consequently, may diminish the probability of employing self-report validity measures in psychological assessments.


2020, Vol 32 (S1), pp. 180-180
Author(s): Philippe Landreville, Alexandra Champagne, Patrick Gosselin

Background. The Geriatric Anxiety Inventory (GAI) is a widely used self-report measure of anxiety symptoms in older adults. Much research has been conducted on the psychometric properties of the GAI in various populations and using different language versions. Previous reviews of this literature have examined only a small proportion of studies in light of the body of research currently available and have not evaluated the methodological quality of this research. We conducted a systematic review of the psychometric properties of the GAI. Method. Relevant studies (N = 30) were retrieved through a search of electronic databases (PubMed, PsycINFO, CINAHL, EMBASE and Google Scholar) and a hand search. The methodological quality of the included studies was assessed by two independent reviewers using the "COnsensus-based Standards for the selection of health status Measurement INstruments" (COSMIN) checklist. Results. Based on the COSMIN checklist, internal consistency and test-retest reliability were mostly rated as poorly assessed (62.1% and 70% of studies, respectively), and the quality of studies examining structural validity was mostly fair (60% of studies). The GAI showed adequate internal consistency and test-retest reliability. Convergent validity indices were highest with measures of generalized anxiety and lowest with instruments that include somatic symptoms. A substantial overlap with measures of depression was reported. While there was no consensus on the factorial structure of the GAI, several studies found it to be unidimensional. Conclusions. The GAI presents satisfactory psychometric properties. However, future efforts should aim to achieve a higher degree of methodological quality.


2013, Vol 45 (4), pp. 328-335
Author(s): Xianwen Li, Qiyuan Lv, Chunyu Li, Hailian Zhang, Caifu Li, ...

2012, Vol 47 (2), pp. 221-223
Author(s): Tamara C. Valovich McLeod, Candace Leach

Reference/Citation: Alla S, Sullivan SJ, Hale L, McCrory P. Self-report scales/checklists for the measurement of concussion symptoms: a systematic review. Br J Sports Med. 2009;43 (suppl 1):i3–i12. Clinical Question: Which self-report symptom scales or checklists are psychometrically sound for clinical use to assess sport-related concussion? Data Sources: Articles available in full text, published from the establishment of each database through December 2008, were identified from PubMed, Medline, CINAHL, Scopus, Web of Science, SPORTDiscus, PsycINFO, and AMED. Search terms included brain concussion, signs or symptoms, and athletic injuries, in combination with the AND Boolean operator, and were limited to studies published in English. The authors also hand-searched the reference lists of retrieved articles. Additional searches of books, conference proceedings, theses, and Web sites of commercial scales were performed, when needed, to provide further information about the psychometric properties and development of scales in articles meeting the inclusion criteria. Study Selection: Articles were included if they identified all the items on the scale and the article was either an original research report describing the use of scales in the evaluation of concussion symptoms or a review article that discussed the use or development of concussion symptom scales. Only articles published in English and available in full text were included. Data Extraction: From each study, the following information was extracted by the primary author using a standardized protocol: study design, publication year, participant characteristics, reliability of the scale, and details of the scale or checklist, including name, number of items, time of measurement, format, mode of report, data analysis, scoring, and psychometric properties. A quality assessment of included studies was performed using 16 items from the Downs and Black checklist and assessed reporting, internal validity, and external validity. Main Results: The initial database search identified 421 articles. After 131 duplicate articles were removed, 290 articles remained and were added to 17 articles found during the hand search, for a total of 307 articles; of those, 295 were available in full text. Sixty articles met the inclusion criteria and were used in the systematic review. The quality of the included studies ranged from 9 to 15 points out of a maximum quality score of 17. The included articles were published between 1995 and 2008 and included a collective total of 5864 concussed athletes and 5032 nonconcussed controls, most of whom participated in American football. The majority of the studies were descriptive studies monitoring the resolution of concussive self-report symptoms compared with either a preseason baseline or healthy control group, with a smaller number of studies (n = 8) investigating the development of a scale. The authors initially identified 20 scales that were used among the 60 included articles. Further review revealed that 14 scales were variations of the Pittsburgh Steelers postconcussion scale (the Post-Concussion Scale, Post-Concussion Scale: Revised, Post-Concussion Scale: ImPACT, Post-Concussion Symptom Scale: Vienna, Graded Symptom Checklist [GSC], Head Injury Scale, McGill ACE Post-Concussion Symptoms Scale, and CogState Sport Symptom Checklist), leaving 6 core scales, which the authors discussed further.
The 6 core scales were the Pittsburgh Steelers Post-Concussion Scale (17 items), Post-Concussion Symptom Assessment Questionnaire (10 items), Concussion Resolution Index postconcussion questionnaire (15 items), Signs and Symptoms Checklist (34 items), Sport Concussion Assessment Tool (SCAT) postconcussion symptom scale (25 items), and Concussion Symptom Inventory (12 items). Each of the 6 core scales includes symptoms associated with sport-related concussion; however, the number of items on each scale varied. A 7-point Likert scale was used on most scales, with a smaller number using a dichotomous (yes/no) classification. Only 7 of the 20 scales had published psychometric properties, and only 1 scale, the Concussion Symptom Inventory, was empirically driven (Rasch analysis), with development of the scale occurring before its clinical use. Internal consistency (Cronbach α) was reported for the Post-Concussion Scale (.87), Post-Concussion Scale: ImPACT 22-item (.88–.94), Head Injury Scale 9-item (.78), and Head Injury Scale 16-item (.84). Test-retest reliability has been reported only for the Post-Concussion Scale (Spearman r = .55) and the Post-Concussion Scale: ImPACT 21-item (Pearson r = .65). With respect to validity, the SCAT postconcussion scale has demonstrated face and content validity, the Post-Concussion Scale: ImPACT 22-item and Head Injury Scale 9-item have reported construct validity, and the Head Injury Scale 9-item and 16-item have published factorial validity. Sensitivity and specificity have been reported only with the GSC (0.89 and 1.0, respectively) and the Post-Concussion Scale: ImPACT 21-item when combined with the neurocognitive component of ImPACT (0.819 and 0.849, respectively). Meaningful change scores were reported for the Post-Concussion Scale (14.8 points), Post-Concussion Scale: ImPACT 22-item (6.8 points), and Post-Concussion Scale: ImPACT 21-item (standard error of the difference = 7.17; 80% confidence interval = 9.18). Conclusions: Numerous scales exist for measuring the number and severity of concussion-related symptoms, with most evolving from the neuropsychology literature pertaining to head-injured populations. However, very few of these were created in a systematic manner that follows scale development processes and have published psychometric properties. Clinicians need to understand these limitations when choosing and using a symptom scale for inclusion in a concussion assessment battery. Future authors should assess the underlying constructs and measurement properties of currently available scales and use the ever-increasing prospective data pools of concussed athlete information to develop scales following appropriate, systematic processes.
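The sensitivity/specificity and meaningful-change figures quoted above follow from standard formulas, sketched below. Only the standard error of the difference (7.17), the 80% confidence level, and the test-retest r of .65 are taken from the summary; the 2x2 classification counts and the baseline standard deviation are hypothetical values chosen for illustration.

```python
import math

# Sensitivity / specificity from a 2x2 classification table. The counts below
# are hypothetical, chosen to reproduce the GSC's reported 0.89 / 1.0.
tp, fn = 89, 11    # concussed athletes flagged / missed by the scale
tn, fp = 100, 0    # non-concussed controls correctly cleared / wrongly flagged
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")

# Meaningful (reliable) change threshold from the standard error of the
# difference reported for the 21-item ImPACT scale (7.17) at an 80% level.
s_diff = 7.17
z_80 = 1.28                  # z value for an 80% confidence interval
print(f"reliable change threshold ~ {z_80 * s_diff:.2f} points")

# s_diff itself is usually derived from the baseline SD and test-retest r:
#   SEM = SD * sqrt(1 - r);  s_diff = sqrt(2) * SEM
sd, r = 10.0, 0.65           # SD is illustrative; r = .65 is the reported value
sem = sd * math.sqrt(1 - r)
print(f"SEM = {sem:.2f}, s_diff = {math.sqrt(2) * sem:.2f}")
```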

