Quantification of Experienced Hearing Problems With Item Response Theory

2013 ◽  
Vol 22 (2) ◽  
pp. 252-262 ◽  
Author(s):  
Michelene Chenault ◽  
Martijn Berger ◽  
Bernd Kremer ◽  
Lucien Anteunis

Purpose The purpose of this study was to demonstrate that, to improve the effectiveness of adult hearing screens and interventions, assessment methods are needed that address the individual's experienced hearing. Item response theory, which provides a methodology for assessing patient-reported outcomes, is examined here to demonstrate its usefulness in hearing screens and interventions. Method The graded response model is applied to a scale of 11 items assessing perceived hearing functioning and a scale of 10 items assessing experienced social limitations, completed by a sample of 212 persons age 55+ years. Fixed- and variable-slope models are compared, discrimination and threshold parameters are estimated, and information functions are evaluated. Results Variable-slope models provided the best fit for both scales. The estimated discrimination parameters were good, if not excellent (1.5–3.4), for all items except one in each scale. Threshold values varied, demonstrating the complementary and supplementary value of items within a scale. The information provided by each item varies across trait values, so that together the items of each scale provide information over a wider range of trait values. Conclusion Item response theory facilitates the comparison of items with respect to their discriminative ability and the information they provide, and thus offers a basis for selecting items for application in a screening setting.
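To make the quantities above concrete, the sketch below computes category probabilities and Samejima's item information function for a graded response model. The discrimination (a) and threshold (b) values are illustrative only, not the study's estimates; they are simply chosen to fall in the parameter range reported above.

```python
# A minimal sketch, not the authors' code: category probabilities and item
# information for Samejima's graded response model, with illustrative
# discrimination (a) and threshold (b) values.
import numpy as np

def grm_probs_and_information(theta, a, b):
    """Return category probabilities and item information at trait values theta.

    a : discrimination parameter
    b : increasing thresholds, length K-1 for K ordered response categories
    """
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    b = np.asarray(b, dtype=float)
    # Cumulative probabilities P*_k = P(response >= k); P*_0 = 1, P*_K = 0.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    p_star = np.column_stack([np.ones(len(theta)), p_star, np.zeros(len(theta))])
    probs = p_star[:, :-1] - p_star[:, 1:]          # P(response = k)
    q = p_star * (1.0 - p_star)
    # Samejima's item information: a^2 * sum_k (Q*_k - Q*_{k+1})^2 / P_k.
    info = a**2 * np.sum((q[:, :-1] - q[:, 1:])**2 / probs, axis=1)
    return probs, info

theta = np.linspace(-3, 3, 61)
probs, info = grm_probs_and_information(theta, a=2.0, b=[-1.0, 0.0, 1.2])
print("information peaks near theta =", theta[np.argmax(info)])
```

Plotting `info` against `theta` for each item of a scale reproduces the kind of information-function comparison the study uses to judge where on the trait continuum each item contributes most.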

2020 ◽  
Vol 9 (11) ◽  
pp. 3754 ◽ 
Author(s):  
Yoshiaki Nomura ◽  
Toshiya Morozumi ◽  
Mitsuo Fukuda ◽  
Nobuhiro Hanada ◽  
Erika Kakuta ◽  
...  

Periodontal examination data have a complex structure. For epidemiological studies, mass screenings, and public health use, a simple index that represents the periodontal condition is necessary. Periodontal indices for partial examination of selected teeth have been developed; however, the selected teeth vary between indices, and a justification for the selection of examination teeth has not been presented. We applied a graded response model based on item response theory to select the optimal examination teeth and sites that represent periodontal conditions. Data were obtained from 254 patients who participated in a multicenter follow-up study; baseline data were obtained at the start of follow-up. Optimal examination sites were selected using the item information calculated by graded response modeling. Twelve sites were selected: the maxillary 2nd premolar (palatal-medial), 1st premolar (palatal-distal), canine (palatal-medial), lateral incisor (palatal-central), and central incisor (palatal-distal), and the mandibular 1st premolar (lingual-medial). Mean values of clinical attachment level, probing pocket depth, and bleeding on probing from full-mouth examinations were used as the objective variables. Measuring the clinical parameters at these sites can predict the results of a full-mouth examination. For calculating a periodontal index by partial oral examination, a justification for the selection of examination sites is essential. This study presents an evidence-based partial examination methodology and its modeling.
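As a sketch of the selection criterion described above, the following ranks candidate examination sites by their graded-response-model item information. The site labels and (a, b) parameter values are hypothetical, not the study's estimates.

```python
# A minimal sketch of ranking examination sites by graded-response-model item
# information; site labels and (a, b) parameters are hypothetical.
import numpy as np

def grm_information(theta, a, b):
    """Graded-response-model item information at trait values theta."""
    b = np.asarray(b, dtype=float)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    p_star = np.column_stack([np.ones(len(theta)), p_star, np.zeros(len(theta))])
    q = p_star * (1.0 - p_star)
    probs = p_star[:, :-1] - p_star[:, 1:]
    return a**2 * np.sum((q[:, :-1] - q[:, 1:])**2 / probs, axis=1)

theta = np.linspace(-3, 3, 121)
# Hypothetical per-site estimates: (discrimination, thresholds).
sites = {
    "maxillary_2nd_premolar_palatal_medial": (2.1, [-0.8, 0.3, 1.1]),
    "maxillary_1st_premolar_palatal_distal": (1.7, [-1.2, -0.1, 0.9]),
    "maxillary_canine_palatal_medial":       (0.9, [-0.5, 0.6, 1.8]),
}
# Rank sites by total information over the trait grid; a partial examination
# would keep the top-ranked sites.
ranked = sorted(sites, key=lambda s: grm_information(theta, *sites[s]).sum(),
                reverse=True)
print(ranked)
```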


2005 ◽  
Vol 28 (3) ◽  
pp. 264-282 ◽  
Author(s):  
Chih-Hung Chang ◽  
Bryce B. Reeve

This article provides an overview of item response theory (IRT) models and how they can be appropriately applied to patient-reported outcomes (PROs) measurement. Specifically, the following topics are discussed: (a) basics of IRT, (b) types of IRT models, (c) how IRT models have been applied to date, and (d) new directions in applying IRT to PRO measurements.
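As a pointer to the kinds of models such an overview covers, here is a small illustrative sketch, not taken from the article, of two common dichotomous IRT models: the two-parameter and three-parameter logistic models.

```python
# Illustrative only: response functions of the 2PL and 3PL models.
import numpy as np

def p_2pl(theta, a, b):
    """2PL: probability of endorsing/answering correctly, given trait theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """3PL: adds a lower asymptote (guessing parameter) c to the 2PL."""
    return c + (1.0 - c) * p_2pl(theta, a, b)

theta = np.linspace(-3, 3, 7)
print(p_2pl(theta, a=1.5, b=0.0))
print(p_3pl(theta, a=1.5, b=0.0, c=0.2))
```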


2016 ◽  
Vol 59 (2) ◽  
pp. 373-383 ◽  
Author(s):  
J. Mirjam Boeschen Hospers ◽  
Niels Smits ◽  
Cas Smits ◽  
Mariska Stam ◽  
Caroline B. Terwee ◽  
...  

Purpose We reevaluated the psychometric properties of the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1995) using item response theory, which describes item functioning along an ability continuum. Method Cross-sectional data from 2,352 adults with and without hearing impairment, ages 18–70 years, were analyzed. Participants completed the AIADH in the web-based prospective cohort study "Netherlands Longitudinal Study on Hearing." A graded response model was fitted to the AIADH data. Category response curves, item information curves, and the standard error as a function of self-reported hearing ability were plotted. Results The graded response model showed a good fit. Item information curves showed that the items measure most reliably for adults who reported hearing disability and less reliably for adults with normal hearing. The standard error plot showed that self-reported hearing ability is measured most reliably for adults reporting mild to moderate hearing disability. Conclusions This is one of the few item response theory studies of audiological self-reports. All AIADH items could be hierarchically placed on the self-reported hearing ability continuum, meaning they measure the same construct. This provides a promising basis for developing a clinically useful computerized adaptive test, in which item selection adapts to the hearing ability of the individual, resulting in efficient assessment of hearing disability.
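The standard error plot described above follows directly from the test information function, SE(θ) = 1/√I(θ), and the proposed computerized adaptive test would administer, at each step, the item that is most informative at the current ability estimate. The sketch below illustrates both ideas with a hypothetical item bank, not the AIADH parameter estimates.

```python
# A minimal sketch (hypothetical item parameters): standard error of
# measurement from the test information function, plus one maximum-information
# item-selection step of a computerized adaptive test.
import numpy as np

def grm_information(theta, a, b):
    """Graded-response-model item information at trait values theta."""
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    b = np.asarray(b, dtype=float)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    p_star = np.column_stack([np.ones(len(theta)), p_star, np.zeros(len(theta))])
    q = p_star * (1.0 - p_star)
    probs = p_star[:, :-1] - p_star[:, 1:]
    return a**2 * np.sum((q[:, :-1] - q[:, 1:])**2 / probs, axis=1)

# Hypothetical item bank: (discrimination, thresholds) per item.
bank = [(1.8, [-1.5, -0.5, 0.5]), (2.4, [-0.5, 0.5, 1.5]), (1.2, [0.0, 1.0, 2.0])]

theta = np.linspace(-3, 3, 61)
test_info = sum(grm_information(theta, a, b) for a, b in bank)
se = 1.0 / np.sqrt(test_info)        # SE(theta) = 1 / sqrt(test information)
print("measurement is most precise near theta =", theta[np.argmin(se)])

# CAT step: at the current ability estimate, administer the unanswered item
# with the largest information.
theta_hat, answered = 0.3, {0}
next_item = max((i for i in range(len(bank)) if i not in answered),
                key=lambda i: grm_information(theta_hat, *bank[i])[0])
print("next item to administer:", next_item)
```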


2016 ◽  
Vol 38 (4) ◽  
Author(s):  
Steven P. Reise

Item response theory (IRT) models emerged to solve practical testing problems in large-scale cognitive achievement and aptitude assessment. Within the last decade, an explosion of IRT applications has occurred in the non-cognitive domain. In this report, I highlight the development, implementation, and results of a single project: the Patient-Reported Outcomes Measurement Information System (PROMIS). The PROMIS project reflects the state-of-the-art application of IRT in the non-cognitive domain and has produced important advancements in patient-reported outcomes measurement. However, the project also illustrates challenges that confront researchers wishing to apply IRT to non-cognitive constructs. These challenges are: (a) selecting a population to set the metric for interpretation of item parameters, (b) working with non-normal, quasi-continuous latent traits, and (c) working with narrow-bandwidth constructs that have a limited pool of potential indicators. Differences between cognitive and non-cognitive measurement contexts are discussed, and directions for future research are suggested.


2017 ◽  
Vol 78 (3) ◽  
pp. 384-408 ◽  
Author(s):  
Yong Luo ◽  
Hong Jiao

Stan is a new Bayesian statistical software program that implements the powerful and efficient Hamiltonian Monte Carlo (HMC) algorithm. To date, there has been no source that systematically provides Stan code for various item response theory (IRT) models. This article provides Stan code for three representative IRT models: the three-parameter logistic IRT model, the graded response model, and the nominal response model. We demonstrate how IRT model comparison can be conducted with Stan and how the provided Stan code for simple IRT models can be easily extended to their multidimensional and multilevel cases.
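The article itself supplies the reference Stan programs. As an independent illustration only, not the authors' published code, the sketch below writes a minimal graded response model in Stan and fits it to simulated toy responses through the cmdstanpy interface; it assumes CmdStan 2.26+ for the array syntax, and all data names and the file name grm.stan are placeholders.

```python
# Illustrative sketch: a graded response model in Stan, fitted via cmdstanpy.
# Not the article's code; toy data only, to make the example runnable.
import numpy as np
from cmdstanpy import CmdStanModel

GRM_STAN = """
data {
  int<lower=1> N;                       // respondents
  int<lower=1> J;                       // items
  int<lower=2> K;                       // response categories
  array[N, J] int<lower=1, upper=K> y;  // observed responses
}
parameters {
  vector[N] theta;                      // latent trait
  vector<lower=0>[J] alpha;             // discriminations
  array[J] ordered[K - 1] kappa;        // category cutpoints
}
model {
  theta ~ std_normal();
  alpha ~ lognormal(0, 1);
  for (j in 1:J) kappa[j] ~ normal(0, 2);
  for (n in 1:N)
    for (j in 1:J)
      y[n, j] ~ ordered_logistic(alpha[j] * theta[n], kappa[j]);
}
"""

# Toy responses coded 1..K, only so the sketch runs end to end.
rng = np.random.default_rng(1)
N, J, K = 100, 5, 4
y = rng.integers(1, K + 1, size=(N, J))

with open("grm.stan", "w") as f:
    f.write(GRM_STAN)

model = CmdStanModel(stan_file="grm.stan")
fit = model.sample(data={"N": N, "J": J, "K": K, "y": y},
                   chains=2, iter_warmup=500, iter_sampling=500)
print(fit.summary().head(12))
```

Extending this toward the multidimensional or multilevel cases discussed in the article amounts to changing the linear predictor and the priors in the model block; the data block and sampling statement keep the same shape.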

