Assessing the Usability of Complex Psychosocial Interventions: The Intervention Usability Scale

2020 ◽  
Author(s):  
Aaron R Lyon ◽  
Michael D. Pullmann ◽  
Jedediah Jacobson ◽  
Katie Osterhage ◽  
Morhaf Al Achkar ◽  
...  

Abstract Background: Usability – the extent to which an intervention can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction – is a key determinant of implementation success. However, usability is rarely assessed in implementation research, and no instruments have been developed to measure the design quality of complex health interventions, such as the evidence-based psychosocial interventions that characterize the majority of effective practices in mental and behavioral health services. This study evaluated the structural validity of the Intervention Usability Scale (IUS), an adapted version of the well-established System Usability Scale for digital technologies, when measuring the usability of complex health interventions. Prior studies of the original System Usability Scale have found both one- and two-factor solutions, both of which were examined in the current study of the IUS. Methods: A survey was administered to 205 healthcare professionals working at 11 primary care sites. Surveys collected demographic information, including each participant’s professional role (i.e., medical provider, mental/behavioral health provider, pharmacist), and IUS ratings for one of six common evidence-based psychosocial interventions (e.g., cognitive behavioral therapy, motivational interviewing) that they reported using most regularly. Factor analyses replicated the procedures used in prior research on the System Usability Scale, and a sensitivity analysis using analyses of variance compared IUS scores across different groups of respondents and interventions assessed. Results: Analyses indicated that a two-factor solution (with “usable” and “learnable” subscales) in which one item was removed best fit the data. This solution accounted for 52.6% of the variance observed. Inter-item reliabilities for the total score, usable subscale, and learnable subscale were α = .83, α = .82, and α = .63, respectively.
Resulting scores indicated that usability ranged from below acceptable standards to good, depending on the intervention. On average, behavioral health providers found the interventions to be more usable than other types of healthcare providers did. Conclusions: The current study provides evidence for a two-factor IUS structure consistent with some prior research, as well as acceptable reliability and sensitivity to role and intervention. Future directions for implementation research evaluating the usability of complex health interventions are discussed. Contributions to the Literature:
• The ease with which interventions can be readily adopted by service providers is a key predictor of implementation success, but very little implementation research has attended to intervention usability.
• No instruments exist to evaluate the usability of complex health interventions, such as the evidence-based practices that are commonly used to integrate mental and behavioral health services into primary care.
• The current study evaluated the first instrument for assessing the usability of complex health interventions and found that its factor structure replicated some research with the original version of the instrument, a scale developed to assess the usability of digital systems.
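The inter-item reliabilities reported above (α = .83, .82, .63) are Cronbach's alpha values. As a generic illustration only (not the authors' analysis code), alpha for a respondents-by-items rating matrix can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) rating matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

For example, two perfectly correlated items yield α = 1.0, and alpha falls as items covary less, which is why the two-item-heavy "learnable" subscale above shows a lower α than the longer "usable" subscale.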


2021 ◽  
Vol 2 ◽  
pp. 263348952098782
Author(s):  
Aaron R Lyon ◽  
Michael D Pullmann ◽  
Jedediah Jacobson ◽  
Katie Osterhage ◽  
Morhaf Al Achkar ◽  
...  

Background: Usability—the extent to which an intervention can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction—may be a key determinant of implementation success. However, few instruments have been developed to measure the design quality of complex health interventions (i.e., those with several interacting components). This study evaluated the structural validity of the Intervention Usability Scale (IUS), an adapted version of the well-established System Usability Scale (SUS) for digital technologies, to measure the usability of a leading complex psychosocial intervention, Motivational Interviewing (MI), for behavioral health service delivery in primary care. Prior SUS studies have found both one- and two-factor solutions, both of which were examined in this study of the IUS. Method: A survey administered to 136 medical professionals from 11 primary-care sites collected demographic information and IUS ratings for MI, the evidence-based psychosocial intervention that primary-care providers reported using most often for behavioral health service delivery. Factor analyses replicated procedures used in prior research on the SUS. Results: Analyses indicated that a two-factor solution (with “usable” and “learnable” subscales) best fit the data, accounting for 54.1% of the variance. Inter-item reliabilities for the total score, usable subscale, and learnable subscale were α = .83, α = .84, and α = .67, respectively. Conclusion: This study provides evidence for a two-factor IUS structure consistent with some prior research, as well as acceptable reliability. Implications for implementation research evaluating the usability of complex health interventions are discussed, including the potential for future comparisons across multiple interventions and provider types, as well as the use of the IUS to evaluate the relationship between usability and implementation outcomes such as feasibility. 
Plain language abstract: The ease with which evidence-based psychosocial interventions (EBPIs) can be readily adopted and used by service providers is a key predictor of implementation success, but very little implementation research has attended to intervention usability. No quantitative instruments exist to evaluate the usability of complex health interventions, such as the EBPIs that are commonly used to integrate mental and behavioral health services into primary care. This article describes the evaluation of the first quantitative instrument for assessing the usability of complex health interventions and found that its factor structure replicated some research with the original version of the instrument, a scale developed to assess the usability of digital systems.
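The two-factor solutions reported for the SUS and IUS come from exploratory factor analysis of the item correlation matrix. As an illustrative sketch with simulated data (the study's exact extraction and rotation procedures follow the cited SUS research, and the matrix `X` here is hypothetical), an unrotated two-factor loading matrix can be obtained from an eigendecomposition:

```python
import numpy as np

def two_factor_loadings(X: np.ndarray) -> np.ndarray:
    """Unrotated two-factor loadings from the item correlation matrix."""
    R = np.corrcoef(X, rowvar=False)            # items x items correlation matrix
    vals, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:2]            # indices of the two largest eigenvalues
    return vecs[:, top] * np.sqrt(vals[top])    # loading = eigenvector * sqrt(eigenvalue)

rng = np.random.default_rng(0)
X = rng.normal(size=(136, 10))   # hypothetical: 136 respondents x 10 IUS items
L = two_factor_loadings(X)       # 10 items x 2 factors
```

In practice the loadings would then be rotated and inspected to assign items to the "usable" and "learnable" subscales.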


2021 ◽  
Vol 2 ◽  
pp. 263348952098825
Author(s):  
Cheri J Shapiro ◽  
Kathleen Watson MacDonell ◽  
Mariah Moran

Background: Among the many variables that affect implementation of evidence-based interventions in real-world settings, self-efficacy is one of the most important factors at the provider level of the social ecology. Yet, research on the construct of provider self-efficacy remains limited. Objectives: This scoping review was conducted to enhance understanding of the construct of provider self-efficacy and to examine how the construct is defined and measured in the context of implementation of evidence-based mental health interventions. Design: Online databases were used to identify 190 papers published from 1999 to June of 2018 that included search terms for providers, evidence-based, and self-efficacy. To be eligible for the scoping review, papers needed to focus on the self-efficacy of mental health providers to deliver evidence-based psychosocial interventions. A total of 15 publications were included in the review. Results: The construct of provider self-efficacy is not clearly defined but is typically described as confidence to deliver a specific intervention or practice. A range of measures are used to assess provider self-efficacy across both provider and intervention types. Conclusions: Standardized definition and measurement of provider self-efficacy are needed to advance practice and implementation research. Plain language abstract: Provider self-efficacy is known to influence implementation of evidence-based mental health interventions. However, the ways in which provider self-efficacy is defined and measured in the implementation research literature are not well understood; furthermore, it is not clear what types of providers and interventions are represented in this literature.
This scoping review adds to current research by revealing that there is no agreed-upon definition or measure of provider self-efficacy in the context of implementation of evidence-based interventions, and that the research includes multiple types of providers (e.g., social workers, counselors, psychologists) and interventions. Self-efficacy appears to change as a function of training and support. To further research in this area, a common definition and agreed-upon measures of this construct are needed.


Author(s):  
Keri J. S. Brady ◽  
Michelle P. Durham ◽  
Alex Francoeur ◽  
Cameron Henneberg ◽  
Avanti Adhia ◽  
...  

2021 ◽  
Vol 2 ◽  
pp. 263348952110188
Author(s):  
Byron J Powell ◽  
Kayne D Mettert ◽  
Caitlin N Dorsey ◽  
Bryan J Weiner ◽  
Cameo F Stanick ◽  
...  

Background: Organizational culture, organizational climate, and implementation climate are key organizational constructs that influence the implementation of evidence-based practices. However, there has been little systematic investigation of the availability of psychometrically strong measures that can be used to assess these constructs in behavioral health. This systematic review identified and assessed the psychometric properties of measures of organizational culture, organizational climate, implementation climate, and related subconstructs as defined by the Consolidated Framework for Implementation Research (CFIR) and Ehrhart and colleagues. Methods: Data collection involved search string generation, title and abstract screening, full-text review, construct assignment, and citation searches for all known empirical uses. Data relevant to nine psychometric criteria from the Psychometric and Pragmatic Evidence Rating Scale (PAPERS) were extracted: internal consistency, convergent validity, discriminant validity, known-groups validity, predictive validity, concurrent validity, structural validity, responsiveness, and norms. Extracted data for each criterion were rated on a scale from −1 (“poor”) to 4 (“excellent”), and each measure was assigned a total score (highest possible score = 36) that formed the basis for head-to-head comparisons of measures for each focal construct. Results: We identified full measures or relevant subscales of broader measures for organizational culture (n = 21), organizational climate (n = 36), implementation climate (n = 2), tension for change (n = 2), compatibility (n = 6), relative priority (n = 2), organizational incentives and rewards (n = 3), goals and feedback (n = 3), and learning climate (n = 2). Psychometric evidence was most frequently available for internal consistency and norms. Information about other psychometric properties was less available.
Median ratings for psychometric properties across categories of measures ranged from “poor” to “good.” There was limited evidence of responsiveness or predictive validity. Conclusion: While several promising measures were identified, the overall state of measurement related to these constructs is poor. To enhance understanding of how these constructs influence implementation research and practice, measures that are sensitive to change and predictive of key implementation and clinical outcomes are required. There is a need for further testing of the most promising measures, and ample opportunity to develop additional psychometrically strong measures of these important constructs. Plain Language Summary: Organizational culture, organizational climate, and implementation climate can play a critical role in facilitating or impeding the successful implementation and sustainment of evidence-based practices. Advancing our understanding of how these contextual factors independently or collectively influence implementation and clinical outcomes requires measures that are reliable and valid. Previous systematic reviews identified measures of organizational factors that influence implementation, but none focused explicitly on behavioral health; focused solely on organizational culture, organizational climate, and implementation climate; or assessed the evidence base of all known uses of a measure within a given area, such as behavioral health-focused implementation efforts. The purpose of this study was to identify and assess the psychometric properties of measures of organizational culture, organizational climate, implementation climate, and related subconstructs that have been used in behavioral health-focused implementation research.
We identified 21 measures of organizational culture, 36 measures of organizational climate, 2 measures of implementation climate, 2 measures of tension for change, 6 measures of compatibility, 2 measures of relative priority, 3 measures of organizational incentives and rewards, 3 measures of goals and feedback, and 2 measures of learning climate. Some promising measures were identified; however, the overall state of measurement across these constructs is poor. This review highlights specific areas for improvement and suggests the need to rigorously evaluate existing measures and develop new measures.
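The PAPERS scoring scheme described above is simple to operationalize: each of the nine psychometric criteria is rated from −1 ("poor") to 4 ("excellent"), and the ratings are summed for a maximum total of 9 × 4 = 36. A minimal sketch (the function name and dictionary interface are hypothetical, not part of PAPERS itself):

```python
# Nine PAPERS criteria, as listed in the review above.
CRITERIA = [
    "internal_consistency", "convergent_validity", "discriminant_validity",
    "known_groups_validity", "predictive_validity", "concurrent_validity",
    "structural_validity", "responsiveness", "norms",
]

def papers_total(ratings: dict) -> int:
    """Sum per-criterion ratings (-1..4 each) into a PAPERS-style total (max 36)."""
    assert set(ratings) == set(CRITERIA), "rate all nine criteria"
    assert all(-1 <= r <= 4 for r in ratings.values()), "ratings range from -1 to 4"
    return sum(ratings.values())
```

A measure rated "excellent" on every criterion would score 36, the head-to-head ceiling used in the review.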


2018 ◽  
Vol 53 ◽  
pp. 1-11 ◽  
Author(s):  
Kyle Possemato ◽  
Emily M. Johnson ◽  
Gregory P. Beehler ◽  
Robyn L. Shepardson ◽  
Paul King ◽  
...  

2017 ◽  
Vol 42 (2) ◽  
pp. 108-116 ◽  
Author(s):  
Flavio F. Marsiglia ◽  
Patricia Dustman ◽  
Mary Harthun ◽  
Chelsea Coyne Ritland ◽  
Adriana Umaña-Taylor
