Predicting implementation: comparing validated measures of intention and assessing the role of motivation when designing behavioral interventions

Author(s):  
Jessica Fishman ◽  
Viktor Lushin ◽  
David S. Mandell

Background: Behavioral intention (which captures one’s level of motivation to perform a behavior) is considered a causal and proximal mechanism influencing the use of evidence-based practice (EBP). Implementation studies have measured intention differently, and it is unclear which measure is most predictive. Some use items referring to “evidence-based practice” in general, whereas others refer to a specific EBP. There are also unresolved debates about whether item stems should be worded “I intend to,” “I will,” or “How likely are you to,” and whether a single-item measure can suffice. Using each stem to refer either to a specific EBP or to “evidence-based practice” in general, this study compares the ability of these commonly used measures to predict future EBP implementation. Predictive validity is important for causal model testing and the development of effective implementation strategies. Methods: A longitudinal study enrolled 70 teachers to track their use of two EBPs and compare the predictive validity of six items measuring teachers’ intention. The measures differ by whether an item refers to a specific EBP or to “evidence-based practices” in general, and by whether the stem is worded in one of three ways: “I intend to,” “I will,” or “How likely are you to.” For each item, linear regressions estimated the variance explained in future behavior. We also compared the predictive validity of a single item versus an aggregate of items by inter-correlating the items using different stems and estimating the explained variance in EBP implementation. Results: Depending on the EBP and how intention was measured, the explained variance in implementation ranged from 3.5% to 29.0%. Measures that referred to a specific EBP, rather than to “evidence-based practices” in general, accounted for more variance in implementation (e.g., 29.0% vs. 8.6%, and 11.3% vs. 3.5%). Predictive validity also varied depending on whether stems were worded “I intend to,” “I will,” or “How likely are you to.” Conclusions: The observed strength of the association between intention and EBP use depends on how intention is measured. The association was much stronger when an item referred to a specific EBP rather than to EBP in general. To predict implementation, the results support using an aggregate of two or three intention items that refer to the specific EBP. An even more pragmatic single-item measure of intention can also predict implementation. As discussed, the relationship will also vary depending on the EBP, which has direct implications for causal model testing and the design of implementation strategies.
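As a concrete illustration of the analysis this abstract describes, here is a minimal sketch, assuming simulated 7-point intention items and a made-up outcome, of how the variance in later EBP use explained by a single intention item can be compared against an aggregate of items. All column names and the data-generating step are hypothetical, not the study's.

```python
# Hedged sketch: comparing predictive validity of intention items by the
# variance (R^2) each explains in later EBP use. Data are simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 70  # matches the study's enrollment
df = pd.DataFrame({
    "intend_specific": rng.integers(1, 8, n),  # "I intend to use <specific EBP>"
    "will_specific": rng.integers(1, 8, n),    # "I will use <specific EBP>"
    "likely_specific": rng.integers(1, 8, n),  # "How likely are you to use <specific EBP>"
})
# Simulated implementation outcome, loosely driven by one item plus noise
df["ebp_use"] = 0.5 * df["intend_specific"] + rng.normal(0, 1.5, n)

def r_squared(predictors):
    """Variance in implementation explained by the given intention item(s)."""
    X = df[predictors].to_numpy()
    y = df["ebp_use"].to_numpy()
    return LinearRegression().fit(X, y).score(X, y)

# Single item vs. an aggregate of items sharing the same referent
print("single item:", r_squared(["intend_specific"]))
print("aggregate:  ", r_squared(["intend_specific", "will_specific", "likely_specific"]))
```

On real data, the same comparison would be repeated for each stem wording and each referent (a specific EBP versus "evidence-based practices" in general), mirroring the six items the study describes.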

2000 ◽  
Vol 23 (3) ◽  
pp. 186
Author(s):  
Evelyn Hovenga ◽  
Dawn Hay

This paper reports the results of a national event held during November 1998 to educate health professionals representing multiple disciplines about evidence-based practice (EBP), its implementation, and the use of informatics to support EBP. A combination of educational delivery methods and multimedia was used. Through local group work, participants identified obstacles to EBP implementation and developed strategies to overcome these in their own local environments. Major and common findings were a lack of the management support and infrastructures needed for the successful adoption of evidence-based practices by all.


2011 ◽  
Vol 46 (6) ◽  
pp. 655-664 ◽  
Author(s):  
Dorice A. Hankemeier ◽  
Bonnie L. Van Lunen

Context: Understanding the implementation strategies of Approved Clinical Instructors (ACIs) who use evidence-based practice (EBP) in clinical instruction will help promote the use of EBP in clinical practice. Objective: To examine the perspectives and experiences of ACIs using EBP concepts in undergraduate athletic training education programs to determine the importance of using these concepts in clinical practice, clinical EBP implementation strategies for students, and challenges of implementing EBP in clinical practice while mentoring and teaching their students. Design: Qualitative study. Setting: Telephone interviews. Patients or Other Participants: Sixteen ACIs (11 men, 5 women; experience as a certified athletic trainer = 10 ± 4.7 years, experience as an ACI = 6.8 ± 3.9 years) were interviewed. Data Collection and Analysis: We interviewed each participant by telephone. Interview transcripts were analyzed and coded for common themes and subthemes regarding implementation strategies. Established themes were triangulated through peer review and member checking to verify the data. Results: The ACIs identified EBP implementation as important for validating the profession, shifting the practice paradigm, improving patient care, and improving student educational experiences. They promoted 3 methods of implementing EBP concepts with their students: self-discovery, promoting critical thinking, and sharing information. They assisted students with the steps of EBP and often faced challenges in implementing the first 3 steps: defining a clinical question, searching the literature, and appraising the literature. Finally, ACIs indicated that modeling the behavior of making clinical decisions based on evidence was the best way to encourage students to continue using EBP. Conclusions: Athletic training education program directors should encourage and recommend specific techniques for EBP implementation in the clinical setting. The ACIs believed that role modeling is a strategy that can be used to promote the use of EBP with students. Training of ACIs should include methods by which to address the steps of the EBP process while still promoting critical thinking.


2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Sarah A. Birken ◽  
Graeme Currie

Middle-level managers (MLMs; i.e., healthcare professionals who may fill roles including obtaining and diffusing information, adapting information and the intervention, mediating between strategy and day-to-day activities, and selling intervention implementation) have been identified as having significant influence on evidence-based practice (EBP) implementation. We argue that understanding whether and how MLMs influence EBP implementation is aided by drawing upon organization theory. Organization theories propose strategies for increasing MLMs’ opportunities to facilitate implementation by optimizing their appreciation of constructs that we argue have heretofore been treated separately, to the detriment of understanding and facilitating implementation: EBPs, context, and implementation strategies. Specifically, organization theory encourages us to delineate different types of MLMs and to consider how generalist and hybrid MLMs make different contributions to EBP implementation. Organization theories also suggest that MLMs’ understanding of context allows them to adapt EBPs to promote implementation and effectiveness; that MLMs’ potential vertical linking-pin role may be supported by increasing MLMs’ interactions with the external environment, helping them to understand strategic pressures and opportunities; and that lateral connections among MLMs have the potential to optimize their contribution to EBP implementation as a collective force. We end with recommendations for practice and future research.


2021 ◽  
Vol 2 ◽  
pp. 263348952110160
Author(s):  
Callie Walsh-Bailey ◽  
Lorella G Palazzo ◽  
Salene MW Jones ◽  
Kayne D Mettert ◽  
Byron J Powell ◽  
...  

Background: Tailoring implementation strategies and adapting treatments to better fit the local context may improve their effectiveness. However, there is a dearth of valid, reliable, pragmatic measures that allow for the prospective tracking of strategies and adaptations according to reporting recommendations. This study describes the development and pilot testing of three tools designed to serve this purpose. Methods: Measure development was informed by two systematic reviews of the literature (implementation strategies and treatment adaptation). The three resulting tools vary with respect to the degree of structure (brainstorming log = low, activity log = moderate, detailed tracking log = high). To prospectively track treatment adaptations and implementation strategies, three stakeholder groups (treatment developer, implementation practitioners, and mental health providers) were randomly assigned one tool per week through an anonymous web-based survey for 12 weeks and incentivized to participate. Three established implementation outcome measures, the Acceptability of Intervention Measure, the Intervention Appropriateness Measure, and the Feasibility of Intervention Measure, were used to assess the tools. Semi-structured interviews were conducted to gather more nuanced information from stakeholders regarding their perceptions of the tools and the tracking process. Results: The three tracking tools demonstrated moderate to good acceptability, appropriateness, and feasibility; the activity log was deemed the most feasible of the three. Implementation practitioners rated the tools the highest of the three stakeholder groups. The tools took an average of 15 minutes or less to complete. Conclusion: This study sought to fill methodological gaps that prevent stakeholders and researchers from discerning which strategies are most important to deploy for promoting implementation and sustainment of evidence-based practices. The tools allow researchers and practitioners to track whether activities were treatment adaptations or implementation strategies and which barrier(s) each targets. They could inform prospective tailoring of implementation strategies and treatment adaptations, which would promote scale-out and spread.

Plain Language Summary: Strategies to support the implementation of evidence-based practices may be more successful if they are carefully customized based on local factors. Evidence-based practices themselves may be thoughtfully changed to better meet the needs of the settings and recipients. This pilot study aimed to create several types of tools to help individuals involved in implementation efforts track the actions they take to modify and implement interventions. The tools allow individuals to track the types of activities they are involved in, when the activities occurred, who was involved in the implementation efforts, and the reasons or rationale for the actions. The three tools used a combination of open-ended and forced-response questions to test how the type of data recorded changed with each tool’s degree of structure. Participants generally found the tools quick and easy to use and helpful in planning the delivery of an evidence-based practice. Most participants wanted more training in implementation science terminology and in how to complete the tracking tools. Participating mental health providers would have liked more opportunities to review the data collected from the tools with their supervisors, to use the data to improve delivery of the evidence-based practice. These tools can help researchers, providers, and staff involved in implementation efforts better understand what actions are needed to improve implementation success. Future research should address gaps identified in this study, such as the need to involve more participants in the tool development process.
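For readers unfamiliar with the three outcome measures named above, here is a minimal scoring sketch. It assumes the commonly used four-item, 5-point Likert format for the AIM, IAM, and FIM, with scores taken as the item mean; the responses are invented for illustration.

```python
# Illustrative scoring of the three implementation outcome measures.
# Assumes the common four-item, 1-5 Likert format scored by averaging;
# the responses below are made up.
from statistics import mean

responses = {
    "AIM": [4, 4, 5, 4],  # Acceptability of Intervention Measure
    "IAM": [4, 3, 4, 4],  # Intervention Appropriateness Measure
    "FIM": [5, 4, 4, 5],  # Feasibility of Intervention Measure
}

for measure, items in responses.items():
    print(f"{measure}: {mean(items):.2f} / 5")
```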


2020 ◽  
Vol 29 (2) ◽  
pp. 688-704
Author(s):  
Katrina Fulcher-Rood ◽  
Anny Castilla-Earls ◽  
Jeff Higginbotham

Purpose: The current investigation is a follow-up to a previous study examining child language diagnostic decision making among school-based speech-language pathologists (SLPs). The purpose of this study was to examine the SLPs’ perspectives regarding the use of evidence-based practice (EBP) in their clinical work. Method: Semistructured phone interviews were conducted with 25 school-based SLPs who had participated in an earlier study by Fulcher-Rood et al. (2018). SLPs were asked questions regarding their definition of EBP, the value of research evidence, the contexts in which they implement scientific literature in clinical practice, and the barriers to implementing EBP. Results: SLPs’ definitions of EBP differed from current definitions in that SLPs included only the use of research findings. SLPs tended to discuss EBP as it relates to treatment rather than assessment. Reported barriers to EBP implementation were insufficient time, limited funding, and restrictions from their employment setting. SLPs found it difficult to translate research findings to clinical practice. SLPs implemented external research evidence when they did not have enough clinical expertise regarding a specific client or when they needed scientific evidence to support a strategy they used. Conclusions: SLPs appear to use EBP for specific reasons and not for every clinical decision they make. In addition, SLPs rely on EBP for treatment decisions rather than assessment decisions. Educational systems potentially present other challenges that need to be considered for EBP implementation. Considerations for implementation science and the research-to-practice gap are discussed.


Author(s):  
Gregory A. Aarons ◽  
Joanna C. Moullin ◽  
Mark G. Ehrhart

Both organizational characteristics and specific organizational strategies are important for the effective dissemination and implementation of evidence-based practices (EBPs) in health and allied health care settings, as well as mental health, alcohol/drug treatment, and social service settings. One of the primary goals of this chapter is to support implementers and leaders within organizations in attending to and shaping the context in which implementation takes place in order to increase the likelihood of implementation success and long-term sustainment. The chapter summarizes some of the most critical organizational factors and strategies likely to impact successful evidence-based practice implementation. There are myriad approaches to supporting organizational development and change; this chapter focuses on issues supported by relevant scientific literatures, particularly those germane to EBP implementation in health care and related settings.


2020 ◽  
Author(s):  
Emily R Haines ◽  
Alex Dopp ◽  
Aaron R. Lyon ◽  
Holly O. Witteman ◽  
Miriam Bender ◽  
...  

Background: Attempting to implement evidence-based practices in contexts for which they are not well suited may compromise their fidelity and effectiveness or burden users (e.g., patients, providers, healthcare organizations) with elaborate strategies intended to force implementation. To improve the fit between evidence-based practices and contexts, implementation science experts have called for methods for adapting evidence-based practices and contexts and for tailoring implementation strategies; yet methods for considering the dynamic interplay among evidence-based practices, contexts, and implementation strategies remain lacking. We argue that harmonizing the three can be accomplished with User-Centered Design, an iterative and highly stakeholder-engaged set of principles and methods. Methods: This paper presents a case example in which we used User-Centered Design methods and a three-phase User-Centered Design process to design a care coordination intervention for young adults with cancer. Specifically, we used usability testing to redesign an existing evidence-based practice (i.e., a patient-reported outcome measure that served as the basis for the intervention) to optimize usability and usefulness, an ethnographic user and contextual inquiry to prepare the context (i.e., a comprehensive cancer center) to promote receptivity to implementation, and iterative prototyping workshops with a multidisciplinary design team to design the care coordination intervention and anticipate the implementation strategies needed to enhance contextual fit. Results: Our User-Centered Design process resulted in the Young Adult Needs Assessment and Service Bridge (NA-SB), including a patient-reported outcome measure redesigned to promote usability and usefulness and a protocol for its implementation. By ensuring that NA-SB directly responded to features of users and context, we designed NA-SB for implementation, potentially minimizing the strategies needed to address misalignment that might otherwise have existed. Furthermore, we designed NA-SB for scale-up; by engaging users from other cancer programs across the country to identify points of contextual variation that would require flexibility in delivery, we created a tool not overly tailored to one unique context. Conclusions: User-Centered Design can help maximize usability and usefulness when designing evidence-based practices, preparing contexts, and informing implementation strategies, in effect harmonizing evidence-based practices, contexts, and implementation strategies to promote implementation and effectiveness.


2007 ◽  
Vol 44 (3) ◽  
pp. 213-224 ◽  
Author(s):  
Charles A. Rapp ◽  
Diane Etzel-Wise ◽  
Doug Marty ◽  
Melinda Coffman ◽  
Linda Carlson ◽  
...  

Author(s):  
Sara Debus-Sherrill ◽  
Alex Breno ◽  
Faye S. Taxman

Research on the staff and organizational factors that affect receptivity, adoption, feasibility, and utilization of innovations in justice settings is limited. This study uses survey data from 349 employees in one probation agency to assess how staff characteristics and perceived organizational factors influence attitudes toward evidence-based practices (EBPs) and their self-reported use. Staff characteristics, including education and knowledge about EBPs, and perceptions of the organization, including cynicism about the organization’s ability to change, predicted EBP outcomes. Staff age, tenure at the agency, and caseload size affected perceptions of organizational culture but did not predict attitudes toward or use of EBPs. There is weak evidence for a relationship between self-reported use of EBPs and attitudinal support for EBPs, prior EBP training, and knowledge of EBPs. This study contributes to an emerging body of literature about the impact of various individual and organizational factors on support for EBPs, with important lessons for implementation.
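A hedged sketch of the kind of model this abstract implies (attitudes toward EBPs regressed on staff and organizational predictors) follows. All variable names, codings, and the simulated data are assumptions for illustration, not the study's survey.

```python
# Hypothetical sketch: regressing attitudes toward EBPs on staff and
# organizational predictors. Variable names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 349  # matches the reported number of survey respondents
df = pd.DataFrame({
    "ebp_attitudes": rng.normal(3.5, 0.6, n),  # outcome: attitudinal support
    "education": rng.integers(0, 2, n),        # 1 = graduate degree (assumed coding)
    "ebp_knowledge": rng.normal(3.0, 0.8, n),
    "org_cynicism": rng.normal(2.5, 0.7, n),   # cynicism about organizational change
    "tenure_years": rng.integers(1, 25, n),
    "caseload": rng.integers(30, 120, n),
})

model = smf.ols(
    "ebp_attitudes ~ education + ebp_knowledge + org_cynicism + tenure_years + caseload",
    data=df,
).fit()
print(model.summary().tables[1])  # coefficient for each predictor
```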


2020 ◽  
pp. 082957352097491
Author(s):  
Ryan L. Farmer ◽  
Imad Zaheer ◽  
Gary J. Duhon ◽  
Stephanie Ghazal

Through innovation in research and self-correction, it is inevitable that some practices will be replaced or discredited. De-implementation of discredited and low-value practices is a necessary step for school psychologists’ maintenance of evidence-based practices and for reducing unnecessary costs and risk. However, efforts to clarify de-implementation frameworks and strategies are ongoing. The scope of this paper follows McKay et al. in considering the potential for de-implementation strategies to be informed by applied behavior analysis and operant learning theory. We conceptualize low-value practice as sets of behaviors evoked by their context and maintained by their consequences, and thus de-implementation as behavior reduction. We discuss the need for future research given this perspective.
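As a toy illustration of the operant framing in this abstract, the following sketch simulates a practice whose strength is maintained by reinforcement and declines under extinction once the maintaining consequence is removed. The update rule and all parameters are illustrative assumptions, not drawn from the paper.

```python
# Toy simulation of de-implementation as extinction: a behavior's strength
# grows when reinforced and decays when reinforcement is withheld.
# Update rule and parameters are illustrative assumptions.
import random

def simulate(trials, p_reinforce, strength=0.8, learn=0.1, decay=0.05):
    """Track response strength for a practice under a given reinforcement rate."""
    history = []
    for _ in range(trials):
        if random.random() < p_reinforce:
            strength += learn * (1.0 - strength)  # consequence strengthens the behavior
        else:
            strength -= decay * strength          # no consequence: extinction
        history.append(strength)
    return history

maintained = simulate(trials=50, p_reinforce=0.8)    # practice still reinforced
extinguished = simulate(trials=50, p_reinforce=0.0)  # reinforcement withdrawn
print(f"after 50 trials: maintained={maintained[-1]:.2f}, extinguished={extinguished[-1]:.2f}")
```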

