Implementation of clinical practice changes in the PICU: a qualitative study using and refining the iPARIHS framework

2020 ◽  
Author(s):  
Katherine M. Steffen ◽  
Laura M Holdsworth ◽  
Mackenzie Ford ◽  
Grace M. Lee ◽  
Steven M. Asch ◽  
...  

Abstract Background: As in many settings, implementation of evidence-based practices often falls short in pediatric intensive care units (PICUs). Very few prior studies have applied implementation science frameworks to understand how best to improve practices in this unique environment. We used the relatively new integrated Promoting Action on Research Implementation in Health Services (iPARIHS) framework to assess practice improvement in the PICU, and to explore the utility of the framework itself for that purpose. Methods: We used the iPARIHS framework to guide development of a semi-structured interview tool to examine barriers, facilitators, and the process of change in the PICU. A framework approach to qualitative analysis, developed around iPARIHS constructs and subconstructs, helped identify patterns and themes in provider interviews. We assessed the utility of iPARIHS to inform PICU practice change. Results: Fifty multi-professional providers working in 8 U.S. PICUs completed interviews. iPARIHS constructs shaped development of a process model for change consisting of phases that include planning, a decision to adopt change, implementation and facilitation, and sustainability; the PICU environment shaped each phase. Large, complex multi-professional teams and high-stakes work at near-capacity impaired receptivity to change. While unit leaders made decisions to pursue change, providers' willingness to accept change was based on the evidence for the change and on providers' experiences, beliefs, and capacity to integrate change into a demanding workflow. Limited analytic structures and resources frustrated attempts to monitor changes' impacts. Variable provider engagement, time allocated to work on changes, and limited collaboration impacted facilitation.
iPARIHS constructs were useful in exploring implementation; however, we identified inter-relation of subconstructs, unique concepts not captured by the framework, and a need for subconstructs to further describe facilitation. Conclusions: The PICU environment significantly shaped implementation. The described process model for implementation may be useful to guide efforts to integrate changes and select implementation strategies. iPARIHS was adequate to identify barriers and facilitators of change; however, further elaboration of subconstructs for facilitation would be helpful to operationalize the framework. Trial registration: not applicable, as no health care intervention was performed.


2021 ◽  
pp. 109442812110029
Author(s):  
Tianjun Sun ◽  
Bo Zhang ◽  
Mengyang Cao ◽  
Fritz Drasgow

With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on how to reduce faking via instrument design, warnings, and statistical corrections for faking. This article took a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced faking experimental design. Study 2 examined self-reported personality assessment responses when some people were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.
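The percentage-cutoff classification mentioned in the abstract can be illustrated with a minimal sketch: a respondent is flagged as a likely faker when their share of endpoint ("extreme") responses exceeds a chosen cutoff. The cutoff value, scale range, and function names below are illustrative assumptions, not details from the study.

```python
def extreme_response_rate(responses, scale_min=1, scale_max=5):
    """Fraction of items answered at either endpoint of the rating scale."""
    extremes = sum(1 for r in responses if r in (scale_min, scale_max))
    return extremes / len(responses)

def classify_faker(responses, cutoff=0.6, scale_min=1, scale_max=5):
    """Flag a respondent as a likely faker when their extreme-response
    rate exceeds the cutoff (cutoff of 0.6 is illustrative only)."""
    return extreme_response_rate(responses, scale_min, scale_max) > cutoff

# Honest-looking profile: mostly mid-scale answers
print(classify_faker([2, 3, 4, 3, 2, 3, 4, 2]))  # False
# Faking-consistent profile: mostly endpoint answers
print(classify_faker([5, 5, 1, 5, 5, 5, 1, 5]))  # True
```

In practice the cutoff would be calibrated against known honest and induced-faking samples, which is how a classification precision figure like the reported 85% would be estimated.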


2020 ◽  
Author(s):  
Emily R Haines ◽  
Alex Dopp ◽  
Aaron R. Lyon ◽  
Holly O. Witteman ◽  
Miriam Bender ◽  
...  

Abstract Background. Attempting to implement evidence-based practices in contexts for which they are not well-suited may compromise their fidelity and effectiveness or burden users (e.g., patients, providers, healthcare organizations) with elaborate strategies intended to force implementation. To improve the fit between evidence-based practices and contexts, implementation science experts have called for methods for adapting evidence-based practices and contexts, and tailoring implementation strategies; yet, methods for considering the dynamic interplay among evidence-based practices, contexts, and implementation strategies remain lacking. We argue that harmonizing the three can be accomplished with User-Centered Design, an iterative and highly stakeholder-engaged set of principles and methods. Methods. This paper presents a case example in which we used User-Centered Design methods and a three-phase User-Centered Design process to design a care coordination intervention for young adults with cancer. Specifically, we used usability testing to redesign an existing evidence-based practice (i.e., patient-reported outcome measure that served as the basis for intervention) to optimize usability and usefulness, an ethnographic user and contextual inquiry to prepare the context (i.e., comprehensive cancer center) to promote receptivity to implementation, and iterative prototyping workshops with a multidisciplinary design team to design the care coordination intervention and anticipate implementation strategies needed to enhance contextual fit. Results. Our User-Centered Design process resulted in the Young Adult Needs Assessment and Service Bridge (NA-SB), including a patient-reported outcome measure redesigned to promote usability and usefulness and a protocol for its implementation. 
By ensuring NA-SB directly responded to features of users and context, we designed NA-SB for implementation, potentially minimizing the strategies needed to address misalignment that may have otherwise existed. Furthermore, we designed NA-SB for scale-up; by engaging users from other cancer programs across the country to identify points of contextual variation which would require flexibility in delivery, we created a tool not overly tailored to one unique context. Conclusions. User-Centered Design can help maximize usability and usefulness when designing evidence-based practices, preparing contexts, and informing implementation strategies; in effect, harmonizing evidence-based practices, contexts, and implementation strategies to promote implementation and effectiveness.


JAMIA Open ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. 49-61
Author(s):  
Tera L Reynolds ◽  
Patricia R DeLucia ◽  
Karen A Esquibel ◽  
Todd Gage ◽  
Noah J Wheeler ◽  
...  

Abstract Objective To evaluate end-user acceptance and the effect of a commercial handheld decision support device in pediatric intensive care settings. The technology, pac2, was designed to assist nurses in calculating medication dose volumes and infusion rates at the bedside. Materials and Methods The devices, manufactured by InformMed Inc., were deployed in the pediatric and neonatal intensive care units in 2 health systems. This mixed methods study assessed end-user acceptance, as well as pac2’s effect on the cognitive load associated with bedside dose calculations and the rate of administration errors. Towards this end, data were collected in both pre- and postimplementation phases, including through ethnographic observations, semistructured interviews, and surveys. Results Although participants desired a handheld decision support tool such as pac2, their use of pac2 was limited. The nature of the critical care environment, nurses’ risk perceptions, and the usability of the technology emerged as major barriers to use. Data did not reveal significant differences in cognitive load or administration errors after pac2 was deployed. Discussion and Conclusion Despite its potential for reducing adverse medication events, the commercial standalone device evaluated in the study was not used by the nursing participants and thus had very limited effect. Our results have implications for the development and deployment of similar mobile decision support technologies. For example, they suggest that integrating the technology into hospitals’ existing IT infrastructure and employing targeted implementation strategies may facilitate nurse acceptance. Ultimately, the usability of the design will be essential to reaping any potential benefits.
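The bedside calculations the device supports are, at their core, standard dosing arithmetic. A minimal sketch follows; the formulas are the conventional ones (dose divided by stock concentration; volume converted to an hourly pump rate), not a description of pac2's internal implementation.

```python
def dose_volume_ml(dose_mg, concentration_mg_per_ml):
    """Volume to draw up (mL) for a given dose and stock concentration."""
    if concentration_mg_per_ml <= 0:
        raise ValueError("concentration must be positive")
    return dose_mg / concentration_mg_per_ml

def infusion_rate_ml_per_hr(total_volume_ml, duration_min):
    """Pump rate (mL/hr) needed to deliver a volume over a given duration."""
    if duration_min <= 0:
        raise ValueError("duration must be positive")
    return total_volume_ml * 60 / duration_min

# 40 mg ordered from a 10 mg/mL stock -> draw up 4 mL
print(dose_volume_ml(40, 10))           # 4.0
# 50 mL to be delivered over 30 minutes -> set pump to 100 mL/hr
print(infusion_rate_ml_per_hr(50, 30))  # 100.0
```

The arithmetic itself is simple; as the study's findings suggest, the hard problems are workflow integration and usability, not the math.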


2020 ◽  
pp. 082957352097491
Author(s):  
Ryan L. Farmer ◽  
Imad Zaheer ◽  
Gary J. Duhon ◽  
Stephanie Ghazal

Through innovation in research and self-correction, it is inevitable that some practices will be replaced or discredited for one reason or another. De-implementation of discredited and low-value practices is necessary for school psychologists to maintain evidence-based practice and to reduce unnecessary costs and risk. However, efforts to clarify de-implementation frameworks and strategies are ongoing. The scope of this paper follows McKay et al. in considering the potential for de-implementation strategies to be informed by applied behavior analysis and operant learning theory. We conceptualize low-value practice as sets of behaviors evoked by their context and maintained by their consequences, and thus de-implementation as behavior reduction. We discuss the need for future research given this perspective.


2016 ◽  
Vol 34 (7_suppl) ◽  
pp. 253-253
Author(s):  
Deborah L. Struth ◽  
Gail Mallory ◽  
Michele Galioto

Background: Clinically meaningful quality measures have been identified as a catalyst for healthcare improvement and better patient outcomes. Amidst rapidly changing quality reporting and reimbursement schema, eligible providers struggle to choose a portfolio of measures across multiple registries that will demonstrate the value of their practice to consumers and payers. It is critical that a roadmap to quality improvement be evident to registry users. Utilizing the Model for Improvement developed by the Associates in Process Improvement and adapted to health care by the Institute for Healthcare Improvement, a framework to guide performance improvement was developed and incorporated into an oncology-specific QCDR for PQRS reporting. Methods: Fourteen patient-centered quality measures with a focus on cancer-related symptom assessment and intervention were piloted and tested in 40 practices and incorporated into the Oncology Nursing Society (ONS)/CE City QCDR. Six measures focus on the active treatment phase of cancer care and eight on breast cancer survivorship. The registry platform was designed with capabilities for tracking of data over time, goal setting, benchmarking, and providing suggested performance improvement (PI) activities. A technical expert panel (TEP) was convened to develop a model to guide PI activities to address QCDR-identified practice gaps. Results: A quality improvement framework was developed to help QCDR subscribers answer the question "How Do I Improve" and was incorporated into the ONS/CE City QCDR platform. This framework provides the subscriber with the education and training necessary to improve care through use of quality improvement tools and implementation strategies aimed at practice change. Conclusions: It is essential that measures be incorporated into an infrastructure that provides opportunities for the assessment and improvement of care quality provided by practices.
The QCDR can act as a means to drive performance improvement along with supporting quality measurement for PQRS and Meaningful Use reporting. The “How Do I Improve” Framework developed as part of the ONS/CE City QCDR platform provides a model to accomplish this goal.


2021 ◽  
Vol 9 ◽  
Author(s):  
Arnaud Orelle ◽  
Abdoulaye Nikiema ◽  
Arsen Zakaryan ◽  
Adilya A. Albetkova ◽  
Mark A. Rayfield ◽  
...  

The pervasive nature of infections causing major outbreaks has elevated biosafety and biosecurity to a fundamental component of resilient national laboratory systems. In response to international health security demands, the Global Health Security Agenda emphasizes biosafety as one of the prerequisites to respond effectively to infectious disease threats. However, biosafety management systems (BMS) in low- and middle-income countries (LMICs) remain weak due to fragmented implementation strategies. In addition, inefficiencies in implementation have been due to limited resources, inadequate technical expertise, high equipment costs, and insufficient political will. Here we propose an approach to developing a strong, self-sustaining BMS based on extensive experience in LMICs. A conceptual framework incorporating 15 key components to guide implementers, national laboratory leaders, and global health security experts in building a BMS is presented. This conceptual framework provides a holistic and logical approach to the development of a BMS with all critical elements. It includes a flexible planning matrix with timelines easily adaptable to different country contexts as examples, as well as resources that are critical for developing sustainable technical expertise.


2021 ◽  
Author(s):  
Jacob T. Painter ◽  
Rebecca A. Raciborski ◽  
Monica M. Matthieu ◽  
Ciara Oliver ◽  
David A. Adkins ◽  
...  

Abstract Background: Successful implementation of evidence-based practices is key to healthcare quality improvement. However, it depends on appropriate selection of implementation strategies, the techniques that improve practice adoption or sustainment. When studying implementation of an evidence-based practice as part of a program evaluation, implementation scientists confront a challenge: the timing of strategy selection rarely aligns with the establishment of data collection protocols. Indeed, the exact implementation strategies used by an organization during a quality improvement initiative may be determined during implementation. Nevertheless, discernment of strategies is necessary to accurately estimate implementation effect and cost because this information can support decision making for sustainment, guide replication efforts, and inform the choice of implementation strategies for other evidence-based practices. Main body: We propose an iterative, stakeholder engaged process to discern implementation strategies when strategy choice was not made before data collection began. Stakeholders are centered in the process, providing a list of current and potential implementation activities. These activities are then mapped by an implementation science expert to an established taxonomy of implementation strategies. The mapping is then presented back to stakeholders for member checking and refinement. The final list can be used to survey those engaged in implementation activities in a language they are familiar with. A case study using this process is provided. Conclusion: It is challenging to estimate implementation effort when implementation strategy selection is disconnected from the data collection process. In these cases, a stakeholder-informed process to retrospectively identify implementation strategies by classifying activities performed using an established implementation strategy taxonomy provides the necessary information.


Author(s):  
Courtney T. Luecking ◽  
Cody D. Neshteruk ◽  
Stephanie Mazzucca ◽  
Dianne S. Ward

Previous efforts to involve parents in implementation of childcare-based health promotion interventions have yielded limited success, suggesting a need for different implementation strategies. This study evaluated the efficacy of an enhanced implementation strategy to increase parent engagement with Healthy Me, Healthy We. This quasi-experimental study included childcare centers from the second of two waves of a cluster-randomized trial. The standard approach (giving parents intervention materials, prompting participation at home, inviting participation with classroom events) was delivered in 2016–2017 (29 centers, 116 providers, and 199 parents). The enhanced approach (standard plus seeking feedback, identifying and addressing barriers to parent participation) was delivered in 2017–2018 (13 centers, 57 providers, and 114 parents). Parent engagement was evaluated at two levels. At the center level, structured interview questions with providers throughout the intervention were systematically scored. At the parent level, parents completed surveys following the intervention. Differences in parent engagement were evaluated using linear regression (center-level) and mixed effects (parent-level) models. Statistical significance was set at p < 0.025 for two primary outcomes. There was no difference in parent engagement between approaches at the center level, β = −1.45 (95% confidence interval, −4.76 to 1.87), p = 0.38. However, the enhanced approach had higher parent-level scores, β = 3.60 (95% confidence interval, 1.49 to 5.75), p < 0.001. In the enhanced approach group, providers consistently reported greater satisfaction with the intervention than parents (p < 0.001), yet their fidelity of implementing the enhanced approach was low (less than 20%).
Results show promise that parent engagement with childcare-based health promotion innovations can positively respond to appropriately designed and executed implementation strategies, but strategies need to be feasible and acceptable for all stakeholders.
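The parent-level comparison described above is the kind of analysis a linear mixed model with a random intercept for childcare center supports. A minimal sketch follows, using simulated data with hypothetical column names (`engagement`, `enhanced`, `center`), not the study's actual dataset or coefficients.

```python
# Sketch of a parent-level mixed-effects comparison: parents are clustered
# within centers, so the model includes a random intercept per center.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "engagement": rng.normal(20, 5, n),   # simulated engagement score
    "enhanced": rng.integers(0, 2, n),    # 1 = enhanced approach (hypothetical coding)
    "center": rng.integers(0, 30, n),     # clustering unit (childcare center)
})

# Random-intercept model: engagement ~ enhanced, grouped by center
model = smf.mixedlm("engagement ~ enhanced", df, groups=df["center"])
result = model.fit()
print(result.params["enhanced"])  # estimated group difference (beta)
```

With real data, the reported β = 3.60 would correspond to the `enhanced` fixed-effect coefficient, with its confidence interval taken from the fitted model.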

