Insulin Bolus Calculator: Lessons Learned from Institutional Experience

2020 ◽  
pp. 193229682095072
Author(s):  
Valerie D. Nolt ◽  
Adrian Araya ◽  
Mohammed B. Ateya ◽  
Ming Chen ◽  
Jennifer Kelley ◽  
...  

Insulin bolus calculators have proven effective in improving glycemia and patient safety, and they are increasingly being implemented for inpatient hospital care. Multidisciplinary teams are often involved in designing these calculators and in reviewing their efficacy and utilization; at times, such reviews reveal unintended consequences as well as unanticipated benefits. Integration of our insulin calculator into our electronic health record system was a multidisciplinary effort. During implementation, several obstacles to effective care were identified and are discussed in this manuscript. We describe the barriers to utilization and potential pitfalls in clinical integration. We further describe benefits in patient education, timing of insulin administration relative to meal delivery, variations in insulin bolus for ketone correction, variation in care, and maximum bolus administration. Sharing lessons learned from experiences using electronic insulin calculator order sets will further our goals of improved patient care in the hospital setting.
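The core arithmetic behind most bolus calculators (not necessarily the calculator described in this study) combines a carbohydrate-coverage dose with a correction dose and subtracts insulin on board. A minimal sketch, with hypothetical parameter values:

```python
def insulin_bolus(carbs_g, glucose_mgdl, target_mgdl=120,
                  carb_ratio=10.0, correction_factor=50.0, iob_units=0.0):
    """Illustrative standard bolus formula; all parameter values are
    hypothetical, not this institution's order-set defaults."""
    # Units of insulin to cover the carbohydrates in the meal
    carb_dose = carbs_g / carb_ratio
    # Units to correct glucose above target (no negative correction here)
    correction = max(glucose_mgdl - target_mgdl, 0) / correction_factor
    # Subtract insulin on board; never recommend a negative bolus
    return max(carb_dose + correction - iob_units, 0.0)

print(insulin_bolus(60, 220))  # 60 g meal, glucose 220 mg/dL
```

A real implementation would also enforce an institution-specific maximum-bolus cap, one of the issues the manuscript discusses.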

2020 ◽  
Author(s):  
Fernando Gonçalves ◽  
Daniel G. Streicker ◽  
Mauro Galetti

Restoration projects can increase public engagement and enthusiasm for biodiversity and are receiving growing media attention in major newspapers, TED talks, and the scientific literature. However, empirical research on restoration projects is rare, fragmented, and geographically biased, and long-term studies that monitor indirect and unexpected effects are needed to support future management decisions, especially in the Neotropics. Changes in animal population dynamics and community composition following species (re)introduction may have unanticipated consequences for a variety of downstream ecosystem processes, including food web structure, predator-prey systems, and infectious disease transmission. Recently, an unprecedented study in Brazil showed changes in vampire bat feeding following a rewilding project, which transformed a land-bridge island into a high-risk area for rabies transmission. Drawing on lessons learned from an ongoing project, we present a novel approach for anticipating, monitoring, and mitigating vampire bats and rabies in rewilding projects. We pinpoint a series of precautions, stress the need for long-term monitoring of vampire bat and rabies responses to rewilding projects, and highlight the importance of multidisciplinary teams of scientists and managers focused on educational programs about the risk of bat-transmitted rabies. In addition, monitoring the relative abundance of vampire bats, considering reproductive control by sterilization, and using oral vaccines that autonomously transfer among bats would reduce the probability, size, and duration of rabies outbreaks. The rewilding assessment framework presented here responds to calls to better integrate the science and practice of rewilding, and it could also be used for long-term study of bat-transmitted pathogens in the Neotropics, a region considered a geographic hotspot of “missing bat zoonoses”.


2015 ◽  
Vol 7 (3) ◽  
Author(s):  
Rudolf Urbanics ◽  
Péter Bedőcs ◽  
János Szebeni

Abstract Pigs provide a sensitive and quantitative animal model of complement (C) activation-related pseudoallergy (CARPA) caused by liposomes and a wide range of nanoparticulate drugs or drug nanocarriers (nanomedicines). The tetrad of symptoms (hemodynamic, hematological, laboratory, and skin changes) that arises within minutes after i.v. injection of reactogenic nanomedicines (RNMs) is highly reproducible among different pigs, but the presence, direction, and relative severity of symptoms vary widely with different RNMs and their administration schedules. Bolus administration of RNMs usually triggers pulmonary hypertension with or without various degrees of systemic hyper- or hypotension, tachy- or bradycardia, arrhythmia, blood cell and inflammatory mediator changes, and skin rash. These reactions can be rapid or protracted, and fully tachyphylactic, semi-tachyphylactic, or non-tachyphylactic. Slow infusion usually diminishes the reactions and/or entails delayed, protracted, and less severe hemodynamic and other changes. The goal of this review is to present some technical details of the porcine CARPA model, point out its constant and variable parameters, show examples of different reactions, highlight the unique features and capabilities of the model, and evaluate its utility in preclinical safety assessment. The information obtained in this model enables understanding of the complex pathomechanism of CARPA, which involves simultaneous anaphylatoxin and inflammatory mediator actions at multiple sites in different organs.


Author(s):  
Nora Abdelrahman Ibrahim

Terrorism and violent extremism have undoubtedly become among the top security concerns of the 21st century. Despite a robust counterterrorism agenda since the September 11, 2001 attacks, the evolution of global terrorism has continued to outpace the policy responses that have tried to address it. Recent trends such as the foreign fighter phenomenon, the rampant spread of extremist ideologies online and within communities, and a dramatic increase in terrorist incidents worldwide have led to a recognition that “traditional” counterterrorism efforts are insufficient and ineffective in combating these phenomena. Consequently, the focus of policy and practice has shifted towards countering violent extremism by addressing the drivers of radicalization to curb recruitment to extremist groups. Within this context, the field of countering violent extremism (CVE) has garnered attention from both the academic and policy-making worlds. While the CVE field holds promise as a significant development in counterterrorism, its policy and practice are complicated by several challenges that undermine the success of its initiatives. Building resilience to violent extremism is continuously challenged by an overly securitized narrative and by unintended consequences of previous policies and practices, including divisive social undercurrents like Islamophobia, xenophobia, and far-right sentiments. These by-products make it increasingly difficult to mobilize the whole-of-society response that is so critical to the success and sustainability of CVE initiatives. This research project addresses these policy challenges by drawing on the CVE strategies of Canada, the US, the UK, and Denmark to collect best practices and lessons learned in order to outline a way forward.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Anne-Marie Turcotte-Tremblay ◽  
Idriss Ali Gali Gali ◽  
Valéry Ridde

Abstract Background COVID-19 has led to the adoption of unprecedented mitigation measures which could trigger many unintended consequences. These unintended consequences can be far-reaching and just as important as the intended ones. The World Health Organization identified the assessment of unintended consequences of COVID-19 mitigation measures as a top priority. Thus far, however, their systematic assessment has been neglected due to the inattention of researchers as well as a lack of training and practical tools. Main text Over six years, our team has gained extensive experience conducting research on the unintended consequences of complex health interventions. Through a reflexive process, we developed insights that can be useful for researchers in this area. Our analysis is based on key literature and lessons learned reflexively in conducting multi-site and multi-method studies on unintended consequences. Here we present practical guidance for researchers wishing to assess the unintended consequences of COVID-19 mitigation measures. To ensure resource allocation, protocols should include research questions regarding unintended consequences at the outset. Social science theories and frameworks are available to help assess unintended consequences. To determine which changes are unintended, researchers must first understand the intervention theory. To facilitate data collection, researchers can begin by forecasting potential unintended consequences through literature reviews and discussions with stakeholders. Including desirable and neutral unintended consequences in the scope of study can help minimize the negative bias reported in the literature. Exploratory methods can be powerful tools to capture data on the unintended consequences that were unforeseen by researchers. We recommend researchers cast a wide net by inquiring about different aspects of the mitigation measures.
Some unintended consequences may only be observable in subsequent years, so longitudinal approaches may be useful. An equity lens is necessary to assess how mitigation measures may unintentionally increase disparities. Finally, stakeholders can help validate the classification of consequences as intended or unintended. Conclusion Studying the unintended consequences of COVID-19 mitigation measures is not only possible but also necessary to assess their overall value. The practical guidance presented will help program planners and evaluators gain a more comprehensive understanding of unintended consequences to refine mitigation measures.


Stroke ◽  
2013 ◽  
Vol 44 (suppl_1) ◽  
Author(s):  
Deb Motz ◽  
Dicky Huey ◽  
Tracy Moore ◽  
Byron Freemyer ◽  
Tommye Austin

Background: In 2008, a city with a population of over one million people had no organized stroke care or Certified Primary Stroke Centers. Patients presenting with stroke symptoms had inconsistent neurology coverage and little or no access to rtPA. The purpose of this paper is to describe the steps taken for five acute-care hospitals (with one CMS provider number) to become Primary Stroke Certified. Methods: The journey began with administrative support and a commitment to provide the resources for a successful program. To oversee development, a Medical Director and Stroke Coordinator were appointed. To bridge the gap in available specialty physicians, partnerships were formed with a telemedicine group to provide emergency treatment and with an academic medical center to augment the neurology and neurosurgical coverage. Multidisciplinary teams met monthly in each facility. Representatives from each team formed a regional committee, and an education council was created to share best practices and assure consistency across the system. Evidence-based order sets were developed using clinical practice guidelines. The Medical Executive Committee at each facility and ultimately the Medical Executive Board endorsed the order sets and mandated their use. Each facility chose the appropriate unit to cohort the stroke patients, which encouraged expertise in care. Results: This journey resulted in a high-functioning system of care. Baptist Health System became Joint Commission Certified in all five locations (May 2009). We were awarded the Get With The Guidelines Bronze Award (September 2010), the Silver Plus Award (July 2011) and the Gold Plus Award (July 2012). In addition, we were the first in Texas to achieve the Target Stroke Honor Roll (Q3 2011) and have maintained this status for eight consecutive quarters. Conclusion: In conclusion, administrative support is imperative to the success of a stroke program.
Leadership, partnerships, committees, councils and staff involvement from the start drove the team to a successful certification process with outstanding outcomes. The stroke committees continue to meet monthly to analyze performance measures, identify opportunities for improvement and execute action plans.


2020 ◽  
Vol 20 (S4) ◽  
Author(s):  
Wakgari Deressa ◽  
Patrick Kayembe ◽  
Abigail H. Neel ◽  
Eric Mafuta ◽  
Assefa Seme ◽  
...  

Abstract Background Since its inception in 1988, the Global Polio Eradication Initiative (GPEI) has partnered with 200 countries to vaccinate over 2.5 billion children against poliomyelitis. The polio eradication approach has adapted to emerging challenges and diverse contexts. Knowledge assets gained from these experiences can inform implementation of future health programs, but only if efforts are made to systematically map barriers, identify strategies to overcome them, identify unintended consequences, and compare experiences across country contexts. Methods A sequential explanatory mixed methods design, including an online survey followed by key informant interviews (KIIs), was utilized to map tacit knowledge derived from the polio eradication experience from 1988 to 2019. The survey and KIIs were conducted between September 2018 and March 2019. A cross-case comparison was conducted of two study countries, the Democratic Republic of Congo (DRC) and Ethiopia, which fit similar epidemiological profiles for polio. The variables of interest (implementation barriers, strategies, unintended consequences) were compared for consistencies and inconsistencies within and across the two country cases. Results Surveys were conducted with 499 and 101 respondents, followed by 23 and 30 KIIs in the DRC and Ethiopia, respectively. Common implementation barriers included accessibility issues caused by political insecurity, population movement, and geography; gaps in human resources, supply chain, finance and governance; and community hesitancy. Strategies for addressing these barriers included adapting service delivery approaches, investing in health systems capacity, establishing mechanisms for planning and accountability, and social mobilization. These investments improved system infrastructure and service delivery; however, resources were often focused on the polio program rather than strengthening routine services, causing community mistrust and limiting sustainability. 
Conclusions The polio program investments in the DRC and Ethiopia facilitated program implementation despite environmental, system, and community-level barriers. There were, however, missed opportunities for integration. Remaining pockets of low immunization coverage and gaps in surveillance must be addressed in order to prevent importation of wild poliovirus and minimize circulating vaccine-derived poliovirus. Studying these implementation processes is critical for informing future health programs, including identifying implementation tools, strategies, and principles which can be adopted from polio eradication to ensure health service delivery among hard-to-reach populations. Future disease control or eradication programs should also consider strategies which reduce parallel structures and define a clear transition strategy to limit long-term external dependency.


1998 ◽  
Vol 37 (03) ◽  
pp. 285-293 ◽  
Author(s):  
C. J. Atkinson ◽  
V. J. Peel

Abstract The benefits for any health care provider of successfully introducing an Electronic Patient Record System (EPRS) into their organisation can be considerable. It has the potential to enhance both clinical care and managerial processes, as well as producing more cost-effective care and care programmes across clinical disciplines and health care sectors. However, realising an EPRS's full potential can be a long and difficult process and should not be entered into lightly. Introducing an EPR System involves major personnel, organisational and technological changes. These changes must be interwoven and symbiotic and must be managed so that they grow together in stages towards a vision created and shared by all clinical professional staff, other staff, and managers in that process. The use of traditional “building” or “journey” metaphors inadequately reflects the complexity, uncertainty and, therefore, the unpredictability of the process. We propose that a more useful metaphor may be of “growing” a progressively more united, unified information system and health care organisation. We suggest this metaphor better recognises that the evolutionary process appears to be more organic than predictable and more systemic than mechanistic. An illustration is given of how these organisational, clinical and technical issues might evolve and interweave in a hospital setting through a number of stages.


BMJ Open ◽  
2018 ◽  
Vol 8 (2) ◽  
pp. e018690 ◽  
Author(s):  
Charlotte A M Paddison ◽  
Gary A Abel ◽  
Jenni Burt ◽  
John L Campbell ◽  
Marc N Elliott ◽  
...  

Objectives: To examine patient consultation preferences for seeing or speaking to a general practitioner (GP) or nurse; to estimate associations between patient-reported experiences and the type of consultation patients actually received (phone or face-to-face, GP or nurse). Design: Secondary analysis of data from the 2013 to 2014 General Practice Patient Survey. Setting and participants: 870 085 patients from 8005 English general practices. Outcomes: Patient ratings of communication and ‘trust and confidence’ with the clinician they saw. Results: 77.7% of patients reported wanting to see or speak to a GP, while 14.5% reported asking to see or speak to a nurse the last time they tried to make an appointment (weighted percentages). Being unable to see or speak to the practitioner type of the patients’ choice was associated with lower ratings of trust and confidence and patient-rated communication. Smaller differences were found if patients wanted a face-to-face consultation and received a phone consultation instead. The greatest difference was for patients who asked to see a GP and instead spoke to a nurse, for whom the adjusted mean difference, compared with those who wanted to see a nurse and did see a nurse, was −15.8 points (95% CI −17.6 to −14.0) for confidence and trust in the practitioner and −10.5 points (95% CI −11.7 to −9.3) for net communication score, both on a 0–100 scale. Conclusions: Patients’ evaluation of their care is worse if they do not receive the type of consultation they expect, especially if they prefer a doctor but are unable to see one. New models of care should consider the potential unintended consequences for patient experience of the widespread introduction of multidisciplinary teams in general practice.
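The study reports adjusted mean differences from regression models on a 0–100 scale. As a simplified illustration of the underlying comparison, the sketch below computes an unadjusted between-group mean difference with a normal-approximation 95% CI; the trust scores are synthetic, not the survey's data:

```python
import math

def mean_diff_ci(a, b, z=1.96):
    """Mean difference between two groups with a Welch-style
    normal-approximation 95% CI (illustrative, unadjusted)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    # Sample variances (n - 1 denominator)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))  # SE of the difference
    d = ma - mb
    return d, d - z * se, d + z * se

# Hypothetical 0-100 trust scores: received preferred clinician vs. did not
got_preference = [90, 85, 95, 80, 88, 92]
no_preference = [70, 75, 65, 72, 68, 74]
d, lo, hi = mean_diff_ci(got_preference, no_preference)
print(f"difference {d:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```

The paper's adjusted differences additionally control for patient and practice covariates, which a raw comparison like this does not.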


2014 ◽  
Vol 13 (4) ◽  
pp. 1005-1011 ◽  
Author(s):  
Gabrielle Silver ◽  
Julia Kearney ◽  
Chani Traube ◽  
Margaret Hertzig

Abstract Objective: The recently validated Cornell Assessment for Pediatric Delirium (CAPD) is a new rapid bedside nursing screen for delirium in hospitalized children of all ages. The present manuscript provides a “developmental anchor points” reference chart, which helps ground clinicians' assessment of CAPD symptom domains in a developmental understanding of the presentation of delirium. Method: During the development of this CAPD screening tool, it became clear that clinicians need specific guidance and training to help them draw on their expertise in child development and pediatrics to improve the interpretative reliability of the tool and its accuracy in diagnosing delirium. The developmental anchor points chart was formulated and reviewed by a multidisciplinary panel of experts to evaluate content validity and include consideration of sick behaviors within a hospital setting. Results: The CAPD developmental anchor points for the key ages of newborn, 4 weeks, 6 weeks, 8 weeks, 28 weeks, 1 year, and 2 years served as the basis for training bedside nurses in scoring the CAPD for the validation trial and as a multifaceted bedside reference chart to be implemented within a clinical setting. In the current paper, we discuss the lessons learned during implementation, with particular emphasis on the importance of collaboration with the bedside nurse, the challenges of establishing a developmental baseline, and further questions about delirium diagnosis in children. Significance of Results: The CAPD with developmental anchor points provides a validated, structured, and developmentally informed approach to screening and assessment of delirium in children. With minimal training on the use of the tool, bedside nurses and other pediatric practitioners can reliably identify children at risk for delirium.


2018 ◽  
Vol 25 (2) ◽  
pp. 92-104 ◽  
Author(s):  
Ward Priestman ◽  
Shankar Sridharan ◽  
Helen Vigne ◽  
Richard Collins ◽  
Loretta Seamer ◽  
...  

Background: Numerous studies have examined factors related to success, failure and implications of electronic patient record (EPR) system implementations, but usually limited to specific aspects. Objective: To review the published peer-reviewed literature and present findings regarding factors important in relation to successful EPR implementations and likely impact on subsequent clinical activity. Method: Literature review. Results: Three hundred and twelve potential articles were identified on initial search, of which 117 were relevant and included in the review. Several factors were related to implementation success, such as good leadership and management, infrastructure support, staff training and focus on workflows and usability. In general, EPR implementation is associated with improvements in documentation and screening performance and reduced prescribing errors, whereas there are minimal available data in other areas such as effects on clinical patient outcomes. The peer-reviewed literature appears to under-represent a range of technical factors important for EPR implementations, such as data migration from existing systems and impact of organisational readiness. Conclusion: The findings presented here represent the synthesis of data from peer-reviewed literature in the field and should be of value to provide the evidence base for organisations considering how best to implement an EPR system.

