Functional IT Complementarity and Hospital Performance in the United States: A Longitudinal Investigation

Author(s):  
Abhay Nath Mishra ◽  
Youyou Tao ◽  
Mark Keil ◽  
Jeong-ha (Cath) Oh

For healthcare practitioners and policymakers, one of the most challenging problems is understanding how to implement health information technology (HIT) applications in a way that yields the most positive impacts on quality and cost of care. We identify four clinical HIT functions which we label as order entry and management (OEM), decision support (DS), electronic clinical documentation (ECD), and results viewing (RV). We view OEM and DS as primary clinical functions and ECD and RV as support clinical functions. Our results show that no single combination of applications uniformly improves clinical and experiential quality and reduces cost for all hospitals. Thus, managers must assess which HIT interactions improve which performance metric under which conditions. Our results suggest that synergies can be realized when these systems are implemented simultaneously. Additionally, synergies can occur when support HIT is implemented before primary HIT and irrespective of the order in which primary HITs are implemented. Practitioners should also be aware that the synergistic effects of HITs and their impact on cost and quality are different for chronic and acute diseases. Our key message to top managers is to prioritize different combinations of HIT contingent on the performance variables they are targeting for their hospitals but also to realize that technology may not impact all outcomes.

Biomedicine ◽  
2021 ◽  
Vol 41 (3) ◽  
pp. 1
Author(s):  
Manjula Shantaram

Artificial intelligence (AI) is poised to become a transformational force in healthcare. From chronic diseases and cancer to radiology and risk assessment, there are nearly endless opportunities to leverage technology to deliver more precise, efficient, and impactful interventions at exactly the right moment in a patient’s care. AI offers a number of benefits over traditional analytics and clinical decision-making techniques. Learning algorithms can become more specific and accurate as they interact with training data, allowing humans to gain unique insights into diagnostics, care processes, treatment variability, and patient outcomes (1).

Using computers to communicate is not a new idea by any means, but creating direct interfaces between technology and the human mind without the need for keyboards, mice, and monitors is a cutting-edge area of research with significant applications for some patients. Neurological diseases and trauma to the nervous system can take away some patients’ abilities to speak, move, and interact meaningfully with people and their environments. Brain-computer interfaces (BCIs) backed by artificial intelligence could restore those fundamental experiences to those who feared them lost forever. Brain-computer interfaces could drastically improve quality of life for patients with ALS, strokes, or locked-in syndrome, as well as the 500,000 people worldwide who experience spinal cord injuries every year (2).

Radiological images obtained by MRI machines, CT scanners, and x-rays offer non-invasive visibility into the inner workings of the human body. But many diagnostic processes still rely on physical tissue samples obtained through biopsies, which carry risks including the potential for infection. Experts predict that AI will enable the next generation of radiology tools that are accurate and detailed enough to replace the need for tissue samples in some cases.
A major challenge will be bringing the diagnostic imaging team together with the surgeon and the pathologist (3). Succeeding in this pursuit may allow clinicians to develop a more accurate understanding of how tumours behave as a whole, instead of basing treatment decisions on the properties of a small segment of the malignancy. Providers may also be able to better define the aggressiveness of cancers and target treatments more appropriately. Artificial intelligence is helping to enable “virtual biopsies” and advance the innovative field of radiomics, which focuses on harnessing image-based algorithms to characterize the phenotypes and genetic properties of tumours (1).

Shortages of trained healthcare providers, including ultrasound technicians and radiologists, can significantly limit access to life-saving care in developing nations around the world. AI could help mitigate the impacts of this severe deficit of qualified clinical staff by taking over some of the diagnostic duties typically allocated to humans (4). For example, AI imaging tools can screen chest x-rays for signs of tuberculosis, often achieving a level of accuracy comparable to humans. This capability could be deployed through an app available to providers in low-resource areas, reducing the need for a trained diagnostic radiologist on site.

However, algorithm developers must be careful to account for the fact that different ethnic groups or residents of different regions may have unique physiologies and environmental factors that will influence the presentation of disease. The course of a disease and the population affected by it may look very different in India than in the US. As these algorithms are being developed, it is very important to make sure that the data represent a diversity of disease presentations and populations; we cannot just develop an algorithm based on a single population and expect it to work as well on others (1).
Electronic health records (EHRs) have played an instrumental role in the healthcare industry’s journey towards digitalization, but the switch has brought myriad problems associated with cognitive overload, endless documentation, and user burnout. EHR developers are now using AI to create more intuitive interfaces and automate some of the routine processes that consume so much of a user’s time. Users spend the majority of their time on three tasks: clinical documentation, order entry, and sorting through the in-basket (5). Voice recognition and dictation are helping to improve the clinical documentation process, but natural language processing (NLP) tools might not be going far enough. Video recording a clinical encounter would be helpful while using AI and machine learning to index those videos for future information retrieval, much as Siri and Alexa are already used in the home. The future will bring virtual assistants to the bedside for clinicians to use, with embedded intelligence for order entry (5). AI may also help to process routine requests from the in-basket.


Circulation ◽  
2020 ◽  
Vol 142 (1) ◽  
pp. 29-39 ◽  
Author(s):  
Ambarish Pandey ◽  
Neil Keshvani ◽  
Mary S. Vaughan-Sarrazin ◽  
Yubo Gao ◽  
Saket Girotra

Background: The utility of the 30-day risk-standardized readmission rate (RSRR) as a hospital performance metric has been a matter of debate. Home time is a patient-centered outcome measure that accounts for rehospitalization, mortality, and postdischarge care. We aim to characterize risk-adjusted 30-day home time in patients with acute myocardial infarction (AMI) as a hospital-level performance metric and to evaluate associations with 30-day RSRR, 30-day risk-standardized mortality rate (RSMR), and 1-year RSMR. Methods: The study included 984 612 patients with AMI hospitalization across 2379 hospitals between 2009 and 2015, derived from 100% Medicare claims data. Home time was defined as the number of days alive and spent outside of a hospital, skilled nursing facility, or intermediate-/long-term acute care facility in the 30 days after discharge. Correlations between hospital-level risk-adjusted 30-day home time and 30-day RSRR, 30-day RSMR, and 1-year RSMR were estimated with the Pearson correlation. Reclassification in hospital performance using 30-day home time versus 30-day RSRR and 30-day RSMR was also evaluated. Results: Median hospital-level risk-adjusted 30-day home time was 24.0 days (range, 15.3–29.0 days). Hospitals with higher home time were more commonly academic centers, had available cardiac surgery and rehabilitation services, and had higher AMI volume and percutaneous coronary intervention use during the AMI hospitalization. Of the mean 30-day home time days lost, 58% were to intermediate-/long-term care or skilled nursing facility stays (4.7 days), 30% to death (2.5 days), and 12% to readmission (1.0 days). Hospital-level risk-adjusted 30-day home time was inversely correlated with 30-day RSMR (r=−0.22, P<0.0001) and 30-day RSRR (r=−0.25, P<0.0001).
Patients admitted to hospitals with higher risk-adjusted 30-day home time had lower 30-day readmission (quartile 1 versus 4, 21% versus 17%), 30-day mortality rate (5% versus 3%), and 1-year mortality rate (18% versus 12%). Furthermore, 30-day home time reclassified hospital performance status in ≈30% of hospitals versus 30-day RSRR and 30-day RSMR. Conclusions: Thirty-day home time for patients with AMI can be assessed as a hospital-level performance metric with the use of Medicare claims data. It varies across hospitals, is associated with postdischarge readmission and mortality outcomes, and meaningfully reclassifies hospital performance compared with the 30-day RSRR and 30-day RSMR metrics.
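The home-time metric described above is straightforward to operationalize from claims data. The sketch below computes a patient's 30-day home time and a hospital-level Pearson correlation against readmission rates; all numbers and the helper names are illustrative toy values, not data or code from the study.

```python
from statistics import mean
from math import sqrt

def home_time_30d(days_alive: int, facility_days: int) -> int:
    """30-day home time: days alive and spent outside a hospital, SNF,
    or intermediate-/long-term care facility in the 30 days after discharge."""
    return max(0, min(days_alive, 30) - facility_days)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy hospital-level values (not from the study):
home_time = [24.1, 22.5, 26.0, 20.3, 25.2]   # mean risk-adjusted home time
rsrr      = [0.18, 0.21, 0.15, 0.23, 0.16]   # 30-day readmission rate
print(round(pearson(home_time, rsrr), 3))    # inverse correlation, as in the paper
```

The inverse sign mirrors the paper's finding that hospitals with more home time tend to have lower readmission and mortality rates.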


2017 ◽  
Vol 30 (2) ◽  
pp. 105-120 ◽  
Author(s):  
Aya Awad ◽  
Mohamed Bader–El–Den ◽  
James McNicholas

Over the past few years, there has been increased interest in data mining and machine learning methods to improve hospital performance; in particular, hospitals want to improve their intensive care unit statistics by reducing the number of patients who die in the intensive care unit. Research has focused on the prediction of measurable outcomes, including risk of complications, mortality, and length of hospital stay. The length of stay is an important metric for both healthcare providers and patients, and is influenced by numerous factors. In particular, the length of stay in critical care is of great significance, both to patient experience and to the cost of care, and is influenced by factors specific to the highly complex environment of the intensive care unit. The length of stay is often used as a surrogate for outcomes that cannot be measured directly, for example hospital or intensive care unit mortality. It has also been used as a parameter to characterize the severity of illness and healthcare resource utilisation. This paper examines a range of length of stay and mortality prediction applications in acute medicine and the critical care unit, and focuses on the methods used to analyse them. Moreover, the paper provides a classification and evaluation of the analytical methods for length of stay and mortality prediction, grouping relevant research papers published between 1984 and 2016 in the domain of survival analysis. In addition, the paper highlights some of the gaps and challenges of the domain.
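Survival analysis, one of the method families this review covers, is often illustrated with the Kaplan-Meier estimator applied to length-of-stay data. The sketch below is a minimal pure-Python version over toy ICU data; the function and all numbers are illustrative assumptions, not taken from the review.

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve over length-of-stay data.
    durations: LOS in days; events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival probability) pairs."""
    event_times = sorted(set(d for d, e in zip(durations, events) if e == 1))
    s, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)            # still in study at t
        d_t = sum(1 for d, e in zip(durations, events) if d == t and e == 1)
        s *= 1 - d_t / at_risk                                   # product-limit step
        curve.append((t, s))
    return curve

los   = [2, 3, 3, 5, 7, 7, 8, 12]   # days in ICU (toy data)
event = [1, 1, 0, 1, 1, 1, 0, 1]    # 0 = censored observation
for t, s in kaplan_meier(los, event):
    print(t, round(s, 3))
```

Censored observations (patients still in the unit when follow-up ends) reduce the at-risk count without triggering a survival step, which is the key property that makes this estimator suitable for LOS data.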


Author(s):  
Jacob Krive ◽  
Joel S. Shoolin ◽  
Steven D. Zink

Objective: Evidence-based sets of medical orders for the treatment of patients with common conditions have the potential to induce greater efficiency and convenience across the system, along with more consistent health outcomes. Despite ongoing utilization of order sets, quantitative evidence of their effectiveness is lacking. In this study, conducted at Advocate Health Care in Illinois, we quantitatively analyzed the benefits of community-acquired pneumonia order sets as measured by mortality, readmission, and length of stay (LOS) outcomes. Methods: We examined five years (2007–2011) of computerized physician order entry (CPOE) data from two city and two suburban community care hospitals. Mortality and readmission benefits were analyzed by comparing “order set” and “no order set” groups of adult patients using logistic regression, Pearson’s chi-squared, and Fisher’s exact methods. LOS was analyzed by applying one-way ANOVA and the Mann-Whitney U test, supplemented by analysis of comorbidity via the Charlson Comorbidity Index. Results: The results indicate that patient treatment orders placed via electronic sets were effective in reducing mortality [OR=1.787; 95% CI 1.170–2.730; P=.061], readmissions [OR=1.362; 95% CI 1.015–1.827; P=.039], and LOS [F(1,5087)=6.885, P=.009; 4.79 days (no order set group) vs. 4.32 days (order set group)]. Conclusion: Evidence-based ordering practices have the potential to improve pneumonia outcomes through reduction of mortality, hospital readmissions, and cost of care. However, the practice must be part of a larger strategic effort to reduce variability in patient care processes. Further experimental and/or observational studies are required to reduce the barriers to retrospective patient care analyses. Keywords: evidence-based medicine, medication order sets, health outcomes research, pneumonia, computerized physician order entry (CPOE).
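The group comparisons reported above rest on standard 2×2 contingency statistics. As a minimal sketch of the machinery, the following computes an odds ratio with a Wald 95% CI and a Pearson chi-squared statistic; the counts are hypothetical, not the study's data, and the function names are my own.

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% CI from a 2x2 table:
       a = events in group 1, b = non-events in group 1,
       c = events in group 2, d = non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)              # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 contingency table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: deaths/survivors without vs. with an order set
or_, lo, hi = odds_ratio_ci(60, 940, 35, 965)
print(round(or_, 2), round(lo, 2), round(hi, 2))
print(round(chi_squared_2x2(60, 940, 35, 965), 2))
```

A CI excluding 1 indicates a statistically significant association at the chosen level; the study's full analysis additionally adjusts for comorbidity via the Charlson index, which a raw 2×2 table does not capture.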


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Lukasz Smigielski ◽  
Michael Kometer ◽  
Milan Scheidegger ◽  
Rainer Krähenmann ◽  
Theo Huber ◽  
...  

Abstract Meditation and psychedelics have played key roles in humankind’s search for self-transcendence and personal change. However, neither their possible synergistic effects, nor related state and trait predictors have been experimentally studied. To elucidate these issues, we administered double-blind the model psychedelic drug psilocybin (315 μg/kg PO) or placebo to meditators (n = 39) during a 5-day mindfulness group retreat. Psilocybin increased meditation depth and incidence of positively experienced self-dissolution along the perception-hallucination continuum, without concomitant anxiety. Openness, optimism, and emotional reappraisal were predictors of the acute response. Compared with placebo, psilocybin enhanced post-intervention mindfulness and produced larger positive changes in psychosocial functioning at a 4-month follow-up, which were corroborated by external ratings, and associated with magnitude of acute self-dissolution experience. Meditation seems to enhance psilocybin’s positive effects while counteracting possible dysphoric responses. These findings highlight the interactions between non-pharmacological and pharmacological factors, and the role of emotion/attention regulation in shaping the experiential quality of psychedelic states, as well as the experience of selflessness as a modulator of behavior and attitudes. A better comprehension of mechanisms underlying most beneficial psychedelic experiences may guide therapeutic interventions across numerous mental conditions in the form of psychedelic-assisted applications.


Blood ◽  
2015 ◽  
Vol 126 (23) ◽  
pp. 2262-2262
Author(s):  
Bethany T Samuelson ◽  
Meredith Holmes ◽  
Andrew White ◽  
Emily Glynn ◽  
Daniel B Martin ◽  
...  

Abstract Background: Heparin-induced thrombocytopenia (HIT) is a rare but often considered diagnosis that requires treatment, in the form of costly parenteral anticoagulants, while awaiting the results of confirmatory testing. We hypothesized that improving the accuracy and consistency with which a patient's risk of HIT was determined, through the use of Computer-based Provider Order Entry (CPOE) interventions, would lead to decreased cost of care. Methods: This study was conducted at two affiliated US academic medical centers with a shared electronic medical record (EMR). A series of staged interventions, including provider and pharmacist education, real-time alerts, and a CPOE-based decision support tool, were implemented as part of a multidisciplinary quality improvement project between January 1, 2013 and December 31, 2013. All inpatients ≥18 years of age who underwent laboratory testing for HIT and/or were started on bivalirudin therapy for suspected HIT between January 1, 2012 and December 31, 2014 were included. For the purposes of our study, we defined the pre-intervention period as January 1 through December 31, 2012 and the post-intervention period as January 1 through December 31, 2014. The primary outcome was mean monthly bivalirudin expenditure at each institution. The secondary outcomes were the number of HIT enzyme-linked immunosorbent assay (ELISA) and serotonin release assay (SRA) tests sent per month. Results: We observed a statistically significant reduction in mean monthly bivalirudin expenditures from $64,178 to $17,704 (p = 0.0002) at one of the included centers and a decrease that approached significance from $28,275 to $16,708 (p = 0.100) at the other. Statistically significant reductions were also noted in mean monthly ELISA testing rates from 38.1 to 19.8 (p=0.01) and mean monthly SRA testing rates from 9.4 to 3.1 (p=0.0001) across both centers.
Discussion: Our findings suggest that the use of a computer-based order entry intervention, as part of a multidisciplinary quality improvement effort, can effectively reduce cost and decrease rates of lab testing in the management of heparin-induced thrombocytopenia. Such interventions are relatively low cost and of low complexity in institutions with established order entry systems and have the potential for a lasting impact on cost and quality of care. Disclosures: No relevant conflicts of interest to declare.
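The primary outcome above is simple arithmetic over monthly totals. As a minimal sketch, the following computes mean monthly expenditure before and after an intervention and the percent reduction; the monthly figures are hypothetical, not the study's raw data.

```python
from statistics import mean

def monthly_reduction(pre, post):
    """Mean monthly spend pre- and post-intervention, and percent reduction."""
    m_pre, m_post = mean(pre), mean(post)
    return m_pre, m_post, 100 * (m_pre - m_post) / m_pre

# Hypothetical monthly bivalirudin spend in USD (not the study's data):
pre  = [70000, 58000, 66000, 62000]
post = [18000, 17000, 19000, 16000]
m_pre, m_post, pct = monthly_reduction(pre, post)
print(m_pre, m_post, round(pct, 1))
```

In the study itself the pre/post difference is then tested for significance (the reported p-values); a sketch like this only quantifies the size of the change, not whether it could be due to chance.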


2016 ◽  
Vol 124 (3) ◽  
pp. 743-749 ◽  
Author(s):  
Jacob K. Greenberg ◽  
Chad W. Washington ◽  
Ridhima Guniganti ◽  
Ralph G. Dacey ◽  
Colin P. Derdeyn ◽  
...  

OBJECT Hospital readmission is a common but controversial quality measure increasingly used to influence hospital compensation in the US. The objective of this study was to evaluate the causes for 30-day hospital readmission following aneurysmal subarachnoid hemorrhage (SAH) to determine the appropriateness of this performance metric and to identify potential avenues for improved patient care. METHODS The authors retrospectively reviewed the medical records of all patients who received surgical or endovascular treatment for aneurysmal SAH at Barnes-Jewish Hospital between 2003 and 2013. Two senior faculty members identified by consensus the primary medical/surgical diagnosis associated with readmission as well as the underlying causes of rehospitalization. RESULTS Among 778 patients treated for aneurysmal SAH, 89 experienced a total of 97 readmission events, yielding a readmission rate of 11.4%. The median time from discharge to readmission was 9 days (interquartile range 3–17.5 days). Actual hydrocephalus or potential concern for hydrocephalus (e.g., headache) was the most frequent diagnosis (26/97, 26.8%), followed by infections (e.g., wound infection [5/97, 5.2%], urinary tract infection [3/97, 3.1%], and pneumonia [3/97, 3.1%]) and thromboembolic events (8/97, 8.2%). In most cases (75/97, 77.3%), we did not identify any treatment lapses contributing to readmission. The most common underlying causes for readmission were unavoidable development of SAH-related pathology (e.g., hydrocephalus; 36/97, 37.1%) and complications related to neurological impairment and immobility (e.g., thromboembolic event despite high-dose chemoprophylaxis; 21/97, 21.6%). The authors determined that 22/97 (22.7%) of the readmissions were likely preventable with alternative management. In these cases, insufficient outpatient medical care (for example, for hyponatremia; 16/97, 16.5%) was the most common shortcoming.
CONCLUSIONS Most readmissions after aneurysmal SAH relate to late consequences of hemorrhage, such as hydrocephalus, or medical complications secondary to severe neurological injury. Although a minority of readmissions may potentially be avoided with closer medical follow-up in the transitional care environment, readmission after SAH is an insensitive and likely inappropriate hospital performance metric.
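A readmission rate like the one above is computed per patient, while causes are tallied per event (one patient can be readmitted more than once). The sketch below makes that distinction explicit; the event list, counts, and function name are illustrative assumptions, not the study's data.

```python
from collections import Counter

def readmission_summary(n_patients, readmit_events):
    """Readmission rate (% of patients with at least one readmission) and
    per-event cause breakdown from (patient_id, cause) readmission events."""
    readmitted = {pid for pid, _ in readmit_events}   # unique patients
    rate = 100 * len(readmitted) / n_patients
    causes = Counter(cause for _, cause in readmit_events)
    return rate, causes

# Toy cohort: 35 patients, 5 readmission events across 4 patients
events = [(1, "hydrocephalus"), (2, "infection"), (2, "thromboembolic"),
          (3, "hydrocephalus"), (4, "hyponatremia")]
rate, causes = readmission_summary(35, events)
print(round(rate, 1), causes.most_common(1))
```

Separating patient-level rates from event-level cause counts is what lets the study report both "89 patients, 97 events" and per-cause denominators of 97.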


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
John C.A.M. van Beers ◽  
Desirée H. van Dun ◽  
Celeste P.M. Wilderom

Purpose Lean implementations in hospitals tend to be lengthy or lack the desired results. In addressing the question of how lean can be implemented effectively in a hospital-wide setting, this paper examines two opposing approaches. Design/methodology/approach The authors studied two Dutch university hospitals that engaged in different lean implementation approaches during the same four-year period: top-down vs bottom-up. Inductive qualitative analyses were made of 49 interviews; numerous documents; field notes; 13 frontline meeting observations; and objective hospital performance data. Longitudinally, the authors depict how the sequential events unfolded in both hospitals. Findings During the six implementation stages, the roles played by top, middle and frontline managers stood out. While the top managers of one hospital initiated the organization-wide implementation and then delegated it to others, the top managers of the other, similar hospital merely tolerated the bottom-up lean activities. Eventually, only the hospital with the top-down approach achieved high organization-wide performance gains, but only in its fourth year, after the top managers embraced lean in their own daily work practices and had started to co-create lean themselves. Then, the lean infrastructure developed earlier at the middle and frontline ranks led to the desired hospital-wide lean implementation results. Originality/value Change-management insights, including basic tenets of social learning and goal-setting theory, are shown to advance the knowledge of effective lean implementation in hospitals. The authors found lean implementation “best-oiled” through role-modeling by top managers who use a phase-based process and engage in close cross-hierarchical or co-creative collaboration with middle and frontline managerial members.


2020 ◽  
Author(s):  
Ambarish Pandey ◽  
Neil Keshvani ◽  
Mary S. Vaughan-Sarrazin ◽  
Yubo Gao ◽  
Gregg C. Fonarow ◽  
...  
