Predicting Patient Length of Stay Using Artificial Intelligence to Assist Health Care Professionals in Resource Planning and Scheduling Decisions

2022 ◽  
Vol 30 (8) ◽  
pp. 0-0

Artificial intelligence (AI) is revolutionizing the global healthcare industry by improving outcomes, increasing efficiency, and enhancing resource utilization. AI applications affect every aspect of healthcare operations, particularly resource allocation and capacity planning. This study proposes a multi-step AI-based framework and applies it to a real dataset to predict the length of stay (LOS) of hospitalized patients. The results show that the proposed framework can predict LOS categories with an AUC of 0.85 and actual LOS with a mean absolute error of 0.85 days. The framework can support decision-makers in healthcare facilities providing inpatient care to make better front-end operational decisions, such as resource capacity planning and scheduling. Predicting LOS is pivotal in today’s healthcare supply chain (HSC) systems, where resources are scarce and demand is abundant due to various global crises and pandemics. The findings thus have practical and theoretical implications for AI and HSC management.
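A two-stage pipeline like the one described (classify LOS category, then regress actual days) can be sketched as follows. This is a minimal illustration on synthetic data, assuming scikit-learn gradient boosting; the feature names and data are hypothetical, not from the study's dataset.

```python
# Hedged sketch of a two-stage LOS prediction pipeline: a classifier for
# LOS categories (evaluated by AUC) and a regressor for actual days
# (evaluated by MAE). All features and data below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.metrics import roc_auc_score, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical admission features: age, diagnosis count, admission type code
X = np.column_stack([
    rng.integers(18, 95, n),
    rng.integers(1, 10, n),
    rng.integers(0, 3, n),
])
# Synthetic LOS in days, loosely driven by the features
los_days = np.clip(0.5 * X[:, 1] + 0.05 * X[:, 0] + rng.normal(0, 1, n), 0.5, None)
long_stay = (los_days > np.median(los_days)).astype(int)  # binary LOS category

X_tr, X_te, d_tr, d_te, c_tr, c_te = train_test_split(
    X, los_days, long_stay, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, c_tr)  # stage 1
reg = GradientBoostingRegressor(random_state=0).fit(X_tr, d_tr)   # stage 2

auc = roc_auc_score(c_te, clf.predict_proba(X_te)[:, 1])
mae = mean_absolute_error(d_te, reg.predict(X_te))
print(f"AUC={auc:.2f}  MAE={mae:.2f} days")
```

In practice the classifier's predicted category could also be fed to the regressor as an extra feature; the sketch keeps the two stages independent for clarity.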

Author(s):  
Francesco Galofaro

Abstract
The paper presents a semiotic interpretation of the phenomenological debate on the notion of person, focusing in particular on Edmund Husserl, Max Scheler, and Edith Stein. The semiotic interpretation lets us identify the categories that orient the debate: collective/individual and subject/object. As we will see, the phenomenological analysis of the relation between person and social units such as the community, the association, and the mass shows similarities to contemporary socio-semiotic models. The difference between community, association, and mass provides an explanation for the establishment of legal systems. The notion of person we inherit from phenomenology can also be useful in facing juridical problems raised by the use of non-human decision-makers such as machine learning algorithms and artificial intelligence applications.


Author(s):  
Gabrielle Samuel ◽  
Jenn Chubb ◽  
Gemma Derrick

The governance of ethically acceptable research in higher education institutions has been under scrutiny over the past half-century. More recently, decision-makers have also required researchers to acknowledge the societal impact of their research and to anticipate and respond to the ethical dimensions of this societal impact through responsible research and innovation principles. Using artificial intelligence population health research in the United Kingdom and Canada as a case study, we combine a mapping study of journal publications with 18 interviews with researchers to explore how the ethical dimensions associated with this societal impact are incorporated into research agendas. Researchers separated the ethical responsibility for their research from its societal impact. We discuss the implications for both researchers and actors across the Ethics Ecosystem.


2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains significantly strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline, so some good baseline data points may be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline may have significant error due to overfitting. This study proposes a search-algorithm-based baseline correction method (SA) that compresses the raw spectrum into a dataset with a small number of data points and then converts peak removal into a search problem in artificial intelligence (AI): minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving-average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. The minimum point of each section is then collected to form a dataset for peak removal through the search algorithm. SA uses the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with that of three other baseline correction methods: the Lieber and Mahadevan–Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. Results show that for these spectra, the baseline estimated by SA has smaller error than those estimated by the three other methods.
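The pipeline described above (smooth, compress to section minima at Chebyshev-spaced intervals, then search by deleting peak points) can be sketched as below. This is a minimal reconstruction from the abstract, not the authors' implementation: the greedy deletion rule, the stopping threshold, and all parameter values (window, polynomial order, section count) are assumptions.

```python
# Hedged sketch of a search-based baseline correction: compress the spectrum
# to section minima on Chebyshev-spaced intervals, then greedily delete
# points whose removal lowers the MAE of a polynomial fit.
import numpy as np

def moving_average(y, window=5):
    return np.convolve(y, np.ones(window) / window, mode="same")

def chebyshev_edges(n_points, n_sections):
    # Chebyshev nodes on [0, 1] mapped to index space (denser near the ends)
    k = np.arange(n_sections + 1)
    nodes = 0.5 * (1 - np.cos(np.pi * k / n_sections))
    return np.unique((nodes * (n_points - 1)).astype(int))

def fit_mae(x, y, idx, order):
    coeffs = np.polyfit(x[idx], y[idx], order)
    return np.mean(np.abs(np.polyval(coeffs, x[idx]) - y[idx]))

def estimate_baseline(x, y, n_sections=24, order=4, n_keep=12):
    ys = moving_average(y)
    edges = chebyshev_edges(len(x), n_sections)
    # Compress: keep the minimum point of each section
    idx = np.array(sorted({lo + int(np.argmin(ys[lo:hi]))
                           for lo, hi in zip(edges[:-1], edges[1:]) if hi > lo}))
    # Greedy search: repeatedly drop the point whose removal most lowers MAE
    while len(idx) > n_keep:
        best_mae, best_drop = fit_mae(x, ys, idx, order), None
        for j in range(len(idx)):
            trial = np.delete(idx, j)
            mae = fit_mae(x, ys, trial, order)
            if mae < best_mae:
                best_mae, best_drop = mae, j
        if best_drop is None:
            break
        idx = np.delete(idx, best_drop)
    coeffs = np.polyfit(x[idx], ys[idx], order)
    return np.polyval(coeffs, x)

# Synthetic Raman-like spectrum: polynomial baseline plus narrow peaks
x = np.linspace(0, 1, 800)
true_baseline = 2 + 3 * x - 2 * x**2
peaks = 8 * np.exp(-((x - 0.4) / 0.01)**2) + 5 * np.exp(-((x - 0.7) / 0.015)**2)
spectrum = true_baseline + peaks + np.random.default_rng(1).normal(0, 0.02, x.size)

baseline = estimate_baseline(x, spectrum)
err = float(np.mean(np.abs(baseline - true_baseline)))
```

Because section minima are taken over intervals wider than the peaks, the compressed dataset mostly samples the baseline, and the search then prunes any remaining peak-contaminated points.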


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Joseph Friedman ◽  
Patrick Liu ◽  
Christopher E. Troeger ◽  
Austin Carter ◽  
Robert C. Reiner ◽  
...  

Abstract
Forecasts and alternative scenarios of COVID-19 mortality have been critical inputs for pandemic response efforts, and decision-makers need information about predictive performance. We screen n = 386 public COVID-19 forecasting models, identifying n = 7 that are global in scope and provide public, date-versioned forecasts. We examine their predictive performance for mortality by weeks of extrapolation, world region, and estimation month. We additionally assess prediction of the timing of peak daily mortality. Globally, models released in October show a median absolute percent error (MAPE) of 7 to 13% at six weeks, reflecting surprisingly good performance despite the complexities of modelling human behavioural responses and government interventions. Median absolute error for peak timing increased from 8 days at one week of forecasting to 29 days at eight weeks and is similar for first and subsequent peaks. The framework and public codebase (https://github.com/pyliu47/covidcompare) can be used to compare predictions and evaluate predictive performance going forward.
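The two headline metrics, MAPE across locations and the absolute error in peak timing, can be computed as below. This is a hedged sketch with toy numbers, not data from the covidcompare codebase.

```python
# Hedged sketch of the evaluation metrics described above: median absolute
# percent error (MAPE) across locations, and the absolute difference in
# days between observed and predicted peak daily mortality.
import numpy as np

def mape(observed, predicted):
    """Median absolute percent error across locations."""
    obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.median(np.abs(pred - obs) / obs) * 100)

def peak_timing_error(daily_obs, daily_pred):
    """Absolute difference, in days, between observed and predicted peaks."""
    return abs(int(np.argmax(daily_pred)) - int(np.argmax(daily_obs)))

# Toy example: cumulative-mortality forecasts for three locations
observed = [1000, 5000, 20000]
predicted = [1100, 4600, 21500]
print(mape(observed, predicted))  # median of [10%, 8%, 7.5%] -> 8.0

# Toy daily-mortality curves: observed peaks on day 2, predicted on day 3
daily_obs = [10, 40, 90, 60, 30]
daily_pred = [15, 35, 70, 80, 25]
print(peak_timing_error(daily_obs, daily_pred))  # -> 1
```

In the paper these metrics are stratified by weeks of extrapolation, world region, and estimation month; the sketch shows only the per-slice computation.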


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, driven by relatively recent developments in big data and deep learning and by increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose To review the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted on the basis of these contingency tables. Results The specificity of the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. In the comparison between radiology professionals and deep learning algorithms, the pooled specificity and sensitivity were 91% and 81% for the deep learning models and 85% and 73% for the radiology professionals (p < 0.000), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion Radiomic information extracted through machine learning programs from images may not be discernible through visual examination, and thus may improve the prognostic and diagnostic value of data sets.
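Pooling sensitivity and specificity from per-study 2x2 contingency tables can be illustrated as below. This sketch uses simple pooling of counts; the study's exact pooling model (for example, a bivariate random-effects model) is not specified in the abstract, and the tables are toy data.

```python
# Hedged sketch: pooled sensitivity and specificity from 2x2 contingency
# tables, one per study, by summing counts across studies.
def pooled_metrics(tables):
    """tables: list of (TP, FP, FN, TN) tuples, one per study."""
    tp = sum(t[0] for t in tables)
    fp = sum(t[1] for t in tables)
    fn = sum(t[2] for t in tables)
    tn = sum(t[3] for t in tables)
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity

# Toy contingency tables: (TP, FP, FN, TN)
studies = [(80, 10, 20, 90), (45, 5, 5, 45)]
sens, spec = pooled_metrics(studies)
print(f"pooled sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Running the same computation once on the algorithm tables and once on the radiologist tables gives the paired comparison reported in the Results.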


2020 ◽  
Vol 79 (Suppl 1) ◽  
pp. 1294.1-1294
Author(s):  
C. Helin Hollstrand ◽  
K. Nilke Nordlund

Background: With the launch of The Swedish Young Rheumatics Report in April 2018, we also presented a new way of thinking and a tool called the Dreamscale, our complement to the traditional VAS scale used to assess pain. In October 2018, we organized a workshop together with communication consultants where we invited some of our members of different ages, as well as health care professionals working with children, youths and young adults with rheumatic diseases, to try to reach a joint definition of what the Dreamscale is and could be, as we saw its huge potential. This is where the idea of the Dreamcatcher was born. Objectives: The objective is to create an innovative digital tool for young people with rheumatic disease. It takes its starting point in what is healthy and what is possible, rather than focusing on sickness and limitations. Using behavioral science, nudging and social functions, the Dreamcatcher has the potential to lower the barriers to living an active lifestyle, while also serving as a tool for dialogue with health care professionals, resulting in more efficient meetings, better resource planning and the gathering of valuable data for the national quality registers. It is also a digital tool with great potential for development thanks to its open source code, and given its focus on enabling activity and participation, there is an obvious potential to develop its functions to also serve other actors and patient groups. Methods: We teamed up with communication bureau Gullers Grupp, pharmaceutical company Pfizer, and two health care clinics in Stockholm, one for children and youths with rheumatic disease and one for adults, and received funding for one year of development from Vinnova, the Swedish innovation authority, in April 2019. We started the project by conducting a study to narrow down what focuses the Dreamcatcher should have.
The pilot study comprises workshops with patients (children, youths and young adults) and with teams of health care professionals, as well as more in-depth interviews with both patients and health care professionals. Based on the study, we will develop a prototype of what the Dreamcatcher could look like; it will most likely be a smartphone application. Results: The study narrows the Dreamcatcher down to three things: the Dreamscale, Dream data, and the Dream collective. The Dreamscale is, as previously explained, a complement to the traditional pain scale and a tool for patients to set goals towards their dreams, and for patients and health care professionals to co-plan care and medical treatment based on what’s most important to the patient. Dream data is where patients can self-track their disease, data which is also available for health care professionals to view, so that they are better prepared before meeting with the patient. It is also a goal to have the Dream data transferred to the national quality registers. The Dream collective is a social function where patients using the app can connect and get inspired by each other. It is a place to share your dreams and build a community to show that rheumatic disease isn’t something that should ever stop you from going after your dreams! Conclusion: The prototype of the Dreamcatcher will be presented in May 2020, and we think it has great potential to help shift focus within health care, from sickness and limitations to dreams, joy of life and possibilities! References: [1] https://ungareumatiker.se/nytt-digitalt-patientverktyg-unga-reumatiker-tar-fram-dromfangaren/ [2] https://www.youtube.com/watch?v=zD6PwSKeb8I Disclosure of Interests: None declared


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within decision process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influences of these emerging technologies on human analytical and intuitive decision-making processes. Design/methodology/approach The authors first provide a holistic understanding of important drivers of digital transformation. The authors then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations. Findings The authors discuss similarities and differences between machine learning and two human decision processes, namely, analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem to continue to monopolize the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future. Research limitations/implications The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning. Practical implications Decisions are building blocks of organizational success. Therefore, a better understanding of the way human decision processes can be impacted by advanced technologies will prepare managers to better use these technologies and make better decisions. 
By clarifying the boundaries/overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners. Social implications The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving. Originality/value The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


Author(s):  
R. K. Shah

<p>Accurate visual information about locations is vital for efficient resource planning and for managing workspace conflicts in earthwork operations, but such information is missing from existing linear schedules. Hence, construction managers have to depend on subjective decisions and intangible imagining for resource allocation, workspace conflict management and location-based progress monitoring in earthwork projects. This has caused uncertainty in the planning and scheduling of earthworks, and consequently delays and cost overruns. To overcome these issues, a computer-based prototype model was developed using the theory of location-based planning. This paper focuses on case study experiments to demonstrate the functions of the model, which include automatic generation of location-based earthwork schedules and visualisation of cut-fill locations on a weekly basis. The experimental results confirmed the model’s capability to identify precise weekly cut-fill locations and to visualise time-space conflicts in earthwork projects. Hence, the paper concludes that the model is a useful decision-support tool to improve site productivity and reduce the production cost of earthworks in construction projects such as roads and railways.</p><p><em>Journal of Advanced College of Engineering and Management, Vol. 1, 2015</em>, pp. 75-84</p>
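The core of a location-based earthwork schedule (cut/fill per chainage, grouped into weekly work packages) can be illustrated as below. All levels, the capacity value, and the simple sequential allocation are assumptions for demonstration, not the paper's model.

```python
# Hedged illustration of location-based earthwork scheduling: derive cut/fill
# at each chainage from existing vs design levels, then group locations into
# weekly packages limited by an assumed production-rate capacity.
def cut_fill(existing, design):
    """Positive = cut (excavate), negative = fill, per location."""
    return [e - d for e, d in zip(existing, design)]

def weekly_schedule(volumes, weekly_capacity):
    """Assign locations to weeks sequentially until capacity is used up."""
    schedule, week, used = {}, 1, 0.0
    for loc, v in enumerate(volumes):
        work = abs(v)
        if used + work > weekly_capacity and used > 0:
            week, used = week + 1, 0.0  # start a new week
        schedule.setdefault(week, []).append((loc, "cut" if v > 0 else "fill"))
        used += work
    return schedule

existing = [102.0, 101.5, 99.0, 98.0, 100.5]   # existing ground levels (m)
design   = [100.0, 100.0, 100.0, 100.0, 100.0] # design levels (m)
volumes = cut_fill(existing, design)           # [2.0, 1.5, -1.0, -2.0, 0.5]
schedule = weekly_schedule(volumes, weekly_capacity=3.5)
print(schedule)
```

A real model would convert level differences to volumes via cross-sections and balance cut against fill haulage; the sketch only shows the location-to-week mapping that makes weekly cut-fill locations visible.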


2019 ◽  
Vol 12 (1) ◽  
pp. 159-175
Author(s):  
Elvis Kobina Donkoh ◽  
Rebecca Davis ◽  
Emmanuel D.J Owusu-Ansah ◽  
Emmanuel A. Antwi ◽  
Michael Mensah

Games are part of our contemporary culture and way of life. These games are often studied as mathematical models of conflict and cooperation between intelligent, rational decision-makers. An example is the African board game ’Zaminamina draft’, which is often guided by combinatorial strategies and techniques for winning. In this paper we derive an intelligent mathematical technique for playing a winning game. Two different starting strategies were formulated: center starting, and edge or vertex starting. The results were cast into a 3x3 matrix, and elementary row operations were performed to establish all possible wins. MATLAB was used to transform the matrix to determine the diagonal wins. A program was written in Python using artificial intelligence (AI) to help play optimally.
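The 3x3 win structure and an optimal-play search can be sketched as below. This is a hedged illustration: the win lines and minimax rules are modeled as in a generic three-in-a-row game on a 3x3 board, and the actual Zaminamina rules may differ.

```python
# Hedged sketch: the eight winning lines of a 3x3 board (rows, columns,
# diagonals) and a negamax search for the optimal move.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player, opponent):
    """Negamax: return (score, move); score is 1 win, 0 draw, -1 loss."""
    if winner(board) == opponent:
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None
    best = (-2, None)
    for m in moves:
        board[m] = player
        score = -best_move(board, opponent, player)[0]
        board[m] = " "  # undo the trial move
        if score > best[0]:
            best = (score, m)
    return best

# X has two in a row; optimal play completes the line at cell 2
board = ["X", "X", " ", "O", "O", " ", " ", " ", " "]
score, move = best_move(board, "X", "O")
print(score, move)  # -> 1 2
```

The WIN_LINES table plays the role of the enumerated row, column, and diagonal wins described above; the search then picks the move with the best guaranteed outcome.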

