CPAS: the UK’s national machine learning-based hospital capacity planning system for COVID-19

2020 ◽  
Author(s):  
Zhaozhi Qian ◽  
Ahmed M. Alaa ◽  
Mihaela van der Schaar

Abstract: The coronavirus disease 2019 (COVID-19) global pandemic poses the threat of overwhelming healthcare systems with unprecedented demands for intensive care resources. These demands cannot be managed effectively without a nationwide collective effort that relies on data to forecast hospital demands at the national, regional, hospital, and individual levels. To this end, we developed the COVID-19 Capacity Planning and Analysis System (CPAS): a machine learning-based system for hospital resource planning that we have successfully deployed at individual hospitals and across regions in the UK in coordination with NHS Digital. In this paper, we discuss the main challenges of deploying a machine learning-based decision support system at national scale, and explain how CPAS addresses these challenges by (1) defining the appropriate learning problem, (2) combining bottom-up and top-down analytical approaches, (3) using state-of-the-art machine learning algorithms, (4) integrating heterogeneous data sources, and (5) presenting the results through an interactive and transparent interface. CPAS is one of the first machine learning-based systems to be deployed in hospitals on a national scale to address the COVID-19 pandemic; we conclude the paper with a summary of the lessons learned from this experience.
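
As a rough illustration of item (2), combining bottom-up and top-down analytical approaches, the sketch below blends the sum of per-hospital ICU demand forecasts with a separate national-level forecast and redistributes the blended total; the reconciliation scheme, the equal weighting, and the hospital names are assumptions for illustration, not the method CPAS actually uses.

    # Hypothetical reconciliation of bottom-up (per-hospital) and top-down (national)
    # ICU demand forecasts; not the actual CPAS methodology.
    def reconcile_forecasts(hospital_forecasts, national_forecast, weight=0.5):
        """Blend the summed hospital forecasts with the national forecast, then
        redistribute the blended total to hospitals in proportion to their shares."""
        bottom_up_total = sum(hospital_forecasts.values())
        blended_total = weight * bottom_up_total + (1.0 - weight) * national_forecast
        scale = blended_total / bottom_up_total if bottom_up_total else 0.0
        return {name: demand * scale for name, demand in hospital_forecasts.items()}

    # Toy example: three hospitals whose forecasts sum to 89.5 ICU beds, against a
    # national model that projects 100 beds for the same region.
    forecasts = {"hospital_A": 42.0, "hospital_B": 17.5, "hospital_C": 30.0}
    print(reconcile_forecasts(forecasts, national_forecast=100.0))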

2018 ◽  
Vol 7 (2.28) ◽  
pp. 306
Author(s):  
Manu Kohli

For business enterprises, supplier evaluation is a mission-critical process. On ERP (Enterprise Resource Planning) applications such as SAP, the supplier evaluation process is performed by configuring a linear scoring model; however, this approach has had limited success. Therefore, in this paper the author proposes a two-stage supplier evaluation model that integrates data from the SAP application with machine learning (ML) algorithms. In the first stage, a data extraction algorithm is applied to the SAP application to build a data model comprising the relevant features. In the second stage, each instance in the data model is classified on a rank of 1 to 6 based on supplier performance measurements such as on-time delivery, on-quality delivery, and as-promised quantity. Thereafter, various machine learning algorithms are applied to the training sample with a multi-class classification objective so that the algorithms learn the supplier ranking. Encouraging results were observed when the learning algorithms, Decision Tree (DT) and Support Vector Machine (SVM), achieved more than 98 percent accuracy on the test data sets. The supplier evaluation model proposed in the paper can therefore be generalised to any other information management system that manages the Procure-to-Pay process, not only SAP.
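
A minimal sketch of the second-stage multi-class ranking step is given below, assuming synthetic supplier-performance features; the feature construction from SAP, the training data, and the model settings here are illustrative assumptions, not the paper's actual pipeline.

    # Sketch: classify suppliers into ranks 1-6 with a decision tree and an SVM,
    # using synthetic on-time / on-quality / as-promised-quantity rates.
    import numpy as np
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(600, 3))                 # three performance rates
    y = np.clip((X.mean(axis=1) * 6).astype(int) + 1, 1, 6)  # supplier rank 1..6

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    for name, model in [("DT", DecisionTreeClassifier(random_state=0)),
                        ("SVM", SVC(kernel="rbf", C=10.0))]:
        model.fit(X_train, y_train)
        print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))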


2016 ◽  
Vol 25 (S 01) ◽  
pp. S117-S29 ◽  
Author(s):  
L. Sacchi ◽  
J. H. Holmes

Summary Objectives: We sought to explore, via a systematic review of the literature, the state of the art of knowledge discovery in biomedical databases as it existed in 1992 and as it exists now, 25 years later, focusing mainly on supervised learning. Methods: We performed a rigorous systematic search of PubMed and used latent Dirichlet allocation to identify themes in the literature and trends in the science of knowledge discovery within and between the two time periods, and to compare these trends. We restricted each result set using a bracket of the five preceding years, such that the 1992 result set was restricted to articles published between 1987 and 1992, and the 2015 set to those published between 2011 and 2015. This was to reflect the literature available to researchers and others at the target dates of 1992 and 2015. The search term was framed as: Knowledge Discovery OR Data Mining OR Pattern Discovery OR Pattern Recognition, Automated. Results: A total of 538 and 18,172 documents were retrieved for 1992 and 2015, respectively. The number and type of data sources increased dramatically over the observation period, primarily due to the advent of electronic clinical systems. The period 1992-2015 saw the emergence of new areas of research in knowledge discovery, and the refinement and application of machine learning approaches that were nascent or unknown in 1992. Conclusions: Over the 25 years of the observation period, we identified numerous developments that impacted the science of knowledge discovery, including the availability of new forms of data, new machine learning algorithms, and new application domains. Through a bibliometric analysis we examine the striking changes in the availability of highly heterogeneous data resources and the evolution of new algorithmic approaches to knowledge discovery, and we consider possible explanations for the growth of the field from legal, social, and political perspectives. Finally, we reflect on the achievements of the past 25 years to consider what the next 25 years will bring with regard to the availability of even more complex data and to the methods that could be, and are now being, developed for the discovery of new knowledge in biomedical data.
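
The theme-identification step can be illustrated with a minimal latent Dirichlet allocation sketch; the toy corpus, the two-topic setting, and the preprocessing here are assumptions and do not reproduce the review's actual PubMed corpora.

    # Sketch: fit LDA on a tiny corpus and print the top terms per topic.
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    corpus = [
        "pattern recognition in clinical databases",
        "automated knowledge discovery from electronic health records",
        "data mining of genomic sequences for biomarker discovery",
        "supervised learning for diagnostic decision support",
    ]
    vectorizer = CountVectorizer(stop_words="english").fit(corpus)
    X = vectorizer.transform(corpus)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top_terms = [terms[i] for i in topic.argsort()[::-1][:5]]
        print(f"topic {k}:", ", ".join(top_terms))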


2017 ◽  
Vol 7 (2) ◽  
pp. 76-94 ◽  
Author(s):  
Hussein Mohammed Alrabba ◽  
Muhannad Akram Ahmad

The paper examines several functionalities of the Enterprise Resource Planning (ERP) system, as indicated in the title. Essentially, the system's role is viewed from the perspective of generating better accounting practices in an advanced business setting and in relation to the size of the enterprise. A substantial part of the paper attempts to give a clear depiction of the ERP paradigm/system as the main tool underpinning the mechanics and base criteria of business accounting. Subsequently, the paper highlights the roles of this tool in enhancing accounting practices. The research uses Jordan as the setting for examining ERP's comprehensive capabilities. An empirical study of the Jordanian mining industry is used for sampling results, alongside a critical theoretical review of organisations' adoption of the ERP system in their accounting systems (Naash & Khamis, 2009). Similarly, Jordanian banks are briefly considered at a theoretical level when testing both the alternative and the null hypotheses. The empirical study is analysed using a custom bucketing methodology that measures trends in the open-ended questions, attributed to efficiency; these are the variables tested in the open-ended questions. The closed questions, on the other hand, are subjected to analysis of variance (ANOVA), in which the variance between the "yes" and "no" responses is checked. The two analytical approaches to the questionnaire results are interrelated because of the homogeneity of the question types. The null hypothesis H₀ is tested against the risk factors and challenges facing the system's implementation in the organisation; it is from the corresponding findings that the research infers its recommendations. The alternative hypothesis H₁ concerns the substantial impact of ERP on the Jordanian accounting sector; this proposition is tested using the overall results of the bucketing and ANOVA of the surveys conducted at the Jordanian Bromine and Arab Potash companies. The research methodology quantitatively used the Jordanian Bromine Company and the Arab Potash Company to test whether the ERP system played any role in advancing Jordan towards universal standard accounting practices and accounting mechanisms. Notably, the data from the two studies were relied on for feedback on the implementation and application of the ERP paradigm/system within the structure of the Jordanian Bromine Company and the Arab Potash Company. The final results supported the conclusion that the overall ERP structure greatly impacted accounting mechanisms and standards in the Jordanian organisations. The recommendations aim at integrating different sectors in Jordan, including the Jordanian Bromine and Arab Potash companies, with the banking sector and financial institutions so that the entire system can work collaboratively under the protocols, rules, and requirements of universal standard accounting practices and accounting mechanisms.
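
A minimal sketch of the ANOVA step on the closed questions is given below, assuming invented response scores for the "yes" and "no" groups; it is not based on the Jordanian survey data.

    # Sketch: one-way ANOVA comparing a numeric measure between "yes" and "no"
    # respondents, using invented scores for illustration only.
    from scipy.stats import f_oneway

    yes_group = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9]   # e.g., perceived efficiency scores
    no_group = [3.2, 3.0, 3.6, 2.9, 3.4, 3.1]

    f_stat, p_value = f_oneway(yes_group, no_group)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a group difference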


2021 ◽  
Vol 13 (18) ◽  
pp. 10239
Author(s):  
Farbod Farhangi ◽  
Abolghasem Sadeghi-Niaraki ◽  
Seyed Vahid Razavi-Termeh ◽  
Soo-Mi Choi

Drivers' lack of alertness is one of the main reasons for fatal road traffic accidents (RTAs) in Iran. Accident-risk mapping with machine learning algorithms on a geographic information system (GIS) platform is a suitable approach for investigating the occurrence risk of these accidents by analyzing the role of the contributing factors. This approach helps to identify high-risk areas, even in unnoticed and remote places, and prioritizes accident-prone locations. This paper aimed to evaluate tuned machine learning algorithms of bagged decision trees (BDTs), extra trees (ETs), and random forest (RF) in mapping the risk of accidents caused by drivers' lack of alertness (due to drowsiness, fatigue, and reduced attention) at the national scale of Iran's roads. Accident points and eight contributing criteria, namely distance to the city, distance to the gas station, land use/cover, road structure, road type, time of day, traffic direction, and slope, were applied in the modeling using GIS. The time factor was used to represent drivers' varied alertness levels. The accident dataset included 4399 RTA records from March 2017 to March 2019. The performance of all models was cross-validated with five folds and three metrics: mean absolute error, mean squared error, and the area under the receiver operating characteristic curve (ROC-AUC). The cross-validation results showed that BDT and RF, each with an AUC of 0.846, were slightly more accurate than ET with an AUC of 0.827. The importance of the modeling features was assessed using the Gini index, and the results revealed that road type, distance to the city, distance to the gas station, slope, and time of day were the most important, while land use/cover, traffic direction, and road structure were the least important. The proposed approach could be improved by including traffic volume in the modeling, and it helps decision-makers take necessary actions by identifying the factors that matter most for road safety.
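
A minimal sketch of the model comparison is shown below: bagged decision trees, extra trees, and random forest cross-validated with five folds on ROC-AUC, followed by Gini-based feature importances. The synthetic data stand in for the paper's GIS-derived accident records, and the hyperparameters are assumptions.

    # Sketch: compare BDT, ET, and RF with 5-fold cross-validation and inspect
    # Gini feature importances, using synthetic stand-ins for the eight criteria.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
    features = ["dist_city", "dist_gas_station", "land_use_cover", "road_structure",
                "road_type", "time_of_day", "traffic_direction", "slope"]

    models = {
        "BDT": BaggingClassifier(n_estimators=100, random_state=0),  # decision-tree base by default
        "ET": ExtraTreesClassifier(n_estimators=100, random_state=0),
        "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: mean ROC-AUC = {auc:.3f}")

    rf = models["RF"].fit(X, y)
    for feat, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
        print(f"{feat}: {imp:.3f}")  # Gini importance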


2020 ◽  
Author(s):  
Pascal J. Goldschmidt-Clermont

Abstract: In March of 2020, the COVID-19 pandemic had expanded to the United States of America (US). Companies designated as "essential" for the US had to maintain productivity in spite of the growing threat created by the SARS-CoV-2 virus. With this report, we present the response of one such company, the Lennar Corporation, a major homebuilder in the US. Within days, Lennar had implemented a morning health check via its enterprise resource planning system to identify associates (employees) who were sick or not in their "usual state of health". With this survey, Lennar was able to ensure that no one who was sick would show up to work and that they would instead self-quarantine at home. Furthermore, with thorough contact tracking, associates exposed to COVID-19 patients (suspected or RT-PCR test-confirmed) were also asked to self-quarantine. This survey, together with other safety measures such as an overhaul of the company with nearly 50% of its workforce working from home, prolific communication, and many more, allowed Lennar to function safely for its associates and successfully as an enterprise. The data that we present here are "real world data" collected in the context of working throughout a dreadful pandemic, and the lessons learned could be helpful to other companies that are preparing to return to work.


Clustering and classification of large data is a very challenging task in data mining. Various machine learning and deep learning systems have been proposed by many researchers on different datasets. Data volume, data size, and the structure of the data may affect the time complexity of a system. This paper describes a new document object classification approach using deep learning (DL) and proposes a recurrent neural network (RNN) for classification combined with a micro-clustering approach. TF-IDF and a density-based approach are used to store the best features. The planned work uses a supervised learning method and extracts a feature set, called BK, for the desired classes. Once the training part is completed, the system proceeds to classify the test instances with the help of the planned classification algorithm. The recurrent neural network categorizes each test object according to its weights. The system is able to work on heterogeneous data sets and generates micro-clusters according to the classification results. An experimental analysis against classical machine learning algorithms was also carried out. The proposed algorithm shows higher accuracy than the existing density-based approach on different data sets.
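
A minimal RNN text classifier in the spirit of the described system is sketched below; the TF-IDF/density-based feature selection and the micro-clustering steps are omitted, and the tiny corpus, vocabulary size, and hyperparameters are illustrative assumptions.

    # Sketch: tokenize a tiny corpus, pad sequences, and train a small RNN classifier.
    import numpy as np
    from tensorflow.keras.layers import Dense, Embedding, SimpleRNN
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    from tensorflow.keras.preprocessing.text import Tokenizer

    texts = ["invoice payment overdue", "football match result",
             "payment received for invoice", "league cup final score"]
    labels = np.array([0, 1, 0, 1])  # 0 = finance, 1 = sport

    tokenizer = Tokenizer(num_words=1000)
    tokenizer.fit_on_texts(texts)
    X = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=10)

    model = Sequential([
        Embedding(input_dim=1000, output_dim=16),
        SimpleRNN(16),
        Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X, labels, epochs=5, verbose=0)
    print(model.predict(X).argmax(axis=1))  # predicted class per document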


Author(s):  
Sai Hanuman Akundi ◽  
Soujanya R ◽  
Madhuri PM

In recent years, vast quantities of data have been managed in various medical applications, and multiple organizations worldwide have generated this type of data; together, these heterogeneous data are called big data. Data characterized by large quantity (volume), speed (velocity), and variety are referred to as big data. The healthcare sector, renowned for generating large amounts of heterogeneous data, has faced the need to handle large data from different sources. We can use big data analysis to make proper decisions in the health system by tweaking some of the current machine learning algorithms. If we have a large amount of data from which we want to make predictions or identify patterns, machine learning is the way forward. In this article, a brief overview of big data, its functionality, and approaches to big data analytics is presented; these play an important role in, and significantly affect, healthcare information technology. Within this paper we also present a comparative study of machine learning algorithms. We need to make effective use of all the current machine learning algorithms to anticipate accurate outcomes in the world of nursing.
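
As a small illustration of such a comparative study, the sketch below cross-validates several common classifiers on scikit-learn's built-in breast cancer dataset; the dataset and the chosen models are assumptions for illustration and are not those used in the article.

    # Sketch: compare several classifiers with 5-fold cross-validation on a
    # health-related benchmark dataset.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    models = {
        "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "SVM": make_pipeline(StandardScaler(), SVC()),
        "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for name, model in models.items():
        score = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        print(f"{name}: mean accuracy = {score:.3f}")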


2018 ◽  
pp. 1-12 ◽  
Author(s):  
Issam El Naqa ◽  
Michael R. Kosorok ◽  
Judy Jin ◽  
Michelle Mierzwa ◽  
Randall K. Ten Haken

Recently, there has been burgeoning interest in developing more effective and robust clinical decision support systems (CDSSs) for oncology. This has been primarily driven by the demands for more personalized and precise medical practice in oncology in the era of so-called big data (BD), an era that promises to harness the power of large-scale data flow to revolutionize cancer treatment. This interest in BD analytics has created new opportunities as well as new unmet challenges. These include: routine aggregation and standardization of clinical data, patient privacy, transformation of current analytical approaches to handle such noisy and heterogeneous data, and expanded use of advanced statistical learning methods on the basis of the confluence of modern statistical methods and machine learning algorithms. In this review, we present the current status of CDSSs in oncology, the prospects and current challenges of BD analytics, and the promising role of integrated modern statistics and machine learning algorithms in predicting complex clinical end points, individualizing treatment rules, and optimizing dynamic personalized treatment regimens. We discuss issues pertaining to these topics and present application examples from an aggregate of experiences. We also discuss the role of human factors in improving the use and acceptance of such enhanced CDSSs and how to mitigate possible sources of human error to achieve optimal performance and wider acceptance.
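
The idea of individualizing treatment rules can be illustrated with a simple regression-based sketch: fit an outcome model per treatment arm and recommend the arm with the better predicted outcome. The synthetic data, single covariate, and choice of learner are assumptions, not a method described in the review.

    # Sketch: regression-based individualized treatment rule on synthetic data.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 1))                      # patient covariate, e.g. a biomarker
    treatment = rng.integers(0, 2, size=1000)           # 0 or 1, randomly assigned
    # Outcome: treatment 1 helps only when the biomarker is positive.
    outcome = 0.5 * X[:, 0] + treatment * np.sign(X[:, 0]) + rng.normal(scale=0.5, size=1000)

    models = {}
    for t in (0, 1):
        mask = treatment == t
        models[t] = GradientBoostingRegressor(random_state=0).fit(X[mask], outcome[mask])

    def recommend(x):
        """Return the treatment arm with the higher predicted outcome for covariates x."""
        preds = {t: m.predict(x.reshape(1, -1))[0] for t, m in models.items()}
        return max(preds, key=preds.get)

    print(recommend(np.array([1.2])), recommend(np.array([-0.8])))  # expect 1, then 0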

