Forecasting American COVID-19 Cases and Deaths through Machine Learning (Preprint)

2020 ◽  
Author(s):  
Anaiy Somalwar

COVID-19 has become a major national security problem for the United States and many other countries, where public policy and healthcare decisions are based on several models for the prediction of future COVID-19 cases and deaths. While the most commonly used models for COVID-19 include epidemiological models and Gaussian curve-fitting models, recent literature has indicated that these models could be improved by incorporating machine learning. However, research on potential machine learning models for COVID-19 forecasting has largely emphasized providing an array of different model types rather than optimizing a single one. In this research, we propose and optimize a linear machine learning model with a gradient-based optimizer for the prediction of future COVID-19 cases and deaths in the United States. We also suggest that a hybrid of a machine learning model for shorter-range predictions and a Gaussian curve-fitting or epidemiological model for longer-range predictions could greatly increase the accuracy of COVID-19 forecasting. INTERNATIONAL REGISTERED REPORT: RR2-https://doi.org/10.1101/2020.08.13.20174631
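The preprint does not give implementation details; the following is a minimal sketch, assuming a scikit-learn style workflow, of a linear model trained with a gradient-based optimizer (SGD) on a daily case series. The data, feature choice, and hyperparameters are illustrative placeholders, not the author's.

```python
# Minimal sketch (not the author's code): fit a linear model with a
# gradient-based optimizer (SGD) to a daily cumulative-case series and
# extrapolate one week ahead. Data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

days = np.arange(100).reshape(-1, 1)            # day index as the only feature
cases = 500 * days.ravel() + np.random.default_rng(0).normal(0, 2000, 100)

model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=5000, tol=1e-6))
model.fit(days, cases)

future = np.arange(100, 107).reshape(-1, 1)     # one-week-ahead extrapolation
print(model.predict(future))
```

For longer horizons, the abstract suggests handing off from such a model to a Gaussian curve-fitting or epidemiological model.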


2016 ◽  
Vol 7 (2) ◽  
pp. 43-71 ◽  
Author(s):  
Sangeeta Lal ◽  
Neetu Sardana ◽  
Ashish Sureka

Logging is an important yet tough decision for OSS developers. Machine-learning models are useful in improving several steps of OSS development, including logging. Several recent studies propose machine-learning models to predict logged code constructs. The prediction performance of these models is limited by the class-imbalance problem, since the number of logged code constructs is small compared to non-logged code constructs. No previous study analyzes the class-imbalance problem for logged code construct prediction. The authors first analyze the performance of J48, RF, and SVM classifiers for catch-block and if-block logged code construct prediction on imbalanced datasets. Second, the authors propose LogIm, an ensemble and threshold-based machine-learning model. Third, the authors evaluate the performance of LogIm on three open-source projects. On average, the LogIm model improves the performance of the baseline classifiers J48, RF, and SVM by 7.38%, 9.24%, and 4.6% for catch-block, and 12.11%, 14.95%, and 19.13% for if-block logging prediction.
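The abstract does not include code; below is a hedged sketch of a threshold-based ensemble in the spirit of LogIm, using scikit-learn's DecisionTreeClassifier as a stand-in for J48 and synthetic imbalanced data. The threshold value and dataset are placeholders, not the authors' implementation.

```python
# Hedged sketch of a threshold-based ensemble (not the LogIm code):
# average class probabilities from three classifiers and move the decision
# threshold below 0.5 to favour the minority "logged" class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier        # stand-in for J48
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = [DecisionTreeClassifier(random_state=0),
          RandomForestClassifier(random_state=0),
          SVC(probability=True, random_state=0)]
probas = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models],
                 axis=0)

threshold = 0.3                  # would be tuned on validation data in practice
y_pred = (probas >= threshold).astype(int)
```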


Author(s):  
Terazima Maeda

Nowadays, there are a large number of machine learning models that could be used in various areas. However, different research targets are usually sensitive to the type of model. For a specific prediction target, the predictive accuracy of a machine learning model always depends on the data features, the data size, and the intrinsic relationship between inputs and outputs. Therefore, for a specific data group and a fixed prediction task, how to rationally compare the predictive accuracy of different machine learning models is an important question. In this brief note, we show how the performances of different machine learning models should be compared, using some typical examples.
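As a concrete illustration of the comparison problem raised in the note, the sketch below compares two models on the same data with repeated cross-validation rather than a single train/test split; the dataset and models are placeholders chosen for the example, not those used in the note.

```python
# Illustrative sketch: compare the predictive accuracy of different models
# on the same data with repeated cross-validation, reporting mean and
# standard deviation of the fold scores.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)

for name, model in [("logreg", LogisticRegression(max_iter=5000)),
                    ("rf", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(name, scores.mean().round(3), scores.std().round(3))
```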


2020 ◽  
Vol 9 (3) ◽  
pp. 875
Author(s):  
Young Suk Kwon ◽  
Moon Seong Baek

The quick sepsis-related organ failure assessment (qSOFA) score has been introduced to predict the likelihood of organ dysfunction in patients with suspected infection. We hypothesized that machine-learning models using qSOFA variables for predicting three-day mortality would provide better accuracy than the qSOFA score in the emergency department (ED). Between January 2016 and December 2018, the medical records of patients aged over 18 years with suspected infection were retrospectively obtained from four EDs in Korea. Data from three hospitals (n = 19,353) were used as training-validation datasets and data from one (n = 4234) as the test dataset. Machine-learning algorithms including extreme gradient boosting, light gradient boosting machine, and random forest were used. We assessed the prediction ability of the machine-learning models using the area under the receiver operating characteristic curve (AUROC), and DeLong's test was used to compare AUROCs between the qSOFA scores and qSOFA-based machine-learning models. A total of 447,926 patients visited EDs during the study period. We analyzed 23,587 patients with suspected infection who were admitted to the EDs. The median age of the patients was 63 years (interquartile range: 43–78 years) and in-hospital mortality was 4.0% (n = 941). For predicting three-day mortality among patients with suspected infection in the ED, the AUROC of the qSOFA-based machine-learning model (0.86 [95% CI 0.85–0.87]) was higher than that of the qSOFA scores (0.78 [95% CI 0.77–0.79], p < 0.001). For predicting three-day mortality in patients with suspected infection in the ED, the qSOFA-based machine-learning model was found to be superior to the conventional qSOFA scores.
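The study's code is not reproduced here; the following rough sketch mirrors the workflow it describes, using scikit-learn's GradientBoostingClassifier as a stand-in for the XGBoost/LightGBM/random-forest models and synthetic qSOFA-style variables. With three binary features generated this way the two AUROCs will be similar; the point is the mechanics, not the study's result. DeLong's test is omitted.

```python
# Hedged sketch: fit a gradient-boosting model on qSOFA-style variables and
# compare its AUROC with the plain qSOFA sum. All data are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sbp_low": rng.integers(0, 2, 5000),    # systolic BP <= 100 mmHg
    "rr_high": rng.integers(0, 2, 5000),    # respiratory rate >= 22/min
    "ams": rng.integers(0, 2, 5000),        # altered mental status
})
qsofa = df.sum(axis=1)
y = (rng.random(5000) < 0.02 + 0.06 * qsofa).astype(int)  # synthetic 3-day mortality

X_tr, X_te, y_tr, y_te, q_tr, q_te = train_test_split(df, y, qsofa, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("qSOFA score AUROC:", roc_auc_score(y_te, q_te))
print("ML model AUROC   :", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```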


2021 ◽  
Author(s):  
Julio Alberto López-Gómez ◽  
Daniel Carrasco Pardo ◽  
Pablo Higueras ◽  
Jose María Esbrí ◽  
Saturnino Lorenzo

Traditionally, prospectivity models were designed using approaches mainly based on expert judgement. These models have been widely applied and are also known as knowledge-driven prospectivity models (see Harris et al., 2015). Currently, artificial intelligence approaches, especially machine learning models, are being applied to build prospectivity models, since they have proven successful in many other domains (see Sun et al., 2019 and Guerra Prado et al., 2020). These are also known as data-driven prospectivity models. Machine learning models learn from data repositories, extracting and detecting relationships in the data in order to predict new instances.

In this work, a geological dataset was collected by a team of expert geologists. The data collected include the geographical coordinates as well as several geological features of points belonging to seventy-seven different mercury deposits in the Almadén mining district. The resulting dataset is composed of a total of 24,798 points and 24 attributes per point. In particular, we have collected geological and mining-related data regarding the Almadén mercury (Hg) mining district; these data include the location of the several Hg mineralizations, including their typology, size, mineralogy, and stratigraphic position, as well as other information associated with the metallogenetic model set up by Hernández et al. (1999).

Then, several machine learning models are built in order to select the one that offers the best results. The aim of this work is twofold: on the one hand, to build a machine learning model capable of determining, given the geological features of a data point, the mercury deposit to which it belongs; on the other hand, to build a machine learning model capable of identifying, given the geological features of a data point, the kind of deposit to which it belongs. The experiments conducted in this work have been properly designed, and the results obtained have been validated using statistical techniques.

Finally, the models built in this work will allow mercury prospectivity maps to be generated. The final aim of this process is to train a system able to perform antimony prospection in the nearby Guadalmez syncline.

This work was funded by the ANR (ANR-19-MIN2-0002-01), the AEI (MICIU/AEI/REF.: PCI2019-103779) and the authors' institutions in the framework of the ERA-MIN2 AUREOLE project.

References

Guerra Prado, E.M.; de Souza Filho, C.R.; Carranza, E.M.; Motta, J.G. (2020). Modeling of Cu-Au prospectivity in the Carajás mineral province (Brazil) through machine learning: Dealing with imbalanced training data.

Harris, J.R.; Grunsky, E.; Corrigan, D. (2015). Data- and knowledge-driven mineral prospectivity maps for Canada's North.

Hernández, A.; Jébrak, M.; Higueras, P.; Oyarzun, R.; Morata, D.; Munhá, J. (1999). The Almadén mercury mining district, Spain. Mineralium Deposita, 34: 539-548.

Sun, T.; Chen, F.; Zhong, L.; Liu, W.; Wang, Y. (2019). GIS-based mineral prospectivity mapping using machine learning methods: A case study from the Tongling ore district, eastern China.
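As a rough illustration of the data-driven workflow described in the abstract above (and not the AUREOLE project's actual pipeline), the sketch below trains a classifier that maps a point's geological attributes to a deposit label and validates it with stratified cross-validation; the data are synthetic placeholders with roughly the same dimensions as the described dataset.

```python
# Minimal sketch (placeholder data, not the Almadén dataset): predict the
# deposit class of a point from its geological attributes and validate the
# classifier with stratified cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# stand-in for points described by 24 geological attributes
X, y = make_classification(n_samples=5000, n_features=24, n_informative=10,
                           n_classes=5, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(clf, X, y, cv=cv).mean())
```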


Author(s):  
S. Sasikala ◽  
S. J. Subhashini ◽  
P. Alli ◽  
J. Jane Rubel Angelina

Machine learning is a technique of parsing data, learning from that data, and then applying what has been learned to make informed decisions. Deep learning is actually a subset of machine learning: it technically is machine learning and functions in the same way, but it has different capabilities. The main difference between deep learning and machine learning is that machine learning models improve progressively but still need some guidance; if a machine learning model returns an inaccurate prediction, the programmer needs to fix that problem explicitly, whereas a deep learning model corrects it by itself. An automatic car driving system is a good example of deep learning. Artificial intelligence, on the other hand, is distinct from machine learning and deep learning; deep learning and machine learning are both subsets of AI.


2021 ◽  
Author(s):  
Cenk Temizel ◽  
Celal Hakan Canbaz ◽  
Karthik Balaji ◽  
Ahsen Ozesen ◽  
Kirill Yanidis ◽  
...  

Abstract Machine learning models have worked as a robust tool in forecasting and optimization processes for wells in conventional, data-rich reservoirs. In unconventional reservoirs, however, given the large ranges of uncertainty, purely data-driven machine learning models have not yet proven to be repeatable and scalable. In such cases, integrating physics-based reservoir simulation methods with machine learning techniques can alleviate these limitations. The objective of this study is to provide an overview, along with examples, of implementing this integrated approach for forecasting Estimated Ultimate Recovery (EUR) in shale reservoirs. This study is based solely on synthetic data. To generate data for one section of a reservoir, a full-physics reservoir simulator has been used. Simulated data from this section are used to train a machine learning model, which provides EUR as the output. Production from another section of the field with a different range of reservoir properties is then forecasted using a physics-based model. Using the earlier trained model, production forecasting for this section of the reservoir is then carried out to illustrate the integrated approach to EUR forecasting for a section of the reservoir that is not data-rich. The integrated approach, or hybrid modeling, is illustrated by forecasting production for sections of the reservoir that were data-starved. Using the physics-based model, the uncertainty in EUR predictions made by the machine learning model has been reduced and a more accurate forecast has been attained. This method is primarily applicable in reservoirs, such as unconventionals, where one developed section of the field has a substantial amount of data whereas another section is data-starved. The hybrid model was consistently able to forecast EUR at an acceptable level of accuracy, thereby highlighting the benefits of this type of integrated approach. This study advances the application of repeatable and scalable hybrid models in unconventional reservoirs and highlights their benefits as compared to using either physics-based or machine-learning models separately.
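A conceptual sketch of the hybrid workflow described above is given below; the simulator, property ranges, and surrogate model are all hypothetical placeholders, since the paper's synthetic dataset and simulator are not reproduced here.

```python
# Conceptual sketch of the hybrid workflow (all functions and data are
# hypothetical): a physics-based simulator generates EUR for sampled
# reservoir properties in a data-rich section; an ML surrogate trained on
# those runs then forecasts EUR for a data-poor section.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def physics_simulator(props):
    """Placeholder for a full-physics reservoir simulation returning EUR."""
    porosity, perm, frac_half_length = props.T
    return 1e5 * porosity * np.log1p(perm) * frac_half_length

# Section A (data-rich): simulate many property combinations
props_a = rng.uniform([0.05, 0.1, 100], [0.15, 1.0, 400], size=(500, 3))
eur_a = physics_simulator(props_a)

surrogate = GradientBoostingRegressor(random_state=0).fit(props_a, eur_a)

# Section B (data-starved): different property ranges, forecast with surrogate
props_b = rng.uniform([0.03, 0.05, 80], [0.10, 0.5, 300], size=(50, 3))
eur_b_forecast = surrogate.predict(props_b)
```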


2020 ◽  
Author(s):  
Piyush Mathur ◽  
Tavpritesh Sethi ◽  
Anya Mathur ◽  
Kamal Maheshwari ◽  
Jacek B Cywinski ◽  
...  

Abstract Background: COVID-19 is now one of the leading causes of mortality amongst adults in the United States for the year 2020. Multiple epidemiological models have been built, often based on limited data, to understand the spread and impact of the pandemic. However, many geographic and local factors may have played an important role in higher morbidity and mortality in certain populations. Objective: The goal of this study was to develop machine learning models to understand the relative association of socioeconomic, demographic, travel, and health care characteristics of different states across the United States and COVID-19 mortality. Methods: Using multiple public data sets, 24 variables linked to COVID-19 disease were chosen to build the models. Two independent machine learning models using CatBoost regression and random forest were developed. SHAP feature importance and a Boruta algorithm were used to elucidate the relative importance of features on COVID-19 mortality in the United States. Results: Feature importances from both models, i.e., CatBoost and random forest, consistently showed that high population density, the number of nursing homes, the number of nursing home beds, and foreign travel were the strongest predictors of COVID-19 mortality. The percentage of African Americans in the population was also found to be of high importance in predicting COVID-19 mortality, whereas racial majority (primarily Caucasian) was not. Both models fitted the data well, with training R2 of 0.99 and 0.88, respectively. The effect of median age, median income, climate, and disease mitigation measures on COVID-19 related mortality remained unclear. Conclusions: COVID-19 policy making will need to take population density, pre-existing medical care, and state travel policies into account. Our models identified and quantified the relative importance of each of these factors for mortality predictions using machine learning.
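The study's models are not reproduced here; the sketch below mirrors the described workflow (a tree-based regressor plus SHAP feature ranking) on synthetic state-level data. Feature names echo the abstract, but all values are placeholders, the shap package is assumed to be installed, and the Boruta step is omitted.

```python
# Hedged sketch (synthetic data, not the study's datasets): fit a random
# forest regressor on state-level predictors and rank them by mean |SHAP|.
import numpy as np
import pandas as pd
import shap                                  # assumes shap is installed
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "population_density": rng.uniform(10, 11000, 50),
    "nursing_homes": rng.integers(50, 1200, 50),
    "foreign_travel": rng.uniform(0, 1, 50),
    "median_age": rng.uniform(30, 47, 50),
})
y = 0.002 * X["population_density"] + 0.01 * X["nursing_homes"] + rng.normal(0, 5, 50)

model = RandomForestRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)    # mean |SHAP| per feature
print(dict(zip(X.columns, importance.round(2))))
```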


2018 ◽  
Vol 15 (2) ◽  
pp. 107-121
Author(s):  
David Esteban Useche-Peláez ◽  
Daniel Orlando Díaz-López ◽  
Daniela Sepúlveda-Alzate ◽  
Diego Edison Cabuya-Padilla

Sandboxing has been used regularly to analyze software samples and determine whether they contain suspicious properties or behaviors. Even though sandboxing is a powerful technique for malware analysis, it requires that a malware analyst perform a rigorous analysis of the results to determine the nature of the sample: goodware or malware. This paper proposes two machine learning models able to classify samples based on signatures and permissions obtained through the Cuckoo sandbox, Androguard, and VirusTotal. The developed models are also tested, obtaining an acceptable percentage of correctly classified samples, making them useful tools for a malware analyst. A proposed architecture for an IoT sentinel that uses one of the developed machine learning models is also presented. Finally, different approaches, perspectives, and challenges regarding the use of sandboxing and machine learning by security teams in State security agencies are also shared.
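As an illustration of the classification step described here (not the paper's implementation), the sketch below encodes sandbox-derived signatures and permissions as a binary feature matrix and trains a classifier to separate goodware from malware; the feature names and labels are toy placeholders.

```python
# Illustrative sketch: vectorize per-sample signature/permission dictionaries
# and train a classifier to flag malware. All samples and labels are toys.
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

samples = [
    {"perm.INTERNET": 1, "perm.SEND_SMS": 1, "sig.creates_service": 1},
    {"perm.INTERNET": 1, "perm.CAMERA": 1},
    {"perm.SEND_SMS": 1, "sig.hides_icon": 1, "sig.contacts_c2": 1},
    {"perm.INTERNET": 1},
]
labels = [1, 0, 1, 0]            # 1 = malware, 0 = goodware (toy labels)

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(samples)
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(vec.transform([{"perm.SEND_SMS": 1, "sig.hides_icon": 1}])))
```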

