TOPICAL ISSUES OF APPLICATION OF MACHINE LEARNING METHODS IN ECONOMY

Author(s):
Natalia Pavlovna Persteneva
Darya Dmitrievna Skryleva

The article discusses machine learning methods using the example of two popular classes: supervised learning and unsupervised learning. The main types of machine learning models for each class are presented, and a generalized algorithm for building any machine learning model is formulated.
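The generalized algorithm the article describes (prepare data, split it, train a model, evaluate it) can be sketched in a few lines. The nearest-mean classifier and the toy dataset below are illustrative assumptions, not taken from the article:

```python
# A minimal sketch of the generic supervised-learning loop:
# collect data -> train -> evaluate. The "model" is a nearest-mean
# classifier on one feature, chosen only to keep the sketch self-contained.
def train(samples):
    """samples: list of (feature, label). Returns per-class mean feature."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    # Assign the class whose mean feature is closest to x.
    return min(model, key=lambda y: abs(x - model[y]))

def evaluate(model, samples):
    return sum(predict(model, x) == y for x, y in samples) / len(samples)

# Toy, hand-made train/test split standing in for real economic data.
train_set = [(1.0, "low"), (1.2, "low"), (4.8, "high"), (5.1, "high")]
test_set = [(0.9, "low"), (5.0, "high")]
model = train(train_set)
accuracy = evaluate(model, test_set)
```

The same four steps apply regardless of which model class (supervised or unsupervised) is plugged in.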

2021
Vol ahead-of-print (ahead-of-print)
Author(s):
Nasser Assery
Yuan (Dorothy) Xiaohong
Qu Xiuli
Roy Kaushik
Sultan Almalki

Purpose This study aims to propose an unsupervised learning model to evaluate the credibility of disaster-related Twitter data and present a performance comparison with commonly used supervised machine learning models. Design/methodology/approach First, historical tweets on two recent hurricane events are collected via the Twitter API. Then a credibility scoring system is implemented in which the tweet features are analyzed to give a credibility score and credibility label to each tweet. After that, supervised machine learning classification is implemented using various classification algorithms, and their performances are compared. Findings The proposed unsupervised learning model could enhance the emergency response by providing a fast way to determine the credibility of disaster-related tweets. Additionally, the comparison of the supervised classification models reveals that the Random Forest classifier performs significantly better than the SVM and Logistic Regression classifiers in classifying the credibility of disaster-related tweets. Originality/value In this paper, an unsupervised 10-point scoring model is proposed to evaluate the tweets’ credibility based on user-based and content-based features. This technique could be used to evaluate the credibility of disaster-related tweets on future hurricanes and would have the potential to enhance emergency response during critical events. The comparative study of different supervised learning methods has revealed effective supervised learning methods for evaluating the credibility of Twitter data.
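A scoring system of the kind the abstract describes can be sketched as follows. The specific features, weights, and threshold here are illustrative assumptions; the paper's actual 10-point model may weight its user-based and content-based features quite differently:

```python
# Hypothetical 10-point credibility scorer for a tweet, in the spirit of the
# user-based and content-based features the abstract mentions. Features and
# weights below are illustrative assumptions, not the paper's.
def credibility_score(tweet):
    score = 0
    # User-based features.
    if tweet.get("verified"):                   score += 2
    if tweet.get("followers", 0) > 1000:        score += 2
    if tweet.get("account_age_days", 0) > 365:  score += 1
    # Content-based features.
    if tweet.get("has_url"):                    score += 2
    if tweet.get("has_media"):                  score += 1
    if tweet.get("retweets", 0) > 10:           score += 1
    if not tweet.get("all_caps"):               score += 1
    return score  # 0..10

def credibility_label(tweet, threshold=5):
    return "credible" if credibility_score(tweet) >= threshold else "not credible"
```

Because the score is rule-based rather than fitted, labels are produced without any training data, which is what makes such a system fast enough for emergency response.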


Data is the most crucial component of a successful ML system. Once a machine learning model is developed, it becomes obsolete over time because new input data is generated every second. In order to keep our predictions accurate, we need a way to keep our models up to date. Our research work involves finding a mechanism that can retrain the model with new data automatically. This research also involves exploring the possibilities of automating machine learning processes. We started this project by training and testing our model using conventional machine learning methods. The outcome was then compared with the outcome of experiments conducted using AutoML methods such as TPOT. This helped us find an efficient technique for retraining our models. These techniques can be used in areas where people do not deal with the actual workings of an ML model but only require the outputs of ML processes.
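One minimal way to picture such a retraining mechanism: monitor the model's accuracy on each new batch of labeled data and retrain when it drops below a threshold. The trivial majority-class model below stands in for a real model or TPOT pipeline; the threshold value is an assumption:

```python
# Sketch of an automatic retraining loop: retrain whenever accuracy on the
# newest batch of labeled data falls below a threshold.
from collections import Counter

class MajorityModel:
    """Trivial stand-in for a real model: always predicts the majority class."""
    def fit(self, labels):
        self.prediction = Counter(labels).most_common(1)[0][0]
        return self
    def predict(self, n):
        return [self.prediction] * n

def monitor_and_retrain(model, new_labels, threshold=0.7):
    """Return (model, retrained?) after checking accuracy on the new batch."""
    preds = model.predict(len(new_labels))
    accuracy = sum(p == y for p, y in zip(preds, new_labels)) / len(new_labels)
    if accuracy < threshold:
        return MajorityModel().fit(new_labels), True
    return model, False
```

A real deployment would swap `MajorityModel` for the production model (or re-run an AutoML search) but keep the same monitor-then-retrain loop.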


2016
Vol 7 (2)
pp. 43-71
Author(s):
Sangeeta Lal
Neetu Sardana
Ashish Sureka

Logging is an important yet tough decision for OSS developers. Machine-learning models are useful in improving several steps of OSS development, including logging. Several recent studies propose machine-learning models to predict logged code constructs. The prediction performance of these models is limited by the class-imbalance problem, since the number of logged code constructs is small compared to that of non-logged code constructs. No previous study analyzes the class-imbalance problem for logged-code-construct prediction. The authors first analyze the performance of J48, RF, and SVM classifiers for catch-block and if-block logged-code-construct prediction on imbalanced datasets. Second, the authors propose LogIm, an ensemble and threshold-based machine-learning model. Third, the authors evaluate the performance of LogIm on three open-source projects. On average, the LogIm model improves the performance of the baseline classifiers J48, RF, and SVM by 7.38%, 9.24%, and 4.6% for catch-block, and by 12.11%, 14.95%, and 19.13% for if-block logging prediction.
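The ensemble-and-threshold idea can be sketched as follows: average the positive-class probabilities of several classifiers and use a decision threshold below 0.5 to compensate for the rarity of logged constructs. The probabilities and the threshold value below are illustrative, not LogIm's actual configuration:

```python
# Threshold-based ensemble for an imbalanced problem: average per-model
# probabilities of the rare positive class ("logged"), then apply a lowered
# decision threshold instead of the default 0.5.
def ensemble_predict(probabilities_per_model, threshold=0.3):
    """probabilities_per_model: list of per-model lists of P(logged)."""
    n_models = len(probabilities_per_model)
    n_points = len(probabilities_per_model[0])
    averaged = [
        sum(p[i] for p in probabilities_per_model) / n_models
        for i in range(n_points)
    ]
    return [1 if p >= threshold else 0 for p in averaged]
```

Lowering the threshold trades some false positives for recall on the minority class, which is the usual motivation when positives (logged constructs) are scarce.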


Author(s):  
Terazima Maeda

Nowadays, there is a large number of machine learning models that can be used in various areas. However, different research targets are usually sensitive to the type of model. For a specific prediction target, the predictive accuracy of a machine learning model always depends on the data features, the data size, and the intrinsic relationship between inputs and outputs. Therefore, for a specific data group and a fixed prediction task, how to rationally compare the predictive accuracy of different machine learning models is a big question. In this brief note, we show how the performance of different machine learning models should be compared, by raising some typical examples.
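One standard discipline for such comparisons is to evaluate every model with k-fold cross-validation on the same folds, so that differences in score reflect the models rather than the data split. A self-contained sketch with two toy models (both the data and the models are illustrative):

```python
# k-fold cross-validation: every model is scored on the same k train/test
# splits, and mean accuracy across folds is compared.
def k_fold_score(model_fit, data, k=4):
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        predictor = model_fit(train)
        scores.append(sum(predictor(x) == y for x, y in test) / len(test))
    return sum(scores) / k

# Model A: predict the majority label of the training fold.
def fit_majority(train):
    ones = sum(y for _, y in train)
    majority = 1 if ones * 2 >= len(train) else 0
    return lambda x: majority

# Model B: threshold the feature at the training-fold mean.
def fit_threshold(train):
    mean = sum(x for x, _ in train) / len(train)
    return lambda x: 1 if x > mean else 0

data = [(x / 10, 1 if x >= 5 else 0) for x in range(10)]  # separable toy data
score_a = k_fold_score(fit_majority, data)
score_b = k_fold_score(fit_threshold, data)
```

On this toy data the threshold model wins, illustrating the note's point: the ranking of models is a property of the data and task, not of the models alone.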


2020
Vol 9 (3)
pp. 875
Author(s):
Young Suk Kwon
Moon Seong Baek

The quick sepsis-related organ failure assessment (qSOFA) score has been introduced to predict the likelihood of organ dysfunction in patients with suspected infection. We hypothesized that machine-learning models using qSOFA variables for predicting three-day mortality would provide better accuracy than the qSOFA score in the emergency department (ED). Between January 2016 and December 2018, the medical records of patients aged over 18 years with suspected infection were retrospectively obtained from four EDs in Korea. Data from three hospitals (n = 19,353) were used as training-validation datasets and data from one (n = 4234) as the test dataset. Machine-learning algorithms including extreme gradient boosting, light gradient boosting machine, and random forest were used. We assessed the prediction ability of the machine-learning models using the area under the receiver operating characteristic (AUROC) curve, and DeLong’s test was used to compare AUROCs between the qSOFA scores and the qSOFA-based machine-learning models. A total of 447,926 patients visited the EDs during the study period. We analyzed 23,587 patients with suspected infection who were admitted to the EDs. The median age of the patients was 63 years (interquartile range: 43–78 years) and in-hospital mortality was 4.0% (n = 941). For predicting three-day mortality among patients with suspected infection in the ED, the AUROC of the qSOFA-based machine-learning model (0.86 [95% CI 0.85–0.87]) was higher than that of the qSOFA scores (0.78 [95% CI 0.77–0.79], p < 0.001). For predicting three-day mortality in patients with suspected infection in the ED, the qSOFA-based machine-learning model was found to be superior to the conventional qSOFA scores.
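The AUROC used to compare the models equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney formulation), which can be computed directly; the data here is illustrative:

```python
# AUROC via the Mann-Whitney formulation: the fraction of (positive, negative)
# pairs in which the positive case is scored higher, counting ties as half.
def auroc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 corresponds to random scoring and 1.0 to perfect ranking, which is why the reported jump from 0.78 to 0.86 is a meaningful improvement.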


2019
pp. 1-11
Author(s):
David Chen
Gaurav Goyal
Ronald S. Go
Sameer A. Parikh
Che G. Ngufor

PURPOSE Time to event is an important aspect of clinical decision making. This is particularly true when diseases have highly heterogeneous presentations and prognoses, as in chronic lymphocytic leukemia (CLL). Although machine learning methods can readily learn complex nonlinear relationships, many methods are criticized as inadequate because of limited interpretability. We propose using unsupervised clustering of the continuous output of machine learning models to provide discrete risk stratification for predicting time to first treatment in a cohort of patients with CLL. PATIENTS AND METHODS A total of 737 treatment-naïve patients with CLL diagnosed at Mayo Clinic were included in this study. We compared predictive abilities for two survival models (Cox proportional hazards and random survival forest) and four classification methods (logistic regression, support vector machines, random forest, and gradient boosting machine). Probability of treatment was then stratified. RESULTS Machine learning methods did not yield significantly more accurate predictions of time to first treatment. However, automated risk stratification provided by clustering was able to better differentiate patients who were at risk for treatment within 1 year than models developed using standard survival analysis techniques. CONCLUSION Clustering the posterior probabilities of machine learning models provides a way to better interpret machine learning models.
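The paper's central idea, clustering a model's continuous output into discrete risk strata, can be sketched with a tiny one-dimensional k-means; the study's actual clustering method may differ:

```python
# Cluster continuous predicted probabilities into k discrete risk groups
# using a minimal 1-D k-means (illustrative sketch, not the study's code).
def kmeans_1d(values, k=2, iters=50):
    # Seed centers with evenly spaced sorted values.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def risk_group(value, centers):
    """Index of the nearest center = discrete risk stratum."""
    return min(range(len(centers)), key=lambda i: abs(value - centers[i]))

# Toy posterior probabilities from some model: two clear risk groups.
probs = [0.05, 0.1, 0.12, 0.8, 0.85, 0.9]
centers = kmeans_1d(probs, k=2)
```

Each patient's probability then maps to a stratum via `risk_group`, turning an opaque continuous score into an interpretable low/high-risk label.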


2021
Author(s):
Julio Alberto López-Gómez
Daniel Carrasco Pardo
Pablo Higueras
Jose María Esbrí
Saturnino Lorenzo

Traditionally, prospectivity models were designed using approaches mainly based on expert judgement. These models have been widely applied and are also known as knowledge-driven prospectivity models (see Harris et al., 2015). Currently, artificial intelligence approaches, especially machine learning models, are being applied to build prospectivity models, since they have proven successful in many other domains (see Sun et al., 2019 and Guerra Prado et al., 2020). These are also known as data-driven prospectivity models. Machine learning models learn from data repositories, extracting and detecting relationships in the data in order to predict new instances.

In this work, a geological dataset was collected by a team of expert geologists. The data collected include the geographical coordinates as well as several geological features of points belonging to seventy-seven different mercury deposits in the Almadén mining district. The resulting dataset is composed of 24,798 points with 24 attributes each. In particular, we have collected geological and mining-related data on the Almadén mercury (Hg) mining district; these data include the location of the several Hg mineralizations, including their typology, size, mineralogy, and stratigraphic position, as well as other information associated with the metallogenetic model set up by Hernández et al. (1999).

Later, several machine learning models are built in order to select the one that offers the best results. The aim of this work is twofold: on the one hand, to build a machine learning model capable of determining, given the geological features of a data point, the mercury deposit to which it belongs; on the other hand, to build a machine learning model capable of identifying, given the geological features of a data point, the kind of deposit to which it belongs. The experiments conducted in this work have been properly designed, and the results obtained have been validated using statistical techniques.

Finally, the models built in this work will allow mercury prospectivity maps to be generated. The final aim of this process is to obtain and train a system able to perform antimony prospection in the nearby Guadalmez syncline.

This work was funded by the ANR (ANR-19-MIN2-0002-01), the AEI (MICIU/AEI/REF.: PCI2019-103779) and the authors’ institutions in the framework of the ERA-MIN2 AUREOLE project.

References

Guerra Prado, E.M.; de Souza Filho, C.R.; Carranza, E.M.; Motta, J.G. (2020). Modeling of Cu-Au prospectivity in the Carajás mineral province (Brasil) through machine learning: Dealing with imbalanced training data.

Harris, J.R.; Grunsky, E.; Corrigan, D. (2015). Data- and knowledge-driven mineral prospectivity maps for Canada’s North.

Hernández, A.; Jébrak, M.; Higueras, P.; Oyarzun, R.; Morata, D.; Munhá, J. (1999). The Almadén mercury mining district, Spain. Mineralium Deposita, 34: 539-548.

Sun, T.; Chen, F.; Zhong, L.; Liu, W.; Wang, Y. (2019). GIS-based mineral prospectivity mapping using machine learning methods: A case study from Tongling ore district, eastern China.


Author(s):
S. Sasikala
S. J. Subhashini
P. Alli
J. Jane Rubel Angelina

Machine learning is a technique of parsing data, learning from that data, and then applying what has been learned to make informed decisions. Deep learning is actually a subset of machine learning. It technically is machine learning and functions in the same way, but it has different capabilities. The main difference between deep learning and machine learning is that machine learning models improve progressively, but the model still needs some guidance: if a machine learning model returns an inaccurate prediction, the programmer needs to fix that problem explicitly, whereas in the case of deep learning, the model corrects it by itself. An automatic car-driving system is a good example of deep learning. Artificial intelligence, on the other hand, is a broader concept: deep learning and machine learning are both subsets of AI.


2021
Author(s):
Cenk Temizel
Celal Hakan Canbaz
Karthik Balaji
Ahsen Ozesen
Kirill Yanidis
...

Abstract Machine learning models have worked as a robust tool in forecasting and optimization processes for wells in conventional, data-rich reservoirs. In unconventional reservoirs, however, given the large ranges of uncertainty, purely data-driven machine learning models have not yet proven to be repeatable and scalable. In such cases, integrating physics-based reservoir simulation methods with machine learning techniques can alleviate these limitations. The objective of this study is to provide an overview, along with examples, of this integrated approach for forecasting Estimated Ultimate Recovery (EUR) in shale reservoirs. This study is based solely on synthetic data. To generate data for one section of a reservoir, a full-physics reservoir simulator was used. Simulated data from this section were used to train a machine learning model, which provides EUR as the output. Production from another section of the field, with a different range of reservoir properties, was then forecasted using a physics-based model. Using the earlier trained model, production forecasting for this section of the reservoir was then carried out to illustrate the integrated approach to EUR forecasting for a section of the reservoir that is not data-rich. The integrated approach, or hybrid modeling, is illustrated by production forecasting for different sections of the reservoir that were data-starved. Using the physics-based model, the uncertainty in EUR predictions made by the machine learning model was reduced and more accurate forecasting was attained. This method is primarily applicable in reservoirs, such as unconventionals, where one developed section of the field has a substantial amount of data while another section is data-starved.
The hybrid model was consistently able to forecast EUR at an acceptable level of accuracy, thereby highlighting the benefits of this type of integrated approach. This study advances the application of repeatable and scalable hybrid models in unconventional reservoirs and highlights their benefits as compared to using either physics-based or machine-learning-based models separately.
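The hybrid workflow reads roughly as: run the physics simulator on the data-rich section, fit a surrogate model to its outputs, then apply the surrogate to the data-starved section. In the sketch below, both the "simulator" and the linear surrogate are placeholders for illustration only:

```python
# Hybrid-modeling sketch: a physics-based simulator generates (property, EUR)
# training pairs for the data-rich section; a surrogate model is fit on them
# and then applied to a property value from the data-starved section.
def fake_simulator(permeability):
    """Stand-in for a full-physics reservoir simulator (assumed relation)."""
    return 100.0 + 40.0 * permeability  # EUR as a function of one property

def fit_linear_surrogate(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Train on simulated data from the developed (data-rich) section...
xs = [0.1, 0.5, 1.0, 2.0]
ys = [fake_simulator(x) for x in xs]
slope, intercept = fit_linear_surrogate(xs, ys)

# ...and forecast EUR for a property value from the data-starved section.
eur_forecast = slope * 1.5 + intercept
```

In practice the surrogate would be a far richer model trained on many simulated properties, but the division of labor is the same: physics supplies the training data where field data is absent.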


2019
Vol 40 (Supplement_1)
Author(s):
G Sng
D Y Z Lim
C H Sia
J S W Lee
X Y Shen
...

Abstract Background/Introduction Classic electrocardiographic (ECG) criteria for left ventricular hypertrophy (LVH) have been well studied in Western populations, particularly in hypertensive patients. However, their utility in Asian populations is not well studied, and their applicability to young pre-participation cohorts is unclear. Aims We sought to evaluate the performance of classical criteria against that of novel machine learning models in the identification of LVH. Methodology Between November 2009 and December 2014, pre-participation screening ECGs and subsequent echocardiographic data were collected from 13,954 males aged 16 to 22 who reported for medical screening prior to military conscription. The final diagnosis of LVH was made on echocardiography, with LVH defined as a left ventricular mass index >115 g/m2. The continuous and binary forms of the classical criteria were compared against machine learning models using receiver-operating characteristic (ROC) curve analysis. An 80:20 split was used to divide the data into training and test sets for the machine learning models, and three-fold cross-validation was used in training the models. We also compared the important variables identified by the machine learning models with the input variables of the classical criteria. Results The prevalence of echocardiographic LVH in this population was 0.91% (127 cases). Classical ECG criteria performed poorly in predicting LVH, with the best predictions achieved by the continuous Sokolow-Lyon (AUC = 0.63, 95% CI = 0.58–0.68) and the continuous Modified Cornell (AUC = 0.63, 95% CI = 0.58–0.68). Machine learning methods achieved superior performance: Random Forest (AUC = 0.74, 95% CI = 0.66–0.82), Gradient Boosting Machines (AUC = 0.70, 95% CI = 0.61–0.79), GLMNet (AUC = 0.78, 95% CI = 0.70–0.86). Novel and less recognized ECG parameters identified by the machine learning models as being predictive of LVH included mean QT interval, mean QRS interval, R in V4, and R in I.
[Figure: ROC curves of the models studied]
Conclusion The prevalence of LVH in our population is lower than that previously reported in other similar populations. Classical ECG criteria perform poorly in this context. Machine learning methods show superior predictive performance and demonstrate non-traditional predictors of LVH from ECG data. Further research is required to improve the predictive ability of machine learning models and to understand the underlying pathology of the novel ECG predictors identified.
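For reference, the continuous Sokolow-Lyon index mentioned in the results is simply the S-wave amplitude in V1 plus the larger of the R-wave amplitudes in V5 and V6, with 3.5 mV as the conventional cut-off for the binary form:

```python
# Classical Sokolow-Lyon voltage criterion for LVH.
# Inputs are absolute wave amplitudes in millivolts.
def sokolow_lyon_voltage(s_v1, r_v5, r_v6):
    """Continuous form: S in V1 + max(R in V5, R in V6)."""
    return s_v1 + max(r_v5, r_v6)

def sokolow_lyon_positive(s_v1, r_v5, r_v6, cutoff=3.5):
    """Binary form: positive when the voltage meets the 3.5 mV cut-off."""
    return sokolow_lyon_voltage(s_v1, r_v5, r_v6) >= cutoff
```

The machine learning models above effectively generalize this hand-picked two-lead sum to learned combinations of many ECG parameters.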

