Machine Learning Models for COVID-19 Confirmed Cases Prediction: A Meta-Analysis Approach

2021 ◽  
Vol 2084 (1) ◽  
pp. 012013
Author(s):  
Wan Fairos Wan Yaacob ◽  
Norafefah Mohamad Sobri ◽  
Syerina Azlin Md Nasir ◽  
Noor Ilanie Nordin ◽  
Wan Faizah Wan Yaacob ◽  
...  

Abstract COVID-19 (Coronavirus Disease 2019) is caused by a virus of the family Coronaviridae. COVID-19 is no longer merely pandemic but rather endemic, with more than 3,166,516 deaths reported worldwide. This reality has placed a massive burden on limited healthcare systems. Thus, many researchers have tried to develop prediction models to further understand this phenomenon. One recent approach uses machine learning models that learn from historical data and make predictions about future events. These data mining techniques have been used to predict the number of confirmed COVID-19 cases. This paper investigated the variability of the effect size of the correlation performance of machine learning models in predicting confirmed COVID-19 cases using meta-analysis. It explored the correlation between actual and predicted COVID-19 cases from different Neural Network machine learning models by means of the estimated variance, chi-square heterogeneity (Q), heterogeneity index (I2), and a random effects model. The results gave a good summary effect with a 95% confidence interval. Based on the chi-square heterogeneity statistic (Q) and heterogeneity index (I2), the correlations were found to be heterogeneous among the studies. The 95% confidence interval of the summary effect also supported the difference in correlation between the actual and predicted number of confirmed COVID-19 cases among the studies. There was no evidence of publication bias based on the funnel plot and Egger's and Begg's tests. Hence, findings from this study provide evidence of good prediction performance from the Neural Network models based on a combination of studies, which can later serve in the prediction of confirmed COVID-19 cases.
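The heterogeneity statistics cited above (Cochran's Q, I2, and the random-effects summary with its 95% confidence interval) can be illustrated with a short sketch. The per-study correlations and sample sizes below are hypothetical placeholders, not the values pooled in the paper; effect sizes are combined on the Fisher-z scale with a DerSimonian-Laird random-effects model.

```python
import numpy as np

# Hypothetical per-study correlations between actual and predicted cases,
# and sample sizes; the real values come from the studies in the meta-analysis.
r = np.array([0.96, 0.91, 0.88, 0.99, 0.93])
n = np.array([120, 80, 150, 60, 200])

# Fisher z-transform of each correlation and its within-study variance.
z = np.arctanh(r)
v = 1.0 / (n - 3)
w = 1.0 / v

# Fixed-effect summary and Cochran's Q heterogeneity statistic.
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
df = len(r) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0   # heterogeneity index I^2 (%)

# DerSimonian-Laird between-study variance and random-effects summary.
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1.0 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = np.tanh([z_re - 1.96 * se_re, z_re + 1.96 * se_re])  # back-transform to r

print(f"Q = {Q:.2f} (df = {df}), I^2 = {I2:.1f}%")
print(f"random-effects summary r = {np.tanh(z_re):.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```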

2021 ◽  
Author(s):  
Mohammed Ayub ◽  
SanLinn Kaka

Abstract Manual first-break picking from a large volume of seismic data is extremely tedious and costly. Deployment of machine learning models makes the process fast and cost effective. However, these machine learning models require highly representative and effective features for accurate automatic picking. Therefore, a First-Break (FB) picking classification model that uses a minimal number of effective features and promises performance efficiency is proposed. Variants of Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) can retain contextual information from long previous time steps. We exploit this advantage for FB picking, as seismic traces are amplitude values of vibration along the time axis. We use the behavioral fluctuation of amplitude as input features for the LSTM and GRU. The models are trained on noisy data and tested for generalization on original traces not seen during the training and validation process. In order to analyze real-time suitability, the performance is benchmarked using accuracy, F1-measure, and three other established metrics. We have trained two RNN models and two deep Neural Network models for FB classification using only amplitude values as features. Both LSTM and GRU achieve an accuracy and F1-measure of 94.20%. With the same features, a Convolutional Neural Network (CNN) has an accuracy of 93.58% and an F1-score of 93.63%, and a Deep Neural Network (DNN) model scores 92.83% and 92.59% for accuracy and F1-measure, respectively. From the experimental results, we see significantly superior performance of LSTM and GRU over CNN and DNN when the same features are used. To test the robustness of the LSTM and GRU models, their performance is compared with a DNN model trained using nine features derived from seismic traces, and the performance superiority of the RNN models is again observed. Therefore, it is safe to conclude that RNN models (LSTM and GRU) are capable of classifying FB events efficiently even when using a minimal number of features that are not computationally expensive. The novelty of our work is the capability of automatic FB classification with RNN models that incorporate contextual behavioral information without the need for sophisticated feature extraction or engineering techniques, which in turn can help reduce cost and make the classification model more robust and faster.
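As a rough illustration of the approach described above, the sketch below trains an LSTM classifier on fixed-length windows of raw amplitude values; swapping the LSTM layer for a GRU gives the other RNN variant. The window length, layer sizes, and synthetic data are illustrative assumptions using TensorFlow/Keras, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

# Hypothetical training data: windows of raw amplitude values cut from seismic
# traces, labelled 1 if the window contains the first break, else 0.
# Shapes and sizes are illustrative, not taken from the paper.
window_len = 64
x_train = np.random.randn(1024, window_len, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(1024,))

# RNN-based binary classifier using only amplitude values as features.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_len, 1)),
    tf.keras.layers.LSTM(64),                      # replace with GRU(64) for the GRU variant
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_split=0.2)
```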


2020 ◽  
Vol 36 (3) ◽  
pp. 1166-1187 ◽  
Author(s):  
Shohei Naito ◽  
Hiromitsu Tomozawa ◽  
Yuji Mori ◽  
Takeshi Nagata ◽  
Naokazu Monma ◽  
...  

This article presents a method for detecting damaged buildings in the event of an earthquake using machine learning models and aerial photographs. We initially created training data for the machine learning models using aerial photographs captured around the town of Mashiki immediately after the main shock of the 2016 Kumamoto earthquake. All buildings were classified into one of four damage levels by visual interpretation. Subsequently, two damage discrimination models were developed: a bag-of-visual-words model and a model based on a convolutional neural network. The results were compared and validated in terms of accuracy, revealing that the latter model is preferable. Moreover, for the convolutional neural network model, the target areas were expanded, and the recalls of damage classification at the four levels ranged from approximately 66% to 81%.
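A minimal sketch of the convolutional approach described above is shown below: a small CNN that classifies a building patch cropped from an aerial photograph into one of the four damage levels. The patch size and layer widths are illustrative assumptions, not the model reported in the article.

```python
import tensorflow as tf

# Small CNN for classifying one building patch into four damage levels.
# Input size and architecture are illustrative; per-class recall would be
# computed at evaluation time (e.g. with sklearn.metrics.classification_report).
num_damage_levels = 4
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),            # one aerial building patch
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_damage_levels, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```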


2018 ◽  
Vol 8 (12) ◽  
pp. 2663 ◽  
Author(s):  
Davy Preuveneers ◽  
Vera Rimmer ◽  
Ilias Tsingenopoulos ◽  
Jan Spooren ◽  
Wouter Joosen ◽  
...  

The adoption of machine learning and deep learning is on the rise in the cybersecurity domain where these AI methods help strengthen traditional system monitoring and threat detection solutions. However, adversaries too are becoming more effective in concealing malicious behavior amongst large amounts of benign behavior data. To address the increasing time-to-detection of these stealthy attacks, interconnected and federated learning systems can improve the detection of malicious behavior by joining forces and pooling together monitoring data. The major challenge that we address in this work is that in a federated learning setup, an adversary has many more opportunities to poison one of the local machine learning models with malicious training samples, thereby influencing the outcome of the federated learning and evading detection. We present a solution where contributing parties in federated learning can be held accountable and have their model updates audited. We describe a permissioned blockchain-based federated learning method where incremental updates to an anomaly detection machine learning model are chained together on the distributed ledger. By integrating federated learning with blockchain technology, our solution supports the auditing of machine learning models without the necessity to centralize the training data. Experiments with a realistic intrusion detection use case and an autoencoder for anomaly detection illustrate that the increased complexity caused by blockchain technology has a limited performance impact on the federated learning, varying between 5 and 15%, while providing full transparency over the distributed training process of the neural network. Furthermore, our blockchain-based federated learning solution can be generalized and applied to more sophisticated neural network architectures and other use cases.
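The auditing idea above can be illustrated with a toy sketch in which each client's model update is appended to a hash-chained log before federated averaging. The hash chain stands in for the permissioned blockchain described in the abstract, and the flat weight vectors are placeholders for a real anomaly-detection model.

```python
import hashlib
import json
import numpy as np

# Toy sketch of auditable federated averaging: every client update is recorded
# as a hash-chained entry (a stand-in for the distributed ledger), so the
# sequence of contributions can later be verified. All values are illustrative.
def record_update(chain, client_id, weights):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "client": client_id,
        "weights_digest": hashlib.sha256(weights.tobytes()).hexdigest(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

def federated_average(updates):
    # Plain FedAvg: average the clients' weight vectors into the next global model.
    return np.mean(updates, axis=0)

chain = []
client_updates = [np.random.randn(10) for _ in range(3)]   # hypothetical local updates
for cid, w in enumerate(client_updates):
    record_update(chain, cid, w)

global_weights = federated_average(client_updates)

# Auditing: recompute each entry's hash and check the chain links.
for i, entry in enumerate(chain):
    expected_prev = chain[i - 1]["hash"] if i else "0" * 64
    body = {k: v for k, v in entry.items() if k != "hash"}
    recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    assert entry["prev_hash"] == expected_prev and entry["hash"] == recomputed

print("ledger verified:", len(chain), "audited updates; global model digest:",
      hashlib.sha256(global_weights.tobytes()).hexdigest()[:16])
```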


2020 ◽  
Vol 32 ◽  
pp. 03005
Author(s):  
Rahul Awhad ◽  
Saurabh Jayswal ◽  
Adesh More ◽  
Jyoti Kundale

Due to growing advancements in technology, many software applications are being developed to modify and edit images. Nowadays, an altered image can be so realistic that it becomes very difficult for a person to tell whether it is fake or real. Such applications can also be used to alter the image of a person's face, making it hard to determine whether a face image is genuine. Our proposed system identifies whether the image of a face is fake or real. The system makes use of machine learning, specifically a convolutional neural network and a support vector classifier. Both machine learning models are trained on real as well as fake images. Each trained model takes an image as input and determines whether it is fake or real.
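A minimal sketch of the support-vector branch of such a system is shown below, using scikit-learn's SVC on flattened face images labelled real or fake. The random placeholder images and labels are assumptions for illustration; a real pipeline would load face crops and might feed CNN-derived features to the classifier instead.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: 200 hypothetical 64x64 grayscale faces flattened to vectors,
# with labels 0 = real, 1 = fake. Replace with actual face crops in practice.
rng = np.random.default_rng(0)
images = rng.random((200, 64 * 64))
labels = rng.integers(0, 2, size=200)

x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0)

# Support vector classifier trained to separate real from fake faces.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(x_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(x_test)))
```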


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257069
Author(s):  
Jae-Geum Shim ◽  
Kyoung-Ho Ryu ◽  
Sung Hyun Lee ◽  
Eun-Ah Cho ◽  
Sungho Lee ◽  
...  

Objective To construct a prediction model for optimal tracheal tube depth in pediatric patients using machine learning. Methods Pediatric patients aged <7 years who received post-operative ventilation after undergoing surgery between January 2015 and December 2018 were investigated in this retrospective study. The optimal location of the tracheal tube was defined as the median of the distance between the upper margin of the first thoracic (T1) vertebral body and the lower margin of the third thoracic (T3) vertebral body. We applied four machine learning models: random forest, elastic net, support vector machine, and artificial neural network, and compared their prediction accuracy to three formula-based methods, which were based on age, height, and tracheal tube internal diameter (ID). Results For each method, the percentage of optimal tracheal tube depth predictions in the test set was calculated as follows: 79.0 (95% confidence interval [CI], 73.5 to 83.6) for random forest, 77.4 (95% CI, 71.8 to 82.2; P = 0.719) for elastic net, 77.0 (95% CI, 71.4 to 81.8; P = 0.486) for support vector machine, 76.6 (95% CI, 71.0 to 81.5; P = 1.0) for artificial neural network, 66.9 (95% CI, 60.9 to 72.5; P < 0.001) for the age-based formula, 58.5 (95% CI, 52.3 to 64.4; P < 0.001) for the tube ID-based formula, and 44.4 (95% CI, 38.3 to 50.6; P < 0.001) for the height-based formula. Conclusions In this study, the machine learning models predicted the optimal tracheal tube tip location for pediatric patients more accurately than the formula-based methods. Machine learning models using biometric variables may help clinicians make decisions regarding optimal tracheal tube depth in pediatric patients.
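The comparison between learned models and formula-based methods can be illustrated with a short sketch: a random forest regressor is trained on synthetic biometric data and evaluated against the classic age-based rule (depth = age/2 + 12 cm), counting predictions that land within a tolerance of the target depth. The synthetic data-generating rule and the tolerance are illustrative assumptions, not the study's cohort or outcome definition.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic pediatric cohort: age, height, and weight drive a hypothetical
# optimal tube depth. All relationships and noise levels are made up for
# illustration only.
rng = np.random.default_rng(42)
n = 500
age = rng.uniform(0, 7, n)                              # years
height = 75 + 8 * age + rng.normal(0, 4, n)             # cm
weight = 8 + 2.5 * age + rng.normal(0, 1.5, n)          # kg
depth = 9 + 0.5 * age + 0.03 * height + rng.normal(0, 0.4, n)  # cm, synthetic target

X = np.column_stack([age, height, weight])
X_tr, X_te, y_tr, y_te = train_test_split(X, depth, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rf_pred = rf.predict(X_te)
formula_pred = 12 + X_te[:, 0] / 2                      # age-based rule: age/2 + 12 cm

tolerance = 0.5                                          # cm, stand-in for the optimal range
for name, pred in [("random forest", rf_pred), ("age-based formula", formula_pred)]:
    pct = np.mean(np.abs(pred - y_te) <= tolerance) * 100
    print(f"{name}: {pct:.1f}% within ±{tolerance} cm")
```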


BMJ ◽  
2020 ◽  
pp. m3919
Author(s):  
Yan Li ◽  
Matthew Sperrin ◽  
Darren M Ashcroft ◽  
Tjeerd Pieter van Staa

Abstract Objective To assess the consistency of machine learning and statistical techniques in predicting individual level and population level risks of cardiovascular disease and the effects of censoring on risk predictions. Design Longitudinal cohort study from 1 January 1998 to 31 December 2018. Setting and participants 3.6 million patients from the Clinical Practice Research Datalink registered at 391 general practices in England with linked hospital admission and mortality records. Main outcome measures Model performance including discrimination, calibration, and consistency of individual risk prediction for the same patients among models with comparable model performance. 19 different prediction techniques were applied, including 12 families of machine learning models (grid searched for best models), three Cox proportional hazards models (local fitted, QRISK3, and Framingham), three parametric survival models, and one logistic model. Results The various models had similar population level performance (C statistics of about 0.87 and similar calibration). However, the predictions for individual risks of cardiovascular disease varied widely between and within different types of machine learning and statistical models, especially in patients with higher risks. A patient with a risk of 9.5-10.5% predicted by QRISK3 had a risk of 2.9-9.2% in a random forest and 2.4-7.2% in a neural network. The differences in predicted risks between QRISK3 and a neural network ranged between –23.2% and 0.1% (95% range). Models that ignored censoring (that is, assumed censored patients to be event free) substantially underestimated risk of cardiovascular disease. Of the 223 815 patients with a cardiovascular disease risk above 7.5% with QRISK3, 57.8% would be reclassified below 7.5% when using another model. Conclusions A variety of models predicted risks for the same patients very differently despite similar model performances. The logistic models and commonly used machine learning models should not be directly applied to the prediction of long term risks without considering censoring. Survival models that consider censoring and that are explainable, such as QRISK3, are preferable. The level of consistency within and between models should be routinely assessed before they are used for clinical decision making.
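The censoring point in the conclusions can be illustrated with a short sketch: on a synthetic cohort with incomplete follow-up, treating censored patients as event free (as a plain logistic model would) underestimates the 10-year risk, while a Kaplan-Meier estimate that accounts for censoring recovers it. All numbers are illustrative, not from the Clinical Practice Research Datalink.

```python
import numpy as np

# Synthetic cohort: true times to a cardiovascular event, with many patients
# censored before 10 years of follow-up. All parameters are illustrative.
rng = np.random.default_rng(1)
n = 20000
event_time = rng.exponential(scale=40, size=n)     # years until event
censor_time = rng.uniform(1, 10, size=n)           # years of observed follow-up
observed_time = np.minimum(event_time, censor_time)
event_observed = event_time <= censor_time
horizon = 10.0

# Naive estimate that ignores censoring: censored patients counted as event free.
naive_risk = np.mean(event_observed & (observed_time <= horizon))

# Kaplan-Meier estimate, which handles censoring properly.
order = np.argsort(observed_time)
t, e = observed_time[order], event_observed[order]
at_risk = n - np.arange(n)                          # number still at risk at each ordered time
surv = 1.0
for ti, ei, ri in zip(t, e, at_risk):
    if ti > horizon:
        break
    if ei:
        surv *= 1.0 - 1.0 / ri
km_risk = 1.0 - surv

true_risk = np.mean(event_time <= horizon)
print(f"true 10-year risk:          {true_risk:.3f}")
print(f"Kaplan-Meier estimate:      {km_risk:.3f}")
print(f"censoring-ignored estimate: {naive_risk:.3f}  (underestimates risk)")
```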

