Can artificial intelligence predict the need for oxygen therapy in early stage COVID-19 pneumonia?

2020 ◽  
Author(s):  
Hirofumi Obinata ◽  
Peiying Ruan ◽  
Hitoshi Mori ◽  
Wentao Zhu ◽  
Hisashi Sasaki ◽  
...  

Abstract: This study investigated the utility of artificial intelligence in predicting disease progression. We analysed 194 patients with COVID-19 confirmed by reverse transcription polymerase chain reaction. Among them, 31 patients received oxygen therapy after admission. To assess the utility of artificial intelligence in predicting disease progression, we used three machine learning models employing clinical features (patient background, laboratory data, and symptoms), one deep learning model employing computed tomography (CT) images, and one multimodal deep learning model employing a combination of clinical features and CT images. We also evaluated the predictive values of these models and analysed the features most important for predicting worsening in cases of COVID-19. The multimodal deep learning model had the highest accuracy, and the CT image was an important feature of that model. The area under the curve of all machine learning models employing clinical features and of the deep learning model employing CT images exceeded 90%, and the sensitivity of these models exceeded 95%. C-reactive protein and lactate dehydrogenase were important features of the machine learning models. Our machine learning model, while slightly less accurate than the multimodal model, still provides a valuable medical triage tool for patients in the early stages of COVID-19.
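The multimodal design described above can be pictured as a two-branch network that fuses a small image encoder over a CT slice with a vector of clinical features. Below is a minimal, hypothetical Keras sketch of such an architecture; it is not the authors' implementation, and the input sizes, layer widths, and clinical feature count are assumed values.

```python
# Minimal sketch (not the authors' code) of a multimodal network fusing
# clinical features with a CT-image branch; shapes and layer sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multimodal_model(n_clinical=20, img_shape=(224, 224, 1)):
    # Image branch: a small CNN over a single CT slice
    img_in = layers.Input(shape=img_shape, name="ct_image")
    x = layers.Conv2D(16, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)

    # Clinical branch: background, laboratory data and symptoms as a flat vector
    clin_in = layers.Input(shape=(n_clinical,), name="clinical")
    y = layers.Dense(32, activation="relu")(clin_in)

    # Fuse both modalities and predict the need for oxygen therapy
    z = layers.concatenate([x, y])
    z = layers.Dense(32, activation="relu")(z)
    out = layers.Dense(1, activation="sigmoid", name="needs_oxygen")(z)

    model = Model(inputs=[img_in, clin_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
    return model

model = build_multimodal_model()
model.summary()
```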

2019 ◽  
Author(s):  
Mojtaba Haghighatlari ◽  
Gaurav Vishwakarma ◽  
Mohammad Atif Faiz Afzal ◽  
Johannes Hachmann

We present a multitask, physics-infused deep learning model to accurately and efficiently predict refractive indices (RIs) of organic molecules, and we apply it to a library of 1.5 million compounds. We show that it outperforms earlier machine learning models by a significant margin, and that incorporating known physics into data-derived models provides valuable guardrails. Using a transfer learning approach, we augment the model to reproduce results consistent with higher-level computational chemistry training data, but with a considerably reduced number of corresponding calculations. Prediction errors of machine learning models are typically smallest for commonly observed target property values, consistent with the distribution of the training data. However, since our goal is to identify candidates with unusually large RI values, we propose a strategy to boost the performance of our model in the remoter areas of the RI distribution: We bias the model with respect to the under-represented classes of molecules that have values in the high-RI regime. By adopting a metric popular in web search engines, we evaluate our effectiveness in ranking top candidates. We confirm that the models developed in this study can reliably predict the RIs of the top 1,000 compounds, and are thus able to capture their ranking. We believe that this is the first study to develop a data-derived model that ensures the reliability of RI predictions by model augmentation in the extrapolation region on such a large scale. These results underscore the tremendous potential of machine learning in facilitating molecular (hyper)screening approaches on a massive scale and in accelerating the discovery of new compounds and materials, such as organic molecules with high-RI for applications in opto-electronics.
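The "metric popular in web search engines" used to score the ranking of top candidates is most likely a discounted-cumulative-gain style measure such as NDCG. The snippet below is only an illustrative sketch of how such a ranking evaluation could be run with scikit-learn's `ndcg_score`; the refractive-index values are synthetic placeholders, not data or results from the study.

```python
# Illustrative sketch of ranking evaluation with NDCG; the data is synthetic.
import numpy as np
from sklearn.metrics import ndcg_score

rng = np.random.default_rng(0)
true_ri = rng.normal(loc=1.5, scale=0.1, size=1000)    # "ground-truth" refractive indices
pred_ri = true_ri + rng.normal(scale=0.02, size=1000)  # model predictions with noise

# ndcg_score expects 2D arrays: one "query" containing all candidates.
# Relevance = true RI values; the ranking is induced by the predicted RI values.
score_at_1000 = ndcg_score(true_ri[None, :], pred_ri[None, :], k=1000)
print(f"NDCG@1000 = {score_at_1000:.4f}")
```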


Author(s):  
S. Sasikala ◽  
S. J. Subhashini ◽  
P. Alli ◽  
J. Jane Rubel Angelina

Machine learning is a technique of parsing data, learning from that data, and then applying what has been learned to make informed decisions. Deep learning is a subset of machine learning: it works in much the same way but has different capabilities. The main difference is that machine learning models improve progressively yet still need some guidance; if a machine learning model returns an inaccurate prediction, the programmer must fix the problem explicitly, whereas a deep learning model corrects itself. An automated car driving system is a good example of deep learning. Artificial intelligence, on the other hand, is a broader concept: deep learning and machine learning are both subsets of AI.


2020 ◽  
Author(s):  
Zakhriya Alhassan ◽  
Matthew Watson ◽  
David Budgen ◽  
Riyad Alshammari ◽  
Ali Alessan ◽  
...  

BACKGROUND Predicting the risk of glycated hemoglobin (HbA1c) elevation can help identify patients at risk of developing serious chronic health problems such as diabetes and cardiovascular disease. Early preventive interventions based on advanced predictive models using electronic health record (EHR) data for such patients can ultimately lead to better health outcomes. OBJECTIVE Our study investigates the performance of predictive models that forecast HbA1c elevation by applying machine learning approaches to data from current and previous visits in EHR systems for patients not previously diagnosed with any type of diabetes. METHODS This study employed one statistical model, three commonly used conventional machine learning models, and a deep learning model to predict patients' current HbA1c levels. For the deep learning model, we also integrated current visit data with historical (longitudinal) data from previous visits. Explainable machine learning methods were used to interrogate the models and understand the reasons behind their decisions. All models were trained and tested using a large, naturally balanced dataset from Saudi Arabia with 18,844 unique patient records. RESULTS The machine learning models achieved the best results for predicting current HbA1c elevation risk. The deep learning model outperformed the statistical and conventional machine learning models on all reported measures when employing time-series data. The best-performing model was the multi-layer perceptron (MLP), which achieved an accuracy of 74.52% when used with historical data. CONCLUSIONS This study shows that machine learning models can provide promising results for predicting current HbA1c levels. For deep learning in particular, utilizing the patient's longitudinal time-series data improved performance and affected the relative importance of the predictors used. The models showed robust results that were consistent with comparable studies.
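As a rough illustration of the tabular setup described above, the sketch below trains a small multi-layer perceptron on synthetic visit-level features. The feature count, network size, and data are assumptions; it does not reproduce the study's EHR pipeline or its reported 74.52% accuracy.

```python
# Hypothetical MLP classifier for HbA1c elevation on tabular visit features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(18844, 30))      # e.g. labs, vitals, demographics per visit (placeholder)
y = rng.integers(0, 2, size=18844)    # 1 = elevated HbA1c, 0 = normal (balanced)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200, random_state=0),
)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```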


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Bader Alouffi ◽  
Abdullah Alharbi ◽  
Radhya Sahal ◽  
Hager Saleh

Fake news is challenging to detect because it mixes accurate and inaccurate information from reliable and unreliable sources. Social media is a data source that is not trustworthy all the time, especially during the COVID-19 outbreak, when fake news spread widely. The best way to deal with this is early detection. Accordingly, in this work we propose a hybrid deep learning model that uses a convolutional neural network (CNN) and long short-term memory (LSTM) to detect COVID-19 fake news. The proposed model consists of the following layers: an embedding layer, a convolutional layer, a pooling layer, an LSTM layer, a flatten layer, a dense layer, and an output layer. For the experimental results, three COVID-19 fake news datasets are used to evaluate six machine learning models, two deep learning models, and our proposed model. The machine learning models are DT, KNN, LR, RF, SVM, and NB, while the deep learning models are CNN and LSTM. Four metrics are used to validate the results: accuracy, precision, recall, and F1-measure. The conducted experiments show that the proposed model outperforms the six machine learning models and the two deep learning models; consequently, the proposed system is capable of detecting COVID-19 fake news effectively.
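The layer sequence listed above maps directly onto a compact Keras model. The following sketch follows that order (embedding, convolution, pooling, LSTM, flatten, dense, output); the vocabulary size, sequence length, and unit counts are assumed values rather than the authors' configuration.

```python
# Sketch of a CNN-LSTM text classifier in the layer order the abstract lists.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

VOCAB_SIZE = 20000   # assumed tokenizer vocabulary
MAX_LEN = 200        # assumed padded post/article length

model = Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),        # embedding layer
    layers.Conv1D(64, 5, activation="relu"),  # convolutional layer
    layers.MaxPooling1D(pool_size=4),         # pooling layer
    layers.LSTM(64, return_sequences=True),   # LSTM layer
    layers.Flatten(),                         # flatten layer
    layers.Dense(32, activation="relu"),      # dense layer
    layers.Dense(1, activation="sigmoid"),    # output: fake (1) vs real (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```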


2021 ◽  
Author(s):  
Yew Kee Wong

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images, or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods that can be applied to artificial intelligence analysis, as well as the opportunities their application provides in various decision-making domains.


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. It has primarily been used for 3D seismic data processing, seismic facies analysis, and well log data correlation. The rapid development of technology, with open-source artificial intelligence libraries and the accessibility of affordable computer graphics processing units (GPUs), makes the application of machine learning in the geosciences increasingly tractable. However, the application of artificial intelligence in structural interpretation workflows for subsurface datasets is still ambiguous. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms to discriminate between geological image datasets. Four different image datasets were used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers, and basement), fold types with three classes (buckle, chevron, and conjugate), fault types with three classes (normal, reverse, and thrust), and fold-thrust geometries with three classes (fault bend fold, fault propagation fold, and detachment fold). These image datasets are used to investigate three machine learning models: one feedforward linear neural network model and two convolutional neural network models (a sequential model with 2D convolution layers, and a residual block model (ResNet with 9, 34, and 50 layers)). Validation and testing datasets form a critical part of assessing model accuracy. Of the machine learning models tested, the ResNet model records the highest accuracy. Our CNN image classification analysis provides a framework for applying machine learning to increase the efficiency of structural interpretation, and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to subsurface structural interpretation workflows.
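As a rough sketch of the supervised CNN image-classification setup described above, the snippet below fine-tunes a classification head on a pretrained ResNet-50 for the five seismic-character classes. The image size, use of ImageNet weights, and training setup are assumptions, not details taken from the study.

```python
# Hypothetical fine-tuning of a ResNet-style CNN on labelled geological images.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

CLASSES = ["faults", "folds", "salt", "flat layers", "basement"]  # seismic character dataset

base = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                input_shape=(224, 224, 3))
base.trainable = False  # start by training only the new classification head

# Inputs are assumed to be preprocessed with tf.keras.applications.resnet50.preprocess_input.
inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
outputs = layers.Dense(len(CLASSES), activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # image datasets built elsewhere
model.summary()
```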


2021 ◽  
Vol 6 (22) ◽  
pp. 36-50
Author(s):  
Ali Hassan ◽  
Riza Sulaiman ◽  
Mansoor Abdullateef Abdulgabber ◽  
Hasan Kahtan

Recent advances in artificial intelligence, particularly in the field of machine learning (ML), have shown that these models can be incredibly successful, producing encouraging results and leading to diverse applications. Despite the promise of artificial intelligence, without transparency of machine learning models, it is difficult for stakeholders to trust the results of such models, which can hinder successful adoption. This concern has sparked scientific interest and led to the development of transparency-supporting algorithms. Although studies have raised awareness of the need for explainable AI, the question of how to meet real users' needs for understanding AI remains unresolved. This study provides a review of the literature on human-centric machine learning and new approaches to user-centric explanations for deep learning models. We highlight the challenges and opportunities facing this area of research. The goal is for this review to serve as a resource for both researchers and practitioners. The study found that one of the most difficult aspects of implementing machine learning models is gaining the trust of end-users.


Healthcare ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 522
Author(s):  
Yassir Edrees Almalki ◽  
Abdul Qayyum ◽  
Muhammad Irfan ◽  
Noman Haider ◽  
Adam Glowacz ◽  
...  

The Coronavirus disease 2019 (COVID-19) is an infectious disease spreading rapidly and uncontrollably throughout the world. The critical challenge is the rapid detection of Coronavirus-infected people. The techniques currently utilized are body-temperature measurement and anterior nasal swab analysis. However, taking nasal swabs and lab testing are complex, intrusive, and require many resources. Furthermore, the lack of test kits to meet the growing number of cases is a major limitation. The current challenge is to develop technology to non-intrusively detect suspected Coronavirus patients through Artificial Intelligence (AI) techniques such as deep learning (DL). Another challenge in conducting research in this area is the difficulty of obtaining datasets, since only a limited number of patients give their consent to participate in research studies. Given the efficacy of AI in healthcare systems, it is a great challenge for researchers to develop an AI algorithm that can help health professionals and government officials automatically identify and isolate people with Coronavirus symptoms. Hence, this paper proposes a novel method, CoVIRNet (COVID Inception-ResNet model), which utilizes chest X-rays to diagnose COVID-19 patients automatically. The proposed algorithm has different inception residual blocks that capture information through feature maps of different depths at different scales across the various layers. The features are concatenated at each proposed classification block using an average-pooling layer, and the concatenated features are passed to the fully connected layer. The proposed deep-learning blocks use different regularization techniques to minimize overfitting on the small COVID-19 dataset. The multiscale features are extracted at different levels of the proposed deep-learning model and then embedded into various machine-learning models to validate the combination of deep-learning and machine-learning models. The proposed CoVIRNet model achieved 95.7% accuracy, and the CoVIRNet feature extractor with a random-forest classifier produced 97.29% accuracy, the highest compared with existing state-of-the-art deep-learning methods. The proposed model would be an automatic solution for the assessment and classification of COVID-19. We expect the proposed method to demonstrate outstanding performance compared with the state-of-the-art techniques currently in use.
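The combination that produced the best reported result, deep features fed into a random-forest classifier, can be sketched generically as follows. A stock InceptionResNetV2 stands in for the proposed CoVIRNet blocks, and the images and labels are placeholders, so this illustrates the pattern rather than the authors' pipeline.

```python
# Sketch of the "deep features + classical classifier" idea: a pretrained CNN
# is used as a feature extractor and a random forest classifies the features.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

extractor = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(299, 299, 3)
)

def extract_features(images):
    """images: float array of shape (n, 299, 299, 3) in the 0-255 range."""
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(images)
    return extractor.predict(x, verbose=0)

# Placeholder chest X-ray batch and labels (0 = normal, 1 = COVID-19), for shape only.
images = np.random.rand(8, 299, 299, 3).astype("float32") * 255.0
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

features = extract_features(images)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```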


2021 ◽  
Author(s):  
Liyang Wang ◽  
Dantong Niu ◽  
Xinjie Zhao ◽  
Xiaoya Wang ◽  
Mengzhen Hao ◽  
...  

Abstract: Traditional food allergen identification mainly relies on in vivo and in vitro experiments, which are often time-consuming and costly. Artificial intelligence (AI)-driven rapid food allergen identification addresses these two drawbacks and is becoming an efficient auxiliary tool. To overcome the limited accuracy of traditional machine learning models in predicting the allergenicity of food proteins, this work introduces a transformer deep learning model with a self-attention mechanism and ensemble learning models (represented by the Light Gradient Boosting Machine (LightGBM) and eXtreme Gradient Boosting (XGBoost)). To benchmark the proposed approach, the study also selected various commonly used machine learning models as baseline classifiers. Five-fold cross-validation showed that the transformer model achieved the highest AUC (0.9400), outperforming the ensemble learning and baseline algorithms, although it required pre-training and had the highest training cost. Comparing the characteristics of the transformer and boosting models shows that each type has its own advantages, which provides novel clues and inspiration for the rapid prediction of food allergens in the future.
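The boosting baselines and the 5-fold cross-validated AUC evaluation can be sketched as below. The protein descriptors are random placeholders rather than real allergen features, so the numbers printed are meaningless; only the evaluation pattern is illustrative.

```python
# Sketch of 5-fold cross-validated AUC for gradient-boosting baselines.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 400))     # e.g. sequence-derived descriptors per protein (placeholder)
y = rng.integers(0, 2, size=2000)    # 1 = allergen, 0 = non-allergen

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("LightGBM", LGBMClassifier()),
                    ("XGBoost", XGBClassifier(eval_metric="logloss"))]:
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f} (std {auc.std():.3f})")
```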

