Artificial intelligence method to classify ophthalmic emergency severity based on symptoms: a validation study

BMJ Open ◽  
2020 ◽  
Vol 10 (7) ◽  
pp. e037161
Author(s):  
Hyunmin Ahn

Objectives: We investigated the usefulness of a machine learning artificial intelligence (AI) method in classifying the severity of ophthalmic emergencies to support timely hospital visits. Study design: This retrospective study analysed patients who first visited the Armed Forces Daegu Hospital between May and December 2019. General patient information, events and symptoms were input variables. Events, symptoms, diagnoses and treatments were output variables. The output variables were classified into four classes (red, orange, yellow and green, indicating immediate to no emergency cases). About 200 cases of the class-balanced validation data set were randomly selected before all training procedures. An ensemble AI model using combinations of fully connected neural networks with the synthetic minority oversampling technique (SMOTE) algorithm was adopted. Participants: A total of 1681 patients were included. Major outcomes: Model performance was evaluated using accuracy, precision, recall and F1 scores. Results: The accuracy of the model was 99.05%. The precision for each class (red, orange, yellow and green) was 100%, 98.10%, 92.73% and 100%, respectively. The recall for each class was 100%, 100%, 98.08% and 95.33%. The F1 score for each class was 100%, 99.04%, 95.33% and 96.00%. Conclusions: We provided support for an AI method to classify ophthalmic emergency severity based on symptoms.
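
As an illustration of the modeling approach this abstract describes (SMOTE oversampling combined with an ensemble of fully connected networks), here is a minimal Python sketch. The feature encoding, network sizes, and class proportions are assumptions for demonstration, not the study's actual configuration.

```python
# Minimal sketch: SMOTE oversampling plus an ensemble of fully connected
# networks for 4-class triage (red/orange/yellow/green). Features and
# hyperparameters are illustrative, not the authors' values.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(1681, 20))                        # stand-in for encoded symptoms/events
y = rng.choice(4, size=1681, p=[0.1, 0.2, 0.3, 0.4])   # imbalanced severity classes

X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

# Oversample minority severity classes before training, as in the paper.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# Ensemble of differently sized fully connected networks with soft voting.
ensemble = VotingClassifier(
    estimators=[(f"mlp{i}", MLPClassifier(hidden_layer_sizes=h, max_iter=500, random_state=i))
                for i, h in enumerate([(64,), (64, 32), (128, 64)])],
    voting="soft",
)
ensemble.fit(X_res, y_res)
print(classification_report(y_val, ensemble.predict(X_val)))  # per-class precision/recall/F1
```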

Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Pakinam Aboutaleb ◽  
Arko Barman ◽  
Victor Lopez-Rivera ◽  
Songmi Lee ◽  
Farhaan Vahidy ◽  
...  

Introduction: Automated neuroimaging analysis is being used increasingly in acute ischemic stroke (AIS) evaluation. However, current algorithms do not factor an assessment of intracranial hemorrhage (ICH) into the workflow. In this study we present a machine learning (ML) algorithm that uses brain symmetry information to detect ICH. Methods: We prospectively collected non-contrast CT (NCCT) images from patients who presented to the Emergency Department for AIS evaluation between 2017 and 2019. Patients were included if they underwent technically adequate NCCT imaging. Diagnoses of ICH, AIS and non-stroke were confirmed by experienced neuroradiologists as well as by review of the clinical record. A ML algorithm that integrates symmetry features as well as standard whole-brain features was trained on 80% of the sample and validated on the remaining images. Training was performed without any prior segmentation. Model performance was evaluated using receiver operating characteristic curve and area under the curve (AUC) analysis. Results are given as median [IQR] and [AUC 95% CI]. Results: Among the 568 patients who met inclusion criteria, median age was 65 [55-76], 47% were female and 34% were white. 128 (23%) patients were determined to have ICH and 440 to be non-ICH (70% AIS and 30% non-stroke). Among ICH patients, 108 (84%) had a supratentorial ICH. The regions of the CT images that most strongly contributed to the algorithm’s diagnostic decisions corresponded with the regions of ICH (Fig. 1A). On the external validation data set, the algorithm successfully detected ICH (Fig. 1B) with high accuracy (AUC 0.99 [0.97-1.00]). Conclusion: We have developed a symmetry-sensitive ML method that can identify ICH with very high fidelity in an automated fashion. Without prior segmentation, the algorithm was able to learn ICH location autonomously. These results may help contribute to an automated imaging workflow for all stroke evaluations, not just AIS.
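
To make the brain-symmetry idea concrete, the sketch below compares a CT volume with its left-right mirror image and feeds simple asymmetry statistics to a classifier. Real pipelines register and normalize scans first; the array shapes, the simulated "hemorrhage", and the random-forest choice are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a brain-symmetry feature for hemorrhage detection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def symmetry_features(volume: np.ndarray) -> np.ndarray:
    """volume: (slices, H, W) NCCT intensities. Returns per-scan asymmetry stats."""
    mirrored = volume[:, :, ::-1]            # flip across the midline
    diff = np.abs(volume - mirrored)         # voxel-wise asymmetry map
    return np.array([diff.mean(), diff.std(), np.percentile(diff, 99)])

# Toy data: 40 scans, ICH simulated as a bright one-sided blob.
rng = np.random.default_rng(1)
X, y = [], []
for i in range(40):
    vol = rng.normal(40, 10, size=(16, 64, 64))
    label = i % 2
    if label:                                # add a unilateral hyperdensity
        vol[6:10, 20:30, 8:20] += 50
    X.append(symmetry_features(vol))
    y.append(label)

clf = RandomForestClassifier(random_state=0).fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```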


2021 ◽  
pp. 1-29
Author(s):  
Eric Sonny Mathew ◽  
Moussa Tembely ◽  
Waleed AlAmeri ◽  
Emad W. Al-Shalabi ◽  
Abdul Ravoof Shaik

Two of the most critical properties for multiphase flow in a reservoir are relative permeability (Kr) and capillary pressure (Pc). Determining these parameters requires careful interpretation of coreflooding and centrifuge experiments. In this work, a machine learning (ML) technique was incorporated to assist in determining these parameters quickly and synchronously for steady-state drainage coreflooding experiments. A state-of-the-art framework was developed in which a large database of Kr and Pc curves was generated based on existing mathematical models. This database was used to perform thousands of coreflood simulation runs representing oil-water drainage steady-state experiments. The results obtained from the corefloods, including pressure drop and water saturation profile, along with other conventional core analysis data, were fed as features into the ML model. The entire data set was split into 70% for training, 15% for validation, and the remaining 15% for blind testing of the model. The training set teaches the model to capture fluid flow behavior inside the core; the validation set was used to validate the trained model and to optimize the hyperparameters of the ML algorithm; and the testing set was used to assess the model performance scores. In addition, the K-fold split technique was applied to the 15% testing data set to provide an unbiased estimate of the final model performance. The trained/tested model was then used to estimate Kr and Pc curves from available experimental results. The coefficient of determination (R2) was used to assess the accuracy and efficiency of the developed model. The respective crossplots indicate that the model is capable of making accurate predictions, with an error of less than 2% when history matching experimental data. This implies that the artificial intelligence (AI)-based model is capable of determining Kr and Pc curves. The present work could be an alternative to existing methods for interpreting Kr and Pc curves. In addition, the ML model can be adapted to produce results that include multiple options for Kr and Pc curves, from which the best solution can be determined using engineering judgment. This is unlike solutions from some existing commercial codes, which usually provide only a single solution. The model currently focuses on predicting Kr and Pc curves for drainage steady-state experiments; however, the work can be extended to capture the imbibition cycle as well.
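
The 70/15/15 split with K-fold evaluation described above can be sketched as follows. The feature and target arrays are placeholders for the simulated coreflood outputs (pressure drop, saturation profiles) and the Kr/Pc curve parameters; the split and fold counts mirror the abstract, everything else is assumed.

```python
# Minimal sketch of a 70/15/15 train/validation/test split with K-fold
# evaluation on the held-out test portion.
import numpy as np
from sklearn.model_selection import train_test_split, KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))   # coreflood features (illustrative)
y = rng.normal(size=(1000, 4))    # Kr/Pc curve parameters (illustrative)

# First carve out 70% for training, then split the remainder 50/50 into
# validation (15%) and blind test (15%).
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.70, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

# K-fold over the blind test set gives several independent performance estimates.
for fold, (_, idx) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X_test)):
    print(f"fold {fold}: {len(idx)} test samples")
```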


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 245
Author(s):  
Konstantinos G. Liakos ◽  
Georgios K. Georgakilas ◽  
Fotis C. Plessas ◽  
Paris Kitsos

A significant problem in the field of hardware security is hardware trojans (HTs). HTs can be inserted into a circuit at any phase of the production chain; they degrade the infected circuit, destroy it, or leak encrypted data. Efforts are now being made to address HTs through machine learning (ML) techniques, mainly at the gate-level netlist (GLN) phase, but there are some restrictions. Specifically, the number and variety of normal and infected circuits available through free public libraries, such as Trust-HUB, are limited to a few benchmark samples created from large circuits. It is therefore difficult to develop robust ML-based models against HTs from these data. In this paper, we propose a new deep learning (DL) tool named Generative Artificial Intelligence Netlists SynthesIS (GAINESIS). GAINESIS is based on the Wasserstein Conditional Generative Adversarial Network (WCGAN) algorithm and on area–power analysis features from the GLN phase, and it synthesizes new normal and infected circuit samples for this phase. Based on our GAINESIS tool, we synthesized new data sets of different sizes and developed and compared seven ML classifiers. The results demonstrate that our newly generated data sets significantly enhance the performance of ML classifiers compared with the initial Trust-HUB data set.
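
For readers unfamiliar with WCGANs, the following is a minimal PyTorch sketch of a Wasserstein conditional GAN over tabular area/power-style features, in the spirit of what GAINESIS does. The feature dimensions, clipping value, and training schedule are illustrative assumptions, not the tool's settings.

```python
# Minimal WCGAN sketch: conditional generator/critic with the original
# WGAN weight-clipping scheme, conditioned on normal-vs-infected labels.
import torch
import torch.nn as nn

FEAT, LATENT, N_CLASSES = 8, 16, 2   # area/power features; normal vs infected

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT + N_CLASSES, 64), nn.ReLU(),
                                 nn.Linear(64, FEAT))
    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT + N_CLASSES, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = Generator(), Critic()
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)
real = torch.randn(256, FEAT)        # stand-in feature rows
labels = nn.functional.one_hot(torch.randint(0, N_CLASSES, (256,)), N_CLASSES).float()

for step in range(200):
    # Critic: maximize D(real) - D(fake), with weight clipping (original WGAN).
    fake = G(torch.randn(256, LATENT), labels).detach()
    loss_d = -(D(real, labels).mean() - D(fake, labels).mean())
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    for p in D.parameters():
        p.data.clamp_(-0.01, 0.01)
    # Generator: maximize D(fake).
    loss_g = -D(G(torch.randn(256, LATENT), labels), labels).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(10, LATENT), labels[:10])  # new circuit samples
```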


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lara Lloret Iglesias ◽  
Pablo Sanz Bellón ◽  
Amaia Pérez del Barrio ◽  
Pablo Menéndez Fernández-Miranda ◽  
David Rodríguez González ◽  
...  

Abstract Deep learning is nowadays at the forefront of artificial intelligence. More precisely, the use of convolutional neural networks has drastically improved the learning capabilities of computer vision applications, which can now consider raw data directly without any prior feature extraction. Advanced methods in the machine learning field, such as adaptive momentum algorithms and dropout regularization, have dramatically improved the predictive ability of convolutional neural networks, which now outperform conventional fully connected neural networks. This work summarizes, in an intentionally didactic way, the main aspects of these cutting-edge techniques from a medical imaging perspective.
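
The two techniques the abstract names, adaptive-momentum optimization and dropout regularization, fit into a training loop as in this minimal PyTorch sketch. The 1-channel 64x64 input mimics a grayscale medical image patch; all sizes are illustrative.

```python
# Minimal sketch: small CNN with dropout, trained with Adam (adaptive momentum).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(p=0.5),               # dropout regularization
    nn.Linear(32 * 16 * 16, 2),      # binary diagnostic output
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive momentum
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 64, 64)   # toy batch of image patches
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(float(loss))
```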


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1421
Author(s):  
Gergo Pinter ◽  
Amir Mosavi ◽  
Imre Felde

The development of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to significant uncertainties and dynamic variables, real estate has been studied as a complex system. In this study, a novel machine learning method is proposed to tackle the complexity of real estate modeling. Call detail records (CDR) provide excellent opportunities for in-depth investigation of mobility characterization. This study explores the potential of CDR for predicting real estate prices with the aid of artificial intelligence (AI). Several essential mobility entropy factors, including dweller entropy, dweller gyration, worker entropy, worker gyration, dwellers’ work distance, and workers’ home distance, are used as input variables. The prediction model is developed using a multi-layered perceptron (MLP) trained with the evolutionary algorithm of particle swarm optimization (PSO). Model performance is evaluated using mean square error (MSE), sustainability index (SI), and Willmott’s index (WI). The proposed model showed promising results, revealing that worker entropy and dwellers’ work distance directly influence the real estate price, whereas dweller gyration, dweller entropy, worker gyration, and workers’ home distance had minimal effect on the price. Furthermore, it is shown that the flow of activities and the entropy of mobility are often associated with regions with lower real estate prices.
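
Training an MLP with PSO rather than backpropagation, as described above, can be sketched in pure NumPy: the swarm searches over the flattened weight vector and the fitness is the network's MSE. The mobility-entropy inputs, network size, and all PSO hyperparameters here are illustrative assumptions.

```python
# Minimal sketch: MLP weights fitted by global-best particle swarm optimization.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                              # 6 mobility-entropy features
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=200)    # toy price target

H = 8                                   # hidden units
n_w = 6 * H + H + H + 1                 # total MLP parameters

def predict(w, X):
    W1 = w[:6 * H].reshape(6, H); b1 = w[6 * H:6 * H + H]
    W2 = w[6 * H + H:6 * H + 2 * H]; b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

n_particles = 30
pos = rng.normal(size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for it in range(300):
    r1, r2 = rng.random((2, n_particles, n_w))
    # Inertia plus cognitive (pbest) and social (gbest) attraction terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("final MSE:", mse(gbest))
```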


2020 ◽  
Vol 21 (4) ◽  
pp. 1119-1135 ◽  
Author(s):  
Shutao Mei ◽  
Fuyi Li ◽  
André Leier ◽  
Tatiana T Marquez-Lago ◽  
Kailin Giam ◽  
...  

Abstract Human leukocyte antigen class I (HLA-I) molecules are encoded by major histocompatibility complex (MHC) class I loci in humans. The binding and interaction between HLA-I molecules and intracellular peptides derived from a variety of proteolytic mechanisms play a crucial role in subsequent T-cell recognition of target cells and the specificity of the immune response. In this context, tools that predict the likelihood for a peptide to bind to specific HLA class I allotypes are important for selecting the most promising antigenic targets for immunotherapy. In this article, we comprehensively review a variety of currently available tools for predicting the binding of peptides to a selection of HLA-I allomorphs. Specifically, we compare their calculation methods for the prediction score, employed algorithms, evaluation strategies and software functionalities. In addition, we have evaluated the prediction performance of the reviewed tools based on an independent validation data set, containing 21,101 experimentally verified ligands across 19 HLA-I allotypes. The benchmarking results show that MixMHCpred 2.0.1 achieves the best performance for predicting peptides binding to most of the HLA-I allomorphs studied, while NetMHCpan 4.0 and NetMHCcons 1.1 outperform the other machine learning-based and consensus-based tools, respectively. Importantly, it should be noted that a peptide predicted with a higher binding score for a specific HLA allotype does not necessarily imply it will be immunogenic. That said, peptide-binding predictors are still very useful in that they can help to significantly reduce the large number of epitope candidates that need to be experimentally verified. Several other factors, including susceptibility to proteasome cleavage, peptide transport into the endoplasmic reticulum and T-cell receptor repertoire, also contribute to the immunogenicity of peptide antigens, and some of them can be considered by some predictors. Therefore, integrating features derived from these additional factors together with HLA-binding properties by using machine-learning algorithms may increase the prediction accuracy of immunogenic peptides. As such, we anticipate that this review and benchmarking survey will assist researchers in selecting appropriate prediction tools that best suit their purposes and provide useful guidelines for the development of improved antigen predictors in the future.
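
The benchmarking step described above reduces to comparing tools' per-peptide scores against verified-ligand labels. The sketch below shows that comparison by ROC AUC; the score columns are random stand-ins, since tools such as MixMHCpred and NetMHCpan are external programs whose output would be imported here.

```python
# Minimal sketch: compare binding predictors by ROC AUC against verified ligands.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
labels = rng.integers(0, 2, size=n)          # 1 = experimentally verified ligand
tools = {                                     # hypothetical score columns
    "toolA": labels + rng.normal(0, 0.8, n),
    "toolB": labels + rng.normal(0, 1.2, n),
}
for name, scores in tools.items():
    print(name, "AUC =", round(roc_auc_score(labels, scores), 3))
```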


2019 ◽  
Vol 27 (3) ◽  
pp. 396-406 ◽  
Author(s):  
Kushan De Silva ◽  
Daniel Jönsson ◽  
Ryan T Demmer

Abstract Objective: To identify predictors of prediabetes using feature selection and machine learning on a nationally representative sample of the US population. Materials and Methods: We analyzed n = 6346 men and women enrolled in the National Health and Nutrition Examination Survey 2013–2014. Prediabetes was defined using American Diabetes Association guidelines. The sample was randomly partitioned into training (n = 3174) and internal validation (n = 3172) sets. Feature selection algorithms were run on training data containing 156 preselected exposure variables. Four machine learning algorithms were applied to 46 exposure variables in original and resampled training datasets built using 4 resampling methods. Predictive models were tested on internal validation data (n = 3172) and external validation data (n = 3000) prepared from the National Health and Nutrition Examination Survey 2011–2012. Model performance was evaluated using area under the receiver operating characteristic curve (AUROC). Predictors were assessed by odds ratios in logistic models and by variable importance in the others. The Centers for Disease Control (CDC) prediabetes screening tool was the benchmark for comparing model performance. Results: Prediabetes prevalence was 23.43%. The CDC prediabetes screening tool produced 64.40% AUROC. Seven optimal (≥ 70% AUROC) models identified 25 predictors, including 4 potentially novel associations; 20 were identified by both logistic and other nonlinear/ensemble models and 5 solely by the latter. All optimal models outperformed the CDC prediabetes screening tool (P < 0.05). Discussion: The combined use of feature selection and machine learning increased predictive performance, outperforming the recommended screening tool. A range of predictors of prediabetes was identified. Conclusion: This work demonstrated the value of combining feature selection with machine learning to identify a wide range of predictors that could enhance prediabetes prediction and clinical decision-making.
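
The workflow above (feature selection on a training split, then several classifiers compared by AUROC on held-out data) can be sketched as follows. The synthetic data and the specific selector and models are assumptions; only the 156-to-46 feature reduction and the AUROC metric mirror the abstract.

```python
# Minimal sketch: feature selection followed by AUROC model comparison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6346, n_features=156, n_informative=25, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)

selector = SelectKBest(mutual_info_classif, k=46).fit(X_tr, y_tr)  # 156 -> 46 features
X_tr_s, X_va_s = selector.transform(X_tr), selector.transform(X_va)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_tr_s, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va_s)[:, 1])
    print(f"{name}: AUROC = {auc:.3f}")
```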


2021 ◽  
Vol 12 ◽  
Author(s):  
Marco Camardo Leggieri ◽  
Marco Mazzoni ◽  
Paola Battilani

Meteorological conditions are the main driving variables for mycotoxin-producing fungi and the resulting contamination in maize grain, but the cropping system used can mitigate this weather impact considerably. Several researchers have investigated the role of cropping operations in mycotoxin contamination, but the findings were inconclusive, precluding their use in predictive modeling. In this study a machine learning (ML) approach was considered, with weather-based mechanistic model predictions from AFLA-maize and FER-maize [predicting aflatoxin B1 (AFB1) and fumonisins (FBs), respectively] and cropping system factors as the input variables. The occurrence of AFB1 and FBs in maize fields was recorded, and the corresponding cropping system data collected, over the years 2005–2018 in northern Italy. Two deep neural network (DNN) models were trained to predict, at harvest, which maize fields were contaminated beyond the legal limit with AFB1 and FBs. Both models reached an accuracy >75%, demonstrating the added value of the ML approach with respect to classical statistical approaches (i.e., simple or multiple linear regression models). The predictive performance clearly improved compared with that obtained for AFLA-maize and FER-maize alone. This, coupled with the large data set used, comprising a 13-year time series, and the good results for the statistical scores applied, confirms the robustness of the models developed here.
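
The prediction task above is a binary classification (field above or below the legal limit) over mechanistic-model outputs plus cropping-system factors. Here is a minimal PyTorch sketch of such a feed-forward network; the feature count, layer sizes, and synthetic data are illustrative assumptions, not the study's architecture.

```python
# Minimal sketch: small DNN predicting "above legal mycotoxin limit" (binary).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(500, 10)                    # mechanistic predictions + cropping factors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()   # toy above-limit label

model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    loss = loss_fn(model(X).squeeze(1), y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

accuracy = ((model(X).squeeze(1) > 0) == y.bool()).float().mean()
print(f"training accuracy: {accuracy:.2f}")
```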


Machine Learning is formulated by combining different mathematical models, Artificial Intelligence approaches, and past recorded data sets. Machine Learning uses different learning algorithms for different types of data and has been classified into three types. Its advantage is that it uses an Artificial Neural Network and, based on the error rates, adjusts the weights to improve itself in further epochs. However, Machine Learning works well only when the features are defined accurately. Deciding which features to select requires good domain knowledge, which makes Machine Learning developer-dependent; a lack of domain knowledge degrades performance. This dependency inspired the invention of Deep Learning. Deep Learning can detect features through self-training models and is able to give better results than conventional Artificial Intelligence or Machine Learning approaches. It uses functions such as ReLU, gradient descent, and optimizers, which make it the most capable approach available so far. To apply such optimizers efficiently, one should understand the mathematical computations and convolutions running behind the layers. It also uses different pooling layers to extract features. These modern approaches, however, require a high level of computation, demanding powerful CPUs and GPUs. If such computational power is not available in local hardware, one can use the Google Colaboratory framework. The Deep Learning approach is proven to improve skin cancer detection, as demonstrated in this paper. The paper also aims to provide the reader with circumstantial knowledge of the various practices mentioned above.
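
The building blocks this abstract names (convolutions, ReLU, pooling, and a gradient-descent update through an optimizer) compose as in this minimal PyTorch sketch. The 3-channel 32x32 input stands in for a dermoscopic image patch; all sizes are illustrative.

```python
# Minimal sketch: convolution -> ReLU -> max pooling, plus one optimizer step.
import torch
import torch.nn as nn

layer = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # learnable convolution filters
    nn.ReLU(),                                  # ReLU non-linearity
    nn.MaxPool2d(2),                            # pooling condenses feature maps
)
head = nn.Linear(8 * 16 * 16, 2)                # benign vs malignant logits

x = torch.randn(4, 3, 32, 32)                   # toy image patches
labels = torch.randint(0, 2, (4,))
logits = head(layer(x).flatten(1))
loss = nn.CrossEntropyLoss()(logits, labels)

opt = torch.optim.SGD(list(layer.parameters()) + list(head.parameters()), lr=0.01)
opt.zero_grad(); loss.backward(); opt.step()    # one gradient-descent update
print(float(loss))
```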


2020 ◽  
Author(s):  
Parmita Mehta ◽  
Christine Petersen ◽  
Joanne C. Wen ◽  
Michael R. Banitt ◽  
Philip P. Chen ◽  
...  

Abstract Glaucoma, the leading cause of irreversible blindness worldwide, is a disease that damages the optic nerve. Current machine learning (ML) approaches for glaucoma detection rely on features such as retinal thickness maps; however, the high rate of segmentation errors when creating these maps increases the likelihood of faulty diagnoses. This paper proposes a new, comprehensive, and more accurate ML-based approach for population-level glaucoma screening. Our contributions include: (1) a multi-modal model built upon a large data set that includes demographic, systemic and ocular data as well as raw image data taken from color fundus photos (CFPs) and macular Optical Coherence Tomography (OCT) scans, (2) model interpretation to identify and explain data features that lead to accurate model performance, and (3) model validation via comparison of model output with clinician interpretation of CFPs. We also validated the model on a cohort that was not diagnosed with glaucoma at the time of imaging but eventually received a glaucoma diagnosis. Results show that our model is highly accurate (AUC 0.97) and interpretable. It validated biological features known to be related to the disease, such as age, intraocular pressure and optic disc morphology. Our model also points to previously unknown or disputed features, such as pulmonary capacity and retinal outer layers.
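
A multi-modal model of the kind described, fusing an image branch with a tabular branch for demographic and clinical variables, can be sketched as follows in PyTorch. The input sizes and architecture are illustrative assumptions, not the authors' design.

```python
# Minimal sketch: CNN image branch fused with a dense tabular branch.
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    def __init__(self, n_tabular=10):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                        # -> 32-dim image embedding
        )
        self.tabular_branch = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        self.head = nn.Linear(32 + 32, 1)        # glaucoma logit

    def forward(self, image, tabular):
        z = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=1)
        return self.head(z)

model = MultiModalNet()
image = torch.randn(2, 3, 128, 128)              # fundus photo batch
tabular = torch.randn(2, 10)                     # age, IOP, etc. (standardized)
print(model(image, tabular).shape)               # torch.Size([2, 1])
```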

