NIMG-71. IDENTIFYING CLINICALLY APPLICABLE MACHINE LEARNING ALGORITHMS FOR GLIOMA SEGMENTATION USING A SYSTEMATIC LITERATURE REVIEW

2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi145-vi145
Author(s):  
Niklas Tillmanns ◽  
Avery Lum ◽  
W R Brim ◽  
Harry Subramanian ◽  
Ming Lin ◽  
...  

Abstract PURPOSE Machine learning (ML) algorithms are now often used for segmentation of gliomas, but which algorithms provide the most accurate method for implementation into clinical practice has not been fully identified. We performed a systematic review of the literature to characterize the methods used for glioma segmentation and their accuracy. METHODS In accordance with PRISMA, a literature review was performed on four databases, Ovid Embase, Ovid MEDLINE, Cochrane trials (CENTRAL), and Web of Science Core Collection, first in October 2020 and again in February 2021. Keywords and controlled vocabulary included artificial intelligence, machine learning, deep learning, radiomics, magnetic resonance imaging, glioma, and glioblastoma. Publications were screened in Covidence, and the bias analysis was done in agreement with TRIPOD. RESULTS Sixty-six articles were used for data extraction. BRATS and TCIA datasets were used in 36.6% of all studies, with an average of 141 patients per study (range: 1 to 622). ML methods represented 45.3% of studies and deep learning 54.7%; the Dice score for the tumor core ranged from 0.72 to 0.95. The most common algorithm in the machine learning papers was the support vector machine (SVM), and in the deep learning papers the convolutional neural network (CNN). Preliminary TRIPOD analysis yielded an average score of 12 (range: 7-16), with the majority of papers demonstrating deficiencies in description of the ML algorithm, funding role, data acquisition, and measures of model performance. CONCLUSION In recent years, many articles have been published on segmentation of gliomas using machine learning, establishing this method for tumor segmentation with high accuracy.
However, major limitations for clinically applicable use of ML in glioma segmentation remain: more than one-third of publications use the same datasets, which limits generalizability and increases the likelihood of overfitting, and many papers lack an adequate description of the ML network and standardization in accuracy reporting.
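The Dice score cited above measures the overlap between a predicted segmentation and the ground-truth mask. A minimal pure-Python sketch of its computation, using toy binary masks rather than any of the reviewed pipelines:

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 sequences)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy example: predicted vs. ground-truth tumor-core voxels (illustrative values)
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
print(dice_score(pred, truth))  # 0.75
```

A Dice of 0.72–0.95, as reported for the tumor core, therefore means roughly three-quarters to near-complete voxel overlap with the reference segmentation.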

2021 ◽  
Vol 3 (Supplement_3) ◽  
pp. iii17-iii17
Author(s):  
Waverly Rose Brim ◽  
Leon Jekel ◽  
Gabriel Cassinelli Petersen ◽  
Harry Subramanian ◽  
Tal Zeevi ◽  
...  

Abstract Purpose Medical staging, surgical planning, and therapeutic decisions are significantly different for brain metastases versus gliomas. Machine learning (ML) algorithms have been developed to differentiate these pathologies. We performed a systematic review to characterize ML methods and to evaluate their accuracy. Methods Studies on the application of machine learning in neuro-oncology were searched in Ovid Embase, Ovid MEDLINE, Cochrane trials (CENTRAL), and Web of Science Core Collection. A search strategy was designed in consultation with a clinical librarian and confirmed by a second librarian. The search strategy comprised controlled vocabulary including artificial intelligence, machine learning, deep learning, magnetic resonance imaging, and glioma. The initial search was performed in October 2020 and then updated in February 2021. Candidate articles were screened in Covidence by at least two reviewers each. A bias analysis was conducted in agreement with TRIPOD, a bias assessment tool similar to CLAIM. Results Twenty-nine articles were used for data extraction. Four articles specified model development for solitary brain metastases. Classical ML (cML) algorithms represented 85% of models used, while deep learning (DL) accounted for 15%. cML algorithms performed with an average accuracy, sensitivity, and specificity of 82%, 78%, and 88%, respectively; DL performed at 84%, 79%, and 81%. The support vector machine (SVM) was the most commonly used cML model in the literature, and convolutional neural networks (CNN) were standard for DL models. We also found that T1, T1 post-gadolinium, and T2 sequences were most commonly used for feature extraction. Preliminary TRIPOD analysis yielded an average score of 14.25 (range 8–18). Conclusion ML algorithms that can accurately classify glioma from brain metastases have been developed. SVM and CNN are leading approaches with high accuracy.
Standardized algorithm performance reporting is a clear limitation to be addressed in future studies.
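The accuracy, sensitivity, and specificity figures reported above follow directly from the confusion-matrix counts of a binary classifier. A minimal illustrative sketch with toy labels (not the reviewed models), where 1 stands for glioma and 0 for metastasis:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (positive-class recall), and specificity
    from paired binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

# Toy labels: 1 = glioma, 0 = brain metastasis (illustrative values)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
print(binary_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75)
```

Reporting all three together, as the reviewed studies do, matters because accuracy alone can hide an imbalance between missed gliomas (low sensitivity) and misclassified metastases (low specificity).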


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi144-vi145
Author(s):  
Gabriel Cassinelli Petersen ◽  
Julia Shatalov ◽  
W R Brim ◽  
Harry Subramanian ◽  
Jin cui ◽  
...  

Abstract PURPOSE Differentiating gliomas and primary CNS lymphomas (PCNSL) represents a diagnostic challenge with important therapeutic ramifications. MR imaging combined with machine learning (ML) has shown promising results in differentiating these tumors non-invasively. The purpose of this systematic review is to evaluate and synthesize the findings on the application of ML in differentiating PCNSL and gliomas. MATERIALS AND METHODS A systematic search of the literature was performed in October 2020 and February 2021 on Ovid Embase, Ovid MEDLINE, Cochrane trials, and Web of Science Core Collection. The search strategy included keywords and controlled vocabulary including the terms gliomas, artificial intelligence, machine learning, and related terms. Publications were reviewed and screened by four different reviewers in accordance with TRIPOD. RESULTS The literature search yielded 11,727 studies, and 1,135 underwent full-text review. Data were extracted from 16 publications, in which 10 ML and 3 deep learning (DL) algorithms were tested. The analyzed databases had an average size of 118 patients per study. 50% of the publications validated the algorithm in an independent test cohort. The most commonly tested ML and DL algorithms were support vector machines and convolutional neural networks, respectively. In internal (external) datasets, ML algorithms reached an average AUC of 89% (83%), and DL 74% (77%). Preliminary TRIPOD bias analysis yielded an average score of 0.5 (range 0.31-0.62), with most papers showing deficiencies in reporting model specifications and funding details, among other items. CONCLUSIONS AI-based methods for differentiating gliomas and PCNSL have been reported and show that ML methods achieve accuracy of 85% or higher. With few studies using DL algorithms, further research into novel DL-based approaches is recommended. Additionally, most studies lack large datasets and external validation, thus increasing the risk of overfitting.
Bias analysis of the published studies using TRIPOD identified reporting deficiencies, and close adherence to reporting criteria is recommended.
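The AUC values reported above can be computed without drawing an explicit ROC curve, via the Mann-Whitney rank statistic: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A small pure-Python sketch with made-up scores:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: fraction of positive/negative
    pairs in which the positive case receives the higher score
    (ties count as half a win)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy classifier scores: 1 = PCNSL, 0 = glioma (illustrative values)
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
labels = [1,   1,   1,   0,   0,   0]
print(auc(scores, labels))  # 0.888... (8 of 9 pairs ranked correctly)
```

An AUC of 89%, as reported for ML on internal datasets, thus means a randomly drawn positive case outranks a randomly drawn negative one about nine times out of ten.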


2021 ◽  
Vol 13 (3) ◽  
pp. 67
Author(s):  
Eric Hitimana ◽  
Gaurav Bajpai ◽  
Richard Musabe ◽  
Louis Sibomana ◽  
Jayavel Kayalvizhi

Many countries worldwide face challenges in implementing fire-disaster prevention measures for buildings. The most critical issues are the localization, identification, and detection of room occupants. The Internet of Things (IoT), combined with machine learning, has been shown to increase the smartness of buildings by providing real-time data acquisition from sensors and actuators for prediction mechanisms. This paper proposes the implementation of an IoT framework to capture indoor environmental parameters as occupancy multivariate time-series data. The Long Short-Term Memory (LSTM) deep learning algorithm is applied to infer the presence of human beings. An experiment was conducted in an office room using multivariate time series as predictors in a regression forecasting problem. The results obtained demonstrate that with the developed system it is possible to obtain, process, and store environmental information. The information collected was applied to the LSTM algorithm and compared with other machine learning algorithms: Support Vector Machine, Naïve Bayes Network, and Multilayer Perceptron Feed-Forward Network. The outcomes, based on the parametric calibrations, demonstrate that LSTM performs better in the context of the proposed application.
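Feeding multivariate sensor readings to a sequence model such as the LSTM described above requires slicing the series into fixed-length input windows paired with an occupancy target. A minimal windowing sketch with synthetic readings (the feature layout, sensor values, and one-step horizon are illustrative assumptions, not the paper's actual data pipeline):

```python
def make_windows(series, window, horizon=1):
    """Slice a multivariate time series (list of per-timestep feature vectors)
    into (input window, target) pairs for sequence models such as an LSTM.
    The target is the occupancy value `horizon` steps after the window ends,
    assuming occupancy is stored as the last feature of each vector."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start:start + window])
        y.append(series[start + window + horizon - 1][-1])
    return X, y

# Synthetic readings: [temperature, CO2, occupancy] per timestep
readings = [
    [21.0, 400, 0], [21.2, 420, 0], [22.1, 510, 1],
    [22.4, 560, 1], [22.6, 590, 1], [21.8, 480, 0],
]
X, y = make_windows(readings, window=2)
print(len(X), y)  # 4 [1, 1, 1, 0]
```

Each `X[i]` would then be fed to the recurrent model as one training sequence, with `y[i]` as its forecasting target.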


Author(s):  
Adwait Patil

Abstract: Alzheimer’s disease is a neurodegenerative disorder. It initially starts with innocuous symptoms but gradually becomes severe. The disease is especially dangerous because there is no treatment: the disease can be detected, but typically only at a late stage. It is therefore important to detect Alzheimer’s at an early stage to counter the disease and allow a probable recovery for the patient. Various approaches are currently used to detect symptoms of Alzheimer’s disease (AD) at an early stage. The fuzzy system approach is not widely used, as it depends heavily on expert knowledge, but it is quite efficient in detecting AD because it provides a mathematical foundation for interpreting human cognitive processes. Another, more accurate and widely accepted approach is machine learning detection of AD stages, which uses algorithms like Support Vector Machines (SVMs), Decision Trees, and Random Forests to detect the stage depending on the data provided. The final approach is deep learning using multi-modal data, which combines image, genetic, and patient data with deep models and then uses the concatenated data to detect the AD stage more efficiently; this method is less accessible as it requires huge volumes of data. This paper elaborates on all three approaches and provides a comparative study of them, assessing which method is most efficient for AD detection. Keywords: Alzheimer’s Disease (AD), Fuzzy System, Machine Learning, Deep Learning, Multimodal Data


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi139-vi139
Author(s):  
Jan Lost ◽  
Tej Verma ◽  
Niklas Tillmanns ◽  
W R Brim ◽  
Harry Subramanian ◽  
...  

Abstract PURPOSE Identifying molecular subtypes in gliomas has prognostic and therapeutic value, traditionally determined after invasive neurosurgical tumor resection or biopsy. Recent advances in artificial intelligence (AI) show promise in using pre-therapy imaging to predict molecular subtype. We performed a systematic review of recent literature on AI methods used to predict molecular subtypes of gliomas. METHODS A literature review conforming to PRISMA guidelines was performed for publications prior to February 2021 using four databases: Ovid Embase, Ovid MEDLINE, Cochrane trials (CENTRAL), and Web of Science Core Collection. Keywords included: artificial intelligence, machine learning, deep learning, radiomics, magnetic resonance imaging, glioma, and glioblastoma. Non-machine-learning and non-human studies were excluded. Screening was performed using Covidence software. Bias analysis was done using TRIPOD guidelines. RESULTS 11,727 abstracts were retrieved. After applying initial screening exclusion criteria, 1,135 full-text reviews were performed, with 82 papers remaining for data extraction. 57% used retrospective single-center hospital data, 31.6% used TCIA and BRATS, and 11.4% analyzed multicenter hospital data. An average of 146 patients (range 34-462) were included. Algorithms predicting IDH status comprised 51.8% of studies, MGMT 18.1%, and 1p19q 6.0%. Machine learning methods were used in 71.4%, deep learning in 27.4%, and 1.2% directly compared both methods. The most common machine learning algorithm was the support vector machine (43.3%), and for deep learning the convolutional neural network (68.4%). Mean prediction accuracy was 76.6%. CONCLUSION Machine learning is the predominant method for image-based prediction of glioma molecular subtypes. Major limitations include limited datasets (60.2% with under 150 patients) and thus limited generalizability of findings.
We recommend using larger annotated datasets for AI network training and testing in order to create more robust AI algorithms, which will provide better prediction accuracy to real world clinical datasets and provide tools that can be translated to clinical practice.


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Hasan Alkahtani ◽  
Theyazn H. H. Aldhyani ◽  
Mohammed Al-Yaari

Telecommunication has registered strong and rapid growth in the past decade. Accordingly, the monitoring of computers and networks has become too complicated for network administrators, and network security represents one of the most serious challenges faced by network security communities. Considering that e-banking, e-commerce, and business data are shared on computer networks, these data may face threats from intrusion. The purpose of this research is to propose a methodology that leads to a high level of sustainable protection against cyberattacks. In particular, an adaptive anomaly detection framework was developed using deep learning and machine learning algorithms to manage automatically configured application-level firewalls. Standard network datasets were used to evaluate the proposed model, which is designed to improve the cybersecurity system. Deep learning based on the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN), and machine learning algorithms, namely Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN), were implemented to classify Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks. The information gain method was applied to select the relevant features from the network dataset; these features were significant in improving the classification algorithms. The system was used to classify DoS and DDoS attacks in four standard datasets, namely KDD Cup’99, NSL-KDD, ISCX, and ICI-ID2017. The empirical results indicate that deep learning based on the LSTM-RNN algorithm obtained the highest accuracy, with testing accuracy rates of 99.51% and 99.91% on the KDD Cup’99, NSL-KDD, ISCX, and ICI-ID2017 datasets.
A comparative analysis between the machine learning algorithms, namely SVM and K-NN, and the deep learning approach based on the LSTM-RNN model is presented. It is concluded that the LSTM-RNN model is efficient and effective in improving the cybersecurity system for anomaly-based intrusion detection.
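The information gain method mentioned above ranks each feature by how much knowing its value reduces the entropy of the class labels. A small pure-Python sketch on toy traffic records (the feature and label values are illustrative, not drawn from the datasets above):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(feature, labels):
    """Reduction in label entropy after splitting on a discrete feature:
    base entropy minus the weighted entropy of each feature-value subset."""
    base = entropy(labels)
    total = len(labels)
    remainder = 0.0
    for value in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == value]
        remainder += len(subset) / total * entropy(subset)
    return base - remainder

# Toy traffic records: protocol of each flow vs. attack/normal label
protocol = ["tcp", "tcp", "udp", "udp", "icmp", "icmp"]
label    = ["dos", "dos", "normal", "normal", "dos", "normal"]
print(information_gain(protocol, label))  # 0.666... bits
```

Features scoring near zero carry little information about the attack label and can be dropped before training, which is the role the method plays in the study above.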


Author(s):  
Christian Knaak ◽  
Moritz Kröger ◽  
Frederic Schulze ◽  
Peter Abels ◽  
Arnold Gillner

An effective process monitoring strategy is a requirement for meeting the challenges posed by increasingly complex products and manufacturing processes. To address these needs, this study investigates a comprehensive scheme based on classical machine learning methods, deep learning algorithms, and feature extraction and selection techniques. In a first step, a novel deep learning architecture based on convolutional neural networks (CNN) and gated recurrent units (GRU) is introduced to predict the local weld quality based on mid-wave infrared (MWIR) and near-infrared (NIR) image data. The developed technology is used to discover critical welding defects including lack of fusion (false friends), sagging and lack of penetration, and geometric deviations of the weld seam. Additional work is conducted to investigate the significance of various geometrical, statistical, and spatio-temporal features extracted from the keyhole and weld pool regions. Furthermore, the performance of the proposed deep learning architecture is compared to that of classical supervised machine learning algorithms, such as multi-layer perceptron (MLP), logistic regression (LogReg), support vector machines (SVM), decision trees (DT), random forest (RF) and k-Nearest Neighbors (kNN). Optimal hyperparameters for each algorithm are determined by an extensive grid search. Ultimately, the three best classification models are combined into an ensemble classifier that yields the highest detection rates and achieves the most robust estimation of welding defects among all classifiers studied, which is validated on previously unknown welding trials.


Author(s):  
S. Kuikel ◽  
B. Upadhyay ◽  
D. Aryal ◽  
S. Bista ◽  
B. Awasthi ◽  
...  

Abstract. Individual Tree Crown (ITC) delineation from aerial imagery plays an important role in forestry management and precision farming. Several conventional as well as machine learning and deep learning algorithms have recently been used for ITC detection. In this paper, we present the Convolutional Neural Network (CNN) and Support Vector Machine (SVM) as the deep learning and machine learning algorithms, along with conventional classification methods such as Object-Based Image Analysis (OBIA) and Nearest Neighborhood (NN) classification, for banana tree delineation. The comparison considered two cases. First, each classifier was fed the image with height information to assess the effect of height on banana tree delineation. Second, the classifiers were compared quantitatively and qualitatively based on five metrics, i.e., Overall Accuracy, Recall, Precision, F-score, and Intersection over Union (IoU), and the best classifier was determined. The results show no significant differences in the metrics when height information was added, as the banana trees in the farm were of almost uniform height. The quantitative and qualitative analyses show that the CNN algorithm outperformed the SVM, OBIA, and NN techniques for crown delineation in terms of the performance measures.
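The Intersection over Union metric used above compares a delineated crown mask against the reference mask. A minimal pure-Python sketch on toy binary masks (not the paper's imagery):

```python
def iou(pred, truth):
    """Intersection over Union between two binary masks (flat 0/1 sequences):
    shared positive pixels divided by the union of positive pixels."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# Toy example: predicted vs. reference crown pixels (illustrative values)
pred  = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(iou(pred, truth))  # 0.6
```

Unlike overall accuracy, IoU ignores the (usually abundant) background pixels, which is why it is a common companion metric for delineation tasks like this one.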


2021 ◽  
Vol 11 ◽  
Author(s):  
Jiejie Zhou ◽  
Yan-Lin Liu ◽  
Yang Zhang ◽  
Jeon-Hor Chen ◽  
Freddie J. Combs ◽  
...  

Background: A wide variety of benign and malignant processes can manifest as non-mass enhancement (NME) in breast MRI. Compared to mass lesions, there are no distinct features that can be used for differential diagnosis. The purpose of this study is to use BI-RADS descriptors and models developed using radiomics and deep learning to distinguish benign from malignant NME lesions. Materials and Methods: A total of 150 patients with 104 malignant and 46 benign NME lesions were analyzed. Three radiologists performed readings for morphological distribution and internal enhancement using the 5th edition BI-RADS lexicon. For each case, the 3D tumor mask was generated using Fuzzy C-Means segmentation. Three DCE parametric maps related to wash-in, maximum, and wash-out were generated, and PyRadiomics was applied to extract features. The radiomics model was built using five machine learning algorithms. ResNet50 was implemented using the three parametric maps as input. Approximately 70% of earlier cases were used for training, and 30% of later cases were held out for testing. Results: The diagnostic BI-RADS in the original MRI reports showed that 104/104 malignant and 36/46 benign lesions had a BI-RADS score of 4A–5. For category reading, the kappa coefficient was 0.83 for morphological distribution (excellent) and 0.52 for internal enhancement (moderate). Segmental and regional distributions were the most prominent for the malignant group, and focal distribution for the benign group. Eight radiomics features were selected by the support vector machine (SVM). Among the five machine learning algorithms, SVM yielded the highest accuracy: 80.4% in the training and 77.5% in the testing dataset. ResNet50 had a better diagnostic performance: 91.5% in the training and 83.3% in the testing dataset. Conclusion: Diagnosis of NME is challenging, and the BI-RADS scores and descriptors showed substantial overlap. Radiomics and deep learning may provide a useful CAD tool to aid in diagnosis.
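The kappa coefficients reported above quantify inter-rater agreement corrected for the agreement expected by chance. A minimal pure-Python sketch of Cohen's kappa for a single reader pair (the category names and toy readings are illustrative, not the study's data; with three readers one would compute pairwise kappas or a multi-rater variant):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance),
    where chance agreement comes from the raters' marginal category frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy morphological-distribution readings from two readers
a = ["focal", "segmental", "regional", "segmental", "focal", "regional"]
b = ["focal", "segmental", "regional", "focal",     "focal", "regional"]
print(cohens_kappa(a, b))  # 0.75
```

A kappa of 0.83, as reported for morphological distribution, therefore reflects agreement well beyond what the readers' category frequencies alone would produce, whereas 0.52 for internal enhancement is only moderate.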


2020 ◽  
Vol 9 (2) ◽  
pp. 1049-1054

In this paper, we predict flight delays using different machine learning and deep learning techniques. With such a model it becomes easier to predict whether a flight will be delayed or not. Factors like ‘WeatherDelay’, ‘NASDelay’, ‘Destination’, and ‘Origin’ play a vital role in this model. Using machine learning algorithms like Random Forest, Support Vector Machine (SVM), and K-Nearest Neighbors (KNN), the F1-score, precision, recall, support, and accuracy have been computed. In addition, a Long Short-Term Memory (LSTM) RNN architecture has been employed. The paper uses the Pittsburgh dataset from the Bureau of Transportation Statistics (BTS). The results computed from the above-mentioned algorithms have been compared. Further, the results were visualized for various airlines to find the maximum delay, and an AUC-ROC curve was plotted for the Random Forest algorithm. The aim of our research is to predict delays so as to minimize losses and increase customer satisfaction.

