BI-RADS Reading of Non-Mass Lesions on DCE-MRI and Differential Diagnosis Performed by Radiomics and Deep Learning

2021 ◽  
Vol 11 ◽  
Author(s):  
Jiejie Zhou ◽  
Yan-Lin Liu ◽  
Yang Zhang ◽  
Jeon-Hor Chen ◽  
Freddie J. Combs ◽  
...  

Background: A wide variety of benign and malignant processes can manifest as non-mass enhancement (NME) in breast MRI. Compared to mass lesions, there are no distinct features that can be used for differential diagnosis. The purpose is to use the BI-RADS descriptors and models developed using radiomics and deep learning to distinguish benign from malignant NME lesions.

Materials and Methods: A total of 150 patients with 104 malignant and 46 benign NME were analyzed. Three radiologists performed reading for morphological distribution and internal enhancement using the 5th BI-RADS lexicon. For each case, the 3D tumor mask was generated using Fuzzy-C-Means segmentation. Three DCE parametric maps related to wash-in, maximum, and wash-out were generated, and PyRadiomics was applied to extract features. The radiomics model was built using five machine learning algorithms. ResNet50 was implemented using the three parametric maps as input. Approximately 70% of earlier cases were used for training, and 30% of later cases were held out for testing.

Results: The diagnostic BI-RADS in the original MRI report showed that 104/104 malignant and 36/46 benign lesions had a BI-RADS score of 4A–5. For category reading, the kappa coefficient was 0.83 for morphological distribution (excellent) and 0.52 for internal enhancement (moderate). Segmental and regional distributions were the most prominent for the malignant group, and focal distribution for the benign group. Eight radiomics features were selected by support vector machine (SVM). Among the five machine learning algorithms, SVM yielded the highest accuracy, 80.4% in the training and 77.5% in the testing dataset. ResNet50 had a better diagnostic performance, 91.5% in the training and 83.3% in the testing dataset.

Conclusion: Diagnosis of NME was challenging, and the BI-RADS scores and descriptors showed a substantial overlap. Radiomics and deep learning may provide a useful CAD tool to aid in diagnosis.
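A minimal sketch of the kind of pipeline described above: PyRadiomics feature extraction from a parametric map and its tumor mask, followed by an SVM classifier. The file names, labels, and the default extractor settings are illustrative assumptions, not the authors' actual data or configuration.

```python
import numpy as np
from radiomics import featureextractor          # PyRadiomics
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

extractor = featureextractor.RadiomicsFeatureExtractor()  # default feature classes

def case_features(image_path, mask_path):
    """Run PyRadiomics on one parametric map / mask pair (e.g. NRRD or NIfTI files)."""
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values, drop the diagnostic metadata entries.
    return np.array([v for k, v in result.items()
                     if not k.startswith("diagnostics")], dtype=float)

# Hypothetical case list: (wash-in map, tumor mask, label) per patient.
cases = [("case01_washin.nrrd", "case01_mask.nrrd", 1),
         ("case02_washin.nrrd", "case02_mask.nrrd", 0)]

X = np.vstack([case_features(img, msk) for img, msk, _ in cases])
y = np.array([label for _, _, label in cases])

# Chronological-style split (earlier cases train, later cases test), as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```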

2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Hasan Alkahtani ◽  
Theyazn H. H. Aldhyani ◽  
Mohammed Al-Yaari

Telecommunication has registered strong and rapid growth in the past decade. Accordingly, monitoring computers and networks has become too complex for network administrators to manage manually. Hence, network security represents one of the most serious challenges faced by network security communities. Given that e-banking, e-commerce, and business data are shared over computer networks, these data may face the threat of intrusion. The purpose of this research is to propose a methodology that leads to a high level of sustainable protection against cyberattacks. In particular, an adaptive anomaly detection framework was developed using deep learning and machine learning algorithms to manage automatically configured application-level firewalls. Standard network datasets were used to evaluate the proposed model, which is designed to improve the cybersecurity system. Deep learning based on a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) and two machine learning algorithms, namely Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN), were implemented to classify Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks. The information gain method was applied to select the relevant features from the network dataset; these features significantly improved the classification algorithms. The system was used to classify DoS and DDoS attacks in four standard datasets, namely KDD Cup'99, NSL-KDD, ISCX, and ICI-ID2017. The empirical results indicate that deep learning based on the LSTM-RNN algorithm obtained the highest accuracy: the proposed system produced testing accuracy rates of 99.51% and 99.91% on the KDD Cup'99, NSL-KDD, ISCX, and ICI-ID2017 datasets. A comparative result analysis between the machine learning algorithms, namely SVM and K-NN, and the deep learning model based on LSTM-RNN is presented. Finally, it is concluded that the LSTM-RNN model is efficient and effective in improving the cybersecurity system for detecting anomaly-based attacks.
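A hedged sketch of the overall pipeline described above: information-gain-style (mutual information) feature selection followed by an LSTM classifier for attack detection. The synthetic feature matrix, the number of selected features, and the layer sizes are illustrative assumptions standing in for a real dataset such as NSL-KDD.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models

# X: (n_samples, n_features) numeric network-flow features; y: 0 = normal, 1 = attack.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 41))          # stand-in for preprocessed flow records
y = rng.integers(0, 2, size=1000)

# Information gain (mutual information) keeps the most relevant features.
selector = SelectKBest(mutual_info_classif, k=20)
X_sel = StandardScaler().fit_transform(selector.fit_transform(X, y))

# The LSTM expects 3-D input: treat each record as a length-1 sequence of features.
X_seq = X_sel.reshape((X_sel.shape[0], 1, X_sel.shape[1]))
X_tr, X_te, y_tr, y_te = train_test_split(X_seq, y, test_size=0.2, random_state=0)

model = models.Sequential([
    layers.Input(shape=(1, X_sel.shape[1])),
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # binary output: attack vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=5, batch_size=64, validation_data=(X_te, y_te))
```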


Author(s):  
Christian Knaak ◽  
Moritz Kröger ◽  
Frederic Schulze ◽  
Peter Abels ◽  
Arnold Gillner

An effective process monitoring strategy is a requirement for meeting the challenges posed by increasingly complex products and manufacturing processes. To address these needs, this study investigates a comprehensive scheme based on classical machine learning methods, deep learning algorithms, and feature extraction and selection techniques. In a first step, a novel deep learning architecture based on convolutional neural networks (CNN) and gated recurrent units (GRU) is introduced to predict the local weld quality based on mid-wave infrared (MWIR) and near-infrared (NIR) image data. The developed technology is used to discover critical welding defects, including lack of fusion (false friends), sagging and lack of penetration, and geometric deviations of the weld seam. Additional work is conducted to investigate the significance of various geometrical, statistical, and spatio-temporal features extracted from the keyhole and weld pool regions. Furthermore, the performance of the proposed deep learning architecture is compared to that of classical supervised machine learning algorithms, such as multi-layer perceptron (MLP), logistic regression (LogReg), support vector machines (SVM), decision trees (DT), random forest (RF), and k-Nearest Neighbors (kNN). Optimal hyperparameters for each algorithm are determined by an extensive grid search. Ultimately, the three best classification models are combined into an ensemble classifier that yields the highest detection rates and achieves the most robust estimation of welding defects among all classifiers studied, validated on previously unknown welding trials.
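A hedged sketch of a CNN + GRU architecture for frame sequences, in the spirit of the model described above: per-frame convolutional feature extraction followed by temporal aggregation with a GRU. The input shape (stacked MWIR and NIR channels), layer sizes, and defect classes are assumptions for illustration, not the authors' architecture.

```python
from tensorflow.keras import layers, models

frames, height, width, channels = 16, 64, 64, 2   # 2 channels: MWIR and NIR

model = models.Sequential([
    layers.Input(shape=(frames, height, width, channels)),
    # Spatial feature extraction applied to every frame independently.
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu", padding="same")),
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu", padding="same")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    # Temporal aggregation over the frame sequence.
    layers.GRU(64),
    # Hypothetical classes: sound weld, false friend, sagging/lack of penetration, geometric deviation.
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```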


Author(s):  
S. Kuikel ◽  
B. Upadhyay ◽  
D. Aryal ◽  
S. Bista ◽  
B. Awasthi ◽  
...  

Abstract. Individual Tree Crown (ITC) delineation from aerial imagery plays an important role in forestry management and precision farming. Several conventional as well as machine learning and deep learning algorithms have recently been used for ITC detection. In this paper, we present a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM) as the deep learning and machine learning algorithms, along with conventional classification methods such as Object Based Image Analysis (OBIA) and Nearest Neighborhood (NN) classification, for banana tree delineation. The comparison considered two cases. First, each classifier was compared after feeding the image with height information, to assess the effect of height on banana tree delineation. Second, the individual classifiers were compared quantitatively and qualitatively based on five metrics, i.e., Overall Accuracy, Recall, Precision, F-Score, and Intersection over Union (IoU), and the best classifier was determined; the metrics are computed as in the sketch below. The results show no significant differences in the metrics when height information was added, as the banana trees in the farm were of almost similar height. The quantitative and qualitative analysis showed that the CNN algorithm outperformed the SVM, OBIA, and NN techniques for crown delineation in terms of the performance measures.
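A minimal sketch of those pixel-wise comparison metrics (overall accuracy, precision, recall, F-score, IoU) computed from a predicted and a reference crown mask. The toy masks below are placeholders; in practice the inputs would be the classifier output and a manually delineated reference.

```python
import numpy as np

def delineation_metrics(pred, truth):
    """pred, truth: boolean arrays of the same shape (crown pixel = True)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)                  # intersection over union
    accuracy = (tp + tn) / (tp + tn + fp + fn) # overall accuracy
    return dict(accuracy=accuracy, precision=precision,
                recall=recall, f_score=f_score, iou=iou)

pred = np.zeros((10, 10), bool);  pred[2:8, 2:8] = True
truth = np.zeros((10, 10), bool); truth[3:9, 3:9] = True
print(delineation_metrics(pred, truth))
```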


2020 ◽  
Vol 12 (11) ◽  
pp. 1838 ◽  
Author(s):  
Zhao Zhang ◽  
Paulo Flores ◽  
C. Igathinathane ◽  
Dayakar L. Naik ◽  
Ravi Kiran ◽  
...  

The current mainstream approach of using manual measurements and visual inspections for crop lodging detection is inefficient, time-consuming, and subjective. An innovative method for wheat lodging detection that can overcome or alleviate these shortcomings would be welcomed. This study proposed a systematic approach for wheat lodging detection in research plots (372 experimental plots), which consisted of using unmanned aerial systems (UAS) for aerial imagery acquisition, manual field evaluation, and machine learning algorithms to detect the occurrence or not of lodging. UAS imagery was collected on three different dates (23 and 30 July 2019, and 8 August 2019) after lodging occurred. Traditional machine learning and deep learning were evaluated and compared in this study in terms of classification accuracy and standard deviation. For traditional machine learning, five types of features (i.e., gray-level co-occurrence matrix, local binary pattern, Gabor, intensity, and Hu moments) were extracted and fed into three traditional machine learning algorithms (i.e., random forest (RF), neural network, and support vector machine) for detecting lodged plots. For the datasets on each imagery collection date, the accuracies of the three algorithms were not significantly different from each other. For any of the three algorithms, accuracies on the first and last date datasets had the lowest and highest values, respectively. Incorporating standard deviation as a measure of performance robustness, RF was determined to be the most satisfactory. Regarding deep learning, three different convolutional neural networks (a simple convolutional neural network, VGG-16, and GoogLeNet) were tested. For any of the single-date datasets, GoogLeNet consistently had superior performance over the other two methods. Further comparisons between RF and GoogLeNet demonstrated that the detection accuracies of the two methods were not significantly different from each other (p > 0.05); hence, choosing either of the two would not affect the final detection accuracies. However, considering that the average accuracy of GoogLeNet (93%) was higher than that of RF (91%), it was recommended to use GoogLeNet for wheat lodging detection. This research demonstrated that UAS RGB imagery, coupled with the GoogLeNet machine learning algorithm, can be a novel, reliable, objective, simple, low-cost, and effective (accuracy > 90%) tool for wheat lodging detection.
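A hedged sketch of the traditional machine learning branch: hand-crafted texture features (GLCM statistics, a local binary pattern histogram, intensity statistics) extracted per plot image and fed to a random forest. The image loading, labels, and feature parameters are placeholders; Gabor and Hu-moment features are omitted for brevity.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def plot_features(gray):
    """gray: 2-D uint8 image of one experimental plot."""
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    intensity = [gray.mean(), gray.std()]
    return np.concatenate([glcm_feats, lbp_hist, intensity])

# Placeholder data: a list of plot images with lodged (1) / non-lodged (0) labels.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 2, 20)

X = np.vstack([plot_features(img) for img in images])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy:", rf.score(X, labels))
```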


2019 ◽  
Vol 27 (1) ◽  
pp. 13-21 ◽  
Author(s):  
Qiang Wei ◽  
Zongcheng Ji ◽  
Zhiheng Li ◽  
Jingcheng Du ◽  
Jingqi Wang ◽  
...  

Abstract
Objective: This article presents our approaches to extraction of medications and associated adverse drug events (ADEs) from clinical documents, which is the second track of the 2018 National NLP Clinical Challenges (n2c2) shared task.

Materials and Methods: The clinical corpus used in this study was from the MIMIC-III database, and the organizers annotated 303 documents for training and 202 for testing. Our system consists of 2 components: a named entity recognition (NER) component and a relation classification (RC) component. For each component, we implemented deep learning-based approaches (eg, BI-LSTM-CRF) and compared them with traditional machine learning approaches, namely, conditional random fields for NER and support vector machines for RC, respectively. In addition, we developed a deep learning-based joint model that recognizes ADEs and their relations to medications in 1 step using a sequence labeling approach. To further improve the performance, we also investigated different ensemble approaches to generating optimal performance by combining outputs from multiple approaches.

Results: Our best-performing systems achieved F1 scores of 93.45% for NER, 96.30% for RC, and 89.05% for end-to-end evaluation, which ranked #2, #1, and #1 among all participants, respectively. Additional evaluations show that the deep learning-based approaches did outperform traditional machine learning algorithms in both NER and RC. The joint model that simultaneously recognizes ADEs and their relations to medications also achieved the best performance on RC, indicating its promise for relation extraction.

Conclusion: In this study, we developed deep learning approaches for extracting medications and their attributes such as ADEs, and demonstrated their superior performance compared with traditional machine learning algorithms, indicating their usefulness in broader NER and RC tasks in the medical domain.
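A hedged, simplified sketch of the sequence-labeling idea behind the NER component: a bidirectional LSTM tagger over word indices predicting per-token BIO tags. The full system adds a CRF output layer and pretrained embeddings; the vocabulary size, tag count, dimensions, and toy batch below are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

vocab_size, n_tags, max_len = 5000, 7, 50   # e.g. BIO tags for Drug, ADE, Dosage + O

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, 100, mask_zero=True),     # 0 is the padding index
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(n_tags, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy batch: padded word-index sequences and per-token tag indices.
X = np.random.randint(1, vocab_size, size=(32, max_len))
y = np.random.randint(0, n_tags, size=(32, max_len))
model.fit(X, y, epochs=1, batch_size=8)
```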


2021 ◽  
Vol 11 (4) ◽  
pp. 286-290
Author(s):  
Md. Golam Kibria ◽  
◽  
Mehmet Sevkli

The increasing number of credit card defaulters has forced companies to think carefully before approving credit applications. Credit card companies usually use their judgment to determine whether a credit card should be issued to a customer satisfying certain criteria. Some machine learning algorithms have also been used to support the decision. The main objective of this paper is to build a deep learning model based on the UCI (University of California, Irvine) datasets, which can support the credit card approval decision. Secondly, the performance of the built model is compared with that of two traditional machine learning algorithms: logistic regression (LR) and support vector machine (SVM). Our results show that the overall performance of our deep learning model is slightly better than that of the other two models.
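A minimal sketch of such a comparison: a small fully connected network evaluated against logistic regression and an SVM on the same split. The synthetic feature matrix below is a placeholder for the preprocessed UCI credit data, and the layer sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 23))             # placeholder features (e.g. bill amounts, payment history)
y = rng.integers(0, 2, size=1000)           # placeholder labels: 1 = approve, 0 = reject

X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Traditional baselines.
print("LR :", LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te))
print("SVM:", SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te))

# Small deep learning model.
dl = models.Sequential([
    layers.Input(shape=(X.shape[1],)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
dl.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
dl.fit(X_tr, y_tr, epochs=20, batch_size=32, verbose=0)
print("DL :", dl.evaluate(X_te, y_te, verbose=0)[1])
```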


Author(s):  
Thomas P. Trappenberg

This chapter’s goal is to show how to apply machine learning algorithms in a general setting using some classic methods. In particular, it demonstrates how to apply three important machine learning algorithms, a support vector classifier (SVC), a random forest classifier (RFC), and a multilayer perceptron (MLP). While many of the methods studied later go beyond these now classic methods, this does not mean that these methods are obsolete. Also, the algorithms discussed here provide some form of baseline to discuss advanced methods like probabilistic reasoning and deep learning. The aim here is to demonstrate that applying machine learning methods based on machine learning libraries is not very difficult. It offers an opportunity to discuss evaluation techniques that are very important in practice.
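A minimal sketch in that spirit, assuming scikit-learn as the machine learning library and the bundled iris data as a stand-in for any tabular dataset: the three classic classifiers are applied and evaluated with cross-validation in a few lines.

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

classifiers = {
    "SVC": SVC(kernel="rbf", C=1.0),
    "RFC": RandomForestClassifier(n_estimators=100, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0),
}

# Cross-validation gives a more honest performance estimate than a single split.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```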


2020 ◽  
Author(s):  
Thomas R. Lane ◽  
Daniel H. Foil ◽  
Eni Minerali ◽  
Fabio Urbina ◽  
Kimberley M. Zorn ◽  
...  

Machine learning methods are attracting considerable attention from the pharmaceutical industry for use in drug discovery and applications beyond. In recent studies we have applied multiple machine learning algorithms and modeling metrics, and in some cases compared molecular descriptors, to build models for individual targets or properties on a relatively small scale. Several research groups have used large numbers of datasets from public databases such as ChEMBL in order to evaluate machine learning methods of interest to them. The largest of these types of studies used on the order of 1400 datasets. We have now extracted well over 5000 datasets from ChEMBL for use with the ECFP6 fingerprint and comparison of our proprietary software Assay Central™ with random forest, k-Nearest Neighbors, support vector classification, naïve Bayesian, AdaBoosted decision trees, and deep neural networks (3 levels). Model performance was assessed using an array of five-fold cross-validation metrics including area under the curve, F1 score, Cohen's kappa, and Matthews correlation coefficient. Based on ranked normalized scores for the metrics or datasets, all methods appeared comparable, while the distance from the top indicated that Assay Central™ and support vector classification were comparable. Unlike prior studies, which have placed considerable emphasis on deep neural networks (deep learning), no advantage was seen in this case where minimal tuning was performed on any of the methods. If anything, Assay Central™ may have been at a slight advantage, as the activity cutoff for each of the over 5000 datasets representing over 570,000 unique compounds was based on Assay Central™ performance, but support vector classification seems to be a strong competitor. We also apply Assay Central™ to prospective predictions for PXR and hERG to further validate these models. This work currently appears to be the largest comparison of machine learning algorithms to date. Future studies will likely evaluate additional databases, descriptors, and algorithms, as well as further refine methods for evaluating and comparing models.
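A hedged sketch of the general workflow: ECFP6 fingerprints (Morgan radius 3, via RDKit) for a set of molecules, a random forest baseline, and the cross-validation metrics named above. Assay Central itself is proprietary; the SMILES strings and activity labels here are illustrative placeholders, and the fold count is reduced only because the toy set is tiny.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import (roc_auc_score, f1_score,
                             cohen_kappa_score, matthews_corrcoef)

def ecfp6(smiles, n_bits=2048):
    """ECFP6 = Morgan fingerprint with radius 3, folded to n_bits."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=3, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC",
          "CC(C)Cc1ccc(cc1)C(C)C(=O)O", "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"]
labels = np.array([0, 1, 1, 0, 1, 0])        # placeholder actives/inactives

X = np.vstack([ecfp6(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Cross-validated predictions, then the comparison metrics.
pred = cross_val_predict(clf, X, labels, cv=3)
proba = cross_val_predict(clf, X, labels, cv=3, method="predict_proba")[:, 1]
print("AUC  :", roc_auc_score(labels, proba))
print("F1   :", f1_score(labels, pred))
print("kappa:", cohen_kappa_score(labels, pred))
print("MCC  :", matthews_corrcoef(labels, pred))
```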


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-15
Author(s):  
Zeynep Hilal Kilimci ◽  
Aykut Güven ◽  
Mitat Uysal ◽  
Selim Akyokus

Nowadays, smart devices as a part of daily life collect data about their users with the help of sensors placed on them. Sensor data are usually physical data, but mobile applications also collect non-physical data such as device usage habits and personal interests. Collected data are usually classified as personal, but they contain valuable information about their users when analyzed and interpreted. One of the main purposes of personal data analysis is to make predictions about users. Collected data can be divided into two major categories: physical and behavioral data. Behavioral data are also referred to as neurophysical data. Physical and neurophysical parameters are collected as a part of this study. Physical data contain measurements of the users such as heartbeat, sleep quality, energy, and movement/mobility parameters. Neurophysical data contain keystroke patterns such as typing speed and typing errors. Users' emotional/mood states are also investigated through daily questions: six emotion-related questions are asked each day and, depending on the answers, users' emotional states are graded. Our aim is to show that there is a connection between users' physical/neurophysical parameters and mood/emotional conditions. To prove our hypothesis, we collected and measured the physical and neurophysical parameters of 15 users for 1 year. The novelty of this work is the combined use of physical and neurophysical parameters. Another novelty is that the emotion classification task is performed by both conventional machine learning algorithms and deep learning models. For this purpose, Feedforward Neural Network (FFNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) neural networks are employed as deep learning methodologies. Multinomial Naïve Bayes (MNB), Support Vector Regression (SVR), Decision Tree (DT), Random Forest (RF), and Decision Integration Strategy (DIS) are evaluated as conventional machine learning algorithms. To the best of our knowledge, this is the very first attempt to analyze the neurophysical conditions of users by evaluating deep learning models for mood analysis and enriching physical characteristics with neurophysical parameters. Experiment results demonstrate that the utilization of deep learning methodologies and the combination of both physical and neurophysical parameters enhance the classification success of the system in interpreting the mood of the users. A wide range of comparative and extensive experiments shows that the proposed model exhibits noteworthy results compared to state-of-the-art studies.
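A hedged sketch of the deep learning branch: daily physical measurements (e.g. heart rate, sleep quality, energy, mobility) concatenated with neurophysical keystroke features (e.g. typing speed, error rate) form a per-user time series, which an LSTM classifies into mood grades. The window length, feature counts, number of mood classes, and toy data are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

days, n_physical, n_neurophysical, n_moods = 14, 4, 2, 3
n_features = n_physical + n_neurophysical

# Toy data: 100 two-week windows, each labeled with a mood grade.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, days, n_features))
y = rng.integers(0, n_moods, size=100)

model = models.Sequential([
    layers.Input(shape=(days, n_features)),
    layers.LSTM(64),                               # temporal model over the daily records
    layers.Dense(32, activation="relu"),
    layers.Dense(n_moods, activation="softmax"),   # one output per mood grade
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, validation_split=0.2)
```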


Nowadays, machine learning and deep learning algorithms are considered new technologies increasingly used in the biomedical field. Machine learning is a branch of Artificial Intelligence that aims to automatically find patterns in existing data. A new machine learning subfield, deep learning, has emerged; it deals with object recognition in images. In this paper, our goal is to analyze DNA microarrays with these algorithms in order to classify two types of genes: the first class represents cell-cycle-regulated genes and the second non-cell-cycle-regulated ones. In the current state of the art, researchers process the numerical data associated with gene evolution to achieve this classification. Here, we propose a new and different approach based on the treatment of the microarray images. To classify the images, we use three machine learning algorithms: Support Vector Machine, K-Nearest Neighbors, and Random Forest Classifier. We also use the Convolutional Neural Network and the fully connected neural network algorithms. Experiments demonstrate that our approaches outperform the state of the art by a margin of 14.73 percent using machine learning algorithms and a margin of 22.39 percent using deep learning models. Our models achieve a real-time test accuracy of ~92.39% at classifying with the CNN and 94.73% with the machine learning algorithms.
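A hedged sketch of the CNN branch described above: a small convolutional classifier that takes a microarray image crop and predicts whether the gene is cell-cycle regulated. The image size, network depth, and toy batch are illustrative placeholders, not the authors' architecture or data.

```python
import numpy as np
from tensorflow.keras import layers, models

img_size = 64
model = models.Sequential([
    layers.Input(shape=(img_size, img_size, 3)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # 1 = cell-cycle regulated, 0 = not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy batch standing in for cropped microarray images scaled to [0, 1].
rng = np.random.default_rng(0)
X = rng.random((32, img_size, img_size, 3)).astype("float32")
y = rng.integers(0, 2, size=32)
model.fit(X, y, epochs=2, batch_size=8)
```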

