Convolutional Neural Networks for the Segmentation of Microcalcification in Mammography Imaging

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Gabriele Valvano ◽  
Gianmarco Santini ◽  
Nicola Martini ◽  
Andrea Ripoli ◽  
Chiara Iacconi ◽  
...  

Clusters of microcalcifications can be an early sign of breast cancer. In this paper, we propose a novel approach based on convolutional neural networks for the detection and segmentation of microcalcification clusters. In this work, we used 283 mammograms to train and validate our model, obtaining an accuracy of 99.99% on microcalcification detection and a false positive rate of 0.005%. Our results show how deep learning could be an effective tool to support radiologists during mammogram examination.
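The reported figures combine two standard confusion-matrix quantities. As a minimal illustration (not the authors' code), accuracy and false positive rate for binary microcalcification labels can be computed as:

```python
import numpy as np

def detection_metrics(y_true, y_pred):
    """Accuracy and false positive rate for binary detection labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # true positives
    tn = np.sum(~y_true & ~y_pred)  # true negatives
    fp = np.sum(~y_true & y_pred)   # false positives
    fn = np.sum(y_true & ~y_pred)   # false negatives
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return accuracy, fpr

acc, fpr = detection_metrics([1, 0, 0, 1, 0], [1, 0, 1, 1, 0])
```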

2020 ◽  
Author(s):  
Pui Anantrasirichai ◽  
Juliet Biggs ◽  
Fabien Albino ◽  
David Bull

<p>Satellite interferometric synthetic aperture radar (InSAR) can be used to measure surface deformation for a variety of applications. Recent satellite missions, such as Sentinel-1, produce a large amount of data, meaning that visual inspection is impractical. Here we use deep learning, which has proved successful at object detection, to overcome this problem. Initially, we present the use of convolutional neural networks (CNNs) for detecting rapid deformation events, which we test on a global dataset of over 30,000 wrapped interferograms at 900 volcanoes. We compare two potential training datasets: data augmentation applied to archive examples and synthetic models. Both are able to detect true positives, but the data augmentation approach has a false positive rate of 0.205% and the synthetic approach has a false positive rate of 0.036%. We then present an enhanced technique for measuring slow, sustained deformation over a range of scales, from volcanic unrest to urban sources of deformation such as coalfields. By rewrapping cumulative time series, detection performance is improved when the deformation rate is slow, as more fringes are generated without altering the signal-to-noise ratio. We adapt the method to use persistent scatterer InSAR data, which is sparse in nature, by using spatial interpolation methods such as modified matrix completion. Finally, future perspectives for machine learning applications on InSAR data are discussed.</p>
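The rewrapping step described above can be sketched in a few lines: a cumulative (unwrapped) phase series is mapped back onto (−π, π], so a slow trend reappears as repeated fringes. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def rewrap(cumulative_phase):
    """Rewrap an unwrapped/cumulative phase series back to (-pi, pi]."""
    return np.angle(np.exp(1j * np.asarray(cumulative_phase)))

# A slow linear trend spanning several cycles produces repeated fringes
# once rewrapped, without changing the signal-to-noise ratio.
t = np.linspace(0, 10 * np.pi, 500)
wrapped = rewrap(t)
```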


Author(s):  
Mohammed Abdulla Salim Al Husaini ◽  
Mohamed Hadi Habaebi ◽  
Teddy Surya Gunawan ◽  
Md Rafiqul Islam ◽  
Elfatih A. A. Elsheikh ◽  
...  

Abstract Breast cancer is one of the most significant causes of death for women around the world. Breast thermography supported by deep convolutional neural networks is expected to contribute significantly to early detection and facilitate treatment at an early stage. The goal of this study is to investigate the behavior of different recent deep learning methods for identifying breast disorders. To evaluate our proposal, we built classifiers based on deep convolutional neural networks modelling Inception V3, Inception V4, and a modified version of the latter called Inception MV4. MV4 was introduced to maintain the computational cost across all layers by making the resultant number of features and the number of pixel positions equal. The DMR database was used to train these deep learning models in classifying thermal images of healthy and sick patients. Epochs of 3–30 were used in conjunction with learning rates of 1 × 10−3, 1 × 10−4 and 1 × 10−5, a minibatch size of 10, and different optimization methods. The training results showed that Inception V4 and MV4 with color images, a learning rate of 1 × 10−4, and the SGDM optimization method reached very high accuracy, verified through several experimental repetitions. With grayscale images, Inception V3 outperforms V4 and MV4 by a considerable accuracy margin for any optimization method. In fact, Inception V3 (grayscale) performance is almost comparable to Inception V4 and MV4 (color) performance, but only after 20–30 epochs. Inception MV4 achieved a 7% faster classification response time compared to V4. The MV4 model is found to contribute to saving energy and improving the fluidity of arithmetic operations on the graphics processor. The results also indicate that increasing the number of layers may not necessarily be useful in improving performance.
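The hyperparameter sweep described above (epochs 3–30, three learning rates, minibatch size 10, several optimizers) can be sketched as a plain grid; the optimizer list beyond SGDM is an assumption for illustration, since the abstract does not name the other methods:

```python
from itertools import product

# Ranges taken from the abstract; "adam"/"rmsprop" are assumed examples
# of the unnamed additional optimization methods.
epochs = range(3, 31)
learning_rates = [1e-3, 1e-4, 1e-5]
optimizers = ["sgdm", "adam", "rmsprop"]

grid = [
    {"epochs": e, "lr": lr, "optimizer": opt, "minibatch": 10}
    for e, lr, opt in product(epochs, learning_rates, optimizers)
]
```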


2020 ◽  
Author(s):  
Hryhorii Chereda ◽  
Annalen Bleckmann ◽  
Kerstin Menck ◽  
Júlia Perera-Bel ◽  
Philip Stegmaier ◽  
...  

Abstract Motivation: Contemporary deep learning approaches show cutting-edge performance in a variety of complex prediction tasks. Nonetheless, the application of deep learning in healthcare remains limited, since deep learning methods are often considered non-interpretable black-box models. Layer-wise Relevance Propagation (LRP) is a technique to explain the decisions of deep learning methods. It is widely used to interpret Convolutional Neural Networks (CNNs) applied to image data. Recently, CNNs have started to extend towards non-Euclidean domains like graphs. Molecular networks are commonly represented as graphs detailing interactions between molecules. Gene expression data can be assigned to the vertices of these graphs; in other words, gene expression data can be structured by utilizing molecular network information as prior knowledge. Graph-CNNs can be applied to structured gene expression data, for example, to predict metastatic events in breast cancer. Therefore, there is a need for explanations showing which part of a molecular network is relevant for predicting an event, e.g. distant metastasis in cancer, for each individual patient. Results: We extended the procedure of LRP to make it available for Graph-CNNs and tested its applicability on a large breast cancer dataset. We present Graph Layer-wise Relevance Propagation (GLRP) as a new method to explain the decisions made by Graph-CNNs. We demonstrate a sanity check of the developed GLRP on a hand-written digits dataset and then apply the method to gene expression data. We show that GLRP provides patient-specific molecular subnetworks that largely agree with clinical knowledge and identify common as well as novel, and potentially druggable, drivers of tumor progression. As a result, this method could be highly useful for interpreting classification results at the individual patient level, as for example in precision medicine approaches or a molecular tumor board. Availability: https://gitlab.gwdg.de/UKEBpublic/graph-lrp, https://frankkramer-lab.github.io/MetaRelSubNetVis/. Contact: [email protected]
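The LRP idea that GLRP extends can be illustrated for a single dense layer: output relevance is redistributed to inputs in proportion to their contributions (epsilon-stabilized z-rule), so total relevance is approximately conserved. A minimal numpy sketch, not the GLRP code:

```python
import numpy as np

def lrp_dense(x, W, relevance_out, eps=1e-9):
    """Epsilon-rule LRP for one dense layer: redistribute output relevance
    to inputs in proportion to their contributions z_jk = x_j * W_jk."""
    z = x[:, None] * W                                # shape (in, out)
    col = z.sum(axis=0)
    denom = col + eps * np.sign(col)                  # stabilized column sums
    return (z / denom) @ relevance_out                # relevance per input

x = np.array([1.0, 2.0, 0.5])
W = np.array([[0.2, -0.1], [0.4, 0.3], [-0.2, 0.5]])
R_out = np.array([1.0, 1.0])
R_in = lrp_dense(x, W, R_out)
```

Relevance conservation (the sum over inputs matching the sum over outputs, up to the stabilizer) is the usual sanity check for an LRP implementation.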


2021 ◽  
Author(s):  
Ying-Shi Sun ◽  
Yu-Hong Qu ◽  
Dong Wang ◽  
Yi Li ◽  
Lin Ye ◽  
...  

Abstract Background: Computer-aided diagnosis using deep learning algorithms has been initially applied in the field of mammography, but there is no large-scale clinical application. Methods: This study aimed to develop and verify an artificial intelligence model based on mammography. Firstly, retrospectively collected mammograms from six centers were randomized into a training dataset and a validation dataset for establishing the model. Secondly, the model was tested by comparing 12 radiologists' performance with and without it. Finally, multicenter mammograms were prospectively diagnosed by radiologists with the model. The detection and diagnostic capabilities were evaluated using the free-response receiver operating characteristic (FROC) curve and the ROC curve. Results: The sensitivity of the model for detecting lesions after matching was 0.908 at a false positive rate of 0.25 in unilateral images. The area under the ROC curve (AUC) for distinguishing benign from malignant lesions was 0.855 (95% CI: 0.830, 0.880). The performance of the 12 radiologists with the model was higher than that of the radiologists alone (AUC: 0.852 vs. 0.808, P = 0.005). The mean reading time with the model was shorter than that of reading alone (62.28 s vs. 80.18 s, P = 0.03). In prospective application, the sensitivity of detection reached 0.887 at a false positive rate of 0.25; the AUC of radiologists with the model was 0.983 (95% CI: 0.978, 0.988), with sensitivity, specificity, PPV, and NPV of 94.36%, 98.07%, 87.76%, and 99.09%, respectively. Conclusions: The artificial intelligence model exhibits high accuracy in detecting and diagnosing breast lesions, improves diagnostic accuracy, and saves time. Trial registration: NCT03708978. Registered 17 April 2018, https://register.clinicaltrials.gov/prs/app/NCT03708978
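The AUC values quoted above have a simple rank-based definition: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counted as one half. A minimal illustration, not the study's evaluation code:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: fraction of (positive, negative)
    pairs where the positive outscores the negative; ties count 0.5."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

a = auc([0.9, 0.8, 0.7], [0.6, 0.8, 0.1])
```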


Author(s):  
Zi Yang ◽  
Mingli Chen ◽  
Mahdieh Kazemimoghadam ◽  
Lin Ma ◽  
Strahinja Stojadinovic ◽  
...  

Abstract Stereotactic radiosurgery (SRS) is now the standard of care for brain metastases (BMs) patients. The SRS treatment planning process requires precise target delineation, which, in the clinical workflow for patients with multiple (>4) BMs (mBMs), can become a pronounced time bottleneck. Our group has developed an automated BMs segmentation platform to assist in this process. The accuracy of the auto-segmentation, however, is influenced by the presence of false-positive segmentations, mainly caused by the injected contrast during MRI acquisition. To address this problem and further improve segmentation performance, a deep-learning and radiomics ensemble classifier was developed to reduce the false-positive rate in segmentations. The proposed model consists of a Siamese network and a radiomics-based support vector machine (SVM) classifier. The 2D-based Siamese network contains a pair of parallel feature extractors with shared weights followed by a single classifier. This architecture is designed to identify the inter-class difference. The SVM model, on the other hand, takes the radiomic features extracted from 3D segmentation volumes as input for binary classification: either a false-positive segmentation or a true BM. Lastly, the outputs from both models form an ensemble to generate the final label. The performance of the proposed model on the segmented mBMs testing dataset reached an accuracy (ACC), sensitivity (SEN), specificity (SPE), and area under the curve (AUC) of 0.91, 0.96, 0.90, and 0.93, respectively. After integrating the proposed model into the original segmentation platform, the average segmentation false negative rate (FNR) and false positive over the union (FPoU) were 0.13 and 0.09, respectively, which largely preserved the initial FNR (0.07) and significantly improved the FPoU (0.55). The proposed method effectively reduced the false-positive rate in the raw BMs segmentations, indicating that integrating the proposed ensemble classifier into the BMs segmentation platform provides a beneficial tool for mBMs SRS management.
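The FNR and FPoU figures above can be computed from binary masks; the exact FPoU definition is not spelled out in the abstract, so the sketch below assumes false-positive voxels divided by the union of prediction and ground truth:

```python
import numpy as np

def segmentation_rates(gt, pred):
    """False negative rate and false-positive-over-union for binary masks.
    FPoU is assumed here to mean FP voxels / |gt OR pred| (hedged definition)."""
    gt = np.asarray(gt, dtype=bool)
    pred = np.asarray(pred, dtype=bool)
    fn = np.sum(gt & ~pred)        # missed ground-truth voxels
    fp = np.sum(~gt & pred)        # spurious predicted voxels
    union = np.sum(gt | pred)
    fnr = fn / gt.sum() if gt.sum() else 0.0
    fpou = fp / union if union else 0.0
    return fnr, fpou

gt = np.array([[1, 1, 0], [0, 1, 0]])
pred = np.array([[1, 0, 1], [0, 1, 0]])
fnr, fpou = segmentation_rates(gt, pred)
```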


2021 ◽  
Vol 11 (23) ◽  
pp. 11398
Author(s):  
Salvador Castro-Tapia ◽  
Celina Lizeth Castañeda-Miranda ◽  
Carlos Alberto Olvera-Olvera ◽  
Héctor A. Guerrero-Osuna ◽  
José Manuel Ortiz-Rodriguez ◽  
...  

Breast cancer is one of the diseases of most profound concern and the most prevalent worldwide, against which early detection and diagnosis, achieved through imaging techniques such as mammography, play the leading role. Radiologists tend to have a high false positive rate in mammography diagnoses and an accuracy of around 82%. Currently, deep learning (DL) techniques have shown promising results in the early detection of breast cancer through computer-aided diagnosis (CAD) systems implementing convolutional neural networks (CNNs). This work focuses on applying, evaluating, and comparing the architectures AlexNet, GoogLeNet, ResNet50, and VGG19 to classify breast lesions, after using transfer learning with fine-tuning and training the CNNs with regions extracted from the MIAS and INbreast databases. We analyzed 14 classifiers involving 4 classes, as several studies have done before, corresponding to benign and malignant microcalcifications and masses; as our main contribution, we also added a 5th class for the normal tissue of the mammary parenchyma, increasing correct detection. We evaluated the architectures with a statistical analysis based on the receiver operating characteristic (ROC), the area under the curve (AUC), F1 score, accuracy, precision, sensitivity, and specificity. We obtained the best results with the CNN GoogLeNet trained with five classes on a balanced database, with an AUC of 99.29%, F1 score of 91.92%, accuracy of 91.92%, precision of 92.15%, sensitivity of 91.70%, and specificity of 97.66%, concluding that GoogLeNet is optimal as a classifier in a CAD system to deal with breast cancer.
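The per-class metrics reported above (precision, sensitivity, F1) follow directly from a multi-class confusion matrix. A minimal sketch with a hypothetical 3-class example, not the paper's evaluation pipeline:

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall (sensitivity) and F1 per class from a confusion
    matrix cm[i, j] = count of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # column sums: all predicted as class j
    recall = tp / cm.sum(axis=1)      # row sums: all true class i
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy 3-class example (e.g. benign, malignant, normal tissue).
cm = np.array([[8, 1, 1],
               [2, 7, 1],
               [0, 1, 9]])
prec, rec, f1 = per_class_metrics(cm)
```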


Trudy NAMI ◽  
2021 ◽  
pp. 37-47
Author(s):  
P. A. Vasin ◽  
I. A. Kulikov

Introduction (problem statement and relevance). This article deals with the problem of training artificial neural networks intended to analyze images of the surrounding space in automotive computer vision systems. The conventional training approach implies using loss functions that only improve the overall identification quality, making no distinction between the types of possible false predictions. However, the traffic safety risks associated with different types of prediction errors are unequal, being higher for false positive estimations. The purpose of this work is to propose improved loss functions, which include penalties for false positive predictions, and to study how using these functions affects the behavior of a convolutional neural network when estimating the drivable space. Methodology and research methods. The proposed loss functions are based on the Sørensen-Dice coefficient, differing from each other in the approaches to penalizing false positive errors. The performance of the trained neural networks is evaluated using three metrics, namely, the Jaccard coefficient, False Positive Rate, and False Negative Rate. The proposed solutions are compared with the conventional one by calculating the ratios of their respective metrics. Scientific novelty and results. Improved loss functions featuring penalties for false positive estimations have been proposed for training computer vision algorithms. An experimental study of the trained neural networks on a test dataset has shown that the improved loss functions reduce the False Positive Rate by 21%. The practical significance of this work lies in the proposed method of training neural networks, which increases the safety of automated driving through improved accuracy of analyzing the surrounding space using computer vision systems.
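One common way to add a false-positive penalty to a Sørensen-Dice loss is an asymmetric (Tversky-like) weighting of false positives versus false negatives; the weights below are illustrative assumptions, not the paper's exact loss functions:

```python
import numpy as np

def fp_weighted_dice_loss(gt, pred, fp_weight=0.7, fn_weight=0.3, eps=1e-7):
    """Dice-style loss with an asymmetric penalty on false positives
    (Tversky-like; the specific weights are illustrative assumptions)."""
    gt = np.asarray(gt, dtype=float).ravel()
    pred = np.asarray(pred, dtype=float).ravel()
    tp = np.sum(gt * pred)
    fp = np.sum((1 - gt) * pred)          # penalized more heavily
    fn = np.sum(gt * (1 - pred))
    return 1.0 - (tp + eps) / (tp + fp_weight * fp + fn_weight * fn + eps)

perfect = fp_weighted_dice_loss([0, 1, 1], [0, 1, 1])
```

With fp_weight > fn_weight, one false-positive pixel raises the loss more than one false-negative pixel, which is the asymmetry the article argues safety requires.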


2011 ◽  
Vol 45 (1) ◽  
pp. 85-86
Author(s):  
Laura Evangelista ◽  
Zora Baretta ◽  
Lorenzo Vinante ◽  
Guido Sotti
