Evaluating the performance of convolutional neural networks with direct acyclic graph architectures in automatic segmentation of breast lesion in US images

2019
Vol 19 (1)
Author(s):
Marly Guimarães Fernandes Costa
João Paulo Mendes Campos
Gustavo de Aquino e Aquino
Wagner Coelho de Albuquerque Pereira
Cícero Ferreira Fernandes Costa Filho

Abstract
Background: Outlining lesion contours in ultrasound (US) breast images is an important step in breast cancer diagnosis. Malignant lesions infiltrate the surrounding tissue, producing irregular contours with spiculation and angulated margins, whereas benign lesions produce contours with a smooth outline and an elliptical shape. In breast imaging, the majority of the existing publications in the literature focus on using convolutional neural networks (CNNs) for segmentation and classification of lesions in mammographic images. In this study, our main objective is to assess the ability of CNNs to detect contour irregularities in breast lesions in US images.
Methods: We compare the performance of two CNNs with a Direct Acyclic Graph (DAG) architecture and one CNN with a series architecture for breast lesion segmentation in US images. DAG and series architectures are both feedforward networks. The difference is that a DAG architecture can have more than one path between the first layer and the last layer, whereas a series architecture has only a single path from the first layer to the last. The CNN architectures were evaluated on two datasets.
Results: With the more complex DAG architecture, the following mean values were obtained for the metrics used to evaluate the segmented contours: global accuracy 0.956; IoU 0.876; F-measure 68.77%; Dice coefficient 0.892.
Conclusion: The DAG architecture yields the best values for the metrics used to quantitatively evaluate the segmented contours against the gold-standard contours. The segmented contours obtained with this architecture also show more details and irregularities, like the gold-standard contours.
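The series-versus-DAG distinction drawn in the abstract can be sketched in a few lines. The toy dense layers and the additive merge below are illustrative assumptions, not the paper's actual segmentation networks; the point is only that a DAG network has more than one path from input to output, while a series network has exactly one.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """One toy dense layer with ReLU activation."""
    return np.maximum(w @ x, 0.0)

w1, w2, w3 = (rng.standard_normal((4, 4)) for _ in range(3))
x = rng.standard_normal(4)

# Series architecture: a single path from input to output.
series_out = layer(layer(layer(x, w1), w2), w3)

# DAG architecture: two paths from input to output. Here the first
# layer's output skips ahead and is merged (added) with the second
# layer's output before the final layer -- a merge node with two
# incoming paths, which a pure series network cannot express.
h1 = layer(x, w1)
h2 = layer(h1, w2)
dag_out = layer(h1 + h2, w3)
```

The merge operation (addition here; concatenation is also common) is what creates the second path through the graph.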

Dermatology
2021
pp. 1-8
Author(s):
Brigid Betz-Stablein
Brian D’Alessandro
Uyen Koh
Elsemieke Plasmeijer
Monika Janda
...

Background: The number of naevi on a person is the strongest risk factor for melanoma; however, naevus counting is highly variable due to the lack of a consistent methodology and the lack of inter-rater agreement. Machine learning has been shown to be a valuable tool for image classification in dermatology.
Objectives: To test whether automated, reproducible naevus counts are possible through the combination of convolutional neural networks (CNNs) and three-dimensional (3D) total body imaging.
Methods: Total body images from a study of naevi in the general population were used for the training (82 subjects, 57,742 lesions) and testing (10 subjects, 4,868 lesions) datasets for the development of a CNN. Lesions were labelled as naevi, or not (“non-naevi”), by a senior dermatologist as the gold standard. Performance of the CNN was assessed using sensitivity, specificity, and Cohen’s kappa, and evaluated at the lesion level and person level.
Results: Lesion-level analysis comparing the automated counts to the gold standard showed a sensitivity and specificity of 79% (76–83%) and 91% (90–92%), respectively, for lesions ≥2 mm, and 84% (75–91%) and 91% (88–94%) for lesions ≥5 mm. Cohen’s kappa was 0.56 (0.53–0.59), indicating moderate agreement for naevi ≥2 mm, and substantial agreement (0.72, 0.63–0.80) for naevi ≥5 mm. For the 10 individuals in the test set, person-level agreement was assessed as categories, with 70% agreement between the automated and gold-standard counts. Agreement was lower in subjects with numerous seborrhoeic keratoses.
Conclusion: Automated naevus counts with reasonable agreement to those of an expert clinician are possible through the combination of 3D total body photography and CNNs. Such an algorithm may provide a faster, reproducible alternative to traditional in-person total body naevus counts.
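The lesion-level agreement statistics reported above (sensitivity, specificity, Cohen's kappa) all derive from a binary naevus/non-naevus confusion matrix. A minimal sketch of the standard definitions (the counts below are illustrative, not the study's data):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a binary confusion matrix: observed agreement
    minus chance agreement, scaled by the maximum agreement beyond chance."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    # Chance agreement: product of marginal proportions for each label.
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_chance = p_yes + p_no
    return (p_observed - p_chance) / (1 - p_chance)

def sensitivity(tp, fn):
    """True-positive rate: fraction of gold-standard naevi detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of non-naevi correctly rejected."""
    return tn / (tn + fp)
```

Kappa discounts the agreement two raters would reach by chance alone, which is why a classifier with high raw accuracy can still score only "moderate" agreement on an imbalanced lesion mix.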


2021
Author(s):
Mira S. Davidson
Sabrina Yahiya
Jill Chmielewski
Aidan J. O’Donnell
Pratima Gurung
...

Abstract
Microscopic examination of blood smears remains the gold standard for diagnosis and laboratory studies of malaria. Inspection of smears is, however, a tedious manual process dependent on trained microscopists, with results varying in accuracy between individuals, given the heterogeneity of parasite cell form and disagreement on nomenclature. To address this, we sought to develop an automated image analysis method that improves the accuracy and standardisation of cytological smear inspection but retains the capacity for expert confirmation and archiving of images. Here we present a machine learning method that achieves red blood cell (RBC) detection, differentiation between infected and uninfected RBCs, and parasite life stage categorisation from raw, unprocessed heterogeneous images of thin blood films. The method uses a pre-trained Faster Region-based Convolutional Neural Network (Faster R-CNN) model for RBC detection that performs accurately, with an average precision of 0.99 at an intersection-over-union threshold of 0.5. A residual neural network (ResNet)-50 model applied to detect infection in segmented RBCs also performs accurately, with an area under the receiver operating characteristic curve of 0.98. Lastly, using a regression model, our method successfully recapitulates intra-erythrocytic developmental cycle (IDC) stages with accurate categorisation (ring, trophozoite, schizont), as well as differentiating asexual stages from gametocytes. To accelerate our method’s utility, we have developed a mobile-friendly web-based interface, PlasmoCount, capable of automated detection and staging of malaria parasites from uploaded heterogeneous input images of Giemsa-stained thin blood smears. Results obtained using either laboratory or phone-based images permit rapid navigation through and review of results for quality assurance. By standardising the assessment of parasite development from microscopic blood smears, PlasmoCount markedly improves user consistency and reproducibility and thereby presents a realistic route to automating the gold standard of field-based malaria diagnosis.
Significance Statement
Microscopy inspection of Giemsa-stained thin blood smears on glass slides has been used in the diagnosis of malaria and the monitoring of malaria cultures in laboratory settings for more than 100 years. Manual evaluation is, however, time-consuming, error-prone, and subjective, with no currently available tool that permits reliable automated counting and archiving of Giemsa-stained images. Here, we present a machine learning method for automated detection and staging of parasite-infected red cells from heterogeneous smears. Our method calculates parasitaemia and frequency data on the malaria parasite intraerythrocytic development cycle directly from raw images, standardising smear assessment and providing reproducible and archivable results. Developed into a web tool, PlasmoCount, this method provides improved standardisation of smear inspection for malaria research and, potentially, field diagnosis.
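The average-precision figure quoted above is evaluated at an intersection-over-union (IoU) threshold of 0.5, the usual criterion for matching a predicted detection box to a ground-truth one. A minimal sketch of IoU for two axis-aligned boxes (the coordinates below are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when the boxes
    # do not intersect.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0
```

Under this convention, a predicted RBC box counts as a true positive when its IoU with some unmatched ground-truth box is at least 0.5.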


Iproceedings
10.2196/35437
2021
Vol 6 (1)
pp. e35437
Author(s):
Raluca Jalaboi
Mauricio Orbes Arteaga
Dan Richter Jørgensen
Ionela Manole
Oana Ionescu Bozdog
...

Background: Convolutional neural networks (CNNs) are regarded as state-of-the-art artificial intelligence (AI) tools for dermatological diagnosis, and they have been shown to achieve expert-level performance when trained on a representative dataset. CNN explainability is a key factor in adopting such techniques in practice and can be achieved using attention maps of the network. However, evaluation of CNN explainability has been limited to visual assessment and remains qualitative, subjective, and time-consuming.
Objective: This study aimed to provide a framework for an objective, quantitative assessment of the explainability of CNNs for dermatological diagnosis benchmarks.
Methods: We sourced 566 images available under the Creative Commons license from two public datasets, DermNet NZ and SD-260, with reference diagnoses of acne, actinic keratosis, psoriasis, seborrheic dermatitis, viral warts, and vitiligo. Eight dermatologists with teledermatology expertise annotated each clinical image with a diagnosis, as well as diagnosis-supporting characteristics and their localization. A total of 16 supporting visual characteristics were selected, including basic terms such as macule, nodule, papule, patch, plaque, pustule, and scale, and additional terms such as closed comedo, cyst, dermatoglyphic disruption, leukotrichia, open comedo, scar, sun damage, telangiectasia, and thrombosed capillary. The resulting dataset consisted of 525 images with three rater annotations for each. Explainability of two fine-tuned CNN models, ResNet-50 and EfficientNet-B4, was analyzed with respect to the reference explanations provided by the dermatologists. Both models were pretrained on the ImageNet natural image recognition dataset and fine-tuned using 3214 images of the six target skin conditions obtained from an internal clinical dataset. CNN explanations were obtained as activation maps of the models through gradient-weighted class-activation maps. We computed the fuzzy sensitivity and specificity of each characteristic attention map with regard to both the fuzzy gold-standard characteristic attention fusion masks and the fuzzy union of all characteristics.
Results: On average, explainability of EfficientNet-B4 was higher than that of ResNet-50 in terms of sensitivity for 13 of 16 supporting characteristics, with mean values of 0.24 (SD 0.07) and 0.16 (SD 0.05), respectively. However, explainability was lower in terms of specificity, with mean values of 0.82 (SD 0.03) and 0.90 (SD 0.00) for EfficientNet-B4 and ResNet-50, respectively. All measures were within the range of the corresponding interrater metrics.
Conclusions: We objectively benchmarked the explainability of dermatological diagnosis models through the use of expert-defined supporting characteristics for diagnosis.
Acknowledgments: This work was supported in part by the Danish Innovation Fund under Grant 0153-00154A.
Conflict of Interest: None declared.
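Fuzzy sensitivity and specificity extend the usual binary definitions to soft masks in [0, 1], using the elementwise minimum as a fuzzy intersection. The sketch below uses one common formulation; the study's exact definition may differ, and the toy attention map and annotation mask are hypothetical.

```python
import numpy as np

def fuzzy_sensitivity(pred, gold):
    """Fuzzy true-positive rate between two soft masks in [0, 1]:
    fuzzy intersection (elementwise minimum) over the fuzzy
    cardinality of the gold mask."""
    return np.minimum(pred, gold).sum() / gold.sum()

def fuzzy_specificity(pred, gold):
    """Fuzzy true-negative rate: the same ratio computed on the
    complements of the two masks."""
    return np.minimum(1 - pred, 1 - gold).sum() / (1 - gold).sum()

# Toy example: a 2x2 Grad-CAM-style activation map vs. a fused
# dermatologist annotation mask (both hypothetical values).
cam = np.array([[0.9, 0.2], [0.1, 0.0]])
mask = np.array([[1.0, 0.5], [0.0, 0.0]])
```

Both measures reduce to the ordinary sensitivity and specificity when the masks are strictly binary.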


Sensors
2021
Vol 21 (12)
pp. 3999
Author(s):
Arthur Cartel Foahom Gouabou
Jean-Luc Damoiseaux
Jilliana Monnier
Rabah Iguernaissi
Abdellatif Moudafi
...

The early detection of melanoma is the most efficient way to reduce its mortality rate. Dermatologists achieve this task with the help of dermoscopy, a non-invasive tool allowing the visualization of the patterns of skin lesions. Computer-aided diagnosis (CAD) systems developed on dermoscopic images are needed to assist dermatologists. These systems rely mainly on multiclass classification approaches. However, the multiclass classification of skin lesions by an automated system remains a challenging task. Decomposing a multiclass problem into several binary problems can reduce the complexity of the initial problem and increase the overall performance. This paper proposes a CAD system to classify dermoscopic images into three diagnosis classes: melanoma, nevi, and seborrheic keratosis. We introduce a novel ensemble scheme of convolutional neural networks (CNNs), inspired by decomposition and ensemble methods, to improve the performance of the CAD system. Unlike conventional ensemble methods, we use a directed acyclic graph to aggregate binary CNNs for the melanoma detection task. On the ISIC 2018 public dataset, our method achieves the best balanced accuracy (76.6%) among multiclass CNNs, ensembles of multiclass CNNs with classical aggregation methods, and other related works. Our results reveal that the directed acyclic graph is a meaningful approach to developing a reliable and robust automated diagnosis system for the multiclass classification of dermoscopic images.
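Aggregating binary classifiers through a directed acyclic graph can be sketched as a decision DAG: each node runs one pairwise classifier and eliminates one candidate class until a single leaf remains. The routing below is illustrative, in the spirit of the ensemble described above but not the authors' exact scheme; the stand-in classifier and its feature dictionary are hypothetical.

```python
CLASSES = ("melanoma", "nevus", "seborrheic_keratosis")

def ddag_predict(x, binary_clf):
    """Route a sample through a decision DAG of one-vs-one classifiers.

    `binary_clf(a, b, x)` returns the winning class label between
    candidates `a` and `b`. Each internal node eliminates the losing
    candidate; the class that survives is the prediction.
    """
    remaining = list(CLASSES)
    while len(remaining) > 1:
        first, last = remaining[0], remaining[-1]
        winner = binary_clf(first, last, x)
        # Drop the loser and continue down the graph.
        remaining.remove(last if winner == first else first)
    return remaining[0]

# Stand-in binary classifier: picks whichever candidate label the
# (hypothetical) per-class score dictionary rates higher. In the
# real system each node would be a trained binary CNN.
def toy_clf(a, b, x):
    return a if x[a] >= x[b] else b
```

With three classes this DAG has two internal nodes per path, so every prediction costs exactly two binary evaluations regardless of which leaf it reaches.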

