Conventional Machine Learning and Deep Learning Approach for Multi-Classification of Breast Cancer Histopathology Images—a Comparative Insight

2020, Vol 33 (3), pp. 632-654
Author(s): Shallu Sharma, Rajesh Mehra
Sensors, 2020, Vol 20 (16), pp. 4373
Author(s): Zabit Hameed, Sofia Zahia, Begonya Garcia-Zapirain, José Javier Aguirre, Ana María Vanegas

Breast cancer is one of the major public health issues and is considered a leading cause of cancer-related deaths among women worldwide. Early diagnosis can significantly increase the chances of survival. To this end, biopsy is usually followed as the gold-standard approach, in which tissues are collected for microscopic analysis. However, the histopathological analysis of breast cancer is non-trivial, labor-intensive, and may lead to a high degree of disagreement among pathologists. Therefore, an automatic diagnostic system could assist pathologists in improving the effectiveness of the diagnostic process. This paper presents an ensemble deep learning approach for the binary classification of non-carcinoma and carcinoma breast cancer histopathology images using our collected dataset. We trained four different models based on pre-trained VGG16 and VGG19 architectures. Initially, we performed 5-fold cross-validation on each of the individual models, namely, fully-trained VGG16, fine-tuned VGG16, fully-trained VGG19, and fine-tuned VGG19. Then, we followed an ensemble strategy of averaging the predicted probabilities and found that the ensemble of fine-tuned VGG16 and fine-tuned VGG19 achieved competitive classification performance, especially on the carcinoma class. This ensemble offered a sensitivity of 97.73% for the carcinoma class, an overall accuracy of 95.29%, and an F1 score of 95.29%. These experimental results demonstrate that the proposed deep learning approach is effective for the automatic classification of complex histopathology images of breast cancer, particularly for carcinoma images.
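For readers who want to see how the probability-averaging ensemble described above can be wired up, the following is a minimal sketch in Keras, assuming two ImageNet-pretrained backbones whose last convolutional block is fine-tuned; the head layers, image size, and layer-freezing rule are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of the probability-averaging ensemble of fine-tuned VGG16/VGG19.
# Layer names, head layers, and image size are assumptions, not the paper's setup.
import numpy as np
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras import layers, models

def build_classifier(backbone):
    # Freeze early blocks; fine-tune only the last convolutional block ("block5...").
    for layer in backbone.layers:
        layer.trainable = layer.name.startswith("block5")
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dense(256, activation="relu")(x)
    out = layers.Dense(2, activation="softmax")(x)  # non-carcinoma vs carcinoma
    return models.Model(backbone.input, out)

vgg16 = build_classifier(VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3)))
vgg19 = build_classifier(VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3)))
# ... compile and train both models, e.g. with 5-fold cross-validation ...

def ensemble_predict(x_batch):
    # Average the softmax probabilities of the two fine-tuned models.
    p = (vgg16.predict(x_batch) + vgg19.predict(x_batch)) / 2.0
    return np.argmax(p, axis=1)
```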


2021, Vol 5 (1), pp. 34-42
Author(s): Refika Sultan Doğan, Bülent Yılmaz

Determination of polyp type requires tissue biopsy during colonoscopy followed by histopathological examination of the microscopic images, which is tremendously time-consuming and costly. The first aim of this study was to design a computer-aided diagnosis system to classify polyp types using colonoscopy images (optical biopsy) without the need for tissue biopsy. For this purpose, two different approaches were designed based on conventional machine learning (ML) and deep learning. First, classification was performed with a random forest approach using features obtained from the histogram of oriented gradients (HOG) descriptor. Second, a simple convolutional neural network (CNN)-based architecture was built and trained on colonoscopy images containing colon polyps. The performance of these approaches on two-category (adenoma & serrated vs. hyperplastic) and three-category (adenoma vs. hyperplastic vs. serrated) classification tasks was investigated. Furthermore, the effect of imaging modality on classification was examined using white-light and narrow-band imaging systems. The performance of these approaches was compared with the results obtained by 3 novice and 4 expert doctors. The two-category classification results showed that the conventional ML approach performed significantly better than the simple CNN-based approach in both narrow-band and white-light imaging modalities, with accuracy reaching almost 95% for white-light imaging. This performance surpassed the correct classification rate of all 7 doctors. Additionally, the results on the second task (three-category) indicated that the simple CNN architecture outperformed both the conventional ML-based approach and the doctors. This study shows the feasibility of using conventional machine learning or deep learning-based approaches for the automatic classification of polyp types on colonoscopy images.
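As an illustration of the conventional-ML branch (HOG features fed to a random forest), the sketch below shows one plausible implementation; the image size, HOG parameters, and number of trees are assumptions, not the settings used in the study.

```python
# Illustrative sketch: histogram-of-oriented-gradients features + random forest.
# Parameter values are assumptions for demonstration only.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def hog_features(images, size=(128, 128)):
    feats = []
    for img in images:
        small = resize(img, size, anti_aliasing=True)
        if small.ndim == 3:            # convert an RGB colonoscopy frame to grayscale
            small = small.mean(axis=2)
        feats.append(hog(small, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return np.asarray(feats)

# X: list of colonoscopy images, y: labels (e.g., 0 = hyperplastic, 1 = adenoma/serrated)
# scores = cross_val_score(RandomForestClassifier(n_estimators=200), hog_features(X), y, cv=5)
```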


2021, Vol 11 (16), pp. 7561
Author(s): Umair Iqbal, Johan Barthelemy, Wanqing Li, Pascal Perez

Blockage of culverts by transported debris material is reported as a salient contributor to urban flash flooding. Conventional hydraulic modeling approaches have had little success in addressing the problem, primarily because of the unavailability of peak-flood hydraulic data and the highly non-linear behavior of debris at the culvert. This article explores a new dimension of the issue by proposing the use of intelligent video analytics (IVA) algorithms to extract blockage-related information. The presented research aims to automate the manual visual blockage classification of culverts from a maintenance perspective by remotely applying deep learning models. The potential of existing convolutional neural network (CNN) algorithms (i.e., DarkNet53, DenseNet121, InceptionResNetV2, InceptionV3, MobileNet, ResNet50, VGG16, EfficientNetB3, NASNet) is investigated on a dataset from three different sources (i.e., images of culvert openings and blockage (ICOB), visual hydrology-lab dataset (VHD), synthetic images of culverts (SIC)) to predict the blockage in a given image. Models were evaluated based on their performance on the test dataset (i.e., accuracy, loss, precision, recall, F1 score, Jaccard index, receiver operating characteristic (ROC) curve), floating point operations (FLOPs), and response time to process a single test instance. Furthermore, the performance of the deep learning models was benchmarked against conventional machine learning algorithms (i.e., SVM, RF, XGBoost). In addition, the idea of classifying deep visual features extracted by CNN models (i.e., ResNet50, MobileNet) using conventional machine learning approaches was also implemented in this article. From the results, NASNet performed best in classifying the blockage images, with a 5-fold accuracy of 85%; however, MobileNet is recommended for hardware implementation because of its improved response time and a 5-fold accuracy comparable to NASNet (i.e., 78%). Performance comparable to the standard CNN models was achieved when deep visual features were classified using conventional machine learning approaches. False negative (FN) instances, false positive (FP) instances, and CNN layer activations suggested that background noise and an oversimplified labelling criterion were two factors contributing to the degraded performance of the existing CNN algorithms. Given that none of the existing models achieved high enough accuracy to completely automate the manual process, a framework for partial automation of the visual blockage classification process is proposed. In addition, a detection-classification pipeline with higher blockage classification accuracy (i.e., 94%) is proposed as a potential future direction for practical implementation.
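The "deep visual features + conventional classifier" idea mentioned above can be sketched roughly as follows, using a frozen MobileNet as the feature extractor and an SVM as the classifier; the input size and classifier settings are assumptions for illustration only.

```python
# Hedged sketch: deep features from a frozen MobileNet classified with an SVM.
# Image size, kernel, and cross-validation setup are assumptions.
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Global-average-pooled ImageNet features, no classification head.
extractor = MobileNet(weights="imagenet", include_top=False,
                      pooling="avg", input_shape=(224, 224, 3))

def deep_features(images):
    # images: float array of shape (N, 224, 224, 3) with pixel values in [0, 255]
    return extractor.predict(preprocess_input(images.copy()), verbose=0)

# X_img: culvert images, y: blockage labels (0 = clear, 1 = blocked)
# scores = cross_val_score(SVC(kernel="rbf"), deep_features(X_img), y, cv=5)
```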


Diagnostics, 2021, Vol 11 (3), pp. 528
Author(s): Said Boumaraf, Xiabi Liu, Yuchai Wan, Zhongshu Zheng, Chokri Ferkous, ...

Breast cancer is a serious threat to women. Many machine learning-based computer-aided diagnosis (CAD) methods have been proposed for the early diagnosis of breast cancer from histopathological images. Even though many such classification methods achieve high accuracy, most of them lack an explanation of the classification process. In this paper, we compare the performance of conventional machine learning (CML) against deep learning (DL)-based methods, and we provide a visual interpretation for the task of classifying breast cancer in histopathological images. For the CML-based methods, we extract a set of handcrafted features using three feature extractors and fuse them to obtain an image representation that serves as input to five classical classifiers. For the DL-based method, we adopt a transfer learning approach with the well-known VGG-19 architecture, whose version pre-trained on the large-scale ImageNet dataset is block-wise fine-tuned on the histopathological images. The proposed methods are evaluated on the publicly available BreaKHis dataset for the magnification-dependent classification of benign and malignant breast cancer and their eight sub-classes, and further validated on KIMIA Path960, a magnification-free histopathological dataset with 20 image classes. After presenting the classification results of the CML and DL methods, and to better explain the difference in classification performance, we visualize the learned features. For the DL-based method, we intuitively visualize the areas of interest of the best fine-tuned deep neural networks using attention maps to explain the decision-making process and improve the clinical interpretability of the proposed models. Such visual explanations can inherently improve the pathologist's trust in automated DL methods as a credible and trustworthy support tool for breast cancer diagnosis. The achieved results show that the DL methods outperform the CML approaches: the DL-based method reaches accuracies between 94.05% and 98.13% for the binary classification and between 76.77% and 88.95% for the eight-class classification, while the CML accuracies range from 85.65% to 89.32% for the binary classification and from 63.55% to 69.69% for the eight-class classification.
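A minimal sketch of block-wise fine-tuning on a pre-trained VGG-19, in the spirit of the DL-based method described above, might look like the following; the choice of blocks to unfreeze and the classification head are assumed for illustration and are not the paper's exact setup.

```python
# Minimal sketch of block-wise fine-tuning of ImageNet-pretrained VGG-19.
# Which blocks to unfreeze and the head layers are assumptions.
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models, optimizers

def build_blockwise_vgg19(num_classes, trainable_blocks=("block5",)):
    base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        # Only the listed convolutional blocks are updated; the rest keep ImageNet weights.
        layer.trainable = layer.name.startswith(trainable_blocks)
    x = layers.Flatten()(base.output)
    x = layers.Dense(512, activation="relu")(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer=optimizers.Adam(1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Binary (benign vs malignant) or eight-class sub-type model:
# model = build_blockwise_vgg19(num_classes=8, trainable_blocks=("block4", "block5"))
```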


2021
Author(s): Yue Wang, Ye Ni, Xutao Li, Yunming Ye

Wildfires are a serious disaster and often cause severe damage to forests and plants. Without early detection and suitable control action, a small wildfire can grow into a big and serious one. The problem is especially acute at night, as firefighters generally miss the chance to detect wildfires within the very first few hours. Low-light satellites, which take pictures at night, offer an opportunity to detect night fires in a timely manner. However, previous studies identify night fires based on threshold methods or conventional machine learning approaches, which are not sufficiently robust and accurate. In this paper, we develop a new deep learning approach that determines night-fire locations by pixel-level classification of low-light remote sensing images. Experimental results on VIIRS data demonstrate the superiority and effectiveness of the proposed method, which outperforms conventional threshold and machine learning approaches.
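The paper's exact network is not described in the abstract, but a per-pixel classifier for low-light imagery could, under assumptions about band count and filter sizes, be sketched as a small fully convolutional model like this:

```python
# Illustrative per-pixel night-fire classifier for multi-band low-light tiles.
# Band count, depth, and filter sizes are assumptions, not the paper's architecture.
from tensorflow.keras import layers, models

def build_pixel_classifier(bands=5):
    inp = layers.Input(shape=(None, None, bands))       # VIIRS-like multi-band tile
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel fire probability
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Training pairs: low-light image tiles and binary fire masks of the same spatial size.
```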


2021, Vol 12 (3), pp. 1550-1556
Author(s): Ravi Kumar Y B et al.

The current research work assesses similarity-based facial features of images with the proposed method in order to determine genealogical similarity. It is based on the principle of grouping features that lie closer than a predefined threshold, as opposed to those that lie farther away, for a better assessment of the extracted features. The developed system is trained using a deep learning-oriented architecture that incorporates these closer features for a binary classification of the subjects into genealogic and non-genealogic. The genealogic set of data is further used to calculate the percentage of similarity with the proposed method. The present work considered XX datasets from XXXX source for the assessment of facial similarities. The results showed an accuracy of 96.3% for genealogic data, the most salient among them being father-daughter (98.1%), father-son (98.3%), mother-daughter (96.6%), and mother-son (96.1%) genealogy for the "KinFaceW-I" dataset. Extending this work to the "KinFaceW-II" set of data, the results were promising, with father-daughter (98.5%), father-son (96.7%), mother-daughter (93.4%), and mother-son (98.9%) genealogy. Such an approach could be further extended to larger databases so as to assess genealogical similarity with the aid of machine-learning algorithms.
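A hedged sketch of the threshold-based feature-grouping step described above: embeddings of two face images are compared and thresholded into genealogic / non-genealogic. The similarity measure and threshold value are assumptions, since the abstract does not specify them.

```python
# Sketch of pairwise feature comparison with a predefined threshold.
# Cosine similarity and the 0.5 threshold are assumptions for illustration.
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face-feature vectors (e.g., deep embeddings).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_genealogic(feat_parent, feat_child, threshold=0.5):
    # Feature pairs closer than the predefined threshold are grouped as kin.
    return cosine_similarity(feat_parent, feat_child) >= threshold
```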

