Deep Convolutional Neural Networks Enable Discrimination of Heterogeneous Digital Pathology Images

2017 ◽  
Author(s):  
Pegah Khosravi ◽  
Ehsan Kazemi ◽  
Marcin Imielinski ◽  
Olivier Elemento ◽  
Iman Hajirasouliha

Pathological evaluation of tumor tissue is pivotal for diagnosis in cancer patients, and automated image analysis approaches have great potential to increase the precision of diagnosis and help reduce human error. In this study, we utilize various computational methods based on convolutional neural networks (CNNs) and build a stand-alone pipeline to effectively classify different histopathology images across different types of cancer. In particular, we demonstrate the utility of our pipeline to discriminate between two subtypes of lung cancer, four biomarkers of bladder cancer, and five biomarkers of breast cancer. In addition, we apply our pipeline to discriminate among four immunohistochemistry (IHC) staining scores of bladder and breast cancers. Our classification pipeline utilizes a basic CNN architecture, Google's Inception with three training strategies, and an ensemble of two state-of-the-art algorithms, Inception and ResNet. These strategies include training the last layer of Google's Inception, training the network from scratch, and fine-tuning the parameters for our data using two pre-trained versions of Google's Inception architecture, Inception-V1 and Inception-V3. We demonstrate the power of deep learning approaches for identifying cancer subtypes, and the robustness of Google's Inception even in the presence of extensive tumor heterogeneity. On average, our pipeline achieved accuracies of 100%, 92%, 95%, and 69% for discrimination of various cancer types, subtypes, biomarkers, and scores, respectively. Our pipeline and related documentation are freely available at https://github.com/ih-lab/CNN_Smoothie
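As an illustration of the "train only the last layer" strategy described above, the following is a minimal Keras/TensorFlow sketch; the class count and hyperparameters are assumptions for illustration, not the authors' CNN_Smoothie implementation, which is available at the repository linked above.

```python
# Minimal sketch of last-layer training on a pre-trained Inception-V3;
# directory names, class count, and hyperparameters are illustrative.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False  # freeze all convolutional layers

num_classes = 4  # e.g., four IHC staining scores (assumed)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# For the fine-tuning strategy instead, set base.trainable = True and
# recompile with a small learning rate before calling model.fit().
```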

2019 ◽  
Vol 277 ◽  
pp. 02024 ◽  
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in cardiovascular ultrasound images. First, a Fully Convolutional Network (FCN) named U-Net is used to segment the original Intravascular Optical Coherence Tomography (IVOCT) cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background in the original images. Second, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes anchors at six scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis, then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the Recall rate, Precision rate, IoU (Intersection-over-Union) rate, and Total score are 0.94, 0.885, 0.913, and 0.913, respectively, higher than those of the first-place team of the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
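The six-scale anchor configuration can be sketched with torchvision's Faster R-CNN API. Everything below (backbone choice, class count, image size) is an assumption for illustration, not the authors' modified network.

```python
# Sketch of a Faster R-CNN with six anchor scales (12^2 ... 256^2),
# using torchvision as a stand-in for the authors' modified detector.
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280  # required by FasterRCNN

# One tuple of sizes per feature map; a single feature map here.
anchor_generator = AnchorGenerator(
    sizes=((12, 16, 32, 64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=2,  # plaque vs. background (assumed)
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])  # dummy IVOCT frame
```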


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Kai Kiwitz ◽  
Christian Schiffer ◽  
Hannah Spitzer ◽  
Timo Dickscheid ◽  
Katrin Amunts

Abstract: The distribution of neurons in the cortex (cytoarchitecture) differs between cortical areas and constitutes the basis for structural maps of the human brain. Deep learning approaches provide a promising alternative to overcome the throughput limitations of currently used cytoarchitectonic mapping methods, but typically offer little insight into the extent to which they follow cytoarchitectonic principles. We therefore investigated to what extent the internal structure of deep convolutional neural networks trained for cytoarchitectonic brain mapping reflects traditional cytoarchitectonic features, and compared them to features of the current grey level index (GLI) profile approach. The networks consisted of a 10-block deep convolutional architecture trained to segment the primary and secondary visual cortex. Filter activations of the networks served to analyse resemblances to traditional cytoarchitectonic features and to compare with the GLI profile approach. Our analysis revealed resemblances to cellular, laminar, and cortical-area-related cytoarchitectonic features. The networks learned filter activations that reflect the distinct cytoarchitecture of the segmented cortical areas, with special regard to their laminar organization, and compared well to statistical criteria of the GLI profile approach. These results confirm that relevant cytoarchitectonic features are incorporated in the deep convolutional neural networks and mark them as a valid support for high-throughput cytoarchitectonic mapping workflows.
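The filter-activation analysis described above can be approximated generically with PyTorch forward hooks; the small network below is a stand-in, since the authors' 10-block segmentation architecture is not reproduced here.

```python
# Sketch of inspecting per-filter activations with forward hooks,
# the kind of analysis described above; the network is a toy stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        module.register_forward_hook(save_activation(name))

with torch.no_grad():
    model(torch.rand(1, 1, 128, 128))  # e.g., a cortical image patch

for name, act in activations.items():
    print(name, tuple(act.shape))  # per-filter activation maps
```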


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 256
Author(s):  
Francesco Ponzio ◽  
Gianvito Urgese ◽  
Elisa Ficarra ◽  
Santa Di Cataldo

Thanks to their capability to learn generalizable descriptors directly from images, deep Convolutional Neural Networks (CNNs) seem the ideal solution to most pattern recognition problems. On the other hand, to learn the image representation, CNNs need huge sets of annotated samples, which are infeasible to obtain in many everyday scenarios. This is the case, for example, of Computer-Aided Diagnosis (CAD) systems for digital pathology, where additional challenges are posed by the high variability of cancerous tissue characteristics. In our experiments, state-of-the-art CNNs trained from scratch on histological images were less accurate and less robust to variability than a traditional machine learning framework, highlighting all the issues of fully training deep networks with limited data from real patients. To solve this problem, we designed and compared three transfer learning frameworks, leveraging CNNs pre-trained on non-medical images. This approach obtained very high accuracy while requiring far fewer computational resources for training. Our findings demonstrate that transfer learning is a solution to the automated classification of histological samples and addresses the problem of designing accurate and computationally efficient CAD systems with limited training data.
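One common transfer-learning framework of the kind compared here uses a CNN pre-trained on non-medical (ImageNet) images as a frozen feature extractor, with a lightweight classifier on top. The sketch below illustrates that pattern with random placeholder tensors standing in for real histological samples; the backbone and classifier choices are assumptions.

```python
# Sketch of transfer learning via a frozen ImageNet feature extractor
# plus a lightweight classifier; data here are random placeholders.
import torch
import torchvision
from sklearn.linear_model import LogisticRegression

cnn = torchvision.models.resnet50(weights="DEFAULT")
cnn.fc = torch.nn.Identity()  # strip the ImageNet classification head
cnn.eval()

def extract_features(images):
    # images: float tensor of shape (N, 3, 224, 224), normalized
    with torch.no_grad():
        return cnn(images).numpy()

# Hypothetical pre-loaded tensors/labels for a small histology set.
train_x = torch.rand(64, 3, 224, 224)
train_y = torch.randint(0, 2, (64,))
clf = LogisticRegression(max_iter=1000)
clf.fit(extract_features(train_x), train_y.numpy())
```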


2019 ◽  
Vol 8 (6) ◽  
pp. 258 ◽  
Author(s):  
Yu Feng ◽  
Frank Thiemann ◽  
Monika Sester

Cartographic generalization is a problem that poses interesting challenges to automation. Whereas plenty of algorithms have been developed for the different sub-problems of generalization (e.g., simplification, displacement, aggregation), there are still cases that are not generalized adequately or in a satisfactory way. The main problem is the interplay between different operators. In those cases the human operator is the benchmark, able to design an aesthetic and correct representation of the physical reality. Deep learning methods have shown tremendous success on interpretation problems where algorithmic methods have deficits. A prominent example is the classification and interpretation of images, where deep learning approaches outperform traditional computer vision methods. In both domains, computer vision and cartography, humans are able to produce good solutions. A prerequisite for the application of deep learning is the availability of many representative training examples for the situation to be learned. As this is given in cartography (there are many existing map series), the idea in this paper is to employ deep convolutional neural networks (DCNNs) for cartographic generalization tasks, especially the task of building generalization. Three network architectures, namely U-net, residual U-net, and generative adversarial network (GAN), are evaluated both quantitatively and qualitatively, and compared based on their performance at target map scales of 1:10,000, 1:15,000, and 1:25,000. The results indicate that deep learning models can successfully learn cartographic generalization operations implicitly within a single model. The residual U-net outperforms the others and achieved the best generalization performance.
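As a sketch of what distinguishes the residual U-net from the plain U-net, the block below adds an identity (or 1×1 projection) shortcut around a double convolution; layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the residual building block used in a residual U-Net:
# a double convolution with a shortcut connection around it.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch))
        # 1x1 projection when channel counts differ, identity otherwise.
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1))

    def forward(self, x):
        return torch.relu(self.conv(x) + self.skip(x))

# In a residual U-Net, such blocks replace the plain double-conv
# blocks on both the contracting and expanding paths.
block = ResidualBlock(1, 64)
out = block(torch.rand(1, 1, 256, 256))  # e.g., a rasterized map tile
```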


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Rahib H. Abiyev ◽  
Mohammad Khaleel Sallam Ma’aitah

Chest diseases are serious health problems; they include chronic obstructive pulmonary disease, pneumonia, asthma, tuberculosis, and other lung diseases, and their timely diagnosis is very important. Many methods have been developed for this purpose. In this paper, we demonstrate the feasibility of classifying chest pathologies in chest X-rays using conventional and deep learning approaches. Convolutional neural networks (CNNs) are presented for the diagnosis of chest diseases, and the architecture of the CNN and its design principles are described. For comparison, backpropagation neural networks (BPNNs) with supervised learning and competitive neural networks (CpNNs) with unsupervised learning are also constructed for diagnosing chest diseases. All the considered networks (CNN, BPNN, and CpNN) are trained and tested on the same chest X-ray database, and the performance of each network is discussed. Comparative results between the networks in terms of accuracy, error rate, and training time are presented.
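A minimal CNN of the kind compared against the BPNN and CpNN might look as follows in Keras; the depth, filter counts, input size, and class list are assumptions, not the paper's architecture.

```python
# Minimal sketch of a CNN chest X-ray classifier; all sizes assumed.
import tensorflow as tf

num_classes = 5  # e.g., COPD, pneumonia, asthma, TB, other (assumed)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 1)),  # grayscale X-ray
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```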


2017 ◽  
Vol 63 (12) ◽  
pp. 1847-1855 ◽  
Author(s):  
Thomas J S Durant ◽  
Eben M Olson ◽  
Wade L Schulz ◽  
Richard Torres

Abstract

BACKGROUND: Morphologic profiling of the erythrocyte population is a widely used and clinically valuable diagnostic modality, but one that relies on a slow manual process associated with significant labor cost and limited reproducibility. Automated profiling of erythrocytes from digital images by capable machine learning approaches would augment the throughput and value of morphologic analysis. To this end, we sought to evaluate the performance of leading implementation strategies for convolutional neural networks (CNNs) when applied to the classification of erythrocytes based on morphology.

METHODS: Erythrocytes were manually classified into 1 of 10 classes using a custom-developed Web application. Using recent literature to guide architectural considerations for neural network design, we implemented a "very deep" CNN, consisting of >150 layers, with dense shortcut connections.

RESULTS: The final database comprised 3737 labeled cells. Ensemble model predictions on unseen data demonstrated harmonic means of recall and precision of 92.70% and 89.39%, respectively. Of the 748 cells in the test set, 23 misclassification errors were made, with a correct classification frequency of 90.60%, represented as a harmonic mean across the 10 morphologic classes.

CONCLUSIONS: These findings indicate that erythrocyte morphology profiles could be measured with a high degree of accuracy with "very deep" CNNs. Further, these data support future efforts to expand classes and optimize practical performance in a clinical environment as a prelude to full implementation as a clinical tool.
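The "dense shortcut connections" mentioned in METHODS suggest a DenseNet-style block, in which each layer receives the concatenation of all preceding feature maps. The sketch below illustrates that pattern under assumed sizes; it is not the authors' >150-layer network.

```python
# Sketch of dense shortcut connections (DenseNet-style): every layer
# takes the concatenation of all earlier outputs. Sizes are assumed.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1))
            for i in range(n_layers))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)  # in_ch + n_layers * growth

block = DenseBlock(in_ch=16, growth=12, n_layers=4)
out = block(torch.rand(1, 16, 64, 64))  # e.g., an erythrocyte crop
```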


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
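A two-input classifier of the kind proposed here can be sketched with the Keras functional API: image features and the scalar degradation parameter are concatenated before the classification head. All shapes, layer sizes, and input names below are assumptions.

```python
# Sketch of a two-input classifier: degraded image + degradation
# parameter (e.g., a noise level) fused before the final layers.
import tensorflow as tf

image_in = tf.keras.Input(shape=(32, 32, 3), name="degraded_image")
param_in = tf.keras.Input(shape=(1,), name="degradation_parameter")

x = tf.keras.layers.Conv2D(32, 3, activation="relu")(image_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

# Concatenate image features with the degradation parameter.
h = tf.keras.layers.Concatenate()([x, param_in])
h = tf.keras.layers.Dense(128, activation="relu")(h)
out = tf.keras.layers.Dense(10, activation="softmax")(h)

model = tf.keras.Model(inputs=[image_in, param_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```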

