Deep learning-based image analysis methods for brightfield-acquired multiplex immunohistochemistry images

2020 ◽  
Vol 15 (1) ◽  
Author(s):  
Danielle J. Fassler ◽  
Shahira Abousamra ◽  
Rajarsi Gupta ◽  
Chao Chen ◽  
Maozheng Zhao ◽  
...  

Abstract Background Multiplex immunohistochemistry (mIHC) permits the labeling of six or more distinct cell types within a single histologic tissue section. The classification of each cell type requires detection of uniquely colored chromogens localized to cells expressing biomarkers of interest. The most comprehensive and reproducible method to evaluate such slides is to apply digital pathology and image analysis pipelines to whole-slide images (WSIs). Our suite of deep learning tools quantitatively evaluates the expression of six biomarkers in mIHC WSIs. These methods address the current lack of readily available methods to evaluate more than four biomarkers and circumvent the need for specialized instrumentation to spectrally separate different colors. The use case application for our methods is a study that investigates tumor immune interactions in pancreatic ductal adenocarcinoma (PDAC) with a customized mIHC panel. 
Methods Six different colored chromogens were utilized to label T-cells (CD3, CD4, CD8), B-cells (CD20), macrophages (CD16), and tumor cells (K17) in formalin-fixed paraffin-embedded (FFPE) PDAC tissue sections. We leveraged pathologist annotations to develop complementary deep learning-based methods: (1) ColorAE, a deep autoencoder that segments stained objects based on color; (2) U-Net, a convolutional neural network (CNN) trained to segment cells based on color, texture, and shape; and (3) ensemble methods that employ both ColorAE and U-Net, collectively referred to as ColorAE:U-Net. We assessed the performance of our methods using structural similarity and DICE score to evaluate the segmentation results of ColorAE against traditional color deconvolution, and F1 score, sensitivity, positive predictive value, and DICE score to evaluate the predictions from ColorAE, U-Net, and the ColorAE:U-Net ensemble methods against pathologist-generated ground truth. We then used the prediction results for spatial analysis (nearest neighbor). 
Results We observed that (1) the performance of ColorAE is comparable to traditional color deconvolution for single-stain IHC images (note: traditional color deconvolution cannot be used for mIHC); (2) ColorAE and U-Net are complementary methods that detect six different classes of cells with comparable performance; (3) combinations of ColorAE and U-Net in ensemble methods outperform ColorAE and U-Net alone; and (4) ColorAE:U-Net ensemble methods can be employed for detailed analysis of the tumor microenvironment (TME). 
Summary We developed a suite of scalable deep learning methods to analyze six distinctly labeled cell populations in mIHC WSIs. We evaluated our methods and found that they reliably detected and classified cells in the PDAC tumor microenvironment. We also utilized the ColorAE:U-Net ensemble method to analyze three mIHC WSIs with nearest neighbor spatial analysis. We demonstrate a proof of concept that these methods can be employed to quantitatively describe the spatial distribution of immune cells within the tumor microenvironment. These complementary deep learning methods are readily deployable for use in clinical research studies.
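The nearest neighbor spatial analysis described above reduces, at its core, to measuring the distance from each cell of one class to the closest cell of another class. A minimal brute-force sketch (the cell coordinates and class names here are hypothetical, not from the study):

```python
import math

def nearest_neighbor_distances(source_cells, target_cells):
    """For each (x, y) centroid in source_cells, return the distance
    to the closest centroid in target_cells (brute-force O(n*m))."""
    distances = []
    for sx, sy in source_cells:
        d = min(math.hypot(sx - tx, sy - ty) for tx, ty in target_cells)
        distances.append(d)
    return distances

# Hypothetical centroids (in pixels): K17+ tumor cells vs. CD8+ T-cells
tumor = [(10, 10), (12, 14)]
cd8 = [(10, 13), (40, 40)]
dists = nearest_neighbor_distances(tumor, cd8)
```

In practice a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force scan for WSI-scale cell counts; the summary statistics (mean or median nearest neighbor distance per class pair) are then computed over `dists`.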

Author(s):  
Oleksandr Dudin ◽  
Ozar Mintser ◽  
Oksana Sulaieva ◽  
...  

Introduction. Over the past few decades, thanks to advances in algorithm development, the availability of computing power, and the management of large data sets, machine learning methods have come into active use in many fields. Among them, deep learning occupies a special place: it is used in many spheres of health care and is an integral part of, and prerequisite for, the development of digital pathology. Objectives. The purpose of the review was to gather the data on existing image analysis technologies and machine learning tools developed for whole-slide digital images in pathology. Methods. The literature on machine learning methods used in pathology was analyzed, covering the steps of automated image analysis, the types of neural networks, and their applications and capabilities in digital pathology. Results. To date, a wide range of deep learning strategies have been developed; they are actively used in digital pathology and have demonstrated excellent diagnostic accuracy. In addition to diagnostic solutions, the integration of artificial intelligence into the practice of the pathomorphological laboratory provides new tools for assessing prognosis and predicting sensitivity to different treatments. Conclusions. The synergy of artificial intelligence and digital pathology is a key tool for improving the accuracy of diagnostics, prognostication, and the facilitation of personalized medicine.


2021 ◽  
Vol 7 (3) ◽  
pp. 51
Author(s):  
Emanuela Paladini ◽  
Edoardo Vantaggiato ◽  
Fares Bougourzi ◽  
Cosimo Distante ◽  
Abdenour Hadid ◽  
...  

In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between different cancer grades. The development of Whole Slide Images (WSIs) has provided the data required for creating automatic tissue phenotyping systems. In this paper, we study different hand-crafted feature-based and deep learning methods using two popular multi-class CRC-tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination, with two classifiers (SVM and NN) to classify the texture features into distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two Ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed ensemble approaches outperform the hand-crafted feature-based methods, the individual CNN architectures, and the state-of-the-art methods on both databases.
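A mean-ensemble of CNNs, as the name Mean-Ensemble-CNN suggests, averages the per-class probability vectors of the member networks and predicts the class with the highest mean probability. A minimal sketch with hypothetical softmax outputs (the numbers below are illustrative, not from the paper):

```python
def mean_ensemble(prob_lists):
    """Average the per-class probability vectors produced by several
    CNNs; return (predicted class index, mean probability vector)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    mean_probs = [sum(p[c] for p in prob_lists) / n_models
                  for c in range(n_classes)]
    label = max(range(n_classes), key=lambda c: mean_probs[c])
    return label, mean_probs

# Hypothetical softmax outputs from three CNNs over three tissue classes
preds = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.4, 0.5, 0.1]]
label, probs = mean_ensemble(preds)
```

The NN-Ensemble-CNN variant would instead feed the concatenated member outputs (or features) into a small trained neural network rather than averaging, letting the combiner learn per-model weights.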


2021 ◽  
Vol 8 ◽  
Author(s):  
Edwin Roger Parra

Image analysis using multiplex immunofluorescence (mIF) to detect different proteins in a single tissue section has revolutionized immunohistochemical methods in recent years. With mIF, individual cell phenotypes, as well as different cell subpopulations and even rare cell populations, can be identified with extraordinary fidelity according to the biomarkers labeled by the antibodies in an mIF panel. This technology therefore has an important role in translational oncology studies and will probably be incorporated into the clinic. The expression of different biomarkers of interest can be examined at the tissue or individual cell level using mIF, providing information about cell phenotypes, distribution of cells, and cell biological processes in tumor samples. At present, the main challenge in spatial analysis is choosing the most appropriate method for extracting meaningful information about cell distribution from mIF images. Knowing how the spatial interaction between cells in the tumor encodes clinical information is therefore important. Exploratory analysis of the locations of cell phenotypes using spatial point patterns is used to calculate metrics that summarize the distances between cells and to guide the interpretation of those distances. Various methods can be used to analyze cellular distribution in an mIF image, and several mathematical functions can be applied to identify the most elemental relationships between the spatial analysis of cells in the image and established patterns of cellular distribution in tumor samples. The aim of this review is to describe the characteristics of mIF image analysis at different levels, including spatial distribution of cell populations and cellular distribution patterns, that can increase understanding of the tumor microenvironment.
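One of the elementary point-pattern functions alluded to above is the nearest-neighbour distance distribution G(r): the fraction of cells whose nearest neighbour lies within radius r. A minimal sketch on hypothetical cell centroids (coordinates are illustrative; edge correction, which real spatial statistics packages apply, is omitted):

```python
import math

def g_function(points, radii):
    """Empirical nearest-neighbour distance distribution G(r):
    for each r, the fraction of points whose nearest neighbour
    lies within distance r (no edge correction)."""
    nn = []
    for i, (x1, y1) in enumerate(points):
        d = min(math.hypot(x1 - x2, y1 - y2)
                for j, (x2, y2) in enumerate(points) if j != i)
        nn.append(d)
    return [sum(d <= r for d in nn) / len(nn) for r in radii]

# Hypothetical centroids of one cell phenotype (pixels)
pts = [(0, 0), (0, 1), (5, 5), (5, 6.5)]
g = g_function(pts, radii=[1.0, 2.0])
```

Comparing the empirical G(r) against the curve expected under complete spatial randomness indicates whether the phenotype is clustered (G rises faster) or dispersed (G rises slower).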


10.29007/qlvr ◽  
2018 ◽  
Author(s):  
Alan Mosca ◽  
George Magoulas

This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost and specifically adapted to work with Deep Learning methods, which reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time of training each incremental Ensemble member. We present a set of experiments outlining preliminary results on common Deep Learning datasets and discuss the potential improvements Deep Incremental Boosting brings to traditional Ensemble methods in Deep Learning.
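In Deep Incremental Boosting, each new ensemble member is warm-started from the previous member's weights (the Transfer of Learning step) instead of being trained from scratch, while the sample reweighting between rounds follows the usual AdaBoost scheme. A minimal sketch of that reweighting step only (the example weights and outcomes are hypothetical):

```python
import math

def adaboost_round(weights, correct):
    """One AdaBoost-style reweighting round.

    weights: current per-sample weights (sum to 1).
    correct: whether the current member classified each sample correctly.
    Returns (alpha, new_weights): the member's vote weight and the
    renormalized sample weights for training the next member."""
    err = sum(w for w, ok in zip(weights, correct) if not ok)
    alpha = 0.5 * math.log((1 - err) / err)
    # Upweight misclassified samples, downweight correct ones.
    new_w = [w * math.exp(-alpha if ok else alpha)
             for w, ok in zip(weights, correct)]
    z = sum(new_w)
    return alpha, [w / z for w in new_w]

# Four samples, uniform weights; the current member gets the last one wrong.
w0 = [0.25] * 4
alpha, w1 = adaboost_round(w0, [True, True, True, False])
```

In the full method, the next member would be initialized from the previous member's trained layers (possibly with layers added) and fine-tuned on data drawn according to `w1`, which is where the training-time savings come from.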


2020 ◽  
Vol 14 (4) ◽  
pp. 470-487
Author(s):  
Shujian Deng ◽  
Xin Zhang ◽  
Wen Yan ◽  
Eric I-Chao Chang ◽  
Yubo Fan ◽  
...  

Author(s):  
Byron Smith ◽  
Meyke Hermsen ◽  
Elizabeth Lesser ◽  
Deepak Ravichandar ◽  
Walter Kremers

Abstract Deep learning has pushed the scope of digital pathology beyond simple digitization and telemedicine. The incorporation of these algorithms into routine workflow is on the horizon and may be a disruptive technology, reducing processing time and increasing the detection of anomalies. While the newest computational methods enjoy much of the press, incorporating deep learning into standard laboratory workflow requires many more steps than simply training and testing a model. Image analysis using deep learning methods often requires substantial pre- and post-processing in order to improve interpretation and prediction. As in any data processing pipeline, images must be prepared for modeling and the resultant predictions need further processing for interpretation. Examples include artifact detection, color normalization, image subsampling or tiling, removal of errant predictions, etc. Predictions are further complicated by image file size, typically several gigabytes when unpacked. This forces images to be tiled, meaning that a series of subsamples from the whole-slide image (WSI) is used in modeling. Herein, we review many of these methods as they pertain to the analysis of biopsy slides and discuss the multitude of unique issues that are part of the analysis of very large images.
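The tiling step described above amounts to enumerating tile coordinates over the full slide so each fixed-size subsample can be read and modeled independently. A minimal sketch, assuming the image is at least one tile in each dimension (the sizes below are illustrative; real pipelines read the tiles through a WSI library such as OpenSlide):

```python
def tile_coordinates(width, height, tile_size, stride):
    """Top-left (x, y) coordinates of tiles covering a width x height
    image; assumes width >= tile_size and height >= tile_size."""
    xs = list(range(0, width - tile_size + 1, stride))
    ys = list(range(0, height - tile_size + 1, stride))
    # Cover the right/bottom edges even when the stride does not
    # divide the image size evenly.
    if xs[-1] != width - tile_size:
        xs.append(width - tile_size)
    if ys[-1] != height - tile_size:
        ys.append(height - tile_size)
    return [(x, y) for y in ys for x in xs]

# Hypothetical 1000x600 region tiled into 512x512 patches
coords = tile_coordinates(width=1000, height=600, tile_size=512, stride=512)
```

With a stride smaller than `tile_size`, the same function yields overlapping tiles, which is a common way to reduce boundary artifacts when the per-tile predictions are stitched back into a whole-slide map.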


2021 ◽  
Author(s):  
Lydia Kienbaum ◽  
Miguel Correa Abondano ◽  
Raul H. Blas Sevillano ◽  
Karl J Schmid

Background: Maize cobs are an important component of crop yield that exhibit high diversity in size, shape, and color in native landraces and modern varieties. Various phenotyping approaches have been developed to measure maize cob parameters in a high-throughput fashion. More recently, deep learning methods such as convolutional neural networks (CNNs) became available and have proven highly useful for high-throughput plant phenotyping. We aimed to compare classical image segmentation with deep learning methods for maize cob image segmentation and phenotyping, using a large image dataset of native maize landrace diversity from Peru. Results: Comparison of three image analysis methods showed that a Mask R-CNN trained on a diverse set of maize cob images was highly superior to classical image analysis using the Felzenszwalb-Huttenlocher algorithm and to a window-based CNN, owing to its robustness to image quality and its object segmentation accuracy (r=0.99). We integrated Mask R-CNN into a high-throughput pipeline that segments both maize cobs and rulers in images and performs an automated quantitative analysis of eight phenotypic traits, including diameter, length, ellipticity, asymmetry, aspect ratio, and average RGB values for cob color. Statistical analysis identified key training parameters for efficient iterative model updating. We also show that as few as 10-20 images are sufficient to update the initial Mask R-CNN model to process new types of cob images. To demonstrate an application of the pipeline, we analyzed phenotypic variation in 19,867 maize cobs extracted from 3,449 images of 2,484 accessions from the maize genebank of Peru, identifying phenotypically homogeneous and heterogeneous genebank accessions using multivariate clustering. Conclusions: The single Mask R-CNN model and the associated analysis pipeline are widely applicable tools for maize cob phenotyping in contexts such as genebank phenomics and plant breeding.
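Once Mask R-CNN has produced a per-cob segmentation mask, size traits like those listed above can be derived directly from the mask. A minimal bounding-box sketch (the tiny mask and the exact trait definitions here are hypothetical simplifications; the pipeline would additionally convert pixels to millimetres using the segmented ruler):

```python
def cob_traits(mask):
    """Derive simple shape traits from a binary segmentation mask
    (list of rows of 0/1): bounding-box length, diameter, and their
    ratio as a crude aspect-ratio estimate."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    length = max(rows) - min(rows) + 1    # vertical extent (px)
    diameter = max(cols) - min(cols) + 1  # horizontal extent (px)
    return {"length": length, "diameter": diameter,
            "aspect_ratio": length / diameter}

# Hypothetical 6x4 binary mask of one segmented cob
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
traits = cob_traits(mask)
```

Ellipticity and asymmetry would likewise be computed from the mask (e.g. by fitting an ellipse to the contour), and the average RGB color from the original pixels under the mask.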

