A robust deep learning approach for precisely segmenting cells in multiplex tissue images

2021 ◽  
Vol 9 (Suppl 3) ◽  
pp. A875-A875
Author(s):  
Daniel Winkowski ◽  
Jeni Caldara ◽  
Brit Boehmer ◽  
Regan Baird

Background: Multiplex images are becoming pivotal in tissue pathology because they provide the positional location and multidimensional phenotype of every cell. The heterogeneity of cell types, morphologies, and densities makes identifying the millions of cells in a tissue slice challenging. There is an urgent need for a robust yet flexible algorithm that automatically demarcates each cell and accurately defines its boundaries. We have developed a method that extends a deep learning (DL) nuclear identification algorithm beyond the nucleus to the outer boundary of the cell using biological signals from multiplex panels. Methods: All image analysis was performed in the Visiopharm image analysis platform. Three human observers provided ground truth (GT) annotations by outlining cells in predefined areas, each containing ~30 cells, in six different images from two different multiplex instruments (mIF: 8-plex via Vectra Polaris from Akoya; IMC: 13-plex via Hyperion from Fluidigm). Images were then segmented by three AI methods: machine learning nuclear detection (ML), deep learning nuclear detection (DL), and DL incorporating biological signals (DL+). Each set of computer-generated annotations was compared to GT using the common evaluation metrics DICE, precision, and sensitivity. Results: Overall, we found a high degree of concordance between the computer-generated and human annotations (DICE = 0.73±0.08, n=12) and between imaging modalities (mIF: 0.76±0.07; IMC: 0.71±0.08; n=6). Comparison of DICE scores across the AI methods indicated superior delineation of cell boundaries by the DL+ method (DL+: 0.79±0.07; ML: 0.74±0.08; DL: 0.74±0.03). Precision, which compares true versus false positive annotated regions relative to GT, was also high for all images (0.77±0.11; mIF: 0.76±0.10; IMC: 0.78±0.11). Sensitivity, which compares true positive versus false negative annotated regions relative to GT, was likewise high for all images (0.77±0.09; mIF: 0.76±0.09; IMC: 0.79±0.09). Conclusions: We developed a flexible DL-based strategy that enables comprehensive segmentation of cells in multiplex tissue images. Each AI approach shows high concordance with segmentation annotations from human observers as measured by the industry-standard metrics DICE, precision, and sensitivity. The DL+ method achieved the highest DICE score, indicating more accurate delineation of cell boundaries. As expected, precision and sensitivity are similar across all methods, whereas the DICE coefficient better accounts for annotations at the cell edge. The DL+ cell segmentation algorithm will yield improved accuracy when phenotyping cells in downstream analysis, because the precise biomarker composition is more accurately contained within each cell.
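For readers who want a concrete reference point, the three evaluation metrics used above reduce to pixel-level counts of true positives, false positives, and false negatives between a computer-generated mask and the GT annotation. The snippet below is a minimal NumPy sketch of that computation (illustrative code, not the Visiopharm implementation; the toy masks are made up):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """DICE, precision, and sensitivity for two boolean masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # true positive pixels
    fp = np.logical_and(pred, ~gt).sum()      # false positive pixels
    fn = np.logical_and(~pred, gt).sum()      # false negative pixels
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return dice, precision, sensitivity

# Toy example: a predicted cell mask vs. a hand-drawn ground-truth mask
gt = np.zeros((64, 64), dtype=bool)
gt[20:40, 20:40] = True
pred = np.zeros_like(gt)
pred[22:42, 22:42] = True
print(segmentation_metrics(pred, gt))
```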

2019 ◽  
Author(s):  
Jean-Baptiste Lugagne ◽  
Haonan Lin ◽  
Mary J. Dunlop

Microscopy image analysis is a major bottleneck in the quantification of single-cell microscopy data, typically requiring human supervision and curation, which limit both accuracy and throughput. To address this, we developed a deep learning-based image analysis pipeline that performs segmentation, tracking, and lineage reconstruction. Our analysis focuses on time-lapse movies of Escherichia coli cells trapped in a “mother machine” microfluidic device, a scalable platform for long-term single-cell analysis that is widely used in the field. While deep learning has been applied to cell segmentation problems before, our approach is fundamentally innovative in that it also uses machine learning to perform cell tracking and lineage reconstruction. With this framework we obtain high-fidelity results (~1% error rate) without human supervision. Further, the algorithm is fast, with complete analysis of a typical frame containing ∼150 cells taking <700 ms. The framework is not constrained to a particular experimental setup and has the potential to generalize to time-lapse images of other organisms or different experimental configurations. These advances open the door to a myriad of applications, including real-time tracking of gene expression and high-throughput analysis of strain libraries at single-cell resolution. Author Summary: Automated microscopy experiments can generate massive data sets, allowing for detailed analysis of cell physiology and properties such as gene expression. In particular, dynamic measurements of gene expression with time-lapse microscopy have proved invaluable for understanding how gene regulatory networks operate. However, image analysis remains a key bottleneck in the analysis pipeline, typically requiring human supervision and a posteriori processing. Recently, machine learning-based approaches have ushered in a new era of rapid, unsupervised image analysis. In this work, we use and repurpose the U-Net deep learning algorithm to develop an image processing pipeline that can not only accurately identify the location of cells in an image, but also track them over time as they grow and divide. As an application, we focus on multi-hour time-lapse movies of bacteria growing in a microfluidic device. Our algorithm is accurate and fast, with error rates near 1% and requiring less than a second to analyze a typical movie frame. This increase in speed and fidelity has the potential to open new experimental avenues, e.g., where images are analyzed on the fly so that experimental conditions can be updated in real time.
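The tracking step in this pipeline is itself learned with a U-Net; purely as a conceptual stand-in, the sketch below matches segmented cells between two consecutive frames by maximizing pixel overlap with SciPy's assignment solver. It is an illustrative baseline under that assumption, not the authors' network, and the toy frames are synthetic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from skimage.measure import label

def track_by_overlap(labels_prev, labels_curr):
    """Match labeled cells between consecutive frames by maximizing pixel overlap."""
    ids_prev = [i for i in np.unique(labels_prev) if i != 0]
    ids_curr = [j for j in np.unique(labels_curr) if j != 0]
    overlap = np.zeros((len(ids_prev), len(ids_curr)))
    for a, i in enumerate(ids_prev):
        for b, j in enumerate(ids_curr):
            overlap[a, b] = np.logical_and(labels_prev == i, labels_curr == j).sum()
    rows, cols = linear_sum_assignment(-overlap)          # maximize total overlap
    return {int(ids_prev[a]): int(ids_curr[b])
            for a, b in zip(rows, cols) if overlap[a, b] > 0}

# Toy example: one cell that shifts slightly between two frames
frame0 = label(np.pad(np.ones((5, 5), dtype=int), 2))
frame1 = label(np.pad(np.ones((5, 5), dtype=int), ((1, 3), (2, 2))))
print(track_by_overlap(frame0, frame1))   # {1: 1}
```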


2020 ◽  
Vol 12 (15) ◽  
pp. 2345 ◽  
Author(s):  
Ahram Song ◽  
Yongil Kim ◽  
Youkyung Han

Object-based image analysis (OBIA) outperforms pixel-based image analysis for change detection (CD) in very high-resolution (VHR) remote sensing images. Although the effectiveness of deep learning approaches has recently been demonstrated, few studies have investigated combining OBIA with deep learning for CD. Previously proposed methods use object information obtained in the preprocessing and postprocessing phases of deep learning. In general, they assign the dominant or most frequent label among all the pixels inside an object, without any quantitative criterion for integrating the deep learning network and the object information. In this study, we developed an object-based CD method for VHR satellite images that uses a deep learning network to quantify the uncertainty associated with an object and to effectively detect changes in an area without ground truth data. The proposed method defines an object-level uncertainty measure and comprises two main phases. Initially, CD objects were generated by unsupervised CD methods and used to train a CD network composed of three-dimensional convolutional layers and convolutional long short-term memory layers. After training, the CD objects were updated according to their uncertainty level, and the updated objects were used as training data for the next round of the CD network. This process was repeated until the entire area was classified into two classes, change and no-change, at the object level, or until a defined number of epochs was reached. Experiments conducted on two different VHR satellite images confirmed that the proposed method achieved the best performance compared with traditional CD approaches. The method was less affected by salt-and-pepper noise and could effectively extract regions of change at the object level without ground truth data. Furthermore, by effectively combining the deep learning network with object information, the proposed method offers the advantages of both unsupervised CD methods and a post-processed CD network.
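The core of the iteration is the object-level decision rule: an object is accepted as a training sample only when the network's pixel-wise change probabilities within it are sufficiently unambiguous. The sketch below is one schematic reading of such an uncertainty criterion (the mean-probability rule, the threshold tau, and all names are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

def label_objects_by_uncertainty(change_prob, object_map, tau=0.2):
    """Assign each object 'change', 'no-change', or 'uncertain' from pixel-wise
    change probabilities, using the mean in-object probability as the criterion."""
    decisions = {}
    for obj_id in np.unique(object_map):
        if obj_id == 0:                            # 0 = background / no object
            continue
        p = change_prob[object_map == obj_id].mean()
        if p >= 1 - tau:
            decisions[int(obj_id)] = "change"      # confident: keep as training sample
        elif p <= tau:
            decisions[int(obj_id)] = "no-change"   # confident: keep as training sample
        else:
            decisions[int(obj_id)] = "uncertain"   # revisit in the next iteration
    return decisions

# Toy example: one clearly changed object and one ambiguous object
object_map  = np.array([[1, 1, 0], [0, 2, 2]])
change_prob = np.array([[0.95, 0.90, 0.10], [0.20, 0.55, 0.45]])
print(label_objects_by_uncertainty(change_prob, object_map))
```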


2021 ◽  
Author(s):  
Francesco Padovani ◽  
Benedikt Mairhoermann ◽  
Pascal Falter-Braun ◽  
Jette Lengefeld ◽  
Kurt M Schmoller

Live-cell imaging is a powerful tool for studying dynamic cellular processes at the level of single cells with quantitative detail. Microfluidics enables parallel, high-throughput imaging, creating a downstream bottleneck at the data-analysis stage. Recent progress in deep learning-based image analysis has dramatically improved cell segmentation and tracking. Nevertheless, manual data validation and correction are typically still required, and broadly used tools that span the complete range of live-cell imaging analysis, from cell segmentation to pedigree analysis and signal quantification, are still needed. Here, we present Cell-ACDC, a user-friendly, graphical user interface (GUI)-based framework written in Python for segmentation, tracking, and cell cycle annotation. We included two state-of-the-art, high-accuracy deep learning models for single-cell segmentation of yeast and mammalian cells, implemented in the two most widely used deep learning frameworks, TensorFlow and PyTorch. Additionally, we developed a cell tracking method and embedded it into an intuitive, semi-automated workflow for label-free cell cycle annotation of single cells. The open-source and modular nature of Cell-ACDC will enable simple and fast integration of new deep learning-based and traditional methods for cell segmentation or downstream image analysis. Source code: https://github.com/SchmollerLab/Cell_ACDC
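Cell-ACDC exposes these steps through its GUI rather than a scripting API, so the sketch below is not Cell-ACDC code; it merely illustrates the kind of downstream signal quantification such a workflow automates, measuring per-cell area and mean fluorescence from a labeled segmentation mask with scikit-image (synthetic data, illustrative names):

```python
import numpy as np
import pandas as pd
from skimage.measure import regionprops_table

rng = np.random.default_rng(0)

# Stand-ins for one frame: a labeled segmentation mask and a fluorescence channel
labels = np.zeros((64, 64), dtype=int)
labels[10:30, 10:30] = 1                                  # cell 1
labels[35:55, 35:55] = 2                                  # cell 2 (brighter)
fluorescence = rng.normal(100, 5, labels.shape) + 50 * (labels == 2)

# One row per segmented cell: label, area, and mean signal
table = regionprops_table(labels, intensity_image=fluorescence,
                          properties=("label", "area", "mean_intensity"))
print(pd.DataFrame(table))
```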


2021 ◽  
Author(s):  
Kareem Wahid ◽  
Sara Ahmed ◽  
Renjie He ◽  
Lisanne van Dijk ◽  
Jonas Teuwen ◽  
...  

Background and Purpose: Oropharyngeal cancer (OPC) primary gross tumor volume (GTVp) segmentation is crucial for radiotherapy. Multiparametric MRI (mpMRI) is increasingly used for OPC adaptive radiotherapy but relies on manual segmentation. Therefore, we constructed mpMRI deep learning (DL) OPC GTVp auto-segmentation models and determined the impact of input channels on segmentation performance. Materials and Methods: GTVp ground truth segmentations were manually generated for 30 OPC patients from a clinical trial. We evaluated five mpMRI input channels (T2, T1, ADC, Ktrans, Ve). 3D Residual U-net models were developed and assessed using leave-one-out cross-validation. A baseline T2 model was compared to mpMRI models (T2+T1, T2+ADC, T2+Ktrans, T2+Ve, all 5 channels [ALL]) primarily using the Dice similarity coefficient (DSC). Sensitivity, positive predictive value, Hausdorff distance (HD), false-negative DSC (FND), false-positive DSC, surface DSC, 95% HD, and mean surface distance were also assessed. For the best model, ground truth and DL-generated segmentations were compared through a Turing test using physician observers. Results: Models yielded mean DSCs from 0.71 (ALL) to 0.73 (T2+T1). Compared to the T2 model, performance was significantly improved for HD, FND, sensitivity, surface DSC, and 95% HD for the T2+T1 model (p<0.05) and for FND for the T2+Ve and ALL models (p<0.05). There were no differences between ground truth and DL-generated segmentations for all observers (p>0.05). Conclusion: DL using mpMRI provides high-quality segmentations of OPC GTVp. Incorporating additional mpMRI channels may increase the performance of certain evaluation metrics. This pilot study is a promising step towards fully automated MR-guided OPC radiotherapy.
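Several of the boundary metrics reported here (HD, 95% HD, surface distances) are computed from the surface voxels of the two masks. The snippet below is a minimal sketch of a symmetric 95% Hausdorff distance for binary 3D masks using SciPy; the voxel spacing and the toy spheres are illustrative assumptions, not the study's implementation:

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def hd95(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric 95th-percentile Hausdorff distance between two binary 3D masks."""
    def surface_points(mask):
        border = mask & ~binary_erosion(mask)             # voxels on the mask surface
        return np.argwhere(border) * np.asarray(spacing)  # scale to physical units
    pts_a = surface_points(mask_a.astype(bool))
    pts_b = surface_points(mask_b.astype(bool))
    d = cdist(pts_a, pts_b)                               # pairwise surface distances
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

# Toy example: two overlapping spheres standing in for GT and predicted GTVp
zz, yy, xx = np.mgrid[:40, :40, :40]
gt   = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 <= 10 ** 2
pred = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 22) ** 2 <= 9 ** 2
print(f"95% HD: {hd95(pred, gt):.2f} (same units as spacing)")
```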


2020 ◽  
Vol 77 (4) ◽  
pp. 1609-1622
Author(s):  
Franziska Mathies ◽  
Catharina Lange ◽  
Anja Mäurer ◽  
Ivayla Apostolova ◽  
Susanne Klutmann ◽  
...  

Background: Positron emission tomography (PET) of the brain with 2-[F-18]-fluoro-2-deoxy-D-glucose (FDG) is widely used for the etiological diagnosis of clinically uncertain cognitive impairment (CUCI). Acute full-blown delirium can cause reversible alterations of FDG uptake that mimic neurodegenerative disease. Objective: This study tested whether delirium in remission affects the performance of FDG PET for differentiating between neurodegenerative and non-neurodegenerative etiologies of CUCI. Methods: The study included 88 patients (82.0±5.7 y) with newly detected CUCI during hospitalization in a geriatric unit. Twenty-seven (31%) of the patients had been diagnosed with delirium during their current hospital stay; however, the delirium was in remission at the time of enrollment, so it was not considered the primary cause of the CUCI. Cases were categorized as neurodegenerative or non-neurodegenerative etiology based on visual inspection of FDG PET. The diagnosis at clinical follow-up after ≥12 months served as the ground truth for evaluating the diagnostic performance of FDG PET. Results: FDG PET was categorized as neurodegenerative in 51 (58%) of the patients. Follow-up after 16±3 months was obtained in 68 (77%) of the patients. The clinical follow-up diagnosis confirmed the FDG PET-based categorization in 60 patients (88%; 4 false negative and 4 false positive cases with respect to the detection of neurodegeneration). The fraction of correct PET-based categorizations did not differ between patients with delirium in remission and patients without delirium (86% versus 89%, p = 0.666). Conclusion: Brain FDG PET is useful for the etiological diagnosis of CUCI in hospitalized geriatric patients, including patients with delirium in remission.


Author(s):  
Dinesh Pothineni ◽  
Martin R. Oswald ◽  
Jan Poland ◽  
Marc Pollefeys

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Background: Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and carry out batch processing on image datasets. Results: At the intersection of computer vision and plant science, we utilize deep learning methods based on convolutional neural networks to empower the maize phenotyping workflow. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes, embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter, and (VI) Leaves Counting. Taking RGB images of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion: Maize-IAS is easy to use and requires no expert knowledge of computer vision or deep learning. All functions support batch processing, enabling automated, labor-saving recording, measurement, and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, illustrating the feasibility of applying AI technology in agriculture and plant science.
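The leaf-counting error quoted above is simply a summary statistic of per-plant differences between predicted and manually counted leaves. The sketch below shows that calculation with NumPy; the counts are made-up placeholders, and whether the paper uses signed or absolute differences is not stated, so the absolute difference here is an assumption:

```python
import numpy as np

# Hypothetical per-plant leaf counts: model prediction vs. manual ground truth
predicted    = np.array([12, 10, 14,  9, 11, 13])
ground_truth = np.array([11, 10, 15, 10, 11, 12])

# Absolute counting error per plant (assumption: the paper may report signed differences)
diff = np.abs(predicted - ground_truth)
print(f"mean difference: {diff.mean():.2f}")
print(f"std of difference: {diff.std(ddof=1):.3f}")
```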


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christian Crouzet ◽  
Gwangjin Jeong ◽  
Rachel H. Chae ◽  
Krystal T. LoPresti ◽  
Cody E. Dunn ◽  
...  

Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To establish the ground truth, four users independently annotated Prussian blue-labeled CMHs. Compared to the ground truth, the deep learning and ratiometric approaches performed better than the phasor analysis approach. The deep learning approach was the most precise of the three methods, while the ratiometric approach was the most versatile and maintained accuracy, albeit with lower precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase processing speed while maintaining precision and accuracy.
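Of the three approaches, the ratiometric one is the easiest to convey in code: Prussian blue-stained pixels are those where the blue channel strongly dominates the red channel, so a per-pixel channel ratio plus a threshold already yields a candidate CMH mask. The sketch below is a generic illustration of that idea; the 1.5 threshold and the synthetic image are assumptions, not the published parameters:

```python
import numpy as np

def ratiometric_cmh_mask(rgb, ratio_threshold=1.5):
    """Flag pixels where blue strongly dominates red, as a crude Prussian blue detector."""
    rgb = rgb.astype(float)
    red, blue = rgb[..., 0], rgb[..., 2]
    ratio = blue / np.clip(red, 1.0, None)   # avoid division by zero on dark pixels
    return ratio >= ratio_threshold

# Synthetic test image: pale background with one Prussian blue-like blob
img = np.full((100, 100, 3), 200, dtype=np.uint8)
img[40:60, 40:60] = (60, 80, 190)            # blue-dominant stained region
mask = ratiometric_cmh_mask(img)
print("stained pixels detected:", int(mask.sum()))
```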


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Uzair Khan ◽  
Sidike Paheding ◽  
Colin Elkin ◽  
Vijay Devabhaktuni
