Vesseg: An Open-Source Tool for Deep Learning-Based Atherosclerotic Plaque Quantification in Histopathology Images

Author(s):  
J.M. Murray ◽  
P. Pfeffer ◽  
R. Seifert ◽  
A. Hermann ◽  
J. Handke ◽  
...  

Objective: Manual plaque segmentation in microscopy images is a time-consuming process in atherosclerosis research and potentially subject to unacceptable user-to-user variability and observer bias. We address this by releasing Vesseg, a tool that includes state-of-the-art deep learning models for atherosclerotic plaque segmentation. Approach and Results: Vesseg is a containerized, extensible, open-source, and user-oriented tool. It includes 2 models, trained and tested on 1089 hematoxylin-eosin-stained mouse model atherosclerotic brachiocephalic artery sections. The models were compared to 3 human raters. Vesseg can be accessed at https://vesseg.online or downloaded. The models show mean Sørensen-Dice scores of 0.91±0.15 for plaque and 0.97±0.08 for lumen pixels. The mean accuracy is 0.98±0.05. Vesseg is already in active use, generating time savings of >10 minutes per slide. Conclusions: Vesseg brings state-of-the-art deep learning methods to atherosclerosis research, providing drastic time savings while allowing for continuous improvement of the models and the underlying pipeline.
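
The Sørensen-Dice score reported above is the standard overlap metric for comparing segmentation masks. A minimal sketch of how it is computed for a binary plaque mask (hypothetical arrays, not Vesseg's actual code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Soerensen-Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Hypothetical masks; per-class scores like those reported above
# would be averaged over all test sections.
pred_plaque = np.zeros((512, 512), dtype=bool)
true_plaque = np.zeros((512, 512), dtype=bool)
pred_plaque[100:300, 100:300] = True
true_plaque[120:320, 100:300] = True
print(f"Dice (plaque): {dice_score(pred_plaque, true_plaque):.3f}")  # 0.900
```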

2018 ◽  
Author(s):  
Jianxu Chen ◽  
Liya Ding ◽  
Matheus P. Viana ◽  
HyeonWoo Lee ◽  
M. Filip Sluezwski ◽  
...  

A continuing challenge in quantitative cell biology is the accurate and robust 3D segmentation of structures of interest from fluorescence microscopy images in an automated, reproducible, and widely accessible manner for subsequent interpretable data analysis. We describe the Allen Cell and Structure Segmenter (Segmenter), a Python-based open-source toolkit developed for 3D segmentation of cells and intracellular structures in fluorescence microscope images. This toolkit brings together classic image segmentation and iterative deep learning workflows, first to generate initial high-quality 3D intracellular structure segmentations and then to easily curate these results to generate the ground truths for building robust and accurate deep learning models. The toolkit takes advantage of the high-replicate 3D live-cell image data collected at the Allen Institute for Cell Science from over 30 endogenous fluorescently tagged human induced pluripotent stem cell (hiPSC) lines. Each cell line represents a different intracellular structure with one or more distinct localization patterns within undifferentiated hiPS cells and hiPSC-derived cardiomyocytes. The Segmenter consists of two complementary elements: a classic image segmentation workflow with a restricted set of algorithms and parameters, and an iterative deep learning segmentation workflow. We created a collection of 20 classic image segmentation workflows based on 20 distinct and representative intracellular structure localization patterns as a "lookup table" reference and starting point for users. The iterative deep learning workflow can take over when the classic segmentation workflow is insufficient. Two straightforward "human-in-the-loop" curation strategies convert a set of classic image segmentation workflow results into a set of 3D ground truth images for iterative model training without the need for manual painting in 3D. The deep learning model architectures used in this toolkit were designed and tested specifically for 3D fluorescence microscope images and implemented as readable scripts. The Segmenter thus leverages state-of-the-art computer vision algorithms in an accessible way to facilitate their application by the experimental biology researcher. We include two useful applications to demonstrate how we used the classic image segmentation and iterative deep learning workflows to solve more challenging 3D segmentation tasks. First, we introduce the "Training Assay" approach, a new experimental-computational co-design concept for generating more biologically accurate segmentation ground truths. We combined the iterative deep learning workflow with three Training Assays to develop a robust, scalable cell and nuclear instance segmentation algorithm, which achieved accurate target segmentation for over 98% of individual cells and over 80% of entire fields of view. Second, we demonstrate how to extend the lamin B1 segmentation model built from the iterative deep learning workflow to obtain more biologically accurate lamin B1 segmentation by utilizing multi-channel inputs and combining multiple ML models. The steps and workflows used to develop these algorithms are generalizable to other similar segmentation challenges. More information, including tutorials and code repositories, is available at allencell.org/segmenter.
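
To illustrate what a "classic image segmentation workflow" of this kind typically involves, here is a minimal smoothing-threshold-cleanup sketch using scikit-image. This is a generic example under assumed parameters, not the Segmenter's actual API or parameter sets (see allencell.org/segmenter for the real workflows):

```python
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label
from skimage.morphology import remove_small_objects

def classic_3d_segmentation(volume: np.ndarray, sigma: float = 1.0,
                            min_size: int = 50) -> np.ndarray:
    """Generic classic workflow: normalize, smooth, threshold, clean, label."""
    v = (volume - volume.min()) / (np.ptp(volume) + 1e-8)  # scale to [0, 1]
    smoothed = gaussian(v, sigma=sigma)                    # suppress noise
    binary = smoothed > threshold_otsu(smoothed)           # global threshold
    cleaned = remove_small_objects(binary, min_size=min_size)
    return label(cleaned)                                  # connected components

# Hypothetical 3D stack (z, y, x); real data would come from a microscope.
stack = np.random.rand(16, 128, 128)
labels = classic_3d_segmentation(stack)
print(f"{labels.max()} objects segmented")
```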


2020 ◽  
Author(s):  
Yanhua Gao ◽  
Yuan Zhu ◽  
Bo Liu ◽  
Yue Hu ◽  
Youmin Guo

Objective: In transthoracic echocardiographic (TTE) examination, it is essential to identify the cardiac views accurately. Computer-aided recognition is expected to improve the accuracy of the TTE examination. Methods: This paper proposes a new method for automatic recognition of cardiac views based on deep learning, comprising three strategies. First, a spatial transform network is used to learn cardiac shape changes during the cardiac cycle, which reduces intra-class variability. Second, a channel attention mechanism is introduced to adaptively recalibrate channel-wise feature responses. Finally, unlike conventional deep learning methods, which learn from each input image individually, structured signals are applied via a graph of similarities among images. These signals are transformed into graph-based image embeddings, which act as unsupervised regularization constraints to improve generalization accuracy. Results: The proposed method was trained and tested on 171,792 cardiac images from 584 subjects. Compared with the best previously reported result, the overall accuracy of the proposed method on cardiac image classification is 99.10% vs. 91.7%, and the mean AUC is 99.36%. Moreover, the overall accuracy is 98.15% and the mean AUC is 98.96% on an independent test set of 34,211 images from 100 subjects. Conclusion: The method achieves state-of-the-art results and is expected to serve as an automated recognition tool for cardiac views. The work confirms the potential of deep learning in ultrasound medicine.
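
The channel attention mechanism described, which "adaptively recalibrates channel-wise feature responses", is in the spirit of squeeze-and-excitation blocks. A minimal PyTorch sketch of such a block, as an illustrative reconstruction rather than the authors' exact architecture:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel recalibration."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average
        self.fc = nn.Sequential(             # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # recalibrate channel-wise feature responses

# Hypothetical feature map from an echo-view classifier backbone.
features = torch.randn(4, 64, 56, 56)
print(ChannelAttention(64)(features).shape)  # torch.Size([4, 64, 56, 56])
```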


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Estibaliz Gómez-de-Mariscal ◽  
Martin Maška ◽  
Anna Kotrbová ◽  
Vendula Pospíchalová ◽  
Pavel Matula ◽  
...  

Abstract Small extracellular vesicles (sEVs) are cell-derived vesicles of nanoscale size (~30–200 nm) that function as conveyors of information between cells, reflecting the cell of their origin and its physiological condition in their content. Valuable information on the shape and even on the composition of individual sEVs can be recorded using transmission electron microscopy (TEM). Unfortunately, sample preparation for TEM image acquisition is a complex procedure, which often leads to noisy images and renders automatic quantification of sEVs an extremely difficult task. We present a completely deep-learning-based pipeline for the segmentation of sEVs in TEM images. Our method applies a residual convolutional neural network to obtain fine masks and uses the Radon transform to split clustered sEVs. Using three manually annotated datasets that cover the natural variability typical for sEV studies, we show that the proposed method outperforms two different state-of-the-art approaches in terms of detection and segmentation performance. Furthermore, the diameter and roundness of the segmented vesicles are estimated with an error of less than 10%, which supports the high potential of our method in biological applications.
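
For context on the reported diameter and roundness estimates, shape descriptors of this kind can be derived from a labeled segmentation mask. A generic scikit-image sketch on a hypothetical mask, not the authors' actual pipeline:

```python
import numpy as np
from skimage.draw import disk
from skimage.measure import label, regionprops

# Hypothetical binary mask with two segmented vesicles.
mask = np.zeros((256, 256), dtype=bool)
mask[disk((64, 64), 20)] = True
mask[disk((180, 180), 30)] = True

for region in regionprops(label(mask)):
    diameter = region.equivalent_diameter  # diameter of equal-area circle
    # Roundness: 4*pi*area / perimeter^2, equal to 1.0 for a perfect circle.
    roundness = 4 * np.pi * region.area / (region.perimeter ** 2)
    print(f"diameter={diameter:.1f}px roundness={roundness:.2f}")
```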


2021 ◽  
Author(s):  
Sravya Sravya ◽  
Andriy Miranskyy ◽  
Ayse Bener

Software bug localization involves a significant amount of time and effort on the part of the software developer. Many state-of-the-art bug localization models have been proposed in the past to help developers localize bugs easily. However, none of these models meet the adoption thresholds of the software practitioner. Recently, some deep-learning-based models have been proposed that have been shown to perform better than the state-of-the-art models. With this motivation, we experiment with Convolutional Neural Networks (CNNs) to examine their effectiveness in localizing bugs. We also train a SimpleLogistic model as a baseline for our experiments. We train both models on five open-source Java projects and compare their performance across the projects. Our experiments show that the CNN models perform better than the SimpleLogistic models in most cases, but do not meet the adoption criteria set by practitioners.
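
CNN-based bug localization is commonly framed as scoring the relevance of a source file to a bug report. A purely illustrative PyTorch sketch of such a scoring model; the architecture, vocabulary size, and pairing scheme here are hypothetical assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class BugLocalizerCNN(nn.Module):
    """Toy 1D CNN scoring a (bug report, source file) token sequence."""
    def __init__(self, vocab_size: int = 10000, embed_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=5, padding=2)
        self.score = nn.Linear(128, 1)  # relevance of file to the report

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens).transpose(1, 2)          # (batch, embed, seq)
        x = torch.relu(self.conv(x)).max(dim=2).values  # max-pool over time
        return torch.sigmoid(self.score(x)).squeeze(-1)

# Hypothetical batch of concatenated report+file token ids.
batch = torch.randint(0, 10000, (8, 400))
print(BugLocalizerCNN()(batch))  # one relevance score per pair
```

Files would then be ranked per bug report by this score, which is the usual evaluation setting for localization models.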


Development ◽  
2021 ◽  
Vol 148 (18) ◽  
Author(s):  
Adrien Hallou ◽  
Hannah G. Yevick ◽  
Bianca Dumitrascu ◽  
Virginie Uhlmann

Abstract Deep learning has transformed the way large and complex image datasets can be processed, reshaping what is possible in bioimage analysis. As the complexity and size of bioimage data continue to grow, this new analysis paradigm is becoming increasingly ubiquitous. In this Review, we begin by introducing the concepts needed for beginners to understand deep learning. We then review how deep learning has impacted bioimage analysis and explore the open-source resources available for integrating it into a research project. Finally, we discuss the future of deep learning applied to cell and developmental biology. We analyze how state-of-the-art methodologies have the potential to transform our understanding of biological systems through new image-based analysis and modelling that integrate multimodal inputs in space and time.


Over the past few years, deep-learning-based methods have shown encouraging and inspiring results for one of the most complex tasks in computer vision and image processing: image inpainting. The difficulty of image inpainting derives from the need to fully understand the structure and texture of images in order to produce accurate and visually plausible results, especially when inpainting relatively large regions. Deep learning methods usually employ convolutional neural networks (CNNs) whose filters treat all image pixels as valid and typically substitute missing pixels with a mean value. This results in artifacts and blurry inpainted regions inconsistent with the rest of the image. In this paper, a novel method is proposed for inpainting randomly shaped missing regions of variable size at arbitrary locations across the image. We employ dilated convolutions to aggregate multiscale context information without any loss in resolution, and add a mask-update step after each convolution operation. The proposed method also includes a global discriminator that considers both local patches and the whole image; it is responsible for capturing the local continuity of image texture as well as the overall global image features. The performance of the proposed method is evaluated on two datasets (Places2 and Paris Street View). A comparison with recent state-of-the-art methods is also performed to demonstrate the effectiveness of our model in both qualitative and quantitative evaluations.
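
To make the dilated-convolution idea concrete, here is a minimal PyTorch sketch of a context block that grows the receptive field without downsampling, with a crude mask update after each convolution. This illustrates the general technique under assumed dilation rates and channel counts, not the authors' exact network:

```python
import torch
import torch.nn as nn

class DilatedContextBlock(nn.Module):
    """Stacked dilated convolutions: multiscale context at full resolution."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # Increasing dilation widens the receptive field while keeping the
        # spatial size unchanged (padding = dilation for 3x3 kernels).
        self.layers = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4, 8)
        ])
        self.mask_pool = nn.MaxPool2d(3, stride=1, padding=1)

    def forward(self, x, mask):
        # mask: 1 where pixels are known/valid, 0 inside the hole.
        for conv in self.layers:
            x = torch.relu(conv(x * mask))  # suppress hole pixels
            mask = self.mask_pool(mask)     # simple update: grow valid region
        return x, mask

feat = torch.randn(1, 64, 128, 128)
mask = torch.ones(1, 1, 128, 128)
mask[..., 40:88, 40:88] = 0  # hypothetical square hole
out, new_mask = DilatedContextBlock()(feat, mask)
print(out.shape, new_mask.mean().item())
```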


2016 ◽  
Author(s):  
Nick Pawlowski ◽  
Juan C Caicedo ◽  
Shantanu Singh ◽  
Anne E Carpenter ◽  
Amos Storkey

Abstract Morphological profiling aims to create signatures of genes, chemicals, and diseases from microscopy images. Current approaches use classical computer-vision-based segmentation and feature extraction. Deep learning models achieve state-of-the-art performance in many computer vision tasks such as classification and segmentation. We propose to transfer activation features of generic deep convolutional networks to extract features for morphological profiling. Our approach surpasses currently used methods in terms of accuracy and processing speed. Furthermore, it enables fully automated processing of microscopy images without the need for single-cell identification.
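
A minimal sketch of the transferred-activation idea using a pretrained torchvision network as a generic feature extractor; the choice of network and layer here is an assumption for illustration and may differ from the paper's setup:

```python
import torch
from torchvision import models

# Pretrained ImageNet classifier reused as a generic feature extractor.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = torch.nn.Identity()  # drop the classification head
net.eval()

# Hypothetical batch of microscopy crops resized to 224x224 RGB.
images = torch.randn(8, 3, 224, 224)
with torch.no_grad():
    profiles = net(images)  # one 512-d activation profile per image
print(profiles.shape)  # torch.Size([8, 512])
```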


2021 ◽  
Vol 32 (9) ◽  
pp. 823-829
Author(s):  
Alice M. Lucas ◽  
Pearl V. Ryder ◽  
Bin Li ◽  
Beth A. Cimini ◽  
Kevin W. Eliceiri ◽  
...  

Microscopy images are rich in information about the dynamic relationships among biological structures. However, extracting this complex information can be challenging, especially when biological structures are closely packed, distinguished by texture rather than intensity, and/or low intensity relative to the background. By learning from large amounts of annotated data, deep learning can accomplish several previously intractable bioimage analysis tasks. Until the past few years, however, most deep-learning workflows required significant computational expertise to be applied. Here, we survey several new open-source software tools that aim to make deep-learning-based image segmentation accessible to biologists with limited computational experience. These tools take many different forms, such as web apps, plug-ins for existing imaging analysis software, and preconfigured interactive notebooks and pipelines. In addition to surveying these tools, we outline several challenges that remain in the field. We hope to expand awareness of the powerful deep-learning tools available to biologists for image analysis.


2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation demonstrated increased performance over non-augmented and conventionally SMILES-randomization-augmented data when used for training the baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as "attentional gain": an enhancement in the pattern recognition capabilities of the underlying network for molecular motifs.
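
Levenshtein (edit) distance, the similarity measure the augmentation is named after, is computed with a standard dynamic program. A minimal sketch applied to SMILES strings, for illustration only and not the authors' pairing code:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical reactant/product SMILES: a small edit distance indicates
# shared local sub-sequences, suggesting a useful training pair.
print(levenshtein("CCO", "CC=O"))  # 1
```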

