Deep learning of virus infections reveals mechanics of lytic cells

2019 ◽ Author(s): Vardan Andriasyan, Artur Yakimovich, Fanny Georgi, Anthony Petkidis, Robert Witte, ...

Imaging across scales gives insight into disease mechanisms in organisms, tissues and cells. Yet, rare infection phenotypes, such as virus-induced cell lysis, have remained difficult to study. Here, we developed fixed- and live-cell imaging modalities and a deep learning approach to identify herpesvirus and adenovirus infections in the absence of virus-specific stains. The procedure comprises staining of infected nuclei with DNA dyes, fluorescence microscopy, and validation by virus-specific live-cell imaging. Deep learning of multi-round infection phenotypes identified hallmarks of adenovirus-infected cell nuclei. At an accuracy of >95%, the procedure predicts two distinct infection outcomes, nonlytic (nonspreading) and lytic (spreading) infections, 20 hours prior to lysis. Phenotypic prediction and live-cell imaging revealed a faster enrichment of GFP-tagged virion proteins in lytic than in nonlytic infected nuclei, and distinct mechanics of lytic and nonlytic nuclei upon laser-induced rupture. The results unleash the power of deep learning-based prediction for unraveling rare infection phenotypes.
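The prediction task above, reduced to a toy sketch: instead of a deep network learning features directly from DNA-stain images, the snippet below uses hand-crafted stand-in features (mean intensity, variance, gradient texture) on synthetic nucleus crops and a logistic regression. All data and feature choices here are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def nucleus_features(img):
    """Toy features from a nucleus crop: mean intensity, variance, and a
    crude texture measure (mean gradient magnitude). Stand-ins for the
    features a deep network would learn from the stained nuclei."""
    gy, gx = np.gradient(img.astype(float))
    return [img.mean(), img.var(), np.hypot(gx, gy).mean()]

# Synthetic stand-in data: "lytic" crops made brighter and more variable.
nonlytic = rng.normal(0.3, 0.05, size=(50, 32, 32))
lytic = rng.normal(0.6, 0.15, size=(50, 32, 32))
X = np.array([nucleus_features(im) for im in np.concatenate([nonlytic, lytic])])
y = np.array([0] * 50 + [1] * 50)  # 0 = nonlytic, 1 = lytic

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))  # training accuracy on the separable toy data
```

On real data, the two phenotypes are not separable by such simple statistics; that gap is exactly what the learned representation of the deep model closes.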

2019 ◽ Author(s): Erick Moen, Enrico Borba, Geneva Miller, Morgan Schwartz, Dylan Bannon, ...

Live-cell imaging experiments have opened an exciting window into the behavior of living systems. While these experiments can produce rich data, the computational analysis of these datasets is challenging. Single-cell analysis requires that cells be accurately identified in each image and subsequently tracked over time. Increasingly, deep learning is being used to interpret microscopy images with single-cell resolution. In this work, we apply deep learning to the problem of tracking single cells in live-cell imaging data. Using crowdsourcing and a human-in-the-loop approach to data annotation, we constructed a dataset of over 11,000 trajectories of cell nuclei that includes lineage information. Using this dataset, we successfully trained a deep learning model to perform cell tracking within a linear programming framework. Benchmarking tests demonstrate that our method achieves state-of-the-art performance on the task of cell tracking with respect to multiple accuracy metrics. Further, we show that our deep learning-based method generalizes to perform cell tracking for both fluorescent and brightfield images of the cell cytoplasm, despite never having been trained on those data types. This enables analysis of live-cell imaging data collected across imaging modalities. A persistent cloud deployment of our cell tracker is available at http://www.deepcell.org.
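The core of tracking within a linear programming framework is linking detections across frames by minimizing a total assignment cost. A minimal sketch with SciPy's assignment solver, using Euclidean centroid distance as a stand-in for the learned linking cost described above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical centroids of nuclei detected in two consecutive frames.
frame_t = np.array([[10.0, 12.0], [40.0, 42.0], [70.0, 75.0]])
frame_t1 = np.array([[11.0, 13.0], [41.0, 40.0], [72.0, 74.0]])

# Cost of linking each cell in frame t to each candidate in frame t+1:
# here simply the Euclidean distance between centroids. In the paper's
# method, a deep network scores each candidate link instead.
cost = np.linalg.norm(frame_t[:, None, :] - frame_t1[None, :, :], axis=-1)

# Solve the assignment problem: each cell is linked to one successor
# such that the total linking cost is minimal.
rows, cols = linear_sum_assignment(cost)
links = list(zip(rows.tolist(), cols.tolist()))
print(links)  # → [(0, 0), (1, 1), (2, 2)]
```

A full tracker additionally handles cell division, appearance, and disappearance, typically by padding the cost matrix with "birth" and "death" entries.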


2016 ◽ Vol 12 (11) ◽ pp. e1005177 ◽ Author(s): David A. Van Valen, Takamasa Kudo, Keara M. Lane, Derek N. Macklin, Nicolas T. Quach, ...

2009 ◽ Vol 185 (1) ◽ pp. 21-26 ◽ Author(s): Christoffel Dinant, Martijn S. Luijsterburg, Thomas Höfer, Gesa von Bornstaedt, Wim Vermeulen, ...

Live-cell imaging studies aided by mathematical modeling have provided unprecedented insight into assembly mechanisms of multiprotein complexes that control genome function. Such studies have unveiled emerging properties of chromatin-associated systems involved in DNA repair and transcription.


2021 ◽ Author(s): Francesco Padovani, Benedikt Mairhoermann, Pascal Falter-Braun, Jette Lengefeld, Kurt M Schmoller

Live-cell imaging is a powerful tool to study dynamic cellular processes at the level of single cells with quantitative detail. Microfluidics enables parallel high-throughput imaging, creating a downstream bottleneck at the stage of data analysis. Recent progress in deep learning image analysis has dramatically improved cell segmentation and tracking. Nevertheless, manual data validation and correction are typically still required, and broadly applicable tools spanning the complete range of live-cell imaging analysis, from cell segmentation to pedigree analysis and signal quantification, are still needed. Here, we present Cell-ACDC, a user-friendly, graphical user interface (GUI)-based framework written in Python for segmentation, tracking and cell cycle annotation. We included two state-of-the-art, high-accuracy deep learning models for single-cell segmentation of yeast and mammalian cells, implemented in the two most widely used deep learning frameworks, TensorFlow and PyTorch. Additionally, we developed a cell tracking method and embedded it into an intuitive, semi-automated workflow for label-free cell cycle annotation of single cells. The open-source, modular nature of Cell-ACDC enables simple and fast integration of new deep learning-based and traditional methods for cell segmentation or downstream image analysis. Source code: https://github.com/SchmollerLab/Cell_ACDC
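Tracking segmented cells between frames is often done by mask overlap: each cell in the new frame inherits the ID of the previous-frame cell it overlaps most. The sketch below illustrates that generic heuristic; it is not Cell-ACDC's exact implementation, and the toy label images are invented for the example.

```python
import numpy as np

def track_by_overlap(labels_t, labels_t1):
    """Relabel the segmentation mask of frame t+1 so that each cell keeps
    the ID of the frame-t cell it overlaps most (a common overlap-based
    tracking heuristic)."""
    tracked = np.zeros_like(labels_t1)
    for new_id in np.unique(labels_t1):
        if new_id == 0:  # skip background
            continue
        mask = labels_t1 == new_id
        overlap_ids, counts = np.unique(labels_t[mask], return_counts=True)
        keep = overlap_ids != 0  # drop background from candidates
        if keep.any():
            tracked[mask] = overlap_ids[keep][np.argmax(counts[keep])]
        else:
            tracked[mask] = new_id  # no overlap: treat as a new cell
    return tracked

# Two toy 4x4 label images: the single cell shifts one pixel to the right,
# and the segmenter happened to assign it a new ID (2) in the second frame.
t0 = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
t1 = np.array([[0, 2, 2, 0],
               [0, 2, 2, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
tracked = track_by_overlap(t0, t1)
print(np.unique(tracked))  # the cell recovers its original ID 1
```

A production tracker also has to guard against ID collisions for newborn cells and ambiguous overlaps at divisions, which is where manual correction in a GUI becomes valuable.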


2017 ◽ Author(s): Chuangqi Wang, Xitong Zhang, Hee June Choi, Bolun Lin, Yudong Yu, ...

Quantitative live-cell imaging has been widely used to study various dynamical processes in cell biology. Phase contrast microscopy is a popular imaging modality for live-cell imaging since it does not require labeling or cause phototoxicity to live cells. However, phase contrast images pose significant challenges for accurate image segmentation due to their complex image features. Fluorescence live-cell imaging has also been used to monitor the dynamics of specific molecules in live cells. But unlike immunofluorescence imaging, fluorescence live-cell images are highly prone to noise, low contrast, and uneven illumination. These issues often lead to erroneous cell segmentation, hindering quantitative analyses of dynamical cellular processes. Although deep learning has been successfully applied to image segmentation by automatically learning hierarchical features directly from raw data, it typically requires large datasets and high computational cost to train deep neural networks, making it challenging to apply in routine laboratory settings. In this paper, we evaluate a deep learning-based segmentation pipeline for time-lapse live-cell movies that requires only minimal effort to prepare the training set by leveraging the temporal coherence of time-lapse image sequences. We train deep neural networks using a small portion of the images in a movie, and then predict cell edges for the entire image sequence of the same movie. To further increase segmentation accuracy with small numbers of training frames, we integrate a pretrained VGG16 model with the U-Net structure (VGG16-U-Net) for neural network training. Using live-cell movies from phase contrast, Total Internal Reflection Fluorescence (TIRF), and spinning disk confocal microscopes, we demonstrate that labeling cell edges in a small portion (5–10%) of the frames provides enough training data for deep learning segmentation. In particular, VGG16-U-Net produces significantly more accurate segmentation than U-Net by increasing recall performance. We expect that our deep learning segmentation pipeline will facilitate quantitative analyses of challenging high-resolution live-cell movies.
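Combining a VGG16 encoder with a U-Net-style decoder can be sketched in Keras roughly as follows. This is an assumed layout for illustration, not the paper's exact architecture; `weights=None` keeps the sketch self-contained, whereas the transfer-learning setup described above would load pretrained weights into the encoder.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

# Encoder: the VGG16 convolutional base. The named intermediate layers
# provide feature maps at successively coarser resolutions.
vgg = VGG16(include_top=False, weights=None, input_shape=(256, 256, 3))
skip_names = ("block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3")
skips = [vgg.get_layer(name).output for name in skip_names]
x = vgg.get_layer("block5_conv3").output  # deepest encoder feature map

# Decoder: upsample, concatenate the matching encoder feature map
# (the U-Net skip connection), then convolve.
for skip in reversed(skips):
    x = layers.UpSampling2D()(x)         # double the spatial resolution
    x = layers.Concatenate()([x, skip])  # skip connection
    x = layers.Conv2D(int(skip.shape[-1]), 3, padding="same",
                      activation="relu")(x)

# Per-pixel probability map of cell edges at full input resolution.
edges = layers.Conv2D(1, 1, activation="sigmoid")(x)
model = Model(vgg.input, edges)
print(model.output_shape)  # (None, 256, 256, 1)
```

With pretrained encoder weights, the low-level filters (edges, textures) come for free, which is what allows the small 5–10% training fraction to suffice.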

