BCM3D 2.0: Accurate segmentation of single bacterial cells in dense biofilms using computationally generated intermediate image representations

2021 ◽  
Author(s):  
Ji Zhang ◽  
Yibo Wang ◽  
Eric Donarski ◽  
Andreas Gahlmann

Accurate detection and segmentation of single cells in three-dimensional (3D) fluorescence time-lapse images is essential for measuring individual cell behaviors in large bacterial communities called biofilms. Recent progress in machine-learning-based image analysis is providing this capability with ever-increasing accuracy. Leveraging the capabilities of deep convolutional neural networks (CNNs), we recently developed bacterial cell morphometry in 3D (BCM3D), an integrated image analysis pipeline that combines deep learning with conventional image analysis to detect and segment single biofilm-dwelling cells in 3D fluorescence images. While the first release of BCM3D (BCM3D 1.0) achieved state-of-the-art 3D bacterial cell segmentation accuracies, low signal-to-background ratios (SBRs) and images of very dense biofilms remained challenging. Here, we present BCM3D 2.0 to address this challenge. BCM3D 2.0 is completely complementary to the approach utilized in BCM3D 1.0. Instead of training CNNs to perform voxel classification, we trained CNNs to translate 3D fluorescence images into intermediate 3D image representations that, when combined appropriately, are more amenable to conventional mathematical image processing than a single experimental image. Using this approach, improved segmentation results are obtained even for very low SBRs and/or high-cell-density biofilm images. The improved cell segmentation accuracies in turn enable improved accuracies when tracking individual cells through 3D space and time, which opens the door to investigating time-dependent phenomena in bacterial biofilms at the cellular level.
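The step downstream of the CNN is conventional: the intermediate maps are combined and handed to standard image-processing operations. Below is a minimal sketch of that idea, assuming the CNN outputs a "cell interior" and a "cell boundary" confidence map and using a seeded watershed; the map names, thresholds, and combination rule are illustrative assumptions, not BCM3D 2.0's actual implementation.

```python
# Sketch: turning CNN-predicted intermediate representations into a 3D
# instance segmentation with a seeded watershed. Map names and thresholds
# are illustrative assumptions, not BCM3D internals.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_from_maps(interior, boundary, seed_thresh=0.7, mask_thresh=0.5):
    """interior, boundary: 3D float arrays in [0, 1] predicted by the CNN."""
    seeds, _ = ndi.label(interior > seed_thresh)   # one marker per cell core
    mask = interior > mask_thresh                  # voxels plausibly inside a cell
    elevation = boundary - interior                # low inside cells, high at contacts
    return watershed(elevation, markers=seeds, mask=mask)

# Toy usage with random maps of a plausible volume size:
interior = np.random.rand(32, 128, 128).astype(np.float32)
boundary = np.random.rand(32, 128, 128).astype(np.float32)
labels = segment_from_maps(interior, boundary)
print(labels.max(), "candidate objects")
```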

2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Mingxing Zhang ◽  
Ji Zhang ◽  
Yibo Wang ◽  
Jie Wang ◽  
Alecia M. Achimovich ◽  
...  

Abstract Fluorescence microscopy enables spatial and temporal measurements of live cells and cellular communities. However, this potential has not yet been fully realized for investigations of individual cell behaviors and phenotypic changes in dense, three-dimensional (3D) bacterial biofilms. Accurate cell detection and cellular shape measurement in densely packed biofilms are challenging because of the limited resolution and low signal-to-background ratios (SBRs) in fluorescence microscopy images. In this work, we present Bacterial Cell Morphometry 3D (BCM3D), an image analysis workflow that combines deep learning with mathematical image analysis to accurately segment and classify single bacterial cells in 3D fluorescence images. In BCM3D, deep convolutional neural networks (CNNs) are trained using simulated biofilm images with experimentally realistic SBRs, cell densities, labeling methods, and cell shapes. We systematically evaluate the segmentation accuracy of BCM3D using both simulated and experimental images. Compared to state-of-the-art bacterial cell segmentation approaches, BCM3D consistently achieves higher segmentation accuracy and further enables automated morphometric cell classifications in multi-population biofilms.
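The training strategy hinges on simulated images that reproduce experimental SBRs and cell densities. Below is a minimal sketch of such a generator, under simplifying assumptions (ellipsoidal cells, a Gaussian stand-in for the PSF, Poisson shot noise) that are not claimed to match the authors' simulation code.

```python
# Sketch: a simulated 3D training volume with a controlled signal-to-
# background ratio (SBR). Cell model, PSF, and noise are simplified
# assumptions, not the authors' simulation code.
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)

def simulate_biofilm(shape=(32, 128, 128), n_cells=40, sbr=2.0,
                     background=100.0, psf_sigma=(2.0, 1.0, 1.0)):
    mask = np.zeros(shape, dtype=np.int32)
    zz, yy, xx = np.indices(shape)
    for label in range(1, n_cells + 1):
        c = [rng.uniform(5, s - 5) for s in shape]
        ry, rx = rng.uniform(3, 8), rng.uniform(3, 8)
        # Crude rod-like cell: an elongated ellipsoid with random radii.
        inside = (((zz - c[0]) / 2.0) ** 2 + ((yy - c[1]) / ry) ** 2
                  + ((xx - c[2]) / rx) ** 2) < 1.0
        mask[inside] = label
    signal = background * (sbr - 1.0)               # (bg + signal) / bg == sbr
    ideal = background + signal * (mask > 0)
    blurred = ndi.gaussian_filter(ideal, psf_sigma)  # stand-in for the PSF
    noisy = rng.poisson(blurred).astype(np.float32)  # shot noise
    return noisy, mask                               # image + ground-truth labels

image, truth = simulate_biofilm()
```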


2013 ◽  
Vol 79 (7) ◽  
pp. 2294-2301 ◽  
Author(s):  
Konstantinos P. Koutsoumanis ◽  
Alexandra Lianou

ABSTRACT Conventional bacterial growth studies rely on large bacterial populations without considering the individual cells. Individual cells, however, can exhibit marked behavioral heterogeneity. Here, we present experimental observations on the colonial growth of 220 individual cells of Salmonella enterica serotype Typhimurium using time-lapse microscopy videos. We found highly heterogeneous behavior. Some cells did not grow, showing filamentation or lysis before division. Cells that were able to grow and form microcolonies showed highly diverse growth dynamics. The quality of the videos allowed for counting the cells over time and estimating the kinetic parameters lag time (λ) and maximum specific growth rate (μmax) for each microcolony originating from a single cell. To interpret the observations, the variability of the kinetic parameters was characterized using appropriate probability distributions and introduced into a stochastic model that takes heterogeneity into account using Monte Carlo simulation. The model provides stochastic growth curves demonstrating that growth of single cells or small microbial populations is a pool of events, each of which has its own probability of occurring. Simulations of the model illustrated how the apparent variability in population growth gradually decreases with increasing initial population size (N0). For bacterial populations with N0 of >100 cells, the variability is almost eliminated and the system seems to behave deterministically, even though the underlying law is stochastic. We also used the model to demonstrate the effect of the presence and extent of a nongrowing population fraction on the stochastic growth of bacterial populations.
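The stochastic model described here lends itself to a compact Monte Carlo implementation. A minimal sketch follows, with illustrative distribution choices and parameter values rather than the paper's fitted ones.

```python
# Sketch: Monte Carlo growth of populations founded by heterogeneous single
# cells. Each founder draws its own lag time (lambda) and maximum specific
# growth rate (mu_max); a fraction never grows. Distributions and parameters
# are illustrative assumptions, not the paper's fitted values.
import numpy as np

rng = np.random.default_rng(1)

def population_curve(n0, t, frac_nongrowing=0.1):
    lam = rng.gamma(shape=4.0, scale=0.5, size=n0)            # lag times (h)
    mu = np.clip(rng.normal(1.2, 0.3, size=n0), 0.1, None)    # mu_max (1/h)
    growing = rng.random(n0) >= frac_nongrowing               # nongrowing fraction
    n = np.ones((n0, t.size))                                 # each founder starts as 1 cell
    active = np.clip(t[None, :] - lam[:, None], 0.0, None)    # time spent growing
    n[growing] = np.exp(mu[growing, None] * active[growing])
    return n.sum(axis=0)

# Apparent variability shrinks as the initial population size N0 grows:
t = np.linspace(0.0, 8.0, 200)
for n0 in (1, 10, 100):
    finals = np.array([population_curve(n0, t)[-1] for _ in range(200)])
    print(f"N0={n0:>3}: CV of final population = {finals.std() / finals.mean():.2f}")
```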


2020 ◽  
Vol 84 (4) ◽  
Author(s):  
Alexander Cambré ◽  
Abram Aertsen

SUMMARY The rise in fluorescence-based imaging techniques over the past three decades has improved the ability of researchers to scrutinize live cell biology at increased spatial and temporal resolution. In microbiology, these real-time vivisections structurally changed the view of the bacterial cell away from the “watery bag of enzymes” paradigm toward the perspective that these organisms are as complex as their eukaryotic counterparts. Capitalizing on the enormous potential of (time-lapse) fluorescence microscopy and the ever-extending palette of corresponding probes, initial breakthroughs were made in unraveling the localization of proteins and monitoring real-time gene expression. However, it later became clear that the potential of this technique extends much further, paving the way for a shift in focus from observing single events within bacterial cells or populations to obtaining a more global picture at the intra- and intercellular level. In this review, we outline the current state of the art in fluorescence-based vivisection of bacteria and provide an overview of important case studies to exemplify how to use or combine different strategies to gain detailed information on the cell’s physiology. The manuscript therefore consists of two separate (but interconnected) parts that can be read and consulted individually. The first part focuses on the fluorescent probe palette and provides a perspective on modern methodologies for microscopy using these tools. The second section of the review takes the reader on a tour through the bacterial cell, from cytoplasm to outer shell, describing strategies and methods to highlight architectural features and overall dynamics within cells.


2021 ◽  
Author(s):  
Owen M. O'Connor ◽  
Razan N. Alnahhas ◽  
Jean-Baptiste Lugagne ◽  
Mary Dunlop

Improvements in microscopy software and hardware have dramatically increased the pace of image acquisition, making analysis a major bottleneck in generating quantitative, single-cell data. Although tools for segmenting and tracking bacteria within time-lapse images exist, most require human input, are specialized to the experimental setup, or lack accuracy. Here, we introduce DeLTA 2.0, a pure-Python workflow that can rapidly and accurately analyze single cells on two-dimensional surfaces to quantify gene expression and cell growth. The algorithm uses deep convolutional neural networks to extract single-cell information from time-lapse images, requiring no human input after training. DeLTA 2.0 retains all the functionality of the original version, which was optimized for bacteria growing in the mother machine microfluidic device, but extends results to two-dimensional growth environments. Two-dimensional environments represent an important class of data because they are more straightforward to implement experimentally, they offer the potential for studies using co-cultures of cells, and they can be used to quantify spatial effects and multi-generational phenomena. However, segmentation and tracking are significantly more challenging tasks in two dimensions due to exponential increases in the number of cells that must be tracked. To showcase this new functionality, we analyze mixed populations of antibiotic-resistant and antibiotic-susceptible cells, and also track pole age and growth rate across generations. In addition to the two-dimensional capabilities, we also introduce several major improvements to the code that increase accessibility, including the ability to accept many standard microscopy file formats and arbitrary image sizes as inputs. DeLTA 2.0 is rapid, with run times of less than 10 minutes for complete movies with hundreds of cells, and is highly accurate, with error rates around 1%, making it a powerful tool for analyzing time-lapse microscopy data.
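For context, the core difficulty of two-dimensional tracking is the frame-to-frame assignment problem. A minimal sketch of the classic overlap-based baseline follows, as an illustration of the task rather than DeLTA 2.0's actual CNN-based tracking model.

```python
# Sketch: the generic overlap-based matching step at the heart of many 2D
# cell trackers. Build an IoU matrix between labeled masks of consecutive
# frames and solve the assignment problem. A common baseline, not DeLTA's
# actual tracking model.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cells(labels_prev, labels_next, min_iou=0.2):
    ids_p = np.unique(labels_prev); ids_p = ids_p[ids_p > 0]
    ids_n = np.unique(labels_next); ids_n = ids_n[ids_n > 0]
    iou = np.zeros((ids_p.size, ids_n.size))
    for i, p in enumerate(ids_p):
        mp = labels_prev == p
        for j, q in enumerate(ids_n):
            mq = labels_next == q
            inter = np.logical_and(mp, mq).sum()
            if inter:
                iou[i, j] = inter / np.logical_or(mp, mq).sum()
    rows, cols = linear_sum_assignment(-iou)           # maximize total IoU
    return [(ids_p[r], ids_n[c]) for r, c in zip(rows, cols)
            if iou[r, c] >= min_iou]                   # drop weak matches
```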


2019 ◽  
Author(s):  
Jean-Baptiste Lugagne ◽  
Haonan Lin ◽  
Mary J. Dunlop

Abstract Microscopy image analysis is a major bottleneck in quantification of single-cell microscopy data, typically requiring human supervision and curation, which limit both accuracy and throughput. To address this, we developed a deep learning-based image analysis pipeline that performs segmentation, tracking, and lineage reconstruction. Our analysis focuses on time-lapse movies of Escherichia coli cells trapped in a “mother machine” microfluidic device, a scalable platform for long-term single-cell analysis that is widely used in the field. While deep learning has been applied to cell segmentation problems before, our approach is fundamentally innovative in that it also uses machine learning to perform cell tracking and lineage reconstruction. With this framework we are able to get high-fidelity results (1% error rate) without human supervision. Further, the algorithm is fast, with complete analysis of a typical frame containing ∼150 cells taking <700 ms. The framework is not constrained to a particular experimental setup and has the potential to generalize to time-lapse images of other organisms or different experimental configurations. These advances open the door to a myriad of applications, including real-time tracking of gene expression and high-throughput analysis of strain libraries at single-cell resolution.

Author Summary Automated microscopy experiments can generate massive data sets, allowing for detailed analysis of cell physiology and properties such as gene expression. In particular, dynamic measurements of gene expression with time-lapse microscopy have proved invaluable for understanding how gene regulatory networks operate. However, image analysis remains a key bottleneck in the analysis pipeline, typically requiring human supervision and a posteriori processing. Recently, machine learning-based approaches have ushered in a new era of rapid, unsupervised image analysis. In this work, we use and repurpose the U-Net deep learning algorithm to develop an image processing pipeline that can not only accurately identify the location of cells in an image, but also track them over time as they grow and divide. As an application, we focus on multi-hour time-lapse movies of bacteria growing in a microfluidic device. Our algorithm is accurate and fast, with error rates near 1% and requiring less than a second to analyze a typical movie frame. This increase in speed and fidelity has the potential to open new experimental avenues, e.g., where images are analyzed on the fly so that experimental conditions can be updated in real time.
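The segmentation half of such a pipeline is commonly a small U-Net. Below is a minimal Keras sketch of that architecture; depth, filter counts, and input size are illustrative and not claimed to match the authors' exact network.

```python
# Sketch: a minimal 2D U-Net of the kind repurposed for cell segmentation.
# Depth, filter counts, and input size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def tiny_unet(input_shape=(256, 256, 1)):
    inp = layers.Input(input_shape)
    # Encoder: two downsampling stages.
    c1 = conv_block(inp, 16); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32);  p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 64)                                   # bottleneck
    # Decoder with skip connections back to the encoder.
    u2 = layers.UpSampling2D()(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.UpSampling2D()(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)      # per-pixel cell probability
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = tiny_unet()
model.summary()
```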


2021 ◽  
Author(s):  
Francesco Padovani ◽  
Benedikt Mairhoermann ◽  
Pascal Falter-Braun ◽  
Jette Lengefeld ◽  
Kurt M Schmoller

Live-cell imaging is a powerful tool to study dynamic cellular processes at the level of single cells with quantitative detail. Microfluidics enables parallel high-throughput imaging, creating a downstream bottleneck at the stage of data analysis. Recent progress in deep learning image analysis has dramatically improved cell segmentation and tracking. Nevertheless, manual data validation and correction are typically still required, and broadly applicable tools spanning the complete range of live-cell imaging analysis, from cell segmentation to pedigree analysis and signal quantification, are still needed. Here, we present Cell-ACDC, a user-friendly, graphical user interface (GUI)-based framework written in Python for segmentation, tracking, and cell cycle annotation. We include two state-of-the-art, high-accuracy deep learning models for single-cell segmentation of yeast and mammalian cells, implemented in the two most widely used deep learning frameworks, TensorFlow and PyTorch. Additionally, we developed and implemented a cell tracking method and embedded it into an intuitive, semi-automated workflow for label-free cell cycle annotation of single cells. The open-source and modularized nature of Cell-ACDC will enable simple and fast integration of new deep learning-based and traditional methods for cell segmentation or downstream image analysis. Source code: https://github.com/SchmollerLab/Cell_ACDC
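The signal-quantification step such frameworks automate reduces to measuring each labeled cell in a fluorescence channel. A minimal sketch using scikit-image follows; the function and variable names are illustrative, not Cell-ACDC's API.

```python
# Sketch: per-cell fluorescence quantification from a labeled segmentation
# mask, the kind of downstream measurement a framework like Cell-ACDC
# automates. Names are illustrative, not Cell-ACDC's API.
import numpy as np
from skimage.measure import regionprops

def quantify_cells(label_mask, fluor_image):
    """Return {cell_id: (area_px, mean_intensity)} for one frame."""
    results = {}
    for region in regionprops(label_mask, intensity_image=fluor_image):
        results[region.label] = (region.area, region.mean_intensity)
    return results

# Toy usage: two square "cells" in a 64x64 frame.
mask = np.zeros((64, 64), dtype=int)
mask[5:15, 5:15] = 1
mask[30:45, 30:45] = 2
fluor = np.random.rand(64, 64)
print(quantify_cells(mask, fluor))
```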


2021 ◽  
Author(s):  
Elliott D. SoRelle ◽  
Scott White ◽  
Benjamin B. Yellen ◽  
Kris C. Wood ◽  
Micah A. Luftig ◽  
...  

Abstract Appropriately tailored segmentation techniques can extract detailed quantitative information from biological image datasets to characterize and better understand sample distributions. Practically, high-resolution characterization of biological samples such as cell populations can provide insights into the sources of variance in biomarker expression, drug resistance, and other phenotypic aspects, but it remains unclear which method best extracts this information from large image-based datasets. We present a software pipeline and a comparison of multiple image segmentation methods for extracting single-cell morphological and fluorescence quantitation from time-lapse images of clonal growth acquired with a recently reported microfluidic system. The inputs to all pipelines are thousands of unprocessed images; the outputs are cell counts, chamber identifiers, and the morphological properties of each clone over time, detected through multi-channel fluorescence and bright-field imaging. We conclude that unsupervised learning methods for cell segmentation substantially outperform supervised statistical methods with respect to accuracy, and that they have key advantages including individual cell instance detection and flexibility through model training. We expect this system and software to have broad utility for researchers interested in high-throughput single-cell biology.
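Comparing segmentation methods requires an object-level accuracy score. A minimal sketch of one common convention follows (a ground-truth cell counts as detected if some predicted object overlaps it at IoU ≥ 0.5); this criterion is an assumption, not necessarily the paper's exact metric.

```python
# Sketch: object-level detection score for comparing segmentation methods
# against a ground-truth labeled mask. The IoU >= 0.5 criterion follows a
# common convention and is an illustrative assumption.
import numpy as np

def detection_score(truth, pred, iou_thresh=0.5):
    """Fraction of ground-truth cells matched by a prediction at >= iou_thresh."""
    matched = 0
    truth_ids = np.unique(truth); truth_ids = truth_ids[truth_ids > 0]
    for t in truth_ids:
        mt = truth == t
        # Consider only predicted objects that touch this true cell.
        cand = np.unique(pred[mt]); cand = cand[cand > 0]
        best = 0.0
        for p in cand:
            mp = pred == p
            best = max(best, np.logical_and(mt, mp).sum()
                             / np.logical_or(mt, mp).sum())
        if best >= iou_thresh:
            matched += 1
    return matched / max(truth_ids.size, 1)
```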


2021 ◽  
Author(s):  
Georgeos Hardo ◽  
Somenath Bakshi

Abstract Stochastic gene expression causes phenotypic heterogeneity in a population of genetically identical bacterial cells. Such non-genetic heterogeneity can have important consequences for population fitness, and cells therefore implement regulation strategies to either suppress or exploit such heterogeneity to adapt to their circumstances. By employing time-lapse microscopy of single cells, the fluctuation dynamics of gene expression can be analysed and their regulatory mechanisms thus deciphered. However, careful consideration of the experimental design and data analysis is needed to produce data from which meaningful insights can be derived. In the present paper, the individual steps and challenges involved in a time-lapse experiment are discussed, and a rigorous framework for designing, performing, and extracting single-cell gene expression dynamics data from such experiments is outlined.
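One standard way to quantify fluctuation dynamics from such data is the autocorrelation of a mean-subtracted single-cell expression trace. A minimal sketch on a synthetic trace follows; the trace model and parameters are illustrative assumptions.

```python
# Sketch: fluctuation-dynamics analysis of a single-cell expression trace
# via the normalized autocorrelation. The synthetic trace (slow AR(1)
# fluctuations plus measurement noise) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)

def autocorrelation(trace):
    x = trace - trace.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    return acf / acf[0]                      # normalize so acf[0] == 1

n = 300
signal = np.zeros(n)
for i in range(1, n):                        # slow, correlated fluctuations
    signal[i] = 0.95 * signal[i - 1] + rng.normal(0, 1)
trace = 100 + 10 * signal + rng.normal(0, 2, n)

acf = autocorrelation(trace)
timescale = np.argmax(acf < 1 / np.e)        # decay time in frames
print(f"fluctuation timescale ~ {timescale} frames")
```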


Microscopy ◽  
2019 ◽  
Vol 68 (4) ◽  
pp. 338-341 ◽  
Author(s):  
Kohki Konishi ◽  
Masafumi Mimura ◽  
Takao Nonaka ◽  
Ichiro Sase ◽  
Hideo Nishioka ◽  
...  

Abstract Segmentation of three-dimensional (3D) electron microscopy (EM) image stacks is an arduous and tedious task. Deep convolutional neural networks (CNNs) work well to automate the segmentation; however, they require a large training dataset, which is a major impediment. To solve this issue, especially for sparse segmentation, we used a CNN with a minimal training dataset. We segmented a cerebellar Purkinje cell from an image stack of mouse cerebellar cortex in less than two working days, much faster than the conventional method. We conclude that the total labor time for sparse segmentation can be reduced by reducing the training dataset.
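One common way to make a CNN trainable from a minimal dataset is aggressive geometric augmentation of the few annotated slices. A minimal sketch with generic transforms follows, not necessarily the augmentation used in this work.

```python
# Sketch: stretching a minimal annotated dataset with simple geometric
# augmentation, a generic way to train a CNN from very few EM slices.
import numpy as np

rng = np.random.default_rng(3)

def augment(image, mask):
    """Apply the same random rotation/flips to an image and its label mask."""
    k = int(rng.integers(0, 4))
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:
        image, mask = np.flipud(image), np.flipud(mask)
    return image, mask
```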

