Graph-based method for cell segmentation and detection in live-cell fluorescence microscope imaging

2022 ◽  
Vol 71 ◽  
pp. 103071
Author(s):  
Katarzyna Hajdowska ◽  
Sebastian Student ◽  
Damian Borys
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Fatemeh Hadaeghi ◽  
Björn-Philipp Diercks ◽  
Daniel Schetelig ◽  
Fabrizio Damicelli ◽  
Insa M. A. Wolf ◽  
...  

Advances in high-resolution live-cell Ca²⁺ imaging have enabled subcellular localization of early Ca²⁺ signaling events in T-cells and paved the way to investigating the interplay between receptors and potential target channels in Ca²⁺ release events. The huge amount of acquired data requires efficient, ideally automated image-processing pipelines, with cell localization/segmentation as central tasks. Automated segmentation in live-cell cytosolic Ca²⁺ imaging data is, however, challenging due to temporal image-intensity fluctuations, low signal-to-noise ratio, and photo-bleaching. Here, we propose a reservoir computing (RC) framework for efficient and temporally consistent segmentation. Experiments were conducted with Jurkat T-cells and anti-CD3-coated beads used for T-cell activation. We compared the RC performance with a standard U-Net and a convolutional long short-term memory (LSTM) model. The RC-based models (1) perform on par with the deep learning models in terms of segmentation accuracy for cell-only segmentation, but show improved temporal segmentation consistency compared to the U-Net; (2) outperform the U-Net for two-emission-wavelength image segmentation and differentiation of T-cells and beads; and (3) perform on par with the convolutional LSTM for single-emission-wavelength T-cell/bead segmentation and differentiation. Moreover, the RC models contain only a fraction of the parameters of the baseline models and reduce the training time considerably.
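The core appeal of reservoir computing described above is that only a linear readout is trained, while the recurrent reservoir stays fixed. A minimal echo state network sketch of per-pixel classification follows; all sizes, thresholds, and the toy data are illustrative assumptions, not the paper's actual RC framework.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RES = 50              # reservoir size (assumed, for illustration)
SPECTRAL_RADIUS = 0.9   # keeps reservoir dynamics stable
LEAK = 0.3              # leaky-integration rate

# Fixed, untrained weights: input projection and recurrent reservoir.
W_in = rng.uniform(-0.5, 0.5, size=(N_RES, 1))
W = rng.uniform(-0.5, 0.5, size=(N_RES, N_RES))
W *= SPECTRAL_RADIUS / max(abs(np.linalg.eigvals(W)))  # rescale spectral radius

def reservoir_state(series):
    """Drive the reservoir with a scalar time series; return the final state."""
    x = np.zeros(N_RES)
    for u in series:
        x = (1 - LEAK) * x + LEAK * np.tanh(W_in @ np.array([u]) + W @ x)
    return x

# Toy pixel time series: "cell" pixels fluctuate around a high intensity,
# "background" pixels around a low one.
T = 30
cells = rng.normal(1.0, 0.1, size=(100, T))
background = rng.normal(0.0, 0.1, size=(100, T))
X = np.array([reservoir_state(s) for s in np.vstack([cells, background])])
y = np.array([1.0] * 100 + [0.0] * 100)

# Ridge-regression readout -- the only trained component of the model.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)

preds = (X @ W_out > 0.5).astype(int)
accuracy = (preds == y).mean()
```

Because training reduces to one linear solve, the parameter count and training time stay small, which is the trade-off the abstract highlights against the U-Net and convolutional LSTM baselines.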


Cytometry ◽  
1992 ◽  
Vol 13 (5) ◽  
pp. 453-461 ◽  
Author(s):  
Daniel E. Callahan ◽  
Amna Karim ◽  
Gemin Zheng ◽  
Paul O. P. Ts'o ◽  
Stephen A. Lesko

2021 ◽  
Author(s):  
Marc Raphael ◽  
Michael Robitaille ◽  
Jeff Byers ◽  
Joseph Christodoulides

Machine learning algorithms hold the promise of greatly improving live-cell image analysis by (1) analyzing far more imagery than can be achieved by more traditional manual approaches and (2) eliminating the subjectivity of researchers and diagnosticians selecting the cells or cell features to be included in the analyzed data set. Currently, however, even the most sophisticated model-based or machine learning algorithms require user supervision, meaning the subjectivity problem is not removed but rather incorporated into the algorithm's initial training steps and then repeatedly applied to the imagery. To address this roadblock, we have developed a self-supervised machine learning algorithm that recursively trains itself directly on the live-cell imagery data, thus providing objective segmentation and quantification. The approach incorporates an optical-flow component to self-label cell and background pixels for training, followed by the extraction of additional feature vectors for the automated generation of a cell/background classification model. Because it is self-trained, the software has no user-adjustable parameters and does not require curated training imagery. The algorithm was applied to automatically segment cells from their background for a variety of cell types and five commonly used imaging modalities: fluorescence, phase contrast, differential interference contrast (DIC), transmitted light, and interference reflection microscopy (IRM). The approach is broadly applicable in that it enables completely automated cell segmentation for long-term live-cell phenotyping applications, regardless of the input imagery's optical modality, magnification, or cell type.
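The self-labeling idea above can be sketched in a few lines: pixels that move between frames are provisionally labeled "cell", static pixels "background", and those labels then bootstrap a simple intensity classifier. A plain frame difference stands in for a full optical-flow estimate here, and all thresholds and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic frames sharing the same background noise, with a bright
# "cell" patch that shifts one pixel to the right between frames.
H, W = 64, 64
frame0 = rng.normal(0.0, 0.05, (H, W))
frame1 = frame0.copy()
frame0[20:30, 20:30] += 1.0
frame1[20:30, 21:31] += 1.0

# Crude stand-in for a flow-magnitude map: motion shows up at the patch edges.
motion = np.abs(frame1 - frame0)
labels = motion > 0.5            # self-labeled foreground pixels (no human input)

# "Train" a one-feature classifier (an intensity threshold) from the self-labels.
fg_mean = frame1[labels].mean()
bg_mean = frame1[~labels].mean()
threshold = 0.5 * (fg_mean + bg_mean)
segmentation = frame1 > threshold
```

The trained threshold then segments the whole bright patch, including the interior pixels the motion cue never labeled, which is the point of the two-stage self-labeling-then-classification design.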



2007 ◽  
Vol 18 (5) ◽  
pp. 1645-1656 ◽  
Author(s):  
Marie A. Janicke ◽  
Loren Lasko ◽  
Rudolf Oldenbourg ◽  
James R. LaFountain

This study investigated the basis of meiosis II nondisjunction. Cold arrest induced a fraction of meiosis II crane-fly spermatocytes to form (n + 1) and (n − 1) daughters during recovery. Live-cell imaging with a liquid-crystal polarized-light microscope showed that nondisjunction was caused by chromosome malorientation. Whereas amphitely (sister kinetochore fibers to opposite poles) is normal, cold recovery induced anaphase syntely (sister fibers to the same pole) and merotely (fibers to both poles from one kinetochore). Maloriented chromosomes had stable metaphase positions near the equator or between the equator and a pole. Syntelics were at the spindle periphery at metaphase; their sisters disconnected at anaphase and moved all the way to a centrosome as their strongly birefringent kinetochore fibers shortened. The kinetochore fibers of merotelics shortened little, if any, during anaphase, making anaphase lag common. If one fiber of a merotelic was more birefringent than the other, the less birefringent fiber lengthened with anaphase spindle elongation, often permitting inclusion of merotelics in a daughter nucleus. Meroamphitely (near amphitely but with some merotely) caused sisters to move in opposite directions. In contrast, syntely and merosyntely (near syntely but with some merotely) resulted in nondisjunction. Anaphase malorientations were more frequent after longer arrests, with particularly long arrests required to induce syntely and merosyntely.


2021 ◽  
Vol 11 (6) ◽  
pp. 2692
Author(s):  
Danny Salem ◽  
Yifeng Li ◽  
Pengcheng Xi ◽  
Hilary Phenix ◽  
Miroslava Cuperlovic-Culf ◽  
...  

Accurate and efficient segmentation of live-cell images is critical for maximizing data extraction and knowledge generation from high-throughput biology experiments. Despite recent development of deep-learning tools for biomedical imaging applications, there remains great demand for automated segmentation tools for high-resolution live-cell microscopy images to accelerate the analysis. We have designed and trained a U-Net convolutional network (named YeastNet) to conduct semantic segmentation on bright-field microscopy images and generate segmentation masks for cell labeling and tracking. YeastNet dramatically improves on the performance of the non-trainable classic algorithm and performs considerably better than current state-of-the-art yeast-cell segmentation tools, enabling accurate automatic segmentation and tracking of yeast cells in biomedical applications. YeastNet is freely provided, with model weights, as a Python package on GitHub.
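The U-Net architecture that YeastNet builds on pairs an encoder that downsamples with a decoder that upsamples, concatenating encoder features onto decoder features at matching resolution via skip connections. A shapes-only sketch of one encoder/decoder level follows; it uses no learned weights and is not YeastNet's actual code.

```python
import numpy as np

def pool2x(x):
    """2x2 max pooling on a (C, H, W) feature map (encoder downsampling)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample2x(x):
    """Nearest-neighbour 2x upsampling on a (C, H, W) feature map (decoder)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_level(img):
    """One-level encoder/decoder pass with a skip connection."""
    enc = img                     # encoder features at full resolution
    bottleneck = pool2x(enc)      # downsampled representation
    dec = upsample2x(bottleneck)  # decoder path back to full resolution
    # Skip connection: concatenate encoder and decoder features along channels.
    return np.concatenate([enc, dec], axis=0)

out = unet_level(np.zeros((1, 8, 8)))   # -> feature map of shape (2, 8, 8)
```

In the full network, convolutions and nonlinearities sit between these steps at every level; the skip connections are what let the decoder recover the sharp cell boundaries that a semantic segmentation mask needs.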

