Neural network strategies for plasma membrane selection in quantitative fluorescence microscopy images

Author(s):  
Daniel Wirth ◽  
Alec McCall ◽  
Kalina Hristova
Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 22
Author(s):  
Yannis Kalaidzidis ◽  
Hernán Morales-Navarrete ◽  
Inna Kalaidzidis ◽  
Marino Zerial

Fluorescently tagged proteins are widely used to study the dynamics of intracellular organelles. Peripheral proteins associate with organelles only transiently, and a significant fraction of them resides in the cytosol. Image analysis of peripheral proteins therefore poses the problem of properly discriminating the membrane-associated signal from the cytosolic one. In most cases, signals from organelles are compact compared with the diffuse signal from the cytosol. Commonly used background-estimation methods rely on the assumption that background and foreground signals are separable by spatial frequency filters. However, large unstained organelles (e.g., nuclei) cause abrupt changes in cytosolic intensity and lead to errors in the background estimate; such mistakes produce artifacts in the reconstructed foreground signal. We developed a new algorithm that estimates the background intensity in fluorescence microscopy images without producing artifacts at the borders of nuclei.
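The conventional approach the abstract critiques can be illustrated with a minimal sketch: estimate the diffuse cytosolic background with a Gaussian low-pass filter and subtract it to recover the compact foreground. The function name and parameters here are illustrative, not from the paper; a zero-padded convolution like this one also exhibits the kind of boundary bias near abrupt intensity changes that the authors' algorithm is designed to avoid.

```python
import numpy as np

def lowpass_background(img, sigma=8.0):
    """Estimate slowly varying background by separable Gaussian low-pass
    filtering (the spatial-frequency assumption the abstract describes)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Separable 2D convolution: filter rows, then columns.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return blurred

# Synthetic frame: flat cytosol (intensity 100) plus one bright punctum.
img = np.full((64, 64), 100.0)
img[32, 32] += 500.0

bg = lowpass_background(img)        # smooth background estimate
fg = img - bg                       # compact foreground signal
```

Because the punctum is spatially compact, the low-pass background stays near the cytosolic level while the subtraction preserves most of the punctum's intensity; the method breaks down when the background itself changes abruptly, as at nuclear borders.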


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Luzhe Huang ◽  
Hanlong Chen ◽  
Yilin Luo ◽  
Yair Rivenson ◽  
Aydogan Ozcan

Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields, including the physical, medical and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. In experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth-of-field of a 63×/1.4NA objective lens, while also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrate the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images, covering various axial permutations and unknown axial positioning errors. We also demonstrate wide-field to confocal cross-modality image transformations using the Recurrent-MZ framework, performing 3D image reconstruction of a sample from a few wide-field 2D fluorescence images as input and matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.
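The key architectural idea, a recurrent unit that fuses a variable number of input planes into one state so the network can accept arbitrary axial sequences, can be sketched in a toy, non-convolutional form. This is not the authors' model: the scalar weights and the `recurrent_fuse` helper are hypothetical stand-ins for the convolutional recurrent units in Recurrent-MZ, shown only to make the variable-length fusion concrete.

```python
import numpy as np

def recurrent_fuse(planes, w_x=0.5, w_h=0.8):
    """Fold a sequence of 2D planes into a single hidden state via a
    simple recurrent update h <- tanh(w_x * x + w_h * h).
    A toy stand-in for convolutional recurrent fusion of axial planes."""
    h = np.zeros_like(planes[0])
    for x in planes:  # one step per captured axial plane
        h = np.tanh(w_x * x + w_h * h)
    return h

rng = np.random.default_rng(0)

# The same fusion rule accepts 3 planes or 5 planes without any change,
# mirroring the flexibility to sparse, variable axial sampling.
fused3 = recurrent_fuse([rng.standard_normal((32, 32)) for _ in range(3)])
fused5 = recurrent_fuse([rng.standard_normal((32, 32)) for _ in range(5)])
```

In the actual framework the recurrent state would be decoded into a full 3D stack; the point of the sketch is only that a recurrent formulation decouples the number of reconstruction outputs from the number of measured input planes.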


2021 ◽  
Vol 120 (3) ◽  
pp. 360a
Author(s):  
Rayna M. Addabbo ◽  
John Kohler ◽  
Isaac Angert ◽  
Yan Chen ◽  
Heather Hanson ◽  
...  
