Ultra-fast fit-free analysis of complex fluorescence lifetime imaging via deep learning

2019 ◽  
Author(s):  
Jason T. Smith ◽  
Ruoyang Yao ◽  
Nattawut Sinsuebphon ◽  
Alena Rudkouskaya ◽  
Joseph Mazurkiewicz ◽  
...  

Abstract: Fluorescence lifetime imaging (FLI) provides unique quantitative information in biomedical and molecular biology studies, but relies on complex data-fitting techniques to derive the quantities of interest. Herein, we propose a novel fit-free approach to FLI image formation that is based on deep learning (DL) to quantify complex fluorescence decays simultaneously over a whole image and at ultra-fast speeds. Our deep neural network (DNN), named FLI-Net, is designed and trained on model-generated data to provide all the lifetime-based parameters that are typically employed in the field. We demonstrate the accuracy and generalizability of FLI-Net by performing quantitative microscopic and preclinical experimental lifetime-based studies across the visible and NIR spectra, as well as across the two main data acquisition technologies. Our results demonstrate that FLI-Net is well suited to quantify complex fluorescence lifetimes accurately and in real time, in cells and intact animals, without any parameter settings. Hence, it paves the way to reproducible and quantitative lifetime studies at unprecedented speeds, for improved dissemination and impact of FLI in many important biomedical applications, especially in clinical settings.
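
The abstract notes that FLI-Net is trained on model-generated rather than experimental data. Below is a minimal sketch of what such synthetic training data could look like, assuming a bi-exponential decay convolved with a Gaussian instrument response function (IRF) and corrupted by Poisson shot noise; the bin width, photon budget, and parameter ranges are illustrative placeholders, not the settings used in the paper.

    import numpy as np

    def simulate_decay(tau1, tau2, frac, irf, n_bins=256, bin_width_ns=0.04,
                       n_photons=2000, rng=None):
        """Generate one synthetic training pair: a noisy bi-exponential decay
        (convolved with the IRF) and its ground-truth parameters.

        Illustrative only: bin width, photon budget, and parameter ranges are
        placeholders, not the values used for FLI-Net.
        """
        rng = rng or np.random.default_rng()
        t = np.arange(n_bins) * bin_width_ns
        decay = frac * np.exp(-t / tau1) + (1.0 - frac) * np.exp(-t / tau2)
        decay = np.convolve(decay, irf)[:n_bins]      # apply instrument response
        decay /= decay.sum()                          # normalize to a probability
        counts = rng.poisson(decay * n_photons)       # Poisson (shot) noise
        return counts.astype(np.float32), np.array([tau1, tau2, frac], np.float32)

    # Example: one training sample with a narrow Gaussian IRF and random parameters
    rng = np.random.default_rng(0)
    t = np.arange(256) * 0.04
    irf = np.exp(-0.5 * ((t - 1.0) / 0.1) ** 2)
    x, y = simulate_decay(rng.uniform(0.2, 0.8), rng.uniform(1.0, 3.0),
                          rng.uniform(0.0, 1.0), irf, rng=rng)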

2019 ◽  
Vol 116 (48) ◽  
pp. 24019-24030 ◽  
Author(s):  
Jason T. Smith ◽  
Ruoyang Yao ◽  
Nattawut Sinsuebphon ◽  
Alena Rudkouskaya ◽  
Nathan Un ◽  
...  

Fluorescence lifetime imaging (FLI) provides unique quantitative information in biomedical and molecular biology studies but relies on complex data-fitting techniques to derive the quantities of interest. Herein, we propose a fit-free approach to FLI image formation that is based on deep learning (DL) to quantify fluorescence decays simultaneously over a whole image and at fast speeds. We report on a deep neural network (DNN) architecture, named fluorescence lifetime imaging network (FLI-Net), that is designed and trained for different classes of experiments, including visible FLI and near-infrared (NIR) FLI microscopy (FLIM) and NIR gated macroscopy FLI (MFLI). FLI-Net quantitatively outputs the spatially resolved lifetime-based parameters that are typically employed in the field. We validate the utility of the FLI-Net framework by performing quantitative microscopic and preclinical lifetime-based studies across the visible and NIR spectra, as well as across the two main data acquisition technologies. These results demonstrate that FLI-Net is well suited to accurately quantify complex fluorescence lifetimes in cells and, in real time, in intact animals without any parameter settings. Hence, FLI-Net paves the way to reproducible and quantitative lifetime studies at unprecedented speeds, for improved dissemination and impact of FLI in many important biomedical applications ranging from fundamental discoveries in molecular and cellular biology to clinical translation.
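
The abstracts describe FLI-Net only at a high level, so the toy model below is not the published architecture; it merely illustrates the input-to-output mapping the text describes (a time-resolved decay per pixel in; short lifetime, long lifetime, and fractional amplitude out) using a hypothetical 1D convolutional network in PyTorch.

    import torch
    import torch.nn as nn

    class LifetimeNet(nn.Module):
        """Toy decay-to-parameters network (NOT the published FLI-Net).

        Maps a per-pixel decay histogram to three lifetime-based parameters:
        tau_1, tau_2, and the fractional amplitude of the short component.
        """
        def __init__(self, n_bins=256):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(32, 64), nn.ReLU(),
                nn.Linear(64, 3),      # tau_1, tau_2, fractional amplitude
            )

        def forward(self, decay):      # decay shape: (batch, 1, n_bins)
            return self.head(self.features(decay))

    # One forward pass on a batch of (e.g. simulated) decays
    model = LifetimeNet()
    params = model(torch.rand(8, 1, 256))    # -> tensor of shape (8, 3)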


Author(s):  
Jason T. Smith ◽  
Ruoyang Yao ◽  
Sez-Jade Chen ◽  
Nattawut Sinsuebphon ◽  
Alena Rudkouskaya ◽  
...  

2002 ◽  
Vol 13 (11) ◽  
pp. 26 ◽  
Author(s):  
Dan Elson ◽  
Stephen Webb ◽  
Jan Siegel ◽  
Klaus Suhling ◽  
Dan Davis ◽  
...  

2020 ◽  
Author(s):  
Yuan-I Chen ◽  
Yin-Jui Chang ◽  
Shih-Chu Liao ◽  
Trung Duc Nguyen ◽  
Jianchen Yang ◽  
...  

Abstract: Fluorescence lifetime imaging microscopy (FLIM) is a powerful tool to quantify molecular compositions and study molecular states in the complex cellular environment, as the lifetime readings are not biased by fluorophore concentration or excitation power. However, current methods to generate FLIM images are either computationally intensive or unreliable when the number of photons acquired at each pixel is low. Here we introduce a new deep learning-based method, termed flimGANE (fluorescence lifetime imaging based on Generative Adversarial Network Estimation), that can rapidly generate accurate and high-quality FLIM images even under photon-starved conditions. We demonstrate that our model is not only 258 times faster than the most popular time-domain least-squares estimation (TD_LSE) method but also provides more accurate analysis in barcode identification, cellular structure visualization, Förster resonance energy transfer characterization, and metabolic state analysis. With its advantages in speed and reliability, flimGANE is particularly useful in fundamental biological research and clinical applications, where ultrafast analysis is critical.
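
For context on the baseline flimGANE is compared against, the sketch below shows a bare-bones time-domain least-squares fit of a single pixel's decay histogram with SciPy. It is a simplified stand-in for the TD_LSE method named in the abstract (mono-exponential only, no IRF deconvolution), and the bin width and photon counts are illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def mono_exp(t, amplitude, tau, offset):
        """Mono-exponential decay model used for the least-squares fit."""
        return amplitude * np.exp(-t / tau) + offset

    def fit_lifetime(t, counts):
        """Least-squares lifetime estimate for one pixel's decay histogram.

        Simplified stand-in for a time-domain least-squares (TD_LSE) fit;
        real FLIM fitting typically also deconvolves the instrument response
        and may use multi-exponential models.
        """
        p0 = (counts.max(), 2.0, counts.min())          # crude initial guess
        popt, _ = curve_fit(mono_exp, t, counts, p0=p0, maxfev=5000)
        return popt[1]                                   # fitted lifetime (ns)

    # Example on a synthetic decay (2 ns lifetime, Poisson noise)
    rng = np.random.default_rng(1)
    t = np.arange(256) * 0.05                            # ns
    counts = rng.poisson(500 * np.exp(-t / 2.0) + 5)
    print(f"fitted lifetime ≈ {fit_lifetime(t, counts):.2f} ns")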


Author(s):  
Marien Ochoa ◽  
Alena Rudkouskaya ◽  
Ruoyang Yao ◽  
Pingkun Yan ◽  
Margarida Barroso ◽  
...  

Acquiring dense high-dimensional optical data in biological applications remains a challenge due to the very low levels of light typically encountered. Single-pixel imaging methodologies enable improved detection efficiency in such conditions but are still limited by relatively slow acquisition times. Here, we propose a deep learning framework, NetFLICS-CR, which enables fast hyperspectral lifetime imaging for in vivo applications at enhanced resolution and at faster acquisition and processing speeds, without the need for experimental training datasets. NetFLICS-CR reconstructs intensity and lifetime images at 128×128 pixels over 16 spectral channels while reducing the current acquisition times from ∼2.5 hours at 50% compression to ∼3 minutes at 99% compression when using a single-pixel Hyperspectral Macroscopic Fluorescence Lifetime Imaging (HMFLI) system. The potential of the technique is demonstrated in silico, in vitro, and in vivo through the monitoring of receptor-ligand interactions in mouse liver and bladder, and through imaging of intracellular delivery of the clinical drug Trastuzumab in live animals bearing HER2-positive breast tumor xenografts.
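
The compression figures quoted above follow from the single-pixel measurement model: the detector records one value per illumination pattern, so 99% compression corresponds to measuring roughly 1% as many patterns as there are pixels. A minimal NumPy sketch of that measurement model is given below; the random binary patterns and the test scene are placeholders, and the deep-learning reconstruction itself is not reproduced here.

    import numpy as np

    def single_pixel_measurements(image, n_patterns, rng=None):
        """Simulate single-pixel acquisition: each measurement is the inner
        product of the scene with one binary illumination/detection pattern.

        Illustrative only; NetFLICS-CR's actual patterns, ordering, and
        reconstruction network are described in the paper, not here.
        """
        rng = rng or np.random.default_rng()
        patterns = rng.integers(0, 2, size=(n_patterns, image.size))  # 0/1 patterns
        return patterns @ image.ravel(), patterns

    # A 128x128 scene at 99% compression keeps only ~1% of the 16384 patterns
    scene = np.random.default_rng(2).random((128, 128))
    compression = 0.99
    n_meas = int(scene.size * (1 - compression))
    y, A = single_pixel_measurements(scene, n_meas)
    print(f"{n_meas} measurements for {scene.size} pixels "
          f"({100 * compression:.0f}% compression)")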


Author(s):  
Valentin Kapitany ◽  
Alex Turpin ◽  
Jamie Whitelaw ◽  
Ewan McGhee ◽  
Robert Insall ◽  
...  
