Phasetime: Deep Learning Approach to Detect Nuclei in Time Lapse Phase Images

2019 ◽  
Vol 8 (8) ◽  
pp. 1159 ◽  
Author(s):  
Pengyu Yuan ◽  
Ali Rezvan ◽  
Xiaoyang Li ◽  
Navin Varadarajan ◽  
Hien Van Nguyen

Time lapse microscopy is essential for quantifying the dynamics of cells, subcellular organelles and biomolecules. Biologists use different fluorescent tags to label and track subcellular structures and biomolecules within cells. However, not all of them are compatible with time lapse imaging, and the labeling itself can perturb the cells in undesirable ways. We hypothesized that the phase image has the requisite information to identify and track nuclei within cells. By using traditional blob detection to generate binary mask labels from the stained-channel images and the deep learning Mask R-CNN model to train a detection and segmentation model, we managed to segment nuclei based only on phase images. The detection average precision is 0.82 when the IoU threshold is set to 0.5, and the mean IoU between masks generated from phase images and ground-truth masks from experts is 0.735. Given that no ground-truth mask labels were used at training time, these results support our hypothesis. This result enables the detection of nuclei without the need for exogenous labeling.
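The label-generation step described above (blob detection on the stained channel producing binary masks that serve as training labels) can be sketched as follows; the thresholding approach, threshold value, and minimum-area filter here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
from scipy import ndimage

def masks_from_stained_channel(img, thresh=0.5, min_area=20):
    """Generate per-nucleus binary masks from a fluorescence (stained) image.

    A simplified stand-in for the blob-detection step: threshold the
    stained channel, label connected components, and keep blobs above a
    minimum area. Threshold and area values are illustrative only.
    """
    binary = img > thresh
    labels, n = ndimage.label(binary)
    masks = []
    for i in range(1, n + 1):
        mask = labels == i
        if mask.sum() >= min_area:
            masks.append(mask)
    return masks

# Synthetic stained image with two bright "nuclei"
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0
img[40:55, 40:55] = 1.0
masks = masks_from_stained_channel(img)  # two binary masks
```

Masks produced this way would then pair with the corresponding phase images as (input, label) examples for Mask R-CNN training.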

2021 ◽  
Author(s):  
Sayan Kahali ◽  
Satya V.V.N. Kothapalli ◽  
Xiaojian Xu ◽  
Ulugbek S Kamilov ◽  
Dmitriy A Yablonskiy

Purpose: To introduce a Deep-Learning-Based Accelerated and Noise-Suppressed Estimation (DANSE) method for reconstructing quantitative maps of cellular-specific and hemodynamic-specific parameters of biological tissue from Gradient-Recalled-Echo (GRE) MRI data with multiple gradient echoes. Methods: The DANSE method adapts a supervised learning paradigm to train a convolutional neural network for robust estimation of these maps, free from the adverse effects of macroscopic (B0) magnetic field inhomogeneities, directly from GRE magnitude images without utilizing phase images. The corresponding ground-truth maps were generated by voxel-by-voxel fitting of a previously developed biophysical quantitative GRE (qGRE) model, which accounts for tissue, hemodynamic, and B0-inhomogeneity contributions to the multi-gradient-echo GRE signal, using a nonlinear least squares (NLLS) algorithm. Results: We show that the DANSE model efficiently estimates the aforementioned brain maps and preserves all features of the NLLS approach, with significant improvements including noise suppression and computation speed (from many hours to seconds). The noise-suppression feature of DANSE is especially prominent for data with SNR characteristic of typical GRE data (SNR ~ 50), where DANSE-generated maps had three times smaller errors than those of the NLLS method. Conclusions: The DANSE method enables fast reconstruction of magnetic-field-inhomogeneity-free and noise-suppressed quantitative qGRE brain maps. It does not require any information about field inhomogeneities during application; it exploits spatial patterns in the qGRE MRI data and previously gained knowledge from the biophysical model, thus producing clean brain maps even in environments with high noise levels. These features, along with fast computational speed, can lead to broad qGRE clinical and research applications.
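The voxel-by-voxel NLLS fitting used to generate the ground-truth maps can be sketched with a simplified monoexponential decay standing in for the full qGRE biophysical model (the echo times, parameter values, and variable names below are illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit

# Echo times (s) for a multi-gradient-echo acquisition (illustrative values)
te = np.linspace(0.002, 0.040, 10)

def gre_signal(t, s0, r2s):
    """Monoexponential GRE magnitude decay -- a simplified stand-in for the
    full qGRE model with tissue, hemodynamic, and B0 contributions."""
    return s0 * np.exp(-r2s * t)

# Simulate one voxel's magnitude signal, then fit it as NLLS would,
# voxel by voxel, to build a parameter map
true_s0, true_r2s = 100.0, 30.0  # decay rate in 1/s
signal = gre_signal(te, true_s0, true_r2s)
popt, _ = curve_fit(gre_signal, te, signal, p0=[80.0, 20.0])
```

Repeating such a fit for every voxel is what makes NLLS slow (hours per dataset); DANSE replaces this loop with a single network forward pass trained on NLLS outputs.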


Author(s):  
Yang Zhang ◽  
Siwa Chan ◽  
Jeon-Hor Chen ◽  
Kai-Ting Chang ◽  
Chin-Yao Lin ◽  
...  

To develop a U-net deep learning method for breast tissue segmentation on fat-sat T1-weighted (T1W) MRI using transfer learning (TL) from a model developed for non-fat-sat images. The training dataset (N = 126) was imaged on a 1.5 T MR scanner, and the independent testing dataset (N = 40) was imaged on a 3 T scanner, both using a fat-sat T1W pulse sequence. Pre-contrast images acquired in the dynamic-contrast-enhanced (DCE) MRI sequence were used for analysis. All patients had unilateral cancer, and the segmentation was performed using the contralateral normal breast. The ground truth of breast and fibroglandular tissue (FGT) segmentation was generated using a template-based segmentation method with a clustering algorithm. The deep learning segmentation was performed using U-net models trained with and without TL, by using initial values of trainable parameters taken from the previous model for non-fat-sat images. The ground truth of each case was used to evaluate the segmentation performance of the U-net models by calculating the Dice similarity coefficient (DSC) and the overall accuracy based on all pixels. Pearson's correlation was used to evaluate the correlation of breast volume and FGT volume between the U-net prediction output and the ground truth. In the training dataset, the evaluation was performed using tenfold cross-validation, and the mean DSC with and without TL was 0.97 vs. 0.95 for breast and 0.86 vs. 0.80 for FGT. When the final models developed with and without TL from the training dataset were applied to the testing dataset, the mean DSC was 0.89 vs. 0.83 for breast and 0.81 vs. 0.81 for FGT, respectively. Application of TL not only improved the DSC, but also decreased the required number of training cases. Lastly, there was a high correlation (R2 > 0.90) for both the training and testing datasets between the U-net prediction output and the ground truth for breast volume and FGT volume.
U-net can be applied to perform breast tissue segmentation on fat-sat images, and TL is an efficient strategy to develop a specific model for each different dataset.
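The DSC metric used throughout this evaluation is straightforward to compute from two binary masks; a minimal sketch (the example masks are synthetic, not study data):

```python
import numpy as np

def dice_similarity(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: a 6x6 "truth" region and a smaller 4x4 "prediction"
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # 36 pixels
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True   # 16 pixels
score = dice_similarity(a, b)  # 2*16 / (36+16)
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why values such as 0.97 for breast segmentation indicate near-perfect agreement with the ground truth.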


2021 ◽  
Vol 9 ◽  
Author(s):  
Kehua Zhang ◽  
Miaomiao Zhu ◽  
Lihong Ma ◽  
Jiaheng Zhang ◽  
Yong Li

In white-light diffraction phase imaging with insufficient spatial filtering, the phase image exhibits object-dependent artifacts, especially around the edges of the object, referred to as the well-known halo effect. Here we present a new deep-learning-based approach for recovering halo-free white-light diffraction phase images. The neural-network-based method can accurately and rapidly remove the halo artifacts without relying on any a priori knowledge. First, the neural network, namely HFDNN (deep neural network for halo free), is designed. Then, the HFDNN is trained using pairs of measured phase images, acquired by a white-light diffraction phase imaging system, and the corresponding true phase images. After training, the HFDNN takes a measured phase image as input, rapidly corrects the halo artifacts, and reconstructs an accurate halo-free phase image. We validate the effectiveness and robustness of the method by correcting phase images of various samples, including standard polystyrene beads, living red blood cells, and monascus spores and hyphae. In contrast to existing halo-free methods, the proposed HFDNN method neither relies on hardware design nor needs iterative computations, providing a new avenue for all halo-free white-light phase imaging techniques.
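The paired supervised setup (measured phase in, true phase out) can be illustrated with a deliberately tiny stand-in model; the real HFDNN is a deep network, whereas this toy learns a single linear pixel-value correction by gradient descent, and the data and mapping below are invented for illustration:

```python
import numpy as np

# Toy stand-in for HFDNN training: learn a pixel-wise linear correction
# y ≈ w*x + b mapping "measured" phase values x to "true" values y,
# from paired training data -- the same supervised pattern the HFDNN uses,
# minus the depth. The 0.8/0.1 ground-truth mapping is arbitrary.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1000)   # measured (halo-corrupted) values
y = 0.8 * x + 0.1                 # corresponding true values

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):             # plain gradient descent on MSE loss
    err = w * x + b - y
    w -= lr * np.mean(err * x)
    b -= lr * np.mean(err)
```

Once trained, applying the learned correction is a single forward evaluation, which is why the HFDNN needs no iterative computation at inference time.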


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
A Le ◽  
I Miyatsuka ◽  
J Otsuki ◽  
M Shiotani ◽  
N Enatsu ◽  
...  

Abstract Study question Can deep learning (DL) algorithms trained on time-lapse videos be used to detect and track the size and gender of pronuclei in developing human zygotes? Summary answer Our DL algorithm not only outperforms state-of-the-art models in detecting pronuclei but can also accurately identify and track their gender and size over time. What is known already Recent research has explored the use of DL to extract key morphological features of human embryos. Existing studies, however, focus either on blastocysts' morphological measurements (Au et al. 2020) or on the classification of embryos' general developmental stages (Gingold et al. 2018, Liu et al. 2019, Lau et al. 2019). So far, only one paper has attempted to evaluate zygotes' morphological components, but it stopped short of identifying the existence and location of their pronuclei (Leahy et al. 2020). We address this research gap by training a DL model that can detect zygotes' pronuclei, classify their gender, and quantify their size over time. Study design, size, duration A retrospective analysis using 91 fertilized oocytes from infertile patients undergoing IVF or ICSI treatment at Hanabusa Women's Clinic between January 2011 and August 2019 was conducted. Each embryo was time-lapse monitored using a Vitrolife system, which records an image every 15 minutes at 7 focal planes. For our study, we used videos of the first 1–2 days of the embryo from its 3 central focal planes, corresponding to 70–150 images per focal plane. Participants/materials, setting, methods All 273 time-lapse videos were split into 30,387 grayscale still images at a 15-minute interval. Each image was checked and annotated by experienced embryologists, with every pixel of the image classified into 3 categories: male pronuclei, female pronuclei, and others.
Images were converted into grayscale, resized to 500x500 pixels, and then fed into a neural network with the Mask R-CNN architecture and a ResNet101 backbone to produce a pronuclei instance segmentation model. Main results and the role of chance The 91 embryos were split into training (∼70%, or 63 embryos) and validation (∼30%, or 28 embryos) sets. Our pronuclei model takes as input a single image and outputs a bounding box, mask, category, confidence score, and size measured in pixels for each detected candidate. For prediction, we run the model on the 3 middle focal planes and merge overlapping candidates by keeping the one with the highest confidence score. We used the mean average precision (mAP) score to evaluate our model's ability to detect pronuclei, and used the mean absolute percentage error (MAPE) between the actual size (as annotated by the embryologists) and the predicted one to check the model's performance in tracking the pronuclei's size. The mAP for detecting pronuclei, regardless of gender, achieved by our model was 0.698, higher than the 0.680 value reported by Leahy et al. (2020). Broken down by gender, our model's mAP values for male and female pronuclei are 0.734 and 0.661, respectively. The overall MAPE for tracking pronuclei's size is 21.8%. Broken down by gender, our model's MAPE values for male and female pronuclei are 19.4% and 24.3%, respectively. Limitations, reasons for caution Samples were collected from one clinic, with videos recorded from one time-lapse system, which can limit our results' reproducibility. The accuracy of our DL model is also limited by the small number of embryos that we used. Wider implications of the findings: Even with a limited training dataset, our results indicate that we can accurately detect and track the gender and the size of zygotes' pronuclei using time-lapse videos. In future models, we will increase our training dataset as well as include other time-lapse systems to improve our models' accuracy and reproducibility.
Trial registration number Not applicable


1998 ◽  
Vol 4 (2) ◽  
pp. 146-157 ◽  
Author(s):  
Y.C. Wang ◽  
T.M. Chou ◽  
M. Libera ◽  
E. Voelkl ◽  
B.G. Frost

This study describes the use of transmission electron holography to determine the mean inner potential of polystyrene. Spherical nanoparticles of amorphous polystyrene are studied so that the effect of specimen thickness on the phase shift of an incident electron wave can be separated from the intrinsic refractive properties of the specimen. A recursive four-parameter χ-squared minimization routine is developed to determine the sphere center, radius, and mean inner potential (Φ0) at each pixel in the phase image. Because of the large number of pixels involved, the statistics associated with determining a single Φ0 value characteristic of a given sphere are quite good. Simulated holograms show that the holographic reconstruction procedure and the χ-squared analysis method are robust. Averaging the Φ0 data derived from ten phase images from ten different polystyrene spheres gives a value of Φ0PS = 8.5 V (σ = 0.7 V). Specimen charging and electron-beam damage, if present, affect the measurement at a level below the current precision of the experiment.
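The four-parameter fit can be sketched from the underlying physics: the phase shift is proportional to Φ0 times the projected thickness of the sphere, t(x, y) = 2·sqrt(R² − r²) inside the sphere's footprint. The sketch below fits a noise-free synthetic phase image with least squares rather than the paper's recursive χ² routine, and the interaction constant and pixel scale are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

C_E = 7.29e-3  # electron interaction constant, rad/(V·nm) -- illustrative

def sphere_phase(params, xx, yy):
    """Phase image of a homogeneous sphere: phi = C_E * Phi0 * t(x, y),
    with projected thickness t = 2*sqrt(R^2 - r^2) inside the sphere."""
    x0, y0, R, phi0 = params
    r2 = (xx - x0) ** 2 + (yy - y0) ** 2
    t = 2.0 * np.sqrt(np.clip(R ** 2 - r2, 0.0, None))
    return C_E * phi0 * t

# Synthetic phase image of one sphere (pixels treated as nm for simplicity)
xx, yy = np.meshgrid(np.arange(64.0), np.arange(64.0))
true = (32.0, 32.0, 20.0, 8.5)   # center x, center y, radius, Phi0 (V)
data = sphere_phase(true, xx, yy)

# Fit center, radius, and Phi0 simultaneously from the phase image
fit = least_squares(
    lambda p: (sphere_phase(p, xx, yy) - data).ravel(),
    x0=[30.0, 30.0, 18.0, 7.0],
)
```

Because every pixel inside the sphere's footprint constrains the fit, averaging over many spheres tightens the Φ0 estimate much as the paper's per-pixel statistics do.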


2017 ◽  
Author(s):  
Philippe Poulin ◽  
Marc-Alexandre Côté ◽  
Jean-Christophe Houde ◽  
Laurent Petit ◽  
Peter F. Neher ◽  
...  

We show that deep learning techniques can be applied successfully to fiber tractography. Specifically, we use feed-forward and recurrent neural networks to learn the generation process of streamlines directly from diffusion-weighted imaging (DWI) data. Furthermore, we empirically study the behavior of the proposed models on a realistic white matter phantom with known ground truth. We show that their performance is competitive to that of commonly used techniques, even when the models are used on DWI data unseen at training time. We also show that our models are able to recover high spatial coverage of the ground truth white matter pathways while better controlling the number of false connections. In fact, our experiments suggest that exploiting past information within a streamline's trajectory during tracking helps predict the following direction.
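The tracking loop these models plug into can be sketched generically: starting from a seed, a model repeatedly predicts the next unit direction and the tracker steps along it. The constant-direction "model" below is a placeholder for the trained network, and the step size and step count are arbitrary:

```python
import numpy as np

def track_streamline(seed, predict_dir, n_steps=50, step_size=0.5):
    """Generic streamline tracking loop: at each point a model predicts the
    next unit direction from the current position and the previous direction
    (mimicking how a recurrent model can exploit the past trajectory)."""
    pts = [np.asarray(seed, dtype=float)]
    prev_dir = np.zeros(3)
    for _ in range(n_steps):
        d = predict_dir(pts[-1], prev_dir)
        d = d / np.linalg.norm(d)          # keep steps unit-length
        pts.append(pts[-1] + step_size * d)
        prev_dir = d
    return np.array(pts)

# Placeholder "model": always step along +x; a real model would be a
# feed-forward or recurrent network conditioned on the local DWI signal
line = track_streamline([0.0, 0.0, 0.0],
                        lambda p, prev: np.array([1.0, 0.0, 0.0]))
```

Feeding `prev_dir` (or a recurrent hidden state) into the predictor is what lets the model use past trajectory information when choosing the next direction.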


Author(s):  
Mohamed Sayed Farag ◽  
Mostafa Mohamed Mohie El Din ◽  
Hassan Ahmed Elshenbary

Due to the increase in the number of cars and slow city development, there is a need for smart parking systems. One of the main issues in smart parking systems is parking lot occupancy status classification, so this paper introduces two methods for parking lot classification. The first method uses the mean: after converting the colored image to grayscale and then to black/white, if the mean is greater than a given threshold the lot is classified as occupied; otherwise it is empty. This method gave a 90% correct classification rate on the cnrall database, outperforming the AlexNet deep learning method trained and tested on the same database (and the mean method has no training time). The second method is a deep learning neural network consisting of 11 layers, trained and tested on the same database. It gave a 93% correct classification rate when trained on cnrall and tested on the same database; as shown, this method outperforms both the AlexNet and mean methods on that database. On the PKLot database, AlexNet and our deep learning network have close results (greater than 95%), outperforming the mean method.
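The mean-based method is simple enough to sketch end to end; the binarization and mean thresholds below are illustrative assumptions (the paper does not specify its values), and the patches are synthetic:

```python
import numpy as np

def classify_parking_lot(gray, bw_threshold=0.5, mean_threshold=0.1):
    """Mean-based occupancy classifier: binarize the grayscale patch,
    then call the space 'occupied' if the mean of the black/white image
    exceeds a threshold. Threshold values are illustrative only."""
    bw = (gray > bw_threshold).astype(float)
    return "occupied" if bw.mean() > mean_threshold else "empty"

empty_patch = np.zeros((32, 32))           # uniformly dark asphalt
occupied_patch = np.zeros((32, 32))
occupied_patch[8:24, 8:24] = 0.9           # bright car body

print(classify_parking_lot(empty_patch))     # empty
print(classify_parking_lot(occupied_patch))  # occupied
```

Having no trainable parameters is exactly why this baseline has zero training time, at the cost of sensitivity to lighting and pavement color that a learned classifier can absorb.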


Diagnostics ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. 430 ◽  
Author(s):  
Michael G. Endres ◽  
Florian Hillen ◽  
Marios Salloumis ◽  
Ahmad R. Sedaghat ◽  
Stefan M. Niehues ◽  
...  

Periapical radiolucencies, which can be detected on panoramic radiographs, are one of the most common radiographic findings in dentistry and have a differential diagnosis including infections, granuloma, cysts and tumors. In this study, we investigate the ability of 24 oral and maxillofacial (OMF) surgeons to assess the presence of periapical lucencies on panoramic radiographs, and we compare these findings to the performance of a predictive deep learning algorithm that we have developed using a curated data set of 2902 de-identified panoramic radiographs. The mean diagnostic positive predictive value (PPV) of OMF surgeons based on their assessment of panoramic radiographic images was 0.69 (±0.13), indicating that dentists on average falsely diagnose 31% of cases as radiolucencies. However, the mean diagnostic true positive rate (TPR) was 0.51 (±0.14), indicating that on average 49% of all radiolucencies were missed. We demonstrate that the deep learning algorithm achieves better performance than 14 of the 24 OMF surgeons within the cohort, exhibiting an average precision of 0.60 (±0.04) and an F1 score of 0.58 (±0.04), corresponding to a PPV of 0.67 (±0.05) and a TPR of 0.51 (±0.05). The algorithm, trained on limited data and evaluated on clinically validated ground truth, has the potential to assist OMF surgeons in detecting periapical lucencies on panoramic radiographs.
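The two metrics reported here relate directly to confusion-matrix counts: PPV = TP/(TP+FP), so 1 − PPV is the false-diagnosis fraction, and TPR = TP/(TP+FN), so 1 − TPR is the miss fraction. A minimal sketch (the counts below are invented merely to reproduce the reported averages):

```python
def ppv_tpr(tp, fp, fn):
    """Positive predictive value and true positive rate from counts:
    PPV = TP/(TP+FP)  -> 1 - PPV is the fraction of false diagnoses;
    TPR = TP/(TP+FN)  -> 1 - TPR is the fraction of missed lesions."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts roughly consistent with the reported surgeon averages
# (PPV ≈ 0.69 -> ~31% false positives; TPR = 0.51 -> 49% missed)
ppv, tpr = ppv_tpr(tp=51, fp=23, fn=49)
```

Seeing both metrics together matters: a reader model with high PPV but low TPR is conservative and misses lesions, which is the pattern the surgeon cohort shows.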


Acta Naturae ◽  
2016 ◽  
Vol 8 (3) ◽  
pp. 88-96
Author(s):  
Yu. K. Doronin ◽  
I. V. Senechkin ◽  
L. V. Hilkevich ◽  
M. A. Kurcer

In order to estimate the diversity of embryo cleavage relative to embryo progress (blastocyst formation), time-lapse imaging data of preimplantation human embryo development were used. This retrospective study focuses on the topographic features and time parameters of the cleavages, with particular emphasis on the lengths of cleavage cycles and the genealogy of blastomeres in 2- to 8-cell human embryos. We have found that all 4-cell human embryos fall into one of four developmental variants, based on the sequence of appearance and orientation of cleavage planes during embryo cleavage from 2 to 4 blastomeres. Each variant of cleavage shows a strong correlation with the further developmental dynamics of the embryos (different cleavage cycle characteristics as well as lengths of blastomere cycles). An analysis of the sequence of human blastomere divisions allowed us to postulate that the effects of zygotic determinants are eliminated as a result of cleavage, and that, thereafter, blastomeres acquire the capacity for their own synthesis, regulation, polarization, formation of functional contacts, and, finally, specific differentiation. These data on the early development of human embryos, obtained using noninvasive methods, complement and extend our understanding of the embryogenesis of eutherian mammals and may be applied in the practice of reproductive technologies.

