Automatic Segmentation and Cardiac Mechanics Analysis of Evolving Zebrafish Using Deep Learning

2021, Vol 8. Author(s): Bohan Zhang, Kristofor E. Pas, Toluwani Ijaseun, Hung Cao, Peng Fei, ...

Background: In the study of early cardiac development, it is essential to acquire accurate volume changes of the heart chambers. Although advanced imaging techniques, such as light-sheet fluorescent microscopy (LSFM), provide an accurate procedure for analyzing heart structure, rapid and robust segmentation is required to reduce laborious manual work and accurately quantify developmental cardiac mechanics. Methods: Traditional biomedical analysis involving segmentation of the intracardiac volume is performed manually, presenting a bottleneck due to the enormous data volume at high axial resolution. Our deep-learning approach provides a robust method to segment the volume within a few minutes. Our U-net-based segmentation used manually segmented intracardiac volumes as training data and automatically segmented the remaining LSFM zebrafish cardiac motion images. Results: Three cardiac cycles from 2 to 5 days postfertilization (dpf) were successfully segmented by our U-net-based network, providing volume changes over time. To assess the cardiac function of each of the two chambers, the ventricle and atrium were separated by 3D morphological erosion. Cardiac mechanical properties were therefore measured rapidly, demonstrating incremental volume changes in both chambers separately. Interestingly, stroke volume (SV) remains similar in the atrium, while ventricular SV increases gradually. Conclusion: Our U-net-based segmentation provides a delicate method to segment the intricate inner volume of the zebrafish heart during development, thus providing an accurate, robust, and efficient algorithm that accelerates cardiac research by bypassing the labor-intensive task and improving consistency in the results.
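The chamber-separation and stroke-volume steps described above can be illustrated with a minimal sketch (not the authors' code): a binary intracardiac mask is eroded in 3D until the atrium and ventricle disconnect, the two largest components are kept and grown back, and per-chamber stroke volume is taken as the end-diastolic minus end-systolic volume. Function names, the number of erosions, and the voxel-volume argument are illustrative assumptions.

```python
# Sketch of 3D-erosion chamber splitting and stroke-volume estimation (assumed names).
import numpy as np
from scipy import ndimage

def split_chambers(mask, n_erosions=3):
    """Erode a 3D binary heart mask until the chambers separate, keep the two
    largest components, then grow each back within the original mask."""
    eroded = ndimage.binary_erosion(mask, iterations=n_erosions)
    labels, n = ndimage.label(eroded)
    if n < 2:
        raise ValueError("chambers did not separate; increase n_erosions")
    sizes = ndimage.sum(eroded, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1                  # labels of the two largest components
    chambers = []
    for lab in keep:
        seed = labels == lab
        # dilate each seed back, but only inside the original mask
        grown = ndimage.binary_dilation(seed, iterations=n_erosions, mask=mask)
        chambers.append(grown)
    return chambers                                     # e.g. [atrium_mask, ventricle_mask]

def stroke_volume(volumes_over_time, voxel_volume_um3=1.0):
    """Approximate SV as end-diastolic volume minus end-systolic volume."""
    v = np.asarray(volumes_over_time, dtype=float) * voxel_volume_um3
    return v.max() - v.min()
```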

2021. Author(s): Bohan Zhang, Kristofor Pas, Toluwani Ijaseun, Hung Cao, Peng Fei, ...

Abstract. Objective: In the study of early cardiac development, it is important to acquire accurate volume changes of the heart chambers. Although advanced imaging techniques, such as light-sheet fluorescent microscopy (LSFM), provide an accurate procedure for analyzing the structure of the heart, rapid and robust segmentation is required to reduce laborious manual work and accurately quantify developmental cardiac mechanics. Methods: The traditional biomedical analysis involving segmentation of the intracardiac volume is usually carried out manually, presenting a bottleneck due to the enormous data volume at high axial resolution. Our deep-learning approach provides a robust method to segment the volume within a few minutes. Our U-net-based segmentation used manually segmented intracardiac volumes as training data and automatically segmented the remaining LSFM zebrafish cardiac motion images. Results: Three cardiac cycles from 2 days post fertilization (dpf) to 5 dpf were successfully segmented by our U-net-based network, providing volume changes over time. To assess the cardiac function of each of the two chambers, the ventricle and atrium were separated by 3D morphological erosion. Cardiac mechanical properties were therefore measured rapidly, demonstrating incremental volume changes in both chambers separately. Interestingly, stroke volume (SV) remains similar in the atrium, while ventricular SV increases gradually. Conclusion: Our U-net-based segmentation provides a delicate method to segment the intricate inner volume of the zebrafish heart during development, thus providing an accurate, robust, and efficient algorithm that accelerates cardiac research by bypassing the labor-intensive task and improving consistency in the results.


2020. Author(s): Nils Wagner, Fynn Beuttenmueller, Nils Norlin, Jakob Gierten, Juan Carlos Boffi, ...

Light-field microscopy (LFM) has emerged as a powerful tool for fast volumetric image acquisition in biology, but its effective throughput and widespread use have been hampered by a computationally demanding and artefact-prone image reconstruction process. Here, we present a novel framework consisting of a hybrid light-field light-sheet microscope and deep learning-based volume reconstruction, in which single light-sheet acquisitions continuously serve as training data and validation for the convolutional neural network reconstructing the LFM volume. Our network delivers high-quality reconstructions at video-rate throughput, and we demonstrate the capabilities of our approach by imaging medaka heart dynamics and zebrafish neural activity.
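A conceptual sketch of the training idea described above (assumptions, not the authors' network): each light-field frame is paired with a simultaneously acquired light-sheet volume, which acts as the regression target for the reconstruction CNN. The toy architecture, shapes, loss, and learning rate are illustrative.

```python
# Hedged sketch: light-sheet acquisitions as continuous ground truth for LFM reconstruction.
import torch
import torch.nn as nn

class LFMReconstructor(nn.Module):
    """Toy stand-in for the reconstruction CNN: light-field image -> stack of z-planes."""
    def __init__(self, depth_planes=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, depth_planes, 3, padding=1),  # output channels = z-planes
        )

    def forward(self, lf_image):          # (B, 1, H, W)
        return self.net(lf_image)         # (B, Z, H, W)

model = LFMReconstructor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def training_step(lf_image, ls_volume):
    """One update: the paired light-sheet volume is the regression target."""
    optimizer.zero_grad()
    pred = model(lf_image)
    loss = loss_fn(pred, ls_volume)
    loss.backward()
    optimizer.step()
    return loss.item()
```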


Author(s): Elisabeth C. Kugler, Andrik Rampun, Timothy J.A. Chico, Paul A. Armitage

Abstract: Light sheet fluorescent microscopy allows imaging of zebrafish vascular development in great detail. However, interpretation of data often relies on visual assessment, and approaches to validate image analysis steps are broadly lacking. Here, we compare different enhancement and segmentation approaches to extract the zebrafish cerebral vasculature, provide comprehensive validation, study segmentation robustness, examine sensitivity, apply the validated method to quantify embryonic cerebrovascular volume, and examine applicability to different transgenic reporter lines. The best-performing segmentation method was used to train different deep learning networks for segmentation. We found that U-Net-based architectures outperform SegNet. While there was a slight overestimation of vascular volume using the U-Net methodologies, variances were low, suggesting that sensitivity to biological changes would still be obtained.
Highlights:
- General filtering is less applicable than Sato enhancement for enhancing zebrafish cerebral vessels.
- Biological data sets help to overcome the lack of segmentation gold standards and phantom models.
- Sato enhancement followed by Otsu thresholding is highly accurate, robust, and sensitive (a minimal sketch follows below).
- Direct generalization of the segmentation approach to transgenic lines other than the one it was optimized for should be treated with caution.
- Deep learning-based segmentation is applicable to the zebrafish cerebral vasculature, with U-Net-based architectures outperforming SegNet architectures.
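A minimal sketch of the classical pipeline named in the highlights, using scikit-image: Sato vesselness enhancement followed by a global Otsu threshold. The file name, sigma range, and data type conversion are assumptions to be tuned per dataset; this is not the authors' exact implementation.

```python
# Sato enhancement + Otsu thresholding on a 3D light-sheet stack (illustrative sketch).
import numpy as np
from skimage import io, filters

volume = io.imread("zebrafish_vasculature.tif").astype(np.float32)  # hypothetical 3D stack

# Multi-scale Sato tubeness filter; bright vessels on a dark background.
enhanced = filters.sato(volume, sigmas=range(1, 6), black_ridges=False)

# Global Otsu threshold on the enhanced image gives the binary vessel mask.
threshold = filters.threshold_otsu(enhanced)
vessel_mask = enhanced > threshold

vascular_volume_voxels = int(vessel_mask.sum())
print(f"segmented vascular volume: {vascular_volume_voxels} voxels")
```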


2018, Vol 2 (3), pp. 324-335. Author(s): Johannes Kvam, Lars Erik Gangsei, Jørgen Kongsro, Anne H. Schistad Solberg

Abstract: Computed tomography (CT) scanning of pigs has been shown to produce detailed phenotypes useful in pig breeding. Due to the large number of individuals scanned and the correspondingly large data sets, there is a need for automatic tools to analyze these data. In this paper, the feasibility of deep learning for fully automatic segmentation of the pig skeleton from CT volumes is explored. To maximize performance given the available training data, a series of problem simplifications are applied. The deep-learning approach can replace our currently used semiautomatic solution, with increased robustness and little or no need for manual control. Accuracy was highly affected by the training data, and expanding the training set can further increase performance, making this approach especially promising.
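One common simplification for volumetric segmentation of this kind, sketched below under stated assumptions (the paper's own simplifications are not reproduced here): run a trained 2D segmentation network slice by slice through the CT volume and restack the predictions into a 3D mask. The model, tensor layout, and threshold are illustrative.

```python
# Hedged sketch: slice-wise 2D inference over a CT volume (assumed model and shapes).
import numpy as np
import torch

def segment_ct_volume(ct_volume, model, device="cpu", threshold=0.5):
    """ct_volume: (Z, H, W) float array; model: trained 2D network returning one logit map."""
    model.eval().to(device)
    masks = []
    with torch.no_grad():
        for z in range(ct_volume.shape[0]):
            slc = torch.from_numpy(ct_volume[z]).float()[None, None].to(device)  # (1, 1, H, W)
            prob = torch.sigmoid(model(slc))[0, 0].cpu().numpy()
            masks.append(prob > threshold)
    return np.stack(masks, axis=0)  # (Z, H, W) boolean skeleton mask
```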


2021. Author(s): David Borland, Carolyn M. McCormick, Niyanta K. Patel, Oleh Krupa, Jessica T. Mory, ...

Abstract. Background: Recent advances in tissue clearing techniques, combined with high-speed image acquisition through light sheet microscopy, enable rapid three-dimensional (3D) imaging of biological specimens, such as whole mouse brains, in a matter of hours. Quantitative analysis of such 3D images can help us understand how changes in brain structure lead to differences in behavior or cognition, but distinguishing features of interest, such as nuclei, from background can be challenging. Recent deep learning-based nuclear segmentation algorithms show great promise for automated segmentation, but require large numbers of manually and accurately labeled nuclei as training data. Results: We present Segmentor, an open-source tool for reliable, efficient, and user-friendly manual annotation and refinement of objects (e.g., nuclei) within 3D light sheet microscopy images. Segmentor employs a hybrid 2D-3D approach for visualizing and segmenting objects and contains features for automatic region splitting, designed specifically for streamlining the process of 3D segmentation of nuclei. We show that editing simultaneously in 2D and 3D using Segmentor significantly decreases time spent on manual annotations without affecting accuracy. Conclusions: Segmentor is a tool for increased efficiency of manual annotation and refinement of 3D objects that can be used to train deep learning segmentation algorithms, and is available at https://www.nucleininja.org/ and https://github.com/RENCI/Segmentor.


2021. Author(s): Esther Puyol-Antón, Bram Ruijsink, Jorge Mariscal Harana, Stefan K. Piechnik, Stefan Neubauer, ...

Background: Artificial intelligence (AI) techniques have been proposed for automating cine CMR segmentation for functional quantification. However, in other applications AI models have been shown to have potential for sex and/or racial bias. Objectives: To perform the first analysis of sex/racial bias in AI-based cine CMR segmentation using a large-scale database. Methods: A state-of-the-art deep learning (DL) model was used for automatic segmentation of both ventricles and the myocardium from short-axis cine CMR. The dataset consisted of end-diastole and end-systole short-axis cine CMR images of 5,903 subjects from the UK Biobank database (61.5±7.1 years, 52% male, 81% white). To assess sex and racial bias, we compared Dice scores and errors in measurements of biventricular volumes and function between patients grouped by race and sex. To investigate whether segmentation bias could be explained by potential confounders, multivariate linear regression and ANCOVA were performed. Results: We found statistically significant differences in Dice scores (white ~94% vs. minority ethnic groups 86-89%) as well as in absolute/relative errors in volumetric and functional measures, showing that the AI model was biased against minority racial groups, even after correction for possible confounders. Conclusions: We have shown that racial bias can exist in DL-based cine CMR segmentation models. We believe that this bias is due to the unbalanced nature of the training data, combined with physiological differences. This is supported by the results, which show racial bias but not sex bias when the model is trained on the UK Biobank database, which is sex-balanced but not race-balanced.
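An illustrative sketch (not the study's code) of the kind of analysis described above: compare per-group Dice scores, then fit a linear model / ANCOVA with statsmodels to test whether group differences persist after adjusting for covariates. The file name and column names (dice, sex, race, age, bmi) are assumptions.

```python
# Grouped Dice comparison plus an ANCOVA-style adjusted model (hypothetical data layout).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("segmentation_results.csv")   # hypothetical: one row per subject

# Unadjusted comparison: mean Dice per racial group and per sex.
print(df.groupby("race")["dice"].agg(["mean", "std", "count"]))
print(df.groupby("sex")["dice"].agg(["mean", "std", "count"]))

# Adjusted model: does race/sex explain Dice after controlling for covariates?
model = smf.ols("dice ~ C(race) + C(sex) + age + bmi", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))         # Type-II ANOVA table
print(model.summary())
```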


Author(s): Gu Zheng, Yanfeng Jiang, Ce Shi, Hanpei Miao, Xiangle Yu, ...

Accurate segmentation of choroidal thickness (CT) and vasculature is important for better analyzing and understanding choroid-related ocular diseases. In this paper, we propose and implement a novel and practical method based on a deep learning algorithm, residual U-Net, to segment and quantify the CT and vasculature automatically. With limited training and validation data, the residual U-Net was capable of identifying the choroidal boundaries as precisely as manual segmentation by an experienced operator. The trained network was then applied to 217 images, and six choroid-related parameters were extracted; we found high intraclass correlation coefficients (ICC) of more than 0.964 between the manual and automatic segmentation methods. The automatic method also achieved good reproducibility, with ICC greater than 0.913, indicating good consistency of the automatic segmentation. Our results suggest that the deep learning algorithm can accurately and efficiently segment choroidal boundaries, which will help quantify the CT and vasculature.
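A minimal sketch of the agreement metric reported above: a two-way, single-measure, consistency ICC (often written ICC(3,1)) between manual and automatic measurements of the same images, implemented from the standard mean-square formulation. The data layout (one row per image, one column per method) and the example numbers are assumptions for illustration only.

```python
# ICC(3,1) between manual and automatic measurements (illustrative data).
import numpy as np

def icc_3_1(ratings):
    """ratings: (n_subjects, k_raters) array, e.g. columns = [manual, automatic]."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()    # between-subjects
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()    # between-methods
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols                # residual
    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Hypothetical example: choroidal thickness (um) measured manually vs automatically.
manual    = [251.0, 310.5, 198.2, 275.9, 330.1]
automatic = [249.3, 312.0, 200.1, 273.5, 328.8]
print(f"ICC(3,1) = {icc_3_1(np.column_stack([manual, automatic])):.3f}")
```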


2021, Vol 22 (1). Author(s): David Borland, Carolyn M. McCormick, Niyanta K. Patel, Oleh Krupa, Jessica T. Mory, ...

Abstract. Background: Recent advances in tissue clearing techniques, combined with high-speed image acquisition through light sheet microscopy, enable rapid three-dimensional (3D) imaging of biological specimens, such as whole mouse brains, in a matter of hours. Quantitative analysis of such 3D images can help us understand how changes in brain structure lead to differences in behavior or cognition, but distinguishing densely packed features of interest, such as nuclei, from background can be challenging. Recent deep learning-based nuclear segmentation algorithms show great promise for automated segmentation, but require large numbers of accurately manually labeled nuclei as training data. Results: We present Segmentor, an open-source tool for reliable, efficient, and user-friendly manual annotation and refinement of objects (e.g., nuclei) within 3D light sheet microscopy images. Segmentor employs a hybrid 2D-3D approach for visualizing and segmenting objects and contains features for automatic region splitting, designed specifically for streamlining the process of 3D segmentation of nuclei. We show that editing simultaneously in 2D and 3D using Segmentor significantly decreases time spent on manual annotations without affecting accuracy, as compared to editing the same set of images with only 2D capabilities. Conclusions: Segmentor is a tool for increased efficiency of manual annotation and refinement of 3D objects that can be used to train deep learning segmentation algorithms, and is available at https://www.nucleininja.org/ and https://github.com/RENCI/Segmentor.


2019, Vol 9 (22), pp. 4749. Author(s): Lingyun Jiang, Kai Qiao, Linyuan Wang, Chi Zhang, Jian Chen, ...

Decoding human brain activities, especially reconstructing human visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data impose restrictions on satisfactory reconstruction, especially for reconstruction methods based on deep learning, which require huge amounts of labelled samples. In contrast to deep learning methods, humans can recognize a new image because the human visual system is naturally capable of extracting features from any object and comparing them. Inspired by this visual mechanism, we introduced a comparison mechanism into the deep learning method to achieve better visual reconstruction, making full use of each sample and of the relationship within each sample pair by learning to compare. In this way, we propose a Siamese reconstruction network (SRN) method. Using the SRN, we obtained improved results on two fMRI recording datasets, achieving 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from n samples to roughly 2n sample pairs, taking full advantage of the limited quantity of training samples. The SRN learns to draw sample pairs of the same class together and to push sample pairs of different classes apart in feature space.
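A hedged sketch of the pairing idea behind the SRN: turn n labelled samples into same-class and different-class pairs so the network can learn by comparison. The pairing strategy (one positive and one negative pair per sample, giving roughly 2n pairs) and all names are assumptions, not the paper's exact procedure.

```python
# Build same-class / different-class sample pairs for Siamese-style training (sketch).
import random

def make_pairs(samples, labels, seed=0):
    """Return (x1, x2, same_class) triples built from labelled samples."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    pairs = []
    for x, y in zip(samples, labels):
        # one positive pair: partner drawn from the same stimulus class
        pairs.append((x, rng.choice(by_class[y]), 1))
        # one negative pair: partner drawn from a different class, if available
        other_classes = [c for c in by_class if c != y]
        if other_classes:
            pairs.append((x, rng.choice(by_class[rng.choice(other_classes)]), 0))
    return pairs  # roughly 2n pairs from n samples

# pairs = make_pairs(fmri_samples, stimulus_labels)  # feed to a Siamese/contrastive loss
```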


2021, Vol 11 (1). Author(s): Jared Hamwood, Beat Schmutz, Michael J. Collins, Mark C. Allenby, David Alonso-Caneiro

Abstract: This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
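For reference, a minimal sketch of the evaluation metric quoted above: the Dice coefficient between a predicted binary mask and a manual reference mask, computed per class (orbit vs. background). Variable names and the usage example are illustrative.

```python
# Dice coefficient between a predicted mask and a manual reference mask.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """pred, target: boolean arrays of the same shape (2D slice or 3D volume)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical usage with integer label maps (1 = orbit, 0 = background):
# dice_orbit = dice_coefficient(pred_mask == 1, manual_mask == 1)
# dice_background = dice_coefficient(pred_mask == 0, manual_mask == 0)
```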

