Deep-Learning Based Automated Neuron Reconstruction from 3D Microscopy Images Using Synthetic Training Images

Author(s): Weixun Chen, Min Liu, Hao Du, Miroslav Radojevic, Yaonan Wang, ...

2014, Vol 2014, pp. 1-8
Author(s): Dong Sui, Kuanquan Wang, Jinseok Chae, Yue Zhang, Henggui Zhang

A neuron's shape and dendritic architecture are important for biosignal transduction in neuronal networks, and reconstructing the anatomical architecture of neurons is one of the foremost challenges in neuroscience. Accurate reconstruction results facilitate subsequent simulation of the neuron system. With the development of confocal microscopy, researchers can scan neurons at submicron resolution, making the reconstruction of complex dendritic trees more feasible; however, it remains a tedious, time-consuming, and labor-intensive task. For decades, computer-aided methods have played an important role in this task, but none of the prevalent algorithms can reconstruct the full anatomical structure automatically, which makes developing new reconstruction methods essential. This paper proposes a pipeline with a novel seeding method for reconstructing neuron structures from 3D microscopy image stacks. The pipeline is initialized with a set of seeds detected by a sliding volume filter (SVF); an open curve snake is then applied to the detected seeds to reconstruct the full structure of the neuron cell. The experimental results demonstrate that the proposed pipeline achieves excellent accuracy compared with traditional methods, which is a clear benefit for 3D neuron detection and reconstruction.
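The seeding step can be illustrated with a simplified one-dimensional analogue in Python: a sliding-window filter scores each position along an intensity profile, and local maxima above a threshold become seeds. This is only a sketch of the idea, not the paper's sliding volume filter; the mean filter, window size, and threshold are illustrative assumptions.

```python
def sliding_filter_response(profile, window=3):
    """Mean intensity in a sliding window -- a 1D stand-in for the
    sliding volume filter (SVF) that scores candidate seed points."""
    half = window // 2
    response = []
    for i in range(len(profile)):
        lo, hi = max(0, i - half), min(len(profile), i + half + 1)
        response.append(sum(profile[lo:hi]) / (hi - lo))
    return response

def detect_seeds(profile, window=3, threshold=0.5):
    """Keep local maxima of the filter response above a threshold."""
    r = sliding_filter_response(profile, window)
    seeds = []
    for i in range(1, len(r) - 1):
        if r[i] >= r[i - 1] and r[i] >= r[i + 1] and r[i] > threshold:
            seeds.append(i)
    return seeds

# A toy intensity profile with two bright neurite cross-sections.
profile = [0.1, 0.2, 0.9, 1.0, 0.8, 0.2, 0.1, 0.7, 0.9, 0.6, 0.1]
print(detect_seeds(profile))  # → [3, 8]
```

In the actual pipeline the filter slides a volume through the 3D stack rather than a window along a 1D profile, and the detected seeds initialise the open curve snake.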


2017, Vol 36 (7), pp. 1533-1541
Author(s): Rongjian Li, Tao Zeng, Hanchuan Peng, Shuiwang Ji

Author(s): Yue Guo, Oleh Krupa, Jason Stein, Guorong Wu, Ashok Krishnamurthy

2021, Vol 13 (14), pp. 2822
Author(s): Zhe Lin, Wenxuan Guo

An accurate stand count is a prerequisite to determining the emergence rate, assessing seedling vigor, and facilitating site-specific management for optimal crop production. Traditional manual counting methods for stand assessment are labor-intensive and time-consuming for large-scale breeding programs or production field operations. This study aimed to apply two deep learning models, MobileNet and CenterNet, to detect and count cotton plants at the seedling stage in unmanned aerial system (UAS) images. The models were trained on two datasets containing 400 and 900 images with variations in plant size and soil background brightness, and their performance was assessed with two testing datasets of different dimensions: testing dataset 1 with 300 by 400 pixels and testing dataset 2 with 250 by 1200 pixels. The model validation results showed that the mean average precision (mAP) and average recall (AR) were 79% and 73% for the CenterNet model, and 86% and 72% for the MobileNet model, with 900 training images. The accuracy of cotton plant detection and counting was higher with testing dataset 1 for both models. Overall, the CenterNet model performed better for cotton plant detection and counting with 900 training images. The results also indicated that more training images are required when applying object detection models to images with dimensions different from those of the training datasets. With testing dataset 1, the CenterNet model trained on 900 images achieved a mean absolute percentage error (MAPE) of 0.07%, a coefficient of determination (R2) of 0.98, and a root mean squared error (RMSE) of 0.37 for cotton plant counting. Both models have the potential to detect and count cotton plants accurately and in a timely manner from high-resolution UAS images at the seedling stage. This study provides valuable information for selecting the right deep learning tools and the appropriate number of training images for object detection projects in agricultural applications.
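The counting metrics reported above (MAPE, R2, RMSE) can be computed from per-plot actual and predicted plant counts. A minimal sketch, using hypothetical toy counts rather than the study's data:

```python
import math

def count_metrics(actual, predicted):
    """MAPE (%), coefficient of determination R^2, and RMSE
    for paired actual vs. predicted plant counts."""
    n = len(actual)
    mape = 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / n
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    return mape, r2, rmse

# Hypothetical per-plot counts (not the study's data).
actual = [20, 25, 30, 35, 40]
predicted = [21, 24, 30, 36, 39]
mape, r2, rmse = count_metrics(actual, predicted)
```

A low MAPE and RMSE together with an R2 near 1, as reported for the CenterNet model, indicate near-perfect agreement between detected and manually counted plants.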


2021, Vol 22 (1)
Author(s): Grzegorz Bokota, Jacek Sroka, Subhadip Basu, Nirmal Das, Pawel Trzaskoma, ...

Abstract
Background: Bioimaging techniques offer a robust tool for studying molecular pathways and morphological phenotypes of cell populations subjected to various conditions. As modern high-resolution 3D microscopy provides access to an ever-increasing amount of high-quality images, there arises a need to analyse them in an automated, unbiased, and simple way. Segmentation of structures within the cell nucleus, which is the focus of this paper, presents a new layer of complexity in the form of dense packing and significant signal overlap. At the same time, the available segmentation tools present a steep learning curve for new users with a limited technical background. This is especially apparent in the bulk processing of image sets, which requires some form of programming notation.
Results: In this paper, we present PartSeg, a tool for segmentation and reconstruction of 3D microscopy images, optimised for the study of the cell nucleus. PartSeg integrates refined versions of several state-of-the-art algorithms, including a new multi-scale approach for segmentation and quantitative analysis of 3D microscopy images. The features and user-friendly interface of PartSeg were carefully planned with biologists in mind, based on analysis of multiple use cases and of the difficulties encountered with other tools, to offer an ergonomic interface with a minimal entry barrier. Bulk processing in an ad hoc manner is possible without programmer support; as the size of datasets of interest grows, such bulk processing becomes essential for proper statistical analysis of results. Advanced users can use PartSeg components as a library within Python data processing and visualisation pipelines, for example within Jupyter notebooks. The tool is extensible, so new functionality and algorithms can be added via plugins. The utility of PartSeg for biologists is presented in several scenarios showing the quantitative analysis of nuclear structures.
Conclusions: We have presented PartSeg, a tool for precise and verifiable segmentation and reconstruction of 3D microscopy images. PartSeg is optimised for cell nucleus analysis and offers multi-scale segmentation algorithms best suited for this task. PartSeg can also be used for bulk processing of multiple images, and its components can be reused in other systems or computational experiments.
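The abstract does not detail PartSeg's library API, but the bulk-processing pattern it describes — apply one set of segmentation settings to every image and tabulate measurements — can be sketched generically. The `segment` placeholder below is a plain threshold, not PartSeg code, and the file names are made up:

```python
def segment(image, threshold=0.5):
    """Placeholder segmentation: binarize by a fixed threshold.
    In a real pipeline this would call the library's segmentation
    routine with parameters chosen interactively beforehand."""
    return [[1 if v > threshold else 0 for v in row] for row in image]

def measure(mask):
    """A simple per-image measurement: total segmented area."""
    return sum(sum(row) for row in mask)

def bulk_process(images):
    """Apply identical settings to every image and tabulate results,
    ready for downstream statistical analysis."""
    return {name: measure(segment(img)) for name, img in images.items()}

# Toy 'dataset': two tiny single-channel images (hypothetical names).
images = {
    "cell_a.tif": [[0.9, 0.2], [0.8, 0.7]],
    "cell_b.tif": [[0.1, 0.6], [0.3, 0.2]],
}
print(bulk_process(images))  # → {'cell_a.tif': 3, 'cell_b.tif': 1}
```

The same loop structure is what a Jupyter-notebook pipeline built on library components would automate across a full directory of images.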


Sensors, 2021, Vol 21 (13), pp. 4582
Author(s): Changjie Cai, Tomoki Nishimura, Jooyeon Hwang, Xiao-Ming Hu, Akio Kuroda

Fluorescent probes can be used to detect various types of asbestos (serpentine and amphibole groups); however, fiber counting with our previously developed software was not accurate for samples with low fiber concentrations. Machine learning-based techniques for image analysis, particularly Convolutional Neural Networks (CNN), have been widely applied to many areas. The objectives of this study were to (1) create a laboratory database of fluorescence microscopy (FM) images spanning a wide range of asbestos concentrations (0–50 fibers/liter); and (2) determine the applicability of a state-of-the-art object detection CNN model, YOLOv4, to accurately detect asbestos. We captured fluorescence microscopy images containing asbestos and labeled the individual fibers in the images. We trained the YOLOv4 model with the labeled images using one GTX 1660 Ti Graphics Processing Unit (GPU). Our results demonstrated the exceptional capacity of the YOLOv4 model to learn the fluorescent asbestos morphologies. The mean average precision at a threshold of 0.5 (mAP@0.5) was 96.1% ± 0.4%, using the National Institute for Occupational Safety and Health (NIOSH) fiber counting Method 7400 as the reference method. Compared to our previous counting software (Intec/HU), YOLOv4 achieved higher accuracy (0.997 vs. 0.979), and in particular much higher precision (0.898 vs. 0.418), recall (0.898 vs. 0.780), and F-1 score (0.898 vs. 0.544). In addition, YOLOv4 performed much better on low fiber concentration samples (<15 fibers/liter) than Intec/HU. Therefore, the FM method coupled with YOLOv4 is remarkably effective at detecting asbestos fibers and differentiating them from other non-asbestos particles.
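The precision, recall, and F-1 figures compare detected fibers against the reference counts and follow from the true-positive, false-positive, and false-negative tallies. A minimal sketch; the counts below are hypothetical, chosen only to mirror the reported ~0.898 scores, not the study's data:

```python
def detection_scores(tp, fp, fn):
    """Precision, recall and F-1 from detection counts relative to a
    reference (e.g. NIOSH Method 7400 manual fiber counts)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical tallies over a set of FM images.
p, r, f1 = detection_scores(tp=88, fp=10, fn=10)
```

With equal numbers of false positives and false negatives, precision, recall, and F-1 coincide, as in the symmetric 0.898 figures reported for YOLOv4.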


Author(s): J.M. Murray, P. Pfeffer, R. Seifert, A. Hermann, J. Handke, ...

Objective: Manual plaque segmentation in microscopy images is a time-consuming process in atherosclerosis research and is potentially subject to unacceptable user-to-user variability and observer bias. We address this by releasing Vesseg, a tool that includes state-of-the-art deep learning models for atherosclerotic plaque segmentation. Approach and Results: Vesseg is a containerized, extensible, open-source, and user-oriented tool. It includes two models, trained and tested on 1089 hematoxylin-eosin-stained mouse model atherosclerotic brachiocephalic artery sections. The models were compared to 3 human raters. Vesseg can be accessed at https://vesseg.online or downloaded. The models show mean Soerensen-Dice scores of 0.91±0.15 for plaque and 0.97±0.08 for lumen pixels. The mean accuracy is 0.98±0.05. Vesseg is already in active use, generating time savings of >10 minutes per slide. Conclusions: Vesseg brings state-of-the-art deep learning methods to atherosclerosis research, providing drastic time savings while allowing for continuous improvement of the models and the underlying pipeline.
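The Soerensen-Dice score used to evaluate the models measures the overlap between a predicted mask and a rater's annotation: 2|A∩B| / (|A|+|B|). A minimal sketch on toy flattened binary masks (not the study's data):

```python
def dice(mask_a, mask_b):
    """Soerensen-Dice score between two flat binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy plaque masks: model prediction vs. human rater annotation.
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # → 0.75
```

A score of 1.0 means pixel-perfect agreement; the reported 0.91 for plaque pixels is thus close to the inter-rater level of consistency.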


Author(s): Réka Hollandi, Ákos Diósdi, Gábor Hollandi, Nikita Moshkov, Péter Horváth

Abstract
AnnotatorJ combines single-cell identification with deep learning and manual annotation. Cellular analysis quality depends on accurate and reliable detection and segmentation of cells, so that subsequent analysis steps, e.g. expression measurements, can be carried out precisely and without bias. Deep learning has recently become a popular way of segmenting cells, performing markedly better than conventional methods. However, such deep learning applications must be trained on large amounts of annotated data to meet the highest expectations. High-quality annotations are unfortunately expensive, as they require field experts to create them, and often cannot be shared outside the lab due to medical regulations. We propose AnnotatorJ, an ImageJ plugin for the semi-automatic annotation of cells (or, generally, objects of interest) in (not only) 2D microscopy images that helps find the true contour of individual objects by applying U-Net-based pre-segmentation. The manual labour of hand-annotating cells can be significantly accelerated with our tool. It thus enables users to create datasets that could potentially increase the accuracy of state-of-the-art solutions, deep learning-based or otherwise, when used as training data.
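AnnotatorJ itself is a Java-based ImageJ plugin; the idea of contour suggestion from a pre-segmentation can nonetheless be sketched in Python: threshold a model's probability map, flood-fill the object under the user's click, and return its boundary pixels as the suggested contour. The map, threshold, and 4-connectivity below are illustrative assumptions, not the plugin's implementation.

```python
def suggest_contour(prob, click, threshold=0.5):
    """From a per-pixel probability map (e.g. U-Net output), grow the
    object under the user's click by flood fill on the thresholded map,
    then return its boundary pixels as the suggested contour."""
    h, w = len(prob), len(prob[0])
    mask = [[prob[y][x] > threshold for x in range(w)] for y in range(h)]
    y0, x0 = click
    if not mask[y0][x0]:
        return []  # click missed the object: nothing to suggest
    # Flood fill the 4-connected component containing the click.
    stack, region = [(y0, x0)], set()
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w) or not mask[y][x]:
            continue
        region.add((y, x))
        stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    # Boundary: region pixels with at least one 4-neighbour outside.
    return sorted(
        (y, x) for y, x in region
        if any((y + dy, x + dx) not in region
               for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)))
    )

# Toy 4x4 probability map with one bright object in the middle.
prob = [
    [0.1, 0.2, 0.1, 0.1],
    [0.2, 0.9, 0.8, 0.1],
    [0.1, 0.8, 0.9, 0.2],
    [0.1, 0.1, 0.2, 0.1],
]
print(suggest_contour(prob, click=(1, 1)))
```

The annotator would then accept or hand-correct the suggested contour, which is where the reported speed-up over fully manual annotation comes from.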

