Reducing Manual Operation Time to Obtain a Segmentation Learning Model for Volume Electron Microscopy Using Stepwise Deep Learning With Manual Correction

Microscopy ◽  
2021 ◽  
Author(s):  
Kohki Konishi ◽  
Takao Nonaka ◽  
Shunsuke Takei ◽  
Keisuke Ohta ◽  
Hideo Nishioka ◽  
...  

Abstract Three-dimensional (3D) observation of biological samples using serial-section electron microscopy is widely used. However, organelle segmentation requires a significant amount of manual time, and several studies have therefore sought to improve its efficiency. One promising method is 3D deep learning (DL), which is highly accurate; however, creating training data for 3D DL still requires manual time and effort. In this study, we developed a highly efficient integrated image segmentation tool that combines stepwise DL with manual correction. The tool has four functions: efficient tracers for annotation, model training/inference for organelle segmentation using a lightweight convolutional neural network, efficient proofreading, and model refinement. We applied this tool to increase the training data step by step (the stepwise annotation method) and segmented the mitochondria in cells of the cerebral cortex. We found that the stepwise annotation method reduced the manual operation time by one-third compared with the fully manual method, in which all training data were created manually. Moreover, the F1 score, a metric of segmentation accuracy, reached 0.9 when the 3D DL model was trained with these training data. The stepwise annotation method using this tool and the 3D DL model improved segmentation efficiency for various organelles.
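
A minimal sketch of the stepwise annotation loop described above, in Python with numpy. The `train`, `predict`, and `manual_correction` hooks are hypothetical placeholders standing in for the paper's tool, not its actual API:

```python
import numpy as np

def f1_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """F1 score between two binary masks (the accuracy metric quoted above)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def stepwise_annotation(volumes, train, predict, manual_correction, n_seed=1):
    """Grow the training set step by step instead of annotating everything by hand.

    volumes            -- list of image volumes to annotate
    train(pairs)       -- fits a segmentation model on (volume, mask) pairs
    predict(model, v)  -- returns a binary mask for volume v
    manual_correction  -- human-in-the-loop proofreading of a predicted mask
    """
    # Seed: the first few volumes are annotated fully manually
    # (modeled here as correcting an empty mask).
    labeled = [(v, manual_correction(v, np.zeros_like(v, dtype=bool)))
               for v in volumes[:n_seed]]
    for v in volumes[n_seed:]:
        model = train(labeled)             # retrain on all corrected data so far
        mask = predict(model, v)           # the model does the bulk of the work
        mask = manual_correction(v, mask)  # the human only fixes the errors
        labeled.append((v, mask))          # the corrected mask becomes training data
    return train(labeled), labeled
```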

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; in water, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing. This study expands the use of two- and three-dimensional detection technologies to the underwater task of detecting abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Several pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of different dimensionality are used to train the models. In the two-dimensional method, the point cloud is converted into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are used for tire classification. The results show that both approaches provide good accuracy.
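
The 2D branch hinges on rasterizing the point cloud into a bird's-eye-view image. A minimal numpy sketch of that conversion follows; the grid ranges and resolution are illustrative assumptions, not values from the paper:

```python
import numpy as np

def birds_eye_view(points: np.ndarray, resolution: float = 0.05,
                   x_range=(-10.0, 10.0), y_range=(-10.0, 10.0)) -> np.ndarray:
    """Rasterize an (N, 3) point cloud into a top-down height image.

    Each pixel stores the maximum z value of the points falling into it,
    so ring-shaped objects such as tires stand out against the seabed.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]
    cols = ((x - x_range[0]) / resolution).astype(int)
    rows = ((y - y_range[0]) / resolution).astype(int)
    h = int((y_range[1] - y_range[0]) / resolution)
    w = int((x_range[1] - x_range[0]) / resolution)
    img = np.full((h, w), -np.inf)
    np.maximum.at(img, (rows, cols), z)      # max height per grid cell
    finite = np.isfinite(img)
    img[~finite] = img[finite].min() if finite.any() else 0.0
    # Normalize to 0-255 so the image can feed a 2D detector such as YOLOv3.
    img = (img - img.min()) / (np.ptp(img) or 1.0)
    return (img * 255).astype(np.uint8)
```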


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Dennis Segebarth ◽  
Matthias Griebel ◽  
Nikolai Stein ◽  
Cora R von Collenberg ◽  
Corinna Martin ◽  
...  

Bioimage analysis of fluorescent labels is widely used in the life sciences. Recent advances in deep learning (DL) allow automating time-consuming manual image analysis processes based on annotated training data. However, manual annotation of fluorescent features with a low signal-to-noise ratio is somewhat subjective, and training DL models on subjective annotations may be unstable or yield biased models. In turn, these models may be unable to reliably detect biological effects. An analysis pipeline integrating data annotation, ground truth estimation, and model training can mitigate this risk. To evaluate this integrated process, we compared different DL-based analysis approaches. With data from two model organisms (mice, zebrafish) and five laboratories, we show that ground truth estimation from multiple human annotators helps to establish objectivity in fluorescent feature annotations. Furthermore, ensembles of multiple models trained on the estimated ground truth establish reliability and validity. Our research provides guidelines for reproducible DL-based bioimage analyses.
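
The two key ingredients, annotator fusion and model ensembling, can be sketched in a few lines of numpy. Pixel-wise majority voting is a deliberately simple stand-in for the paper's ground truth estimation procedure, used here only to make the pipeline concrete:

```python
import numpy as np

def estimate_ground_truth(annotations: np.ndarray) -> np.ndarray:
    """Fuse binary masks from several annotators by pixel-wise majority vote.

    annotations -- bool array of shape (n_annotators, H, W).
    Majority voting is a simplification, not the authors' exact estimator.
    """
    votes = annotations.sum(axis=0)
    return votes * 2 > annotations.shape[0]   # strict majority

def ensemble_predict(models, image, predict, threshold: float = 0.5) -> np.ndarray:
    """Average the probability maps of several trained models, then threshold.

    predict(model, image) is assumed to return per-pixel probabilities in [0, 1].
    """
    probs = np.mean([predict(m, image) for m in models], axis=0)
    return probs >= threshold
```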


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Inna V Nechipurenko ◽  
Cristina Berciu ◽  
Piali Sengupta ◽  
Daniela Nicastro

The primary cilium is nucleated by the mother centriole-derived basal body (BB) via as-yet poorly characterized mechanisms. BBs have been reported to degenerate following ciliogenesis in the C. elegans embryo, although neither BB architecture nor early ciliogenesis steps have been described in this organism. In a previous study (Doroquez et al., 2014), we described the three-dimensional morphologies of sensory neuron cilia in adult C. elegans hermaphrodites at high resolution. Here, we use serial-section electron microscopy and tomography of staged C. elegans embryos to demonstrate that BBs remodel to support ciliogenesis in a subset of sensory neurons. We show that centriolar singlet microtubules are converted into BB doublets, which subsequently grow asynchronously to template the ciliary axoneme; we visualize degeneration of the centriole core; and we define the developmental stage at which the transition zone is established. Our work provides a framework for future investigations into the mechanisms underlying BB remodeling.


Author(s):  
A. Nurunnabi ◽  
F. N. Teferle ◽  
J. Li ◽  
R. C. Lindenbergh ◽  
A. Hunegnaw

Abstract. Ground surface extraction is one of the classic tasks in airborne laser scanning (ALS) point cloud processing, used for three-dimensional (3D) city modelling, infrastructure health monitoring, and disaster management. Many methods have been developed over the last three decades. Recently, Deep Learning (DL) has become the dominant technique for 3D point cloud classification. DL methods used for classification can be categorized into end-to-end and non-end-to-end approaches. One of the main challenges of supervised DL approaches is obtaining a sufficient amount of training data; the main advantage of a supervised non-end-to-end approach is that it requires less. This paper introduces a novel local-feature-based non-end-to-end DL algorithm that generates a binary classifier for ground point filtering. It studies feature relevance and investigates three models built from different combinations of features. The method is free from the limitations imposed by the irregular structure and varying density of point clouds, which are the biggest challenges for convolutional neural networks, and it does not require transforming the data into regular 3D voxel grids or any rasterization. Its performance is demonstrated on two ALS datasets covering urban environments. The method successfully labels ground and non-ground points in the presence of steep slopes and height discontinuities in the terrain. Experiments in this paper show that the algorithm achieves around 97% in both F1 score and model accuracy for ground point labelling.
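
A minimal sketch of the non-end-to-end pattern: hand-crafted local geometric features per point, fed to a small classifier. The specific feature set below (PCA shape descriptors plus height statistics over k-neighbourhoods) is illustrative; the paper studies its own feature combinations:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPClassifier

def local_features(points: np.ndarray, k: int = 20) -> np.ndarray:
    """Per-point geometric features from the k-nearest-neighbour patch."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = []
    for nbr in points[idx]:                 # (k, 3) patch around each point
        centered = nbr - nbr.mean(axis=0)
        evals = np.linalg.eigvalsh(centered.T @ centered / k)   # ascending
        l1, l2, l3 = evals[::-1] + 1e-12    # descending eigenvalues
        feats.append([
            (l1 - l2) / l1,                 # linearity
            (l2 - l3) / l1,                 # planarity
            l3 / l1,                        # scattering
            np.ptp(nbr[:, 2]),              # height range in the patch
            nbr[:, 2].std(),                # height roughness
        ])
    return np.asarray(feats)

# Binary ground / non-ground classifier on the hand-crafted features
# (non-end-to-end: the network never sees raw coordinates).
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
# Usage with hypothetical arrays:
#   clf.fit(local_features(train_pts), train_labels)
#   ground = clf.predict(local_features(test_pts)).astype(bool)
```

Because every point is summarized by a fixed-length feature vector, the irregular structure and varying density of the raw cloud never reach the network, which is the advantage claimed above.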


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Chentao Wen ◽  
Takuya Miura ◽  
Venkatakaushik Voleti ◽  
Kazushi Yamaguchi ◽  
Motosuke Tsutsumi ◽  
...  

Despite recent improvements in microscope technologies, segmenting and tracking cells in three-dimensional time-lapse images (3D + T images) to extract their dynamic positions and activities remains a considerable bottleneck in the field. We developed a deep learning-based software pipeline, 3DeeCellTracker, by integrating multiple existing and new techniques, including deep learning for tracking. With only one volume of training data, one initial correction, and a few parameter changes, 3DeeCellTracker successfully segmented and tracked ~100 cells in the brains of both semi-immobilized and 'straightened' freely moving worms, in a naturally beating zebrafish heart, and ~1000 cells in a 3D cultured tumor spheroid. Although these datasets were imaged with highly divergent optical systems, our method tracked 90–100% of the cells in most cases, comparable to or better than previous results. These results suggest that 3DeeCellTracker could pave the way for revealing dynamic cell activities in image datasets that have been difficult to analyze.
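
3DeeCellTracker itself learns the tracking; the sketch below shows only the underlying linking problem, solved naively with a globally optimal nearest-neighbour assignment between segmented centroids of consecutive volumes. It is a stand-in for intuition, not the pipeline's method:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_centroids: np.ndarray, next_centroids: np.ndarray,
                max_dist: float = 5.0):
    """Match cell centroids between consecutive 3D volumes.

    prev_centroids -- (P, 3) centroids in volume t
    next_centroids -- (Q, 3) centroids in volume t + 1
    Returns (prev_index, next_index) pairs from a one-to-one assignment
    (Hungarian algorithm) on Euclidean distance, dropping implausible links.
    """
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - next_centroids[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```

Such purely geometric linking breaks down exactly where the paper operates, e.g. in a beating heart or a moving worm, which is why a learned tracker is needed there.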


2021 ◽  
Vol 13 (14) ◽  
pp. 2819
Author(s):  
Sudong Zang ◽  
Lingli Mu ◽  
Lina Xian ◽  
Wei Zhang

Lunar craters are very important for estimating the geological age of the Moon, studying its evolution, and selecting landing sites. Due to a lack of labeled samples, the long processing times required by high-resolution imagery, the small number of suitable detection models, and the influence of solar illumination, Crater Detection Algorithms (CDAs) based on Digital Orthophoto Maps (DOMs) have not yet been well developed. In this paper, a large number of training data are labeled manually in the Highland and Maria regions using the Chang'E-2 (CE-2) DOM; however, the labeled data cannot cover all crater types. To address small crater detection, a new crater detection model (Crater R-CNN) is proposed, which effectively extracts the spatial and semantic information of craters from DOM data. As incompletely labeled samples are not conducive to model training, the Two-Teachers Self-training with Noise (TTSN) method is used to train the Crater R-CNN model, yielding a new model, Crater R-CNN with TTSN, which achieves state-of-the-art performance. To evaluate its accuracy, three other detection models based on semi-supervised deep learning (Mask R-CNN, no-Mask R-CNN, and Crater R-CNN) were used to detect craters in the Highland and Maria regions. The results indicate that Crater R-CNN with TTSN achieved the highest precision (91.4% and 88.5% in the Highland and Maria regions, respectively), as well as the highest recall and F1 score. Compared with Mask R-CNN, no-Mask R-CNN, and Crater R-CNN, Crater R-CNN with TTSN showed strong robustness and better generalization for detecting craters within 1 km in different terrains, making it possible to detect small craters with high accuracy from DOM data.
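
The abstract names TTSN but does not spell it out, so the following is only a plausible reading of a two-teacher self-training loop: pseudo-labels are accepted where two independently trained teachers agree. The agreement threshold, consensus rule, and hooks are all assumptions:

```python
import numpy as np

def two_teacher_self_training(labeled, unlabeled, train, predict,
                              agree_iou: float = 0.8, rounds: int = 3):
    """Generic two-teacher self-training for detection masks (a sketch,
    not the published TTSN algorithm).

    labeled   -- list of (image, mask) pairs
    unlabeled -- list of images without annotations
    train     -- fits a model on (image, mask) pairs
    predict   -- returns a binary mask for an image
    """
    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0

    pool = list(labeled)
    for _ in range(rounds):
        t1 = train(pool)                    # teacher 1
        t2 = train(pool[::-1])              # teacher 2, different data order
        accepted, remaining = [], []
        for img in unlabeled:
            m1, m2 = predict(t1, img), predict(t2, img)
            if iou(m1, m2) >= agree_iou:
                # Consensus region becomes a pseudo-label for the next round.
                accepted.append((img, np.logical_and(m1, m2)))
            else:
                remaining.append(img)
        pool += accepted
        unlabeled = remaining
    return train(pool)                      # final model on real + pseudo labels
```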


1986 ◽  
Vol 102 (5) ◽  
pp. 1654-1665 ◽  
Author(s):  
E G Fey ◽  
G Krochmalnic ◽  
S Penman

The nonchromatin structure or matrix of the nucleus has been studied using an improved fractionation in concert with resinless section electron microscopy. The resinless sections show the nucleus of the intact cell to be filled with a dense network or lattice composed of soluble proteins and chromatin in addition to the structural nuclear constituents. In the first fractionation step, soluble proteins are removed by extraction with Triton X-100, and the dense nuclear lattice largely disappears. Chromatin and nonchromatin nuclear fibers are now sharply imaged. Nuclear constituents are further separated into three well-defined, distinct protein fractions. Chromatin proteins are those that require intact DNA for their association with the nucleus and are released by 0.25 M ammonium sulfate after internucleosomal DNA is cut with DNAase I. The resulting structure retains most heterogeneous nuclear ribonucleoprotein (hnRNP) and is designated the RNP-containing nuclear matrix. The proteins of hnRNP are those associated with the nucleus only if RNA is intact. These are released when nuclear RNA is briefly digested with RNAase A. Ribonuclease digestion releases 97% of the hnRNA and its associated proteins. These proteins correspond to the hnRNP described by Pederson (Pederson, T., 1974, J. Mol. Biol., 83:163-184) and are distinct from the proteins that remain in the ribonucleoprotein (RNP)-depleted nuclear matrix. The RNP-depleted nuclear matrix is a core structure that retains lamins A and C, the intermediate filaments, and a unique set of nuclear matrix proteins (Fey, E. G., K. M. Wan, and S. Penman, 1984, J. Cell Biol. 98:1973-1984). This core had been previously designated the nuclear matrix-intermediate filament scaffold, and its proteins are a third, distinct, and nonoverlapping subset of the nuclear nonhistone proteins. Visualizing the nuclear matrix using resinless sections shows that nuclear RNA plays an important role in matrix organization. Conventional Epon-embedded electron microscopy sections show comparatively little of the RNP-containing and RNP-depleted nuclear matrix structure. In contrast, resinless sections show the matrix interior to be a three-dimensional network of thick filaments bounded by the nuclear lamina. The filaments are covered with 20-30-nm electron-dense particles which may contain the hnRNA. The large electron-dense bodies, enmeshed in the interior matrix fibers, have the characteristic morphology of nucleoli. Treatment of the nuclear matrix with RNAase results in the aggregation of the interior fibers and the extensive loss of the 20-30-nm particles.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Kenneth W. Dunn ◽  
Chichen Fu ◽  
David Joon Ho ◽  
Soonam Lee ◽  
Shuo Han ◽  
...  

Abstract The scale of biological microscopy has increased dramatically over the past ten years, with the development of new modalities supporting collection of high-resolution fluorescence image volumes spanning hundreds of microns, if not millimeters. The size and complexity of these volumes is such that quantitative analysis requires automated methods of image processing to identify and characterize individual cells. For many workflows, this process starts with segmentation of nuclei, which, due to their ubiquity, ease of labeling, and relatively simple structure, are appealing targets for automated detection of individual cells. However, in the context of large, three-dimensional image volumes, nuclei present many challenges to automated segmentation, such that conventional approaches are seldom effective and/or robust. Techniques based upon deep learning have shown great promise, but enthusiasm for applying them is tempered by the need to generate training data, an arduous task, particularly in three dimensions. Here we present results of a new technique for nuclear segmentation using neural networks trained on synthetic data. Comparisons with results obtained using commonly used image processing packages demonstrate that DeepSynth provides the superior results associated with deep-learning techniques without the need for manual annotation.
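
To make the synthetic-training-data idea concrete, here is a deliberately naive volume generator in Python/numpy: random ellipsoids plus Gaussian noise, which yields a perfect label mask for free. DeepSynth's own synthesis pipeline is far more realistic; this only illustrates how training data can exist without manual annotation:

```python
import numpy as np

def synthetic_nuclei_volume(shape=(64, 128, 128), n_nuclei=30,
                            radius_range=(4, 9), noise_sigma=0.1, seed=0):
    """Generate a synthetic 3D volume of ellipsoidal 'nuclei' and its label mask.

    Returns (volume scaled to [0, 1], binary ground-truth mask). All parameter
    values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    zz, yy, xx = np.indices(shape)
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_nuclei):
        cz, cy, cx = (rng.uniform(0, s) for s in shape)          # random center
        rz, ry, rx = rng.uniform(*radius_range, size=3)          # random radii
        mask |= (((zz - cz) / rz) ** 2 + ((yy - cy) / ry) ** 2
                 + ((xx - cx) / rx) ** 2) <= 1.0
    # Bright ellipsoids on a dark background, corrupted with Gaussian noise.
    volume = mask * rng.uniform(0.6, 1.0) + rng.normal(0.0, noise_sigma, shape)
    return np.clip(volume, 0.0, 1.0), mask
```

A segmentation network trained on many such (volume, mask) pairs never needs a human-drawn label, which is the arduous step the passage above identifies.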

