Semi-Supervised Deep Learning for Lunar Crater Detection Using CE-2 DOM

2021 ◽  
Vol 13 (14) ◽  
pp. 2819
Author(s):  
Sudong Zang ◽  
Lingli Mu ◽  
Lina Xian ◽  
Wei Zhang

Lunar craters are very important for estimating the geological age of the Moon, studying its evolution, and selecting landing sites. Due to the lack of labeled samples, the long processing times required for high-resolution imagery, the small number of suitable detection models, and the influence of solar illumination, Crater Detection Algorithms (CDAs) based on Digital Orthophoto Maps (DOMs) have not yet been well developed. In this paper, a large number of training data are labeled manually in the Highland and Maria regions using the Chang’E-2 (CE-2) DOM; however, the labeled data cannot cover all crater types. To solve the problem of small crater detection, a new crater detection model (Crater R-CNN) is proposed, which can effectively extract the spatial and semantic information of craters from DOM data. As incompletely labeled samples are not conducive to model training, the Two-Teachers Self-training with Noise (TTSN) method is used to train the Crater R-CNN model, thus constructing a new model, called Crater R-CNN with TTSN, which achieves state-of-the-art performance. To evaluate the accuracy of the model, three other detection models (Mask R-CNN, no-Mask R-CNN, and Crater R-CNN) based on semi-supervised deep learning were used to detect craters in the Highland and Maria regions. The results indicate that Crater R-CNN with TTSN achieved the highest precision in the Highland and Maria regions (91.4% and 88.5%, respectively), as well as the highest recall and F1 score. Compared with Mask R-CNN, no-Mask R-CNN, and Crater R-CNN, Crater R-CNN with TTSN showed strong robustness and better generalization for detecting craters within 1 km in different terrains, making it possible to detect small craters with high accuracy from DOM data.
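At the core of any self-training scheme like TTSN is a pseudo-labeling step: a trained teacher scores unlabeled samples, and only its confident predictions are handed to the student as training labels. A minimal sketch of that step, with illustrative names and a 0.9 confidence threshold that are not taken from the paper:

```python
def select_pseudo_labels(teacher_outputs, threshold=0.9):
    """Keep (sample_id, label) pairs whose teacher confidence meets the threshold."""
    pseudo = []
    for sample_id, label, confidence in teacher_outputs:
        if confidence >= threshold:
            pseudo.append((sample_id, label))
    return pseudo

# Toy teacher predictions: (sample_id, predicted_label, confidence)
outputs = [(0, "crater", 0.97), (1, "crater", 0.55), (2, "background", 0.93)]
print(select_pseudo_labels(outputs))  # [(0, 'crater'), (2, 'background')]
```

The low-confidence sample is simply dropped; in the noisy variant used here, the student would additionally be trained with input and model noise (e.g., augmentation, dropout) on the retained pseudo-labels.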

2019 ◽  
Vol 11 (11) ◽  
pp. 1309 ◽  
Author(s):  
Ben G. Weinstein ◽  
Sergio Marconi ◽  
Stephanie Bohlman ◽  
Alina Zare ◽  
Ethan White

Remote sensing can transform the speed, scale, and cost of biodiversity and forestry surveys. Data acquisition currently outpaces the ability to identify individual organisms in high-resolution imagery. We outline an approach for identifying tree crowns in RGB imagery using a semi-supervised deep learning detection network. Individual crown delineation has been a long-standing challenge in remote sensing, and available algorithms produce mixed results. We show that deep learning models can leverage existing Light Detection and Ranging (LIDAR)-based unsupervised delineation to generate trees that are used to train an initial RGB crown detection model. Despite limitations in the original unsupervised detection approach, this noisy training data may contain information from which the neural network can learn initial tree features. We then refine the initial model using a small number of higher-quality hand-annotated RGB images. We validate our proposed approach using an open-canopy site in the National Ecological Observatory Network. Our results show that a model using 434,551 self-generated trees with the addition of 2848 hand-annotated trees yields accurate predictions in natural landscapes. Using an intersection-over-union threshold of 0.5, the full model had an average tree crown recall of 0.69 and a precision of 0.61 on the visually annotated data. The model had an average tree detection rate of 0.82 for the field-collected stems. The addition of a small number of hand-annotated trees improved performance over the initial self-supervised model. This semi-supervised deep learning approach demonstrates that remote sensing can overcome a lack of labeled training data by generating noisy data for initial training with unsupervised methods and retraining the resulting models with high-quality labeled data.
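The intersection-over-union (IoU) threshold of 0.5 used above is the standard criterion for deciding whether a predicted crown box matches an annotated one. A small self-contained sketch of the computation, with boxes represented as (xmin, ymin, xmax, ymax):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two boxes overlapping in half their width: IoU = 50 / 150 = 1/3,
# which would NOT count as a match at the 0.5 threshold.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Recall and precision then follow from counting matched predictions (true positives) against unmatched annotations and unmatched predictions.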


2021 ◽  
Vol 11 ◽  
Author(s):  
He Sui ◽  
Ruhang Ma ◽  
Lin Liu ◽  
Yaozong Gao ◽  
Wenhai Zhang ◽  
...  

Objective: To develop a deep learning-based model that uses esophageal thickness to detect esophageal cancer in unenhanced chest CT images. Methods: We retrospectively identified 141 patients with esophageal cancer and 273 patients negative for esophageal cancer (at the time of imaging) for model training. Unenhanced chest CT images were collected and used to build a convolutional neural network (CNN) model for diagnosing esophageal cancer. The CNN is a VB-Net segmentation network that segments the esophagus, automatically quantifies the thickness of the esophageal wall, and detects the positions of esophageal lesions. To validate this model, 52 false-negative and 48 normal cases were further collected as a second dataset. The average performance of three radiologists and that of the same radiologists aided by the model were compared. Results: The sensitivity and specificity of the esophageal cancer detection model were 88.8% and 90.9%, respectively, on the validation dataset. On the 52 missed esophageal cancer cases and the 48 normal cases, the sensitivity, specificity, and accuracy of the deep learning model were 69%, 61%, and 65%, respectively. The independent results of the radiologists had sensitivities of 25%, 31%, and 27%; specificities of 78%, 75%, and 75%; and accuracies of 53%, 54%, and 53%. With the aid of the model, the radiologists' results improved to sensitivities of 77%, 81%, and 75%; specificities of 75%, 74%, and 74%; and accuracies of 76%, 77%, and 75%, respectively. Conclusions: A deep learning-based model can effectively detect esophageal cancer in unenhanced chest CT scans, improving the incidental detection of esophageal cancer.
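A toy sketch of the thickness-based screening idea: once the segmentation network has produced a per-slice esophageal wall thickness, flagging suspicious slices reduces to a threshold test. The 5 mm cutoff here is purely illustrative, not a value taken from the paper:

```python
def flag_thickened_slices(thickness_mm, threshold_mm=5.0):
    """Return indices of CT slices whose measured wall thickness exceeds the cutoff.

    thickness_mm: list of per-slice esophageal wall thickness measurements (mm).
    threshold_mm: illustrative screening cutoff, NOT a value from the paper.
    """
    return [i for i, t in enumerate(thickness_mm) if t > threshold_mm]

# Slices 1 and 3 exceed the illustrative cutoff and would be flagged for review.
flagged = flag_thickened_slices([3.0, 6.2, 4.9, 7.1])
```

In the actual pipeline the flagged positions would be presented to the radiologist alongside the original images, which is how the model-aided reading improves sensitivity.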


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Dennis Segebarth ◽  
Matthias Griebel ◽  
Nikolai Stein ◽  
Cora R von Collenberg ◽  
Corinna Martin ◽  
...  

Bioimage analysis of fluorescent labels is widely used in the life sciences. Recent advances in deep learning (DL) allow automating time-consuming manual image analysis processes based on annotated training data. However, manual annotation of fluorescent features with a low signal-to-noise ratio is somewhat subjective. Training DL models on subjective annotations may be unstable or yield biased models. In turn, these models may be unable to reliably detect biological effects. An analysis pipeline integrating data annotation, ground truth estimation, and model training can mitigate this risk. To evaluate this integrated process, we compared different DL-based analysis approaches. With data from two model organisms (mice, zebrafish) and five laboratories, we show that ground truth estimation from multiple human annotators helps to establish objectivity in fluorescent feature annotations. Furthermore, ensembles of multiple models trained on the estimated ground truth establish reliability and validity. Our research provides guidelines for reproducible DL-based bioimage analyses.


2021 ◽  
Vol 33 (5) ◽  
pp. 83-104
Author(s):  
Aleksandr Igorevich Getman ◽  
Maxim Nikolaevich Goryunov ◽  
Andrey Georgievich Matskevich ◽  
Dmitry Aleksandrovich Rybolovlev

The paper discusses the issues of training models for detecting computer attacks using machine learning methods. The results of an analysis of publicly available training datasets, and of tools for analyzing network traffic and extracting features of network sessions, are presented sequentially. The drawbacks of existing tools, and the possible errors in datasets formed with their help, are noted. It is concluded that collecting one's own training data is necessary when the reliability of public datasets cannot be guaranteed, and that pre-trained models are of limited use in networks whose characteristics differ from those of the network in which the training traffic was collected. A practical approach to generating training data for computer attack detection models is proposed. The proposed solutions have been tested to evaluate the quality of model training on the collected data and the quality of attack detection in a real network infrastructure.
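Extracting features of network sessions, as the tools analyzed above do, typically means aggregating per-packet records into per-session statistics. A minimal sketch with a deliberately tiny feature set; real feature extractors compute dozens of such statistics, and all names here are illustrative:

```python
def session_features(packets):
    """Aggregate a session's packets into simple training features.

    packets: list of (timestamp_seconds, size_bytes) tuples for one session.
    """
    times = [t for t, _ in packets]
    sizes = [s for _, s in packets]
    return {
        "n_packets": len(packets),
        "total_bytes": sum(sizes),
        "duration": max(times) - min(times),
    }

# A three-packet session spanning 1.5 seconds.
features = session_features([(0.0, 100), (0.5, 200), (1.5, 50)])
```

Errors at exactly this stage (mis-grouped sessions, wrongly computed statistics) are one source of the dataset defects the paper warns about.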


Microscopy ◽  
2021 ◽  
Author(s):  
Kohki Konishi ◽  
Takao Nonaka ◽  
Shunsuke Takei ◽  
Keisuke Ohta ◽  
Hideo Nishioka ◽  
...  

Abstract Three-dimensional (3D) observation of biological samples using serial-section electron microscopy is widely used. However, organelle segmentation requires a significant amount of manual time, and several studies have been conducted to improve its efficiency. One promising method is 3D deep learning (DL), which is highly accurate; however, creating training data for 3D DL still requires manual time and effort. In this study, we developed a highly efficient integrated image segmentation tool that combines stepwise DL with manual correction. The tool has four functions: efficient tracers for annotation, model training/inference for organelle segmentation using a lightweight convolutional neural network, efficient proofreading, and model refinement. We applied this tool to increase the training data step by step (the stepwise annotation method) to segment mitochondria in cells of the cerebral cortex. We found that the stepwise annotation method reduced the manual operation time by one-third compared with the fully manual method, in which all training data were created manually. Moreover, we demonstrated that the F1 score, a metric of segmentation accuracy, reached 0.9 when the 3D DL model was trained with these training data. The stepwise annotation method using this tool and the 3D DL model improved segmentation efficiency for various organelles.
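The F1 score reported above is the harmonic mean of precision and recall, computed from true-positive, false-positive, and false-negative counts. A self-contained sketch of the standard definition:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# With 9 true positives, 1 false positive, and 1 false negative,
# precision = recall = 0.9, so F1 = 0.9 — the level reported in the study.
score = f1_score(9, 1, 1)
```

For segmentation this is usually evaluated per voxel (equivalently, the Dice coefficient of the predicted and ground-truth masks).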


2021 ◽  
Vol 11 (22) ◽  
pp. 10966
Author(s):  
Hsiang-Chieh Chen ◽  
Zheng-Ting Li

This article introduces an automated data-labeling approach for generating crack ground truths (GTs) in concrete images. The main algorithm comprises generating first-round GTs, pre-training a deep learning-based model, and generating second-round GTs. On the basis of the second-round GTs of the training data, a learning-based crack detection model can be trained in a self-supervised manner. The pre-trained deep learning-based model is effective for crack detection after it is re-trained using the second-round GTs. The main contribution of this study is an automated GT generation process for training a pixel-level crack detection model. Experimental results show that the second-round GTs are similar to manually marked labels. Accordingly, the cost of implementing learning-based methods is reduced significantly because human data labeling is not required.
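Generating second-round pixel-level GTs from a pre-trained model amounts to binarizing its per-pixel crack probabilities. A toy sketch of that conversion; the 0.5 threshold and the function name are illustrative, not taken from the article:

```python
def second_round_gts(prob_map, threshold=0.5):
    """Binarize a model's per-pixel crack probability map into pixel-level GTs.

    prob_map: 2D list of crack probabilities in [0, 1].
    threshold: illustrative cutoff for labeling a pixel as crack.
    """
    return [[1 if p >= threshold else 0 for p in row] for row in prob_map]

# A 2x2 probability map becomes a binary label map for re-training.
labels = second_round_gts([[0.9, 0.2], [0.4, 0.7]])
```

Re-training the detector on these machine-generated labels is what makes the overall pipeline self-supervised: no human marks crack pixels at any stage.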


Author(s):  
Y. Li ◽  
M. Sakamoto ◽  
T. Shinohara ◽  
T. Satoh

Abstract. Label placement is one of the most essential tasks in the fields of cartography and geographic information systems. Numerous studies have been conducted on automatic label placement over the past few decades. In this study, we focus on automatic label placement for area-features, which has been studied less than that for point-features and line-features. Most existing approaches adopt rule-based algorithms, and handcrafted rules, criteria, objective functions, etc., have limited ability to express the characteristics of label placement for area-features of various shapes. Hence, we propose a novel approach to automatic label placement for area-features based on deep learning. The aim of the proposed approach is to learn the complex and implicit characteristics of manual area-feature label placement directly and automatically from training data. First, the area-features in vector format are converted into a binary image. Then a key-point detection model, which simultaneously detects and localizes specific key-points in an image, is applied to the binary image to estimate candidate label positions. Finally, the final label placement position for each area-feature is determined via simple post-processing. To evaluate the proposed approach, experiments were conducted with cadastral data. The experimental results show that the ratios of estimation errors within 1.2 m (corresponding to one pixel of the input image) were 92.6% and 94.5% for the center and upper-left placement styles, respectively. This implies that the proposed approach can place labels for area-features automatically and accurately.
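Because the key-point detector works on a rasterized image, its output must be mapped back from pixel coordinates to map coordinates to place the actual label. A minimal sketch of that conversion, using the 1.2 m/pixel ground resolution implied by the abstract; the origin values and function name are illustrative:

```python
def pixel_to_map(px, py, origin_x, origin_y, resolution=1.2):
    """Convert a detected key-point from pixel to map coordinates.

    (origin_x, origin_y): map coordinates of the image's top-left corner
    (illustrative values below). resolution: meters per pixel; 1.2 m matches
    the one-pixel error bound quoted in the abstract. Note the y-axis flips:
    image rows grow downward, map northing grows upward.
    """
    return origin_x + px * resolution, origin_y - py * resolution

# A key-point detected at pixel (10, 5) in an image anchored at (100.0, 200.0).
label_pos = pixel_to_map(10, 5, 100.0, 200.0)
```

An estimation error "within 1.2 m" therefore means the detected key-point landed within one pixel of the manually placed label after this conversion.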


2021 ◽  
Vol 13 (19) ◽  
pp. 3919
Author(s):  
Jiawei Mo ◽  
Yubin Lan ◽  
Dongzi Yang ◽  
Fei Wen ◽  
Hongbin Qiu ◽  
...  

Instance segmentation of fruit tree canopies from images acquired by unmanned aerial vehicles (UAVs) is significant for the precise management of orchards. Although deep learning methods have been widely used for feature extraction and classification, existing pipelines still suffer from complex data requirements and a strong dependence on software performance. This paper proposes a deep learning-based instance segmentation method for litchi trees, which has a simple structure and lower requirements on data format. Considering that deep learning models require a large amount of training data, a labor-friendly semi-automatic image annotation method is introduced, which significantly improves the efficiency of data pre-processing. To address the high computing-resource demands of deep learning methods, a partition-based method is presented for the segmentation of high-resolution digital orthophoto maps (DOMs). Citrus data is added to the training set to alleviate the lack of diversity in the original litchi dataset. Average precision (AP) is selected as the evaluation metric for the proposed model. The results show that, with training on the combined litchi-citrus datasets, the best AP on the test set reaches 96.25%.
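A partition-based scheme for a high-resolution DOM slices the image into fixed-size tiles, usually with overlap so canopies straddling a tile boundary are still seen whole in at least one tile. A sketch of the tiling arithmetic (tile size and overlap values are illustrative; it assumes the image is at least one tile wide and tall):

```python
def tile_origins(width, height, tile, overlap):
    """Top-left origins of overlapping tiles covering a width x height image.

    Assumes width >= tile and height >= tile. Extra tiles are appended so the
    right and bottom edges are always covered.
    """
    step = tile - overlap
    xs = list(range(0, width - tile + 1, step))
    ys = list(range(0, height - tile + 1, step))
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

# A 10x10 image with 6-pixel tiles and 2-pixel overlap needs 4 tiles.
origins = tile_origins(10, 10, 6, 2)
```

Each tile is segmented independently and the per-tile results are merged back into DOM coordinates, which keeps peak memory bounded by the tile size rather than the full orthophoto.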


Object detection is closely related to video and image analysis. Training object detection models with only image-level labels is a challenging research area in computer vision, and researchers have not yet discovered an accurate model for Weakly Supervised Object Detection (WSOD), which detects and localizes objects under the supervision of image-level annotations alone. The proposed work uses a self-paced approach applied to the region proposal network of the Faster R-CNN architecture, which gives better results than previous weakly supervised object detectors and can be applied to computer vision applications in the near future.
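The essence of self-paced learning is a curriculum the model sets for itself: on each round, only samples (here, region proposals) whose current loss falls below a threshold are used for training, and the threshold grows so harder samples are admitted over time. A minimal sketch of the selection step; names and values are illustrative, not from the proposed work:

```python
def self_paced_select(losses, lam):
    """Self-paced sample selection: keep indices whose loss is below lambda.

    losses: current per-sample (e.g., per-proposal) loss values.
    lam: pace threshold, gradually increased across training rounds.
    """
    return [i for i, loss in enumerate(losses) if loss < lam]

# Early in training only the two easy samples pass; raising lam later
# would admit the hard one as well.
easy = self_paced_select([0.1, 0.9, 0.3], lam=0.5)
```

Applied to a region proposal network, this means noisy proposals that the weakly supervised signal cannot yet explain are deferred rather than allowed to corrupt early training.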

