Label3DMaize: toolkit for 3D point cloud data annotation of maize shoots

GigaScience
2021
Vol 10 (5)
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract Background The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so the high-throughput segmentation of many shoots remains challenging. Although deep learning can feasibly solve this issue, software tools for annotating 3D point clouds to construct the training dataset are lacking. Results We propose a top-down point cloud segmentation algorithm using optimal transportation distance for maize shoots. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes only 10–20% of that time if coarse segmentation alone is required. Fine segmentation is more detailed than coarse segmentation, especially at organ connection regions, and the accuracy of coarse segmentation can reach 97.2% of that of fine segmentation. Conclusion Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
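As a rough illustration of the stem-first, coarse-then-fine workflow described above, the following Python sketch labels a maize shoot point cloud with a crude axial stem test, Euclidean region growing for coarse organ candidates, and a neighbourhood majority vote as the refinement pass. It is not the Label3DMaize algorithm, which relies on an optimal transportation distance and interactive operations; every threshold and the labelling scheme here are assumptions.

```python
# Minimal sketch of a coarse-then-fine organ labelling pass over a maize
# shoot point cloud. This is NOT the Label3DMaize algorithm (which uses an
# optimal-transportation distance and manual interaction); it only mirrors
# the stem -> coarse -> refinement workflow the abstract describes.
# All parameter values and the labelling scheme are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def coarse_then_fine(points, stem_radius=0.02, grow_radius=0.01, k=15):
    """points: (N, 3) array in metres. Returns one integer label per point:
    0 = stem, 1..M = leaf candidates (hypothetical scheme)."""
    labels = np.full(len(points), -1, dtype=int)
    tree = cKDTree(points)

    # 1. Crude stem extraction: points close to the vertical axis through the
    #    centroid stand in for the toolkit's dedicated stem-segmentation step.
    radial = np.linalg.norm(points[:, :2] - points[:, :2].mean(axis=0), axis=1)
    labels[radial < stem_radius] = 0

    # 2. Coarse leaf segmentation: Euclidean region growing over the rest.
    next_label = 1
    for seed in np.where(labels == -1)[0]:
        if labels[seed] != -1:
            continue
        frontier = [seed]
        labels[seed] = next_label
        while frontier:
            idx = frontier.pop()
            for n in tree.query_ball_point(points[idx], r=grow_radius):
                if labels[n] == -1:
                    labels[n] = next_label
                    frontier.append(n)
        next_label += 1

    # 3. "Fine" pass: smooth organ boundaries (e.g. at stem/leaf junctions)
    #    by giving every point the majority label of its k nearest neighbours.
    _, nbrs = tree.query(points, k=k)
    refined = labels.copy()
    for i, nn in enumerate(nbrs):
        vals, counts = np.unique(labels[nn], return_counts=True)
        refined[i] = vals[np.argmax(counts)]
    return refined
```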

Author(s):  
A. Wichmann ◽  
A. Agoub ◽  
M. Kada

Machine learning methods have gained in importance through the latest developments in artificial intelligence and computer hardware. In particular, approaches based on deep learning have shown that they can provide state-of-the-art results for various tasks. However, directly applying deep learning methods to improve the results of 3D building reconstruction is often not possible, for example because suitable training data are lacking. To address this issue, we present RoofN3D, a new 3D point cloud training dataset that can be used to train machine learning models for different tasks in the context of 3D building reconstruction. It can be used, among other things, to train semantic segmentation networks or to learn the structure of buildings and the geometric model construction. Further details about RoofN3D and the developed data preparation framework, which enables the automatic derivation of training data, are described in this paper. Furthermore, we provide an overview of other available 3D point cloud training data and of approaches from the current literature that present solutions for applying deep learning to unstructured, non-gridded 3D point cloud data.
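As a hypothetical illustration of how a RoofN3D-style sample (a roof point cloud with per-point labels) might be prepared for a semantic segmentation network, the sketch below loads, resamples, and normalises point/label pairs. The directory layout, file names, and label encoding are assumptions for the example, not the published RoofN3D format.

```python
# Hypothetical sketch of feeding RoofN3D-style training samples (roof point
# clouds plus per-point labels) to a segmentation network. File layout and
# label encoding are assumptions, not the published dataset format.
import numpy as np
from pathlib import Path

def load_roof_samples(root, num_points=1024):
    """Yield (points, labels) pairs, each resampled to a fixed size so that
    batches can be stacked for a network such as PointNet."""
    for cloud_file in sorted(Path(root).glob("*_points.npy")):
        label_file = cloud_file.with_name(cloud_file.name.replace("_points", "_labels"))
        pts = np.load(cloud_file).astype(np.float32)   # (N, 3) x, y, z
        lbl = np.load(label_file).astype(np.int64)     # (N,) per-point class id

        # Resample to a fixed point count (with replacement if the roof is small).
        choice = np.random.choice(len(pts), num_points, replace=len(pts) < num_points)
        pts, lbl = pts[choice], lbl[choice]

        # Centre and scale to a unit sphere: a common normalisation for point
        # cloud networks, not something RoofN3D itself prescribes.
        pts -= pts.mean(axis=0)
        pts /= np.max(np.linalg.norm(pts, axis=1)) + 1e-9
        yield pts, lbl
```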


Author(s):  
P. Wang ◽  
W. Yao

Abstract. Competitive point cloud semantic segmentation results usually rely on a large amount of labeled data. However, data annotation is a time-consuming and labor-intensive task, particularly for three-dimensional point cloud data. Obtaining accurate results with limited ground truth as training data is therefore considerably important. As a simple and effective method, pseudo labels can exploit information from unlabeled data for training neural networks. In this study, we propose a pseudo-label-assisted point cloud segmentation method that uses very few sparsely sampled labels, normally selected at random for each class. An adaptive thresholding strategy is proposed to generate pseudo labels based on the prediction probabilities. Pseudo-label learning is an iterative process, and the pseudo labels were updated based solely on the ground-truth weak labels as the model converged, which improved training efficiency. Experiments on the ISPRS 3D semantic labeling benchmark dataset indicated that our method achieved a result competitive with a fully supervised scheme while using at most 2‰ of the labeled points from the original training set, reaching an overall accuracy of 83.7% and an average F1 score of 70.2%.
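The sketch below illustrates one way a per-class adaptive confidence threshold could turn network predictions into pseudo labels while keeping the sparse ground-truth weak labels fixed. The quantile-based rule is an assumption for illustration, not the authors' exact formulation.

```python
# Minimal sketch of per-class adaptive thresholding for pseudo-label
# generation, in the spirit of the scheme described above. The thresholding
# rule (a quantile of each class's own confidence distribution) is an
# assumption, not the authors' exact method.
import numpy as np

def make_pseudo_labels(probs, weak_labels, quantile=0.8, ignore_index=-1):
    """probs: (N, C) softmax scores from the current model.
    weak_labels: (N,) sparse ground truth, ignore_index where unlabeled.
    Returns (N,) training targets: weak labels are kept as-is, confident
    predictions become pseudo labels, everything else stays ignored."""
    pred = probs.argmax(axis=1)
    conf = probs.max(axis=1)

    pseudo = np.full_like(weak_labels, ignore_index)
    for c in range(probs.shape[1]):
        mask = pred == c
        if not mask.any():
            continue
        # Class-wise adaptive threshold: keep only the most confident
        # predictions of each class, so rare classes are not drowned out.
        thr = np.quantile(conf[mask], quantile)
        pseudo[mask & (conf >= thr)] = c

    # Ground-truth weak labels always take precedence over pseudo labels.
    labelled = weak_labels != ignore_index
    pseudo[labelled] = weak_labels[labelled]
    return pseudo
```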


Sensors
2021
Vol 21 (1)
pp. 307
Author(s):  
Minseok Kim ◽  
Sung Ho Choi ◽  
Kyeong-Beom Park ◽  
Jae Yeol Lee

Typical AR methods suffer from generic problems such as visual mismatches, incorrect occlusions, and limited augmentation, because they cannot estimate depth from AR images and must attach AR markers onto physical objects, which prevents industrial workers from conducting manufacturing tasks effectively. This paper proposes a hybrid approach to industrial AR that complements existing AR methods with deep learning-based facility segmentation and depth prediction, without AR markers or a depth camera. First, the outlines of physical objects are extracted by applying a deep learning-based instance segmentation method to the RGB image acquired from the AR camera. Simultaneously, a depth prediction method is applied to the AR image to estimate a depth map, which is converted into a 3D point cloud for each detected object. Based on the segmented 3D point cloud data, 3D spatial relationships among the physical objects are calculated, which helps resolve the visual mismatch and occlusion problems properly. In addition, the approach can deal with a dynamically operating or moving facility, such as a robot, which conventional AR cannot. For these reasons, the proposed approach can be used as a hybrid or complementary function to existing AR methods, since it can be activated whenever the industrial worker requires handling of visual mismatches or occlusions. Quantitative and qualitative analyses verify the advantage of the proposed approach over existing AR methods. Case studies also show that the proposed method can be applied not only to manufacturing but also to other fields. These studies confirm the scalability, effectiveness, and originality of the proposed approach.
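A minimal sketch of the two geometric steps this approach combines, assuming known camera intrinsics and predicted depth and instance masks as inputs: back-projecting the masked depth into an object point cloud, followed by a coarse depth comparison for occlusion handling. It is illustrative only, not the paper's implementation.

```python
# Illustrative sketch (not the paper's implementation): back-project a
# predicted depth map into a per-object 3D point cloud using an instance
# mask, then use a coarse depth comparison to decide whether the real object
# should occlude a virtual overlay. Intrinsics and inputs are assumed.
import numpy as np

def object_point_cloud(depth, mask, fx, fy, cx, cy):
    """depth: (H, W) predicted depth in metres; mask: (H, W) bool instance mask.
    Returns an (M, 3) point cloud for the masked object in camera coordinates."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def virtual_object_occluded(virtual_depth, object_cloud):
    """Very coarse occlusion test: hide the virtual content if the real
    object's median depth lies in front of the virtual anchor depth."""
    return np.median(object_cloud[:, 2]) < virtual_depth
```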


2021
Author(s):  
Thanasis Zoumpekas ◽  
Guillem Molina ◽  
Maria Salamó ◽  
Anna Puig

Point clouds are currently used for a variety of applications, such as detection tasks in the medical and geological domains. Intelligent analysis of point clouds is highly computationally demanding and challenging, especially the task of segmenting the points. Although numerous deep learning models have recently been proposed to segment point cloud data, there is no clear guidance on which neural network to use and incorporate into a system for point cloud segmentation analysis. Moreover, most of the developed models emphasize accuracy over efficiency in order to achieve strong results. Consequently, the training, validation, and testing phases of these models require many processing hours and a large amount of memory, computational requirements that are difficult for many users to meet. In this article, we analyse five state-of-the-art deep learning models for the part segmentation task and give meaningful insights into the use of each one. We provide guidelines based on different properties, considering both learning-related metrics, such as accuracy, and system-related metrics, such as run time and memory footprint. We further propose and analyse generalized performance metrics, which facilitate model evaluation in segmentation tasks and allow users to select the most appropriate approach for their context in terms of accuracy and efficiency.
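As one hypothetical way to fold a learning-related metric and system-related metrics into a single ranking score, the sketch below min-max normalises mIoU, run time, and memory footprint across candidate models and takes a weighted sum. The weighting and normalisation are assumptions for illustration, not the generalized metrics proposed in the article.

```python
# Hypothetical illustration of combining a learning metric (mIoU) with
# system metrics (run time, memory footprint) into one ranking score.
# The weights and normalisation are assumptions, not the article's metrics.
from dataclasses import dataclass

@dataclass
class ModelRun:
    name: str
    miou: float         # mean IoU on the part-segmentation benchmark, in [0, 1]
    train_hours: float  # wall-clock training time
    peak_gb: float      # peak GPU memory footprint

def combined_score(run, runs, w_acc=0.6, w_time=0.2, w_mem=0.2):
    """Min-max normalise each axis across all candidate runs, then take a
    weighted sum in which lower time and memory score higher."""
    def norm(value, values, lower_is_better=False):
        lo, hi = min(values), max(values)
        x = 0.5 if hi == lo else (value - lo) / (hi - lo)
        return 1.0 - x if lower_is_better else x

    return (w_acc * norm(run.miou, [r.miou for r in runs])
            + w_time * norm(run.train_hours, [r.train_hours for r in runs], True)
            + w_mem * norm(run.peak_gb, [r.peak_gb for r in runs], True))

# Placeholder values purely to show the call pattern; not results from the article.
runs = [ModelRun("model_a", miou=0.80, train_hours=5.0, peak_gb=4.0),
        ModelRun("model_b", miou=0.85, train_hours=18.0, peak_gb=11.0)]
ranked = sorted(runs, key=lambda r: combined_score(r, runs), reverse=True)
```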


IEEE Access
2020
Vol 8
pp. 190529-190538
Author(s):  
Jinseok Hong ◽  
Keeyoung Kim ◽  
Hongchul Lee
