Sparse representation for colors of 3D point cloud via virtual adaptive sampling

Author(s):  
Junhui Hou ◽  
Lap-Pui Chau ◽  
Ying He ◽  
Philip A. Chou

Author(s):  
M. Samie Tootooni ◽  
Ashley Dsouza ◽  
Ryan Donovan ◽  
Prahalad K. Rao ◽  
Zhenyu (James) Kong ◽  
...  

This work proposes a novel approach for geometric integrity assessment of additively manufactured (AM, 3D printed) components, exemplified by acrylonitrile butadiene styrene (ABS) polymer parts made using the fused filament fabrication (FFF) process. Two research questions are addressed: (1) what is the effect of the FFF process parameters infill percentage (If) and extrusion temperature (Te) on the geometric integrity of ABS parts? and (2) what approach is required to differentiate AM parts with respect to their geometric integrity based on sparse sampling from a large (∼2 million data points) laser-scanned point cloud dataset? To answer the first question, ABS parts are produced by varying the two FFF parameters, infill percentage (If) and extrusion temperature (Te), through a design of experiments. Part geometric integrity is assessed with respect to key geometric dimensioning and tolerancing (GD&T) features, such as flatness, circularity, cylindricity, root mean square deviation, and in-tolerance percentage. These GD&T parameters are obtained by laser scanning of the FFF parts; concurrently, coordinate measurements of the part geometry in the form of 3D point cloud data are acquired. Response surface statistical analysis of these experimental data showed that discrimination of geometric integrity between FFF parts based on GD&T parameters and process inputs alone was unsatisfactory (regression R² < 50%), which directly motivates the second question. Accordingly, a data-driven analytical approach is proposed to classify the geometric integrity of FFF parts using a minimal fraction (< 2% of the total) of the laser-scanned 3D point cloud data. The approach uses spectral graph theoretic Laplacian eigenvalues extracted from the 3D point cloud data in conjunction with a modeling framework called sparse representation to classify FFF part quality contingent on geometric integrity. The practical outcome is a method that quickly classifies part geometric integrity with minimal point cloud data and high classification fidelity (F-score > 95%), bypassing tedious coordinate measurement.
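The sparse-sampling and spectral-feature step described above can be illustrated with a minimal sketch (not the authors' code): a small random fraction of the scanned points is kept, a k-nearest-neighbour graph is built over the sample, and the smallest graph Laplacian eigenvalues serve as the geometric signature. The sampling fraction, neighbourhood size, and number of eigenvalues below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: Laplacian-eigenvalue signature from a sparsely sampled point cloud.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def laplacian_signature(points, sample_frac=0.02, n_neighbors=10, n_eigs=30, seed=0):
    """Smallest normalized-Laplacian eigenvalues of a sparsely sampled 3D point cloud."""
    rng = np.random.default_rng(seed)
    n_sample = max(n_eigs + 2, int(sample_frac * len(points)))
    idx = rng.choice(len(points), size=n_sample, replace=False)
    sample = points[idx]
    # k-nearest-neighbour adjacency graph over the sampled points, symmetrized
    A = kneighbors_graph(sample, n_neighbors=n_neighbors, mode="connectivity")
    A = A.maximum(A.T)
    L = laplacian(A, normed=True)
    # The smallest eigenvalues act as a compact signature of coarse part geometry
    vals = eigsh(L, k=n_eigs, which="SM", return_eigenvectors=False)
    return np.sort(vals)
```

In the paper's framework, such a signature would then be classified with sparse representation against signatures from parts of known geometric integrity.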


2020 ◽  
Vol 29 ◽  
pp. 796-808 ◽  
Author(s):  
Shuai Gu ◽  
Junhui Hou ◽  
Huanqiang Zeng ◽  
Hui Yuan ◽  
Kai-Kuang Ma

Author(s):  
M. Samie Tootooni ◽  
Ashley Dsouza ◽  
Ryan Donovan ◽  
Prahalad K. Rao ◽  
Zhenyu (James) Kong ◽  
...  

The objective of this work is to develop and apply a spectral graph theoretic approach for differentiating (classifying) additively manufactured (AM) parts contingent on the severity of their dimensional variation from laser-scanned coordinate measurements (3D point cloud). The novelty of the approach is in invoking spectral graph Laplacian eigenvalues as a feature extracted from the laser-scanned 3D point cloud data in conjunction with various machine learning techniques. The outcome is a new method that classifies the dimensional variation of an AM part by sampling less than 5% of the ∼2 million 3D point cloud data points acquired per part. This is a practically important result, because it reduces the measurement burden for postprocess quality assurance in AM: parts can be laser-scanned and their dimensional variation assessed quickly on the shop floor. To realize the research objective, the procedure is as follows. Test parts are made using the fused filament fabrication (FFF) polymer AM process. The FFF process conditions are varied per a phased design of experiments plan to produce parts with distinctive dimensional variations. Subsequently, each test part is laser scanned and 3D point cloud data are acquired. To classify the dimensional variation among parts, Laplacian eigenvalues are extracted from the 3D point cloud data and used as features within different machine learning approaches. Six machine learning approaches are compared: sparse representation, k-nearest neighbors, neural network, naïve Bayes, support vector machine, and decision tree. Of these, the sparse representation technique provides the highest classification accuracy (F-score > 97%).
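The comparison of classifiers on the same eigenvalue features can be sketched with scikit-learn, as an illustration only and not the study's code. The sparse-representation classifier that performed best in the paper has no scikit-learn counterpart and is omitted here; X and y stand for a precomputed Laplacian-eigenvalue feature matrix and the corresponding dimensional-variation labels, both assumptions of this sketch.

```python
# Hedged sketch: cross-validated F-score for the off-the-shelf baselines named above.
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def benchmark(X, y, cv=5):
    """Mean macro F-score of each baseline classifier on eigenvalue features X, labels y."""
    models = {
        "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
        "neural network": MLPClassifier(max_iter=2000),
        "naive Bayes": GaussianNB(),
        "support vector machine": SVC(kernel="rbf"),
        "decision tree": DecisionTreeClassifier(),
    }
    return {name: cross_val_score(clf, X, y, cv=cv, scoring="f1_macro").mean()
            for name, clf in models.items()}
```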


GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract
Background: The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so high-throughput segmentation of many shoots remains challenging. Although deep learning could feasibly solve this issue, software tools for annotating 3D point clouds to construct the training dataset are lacking.
Results: We propose a top-down point cloud segmentation algorithm for maize shoots based on optimal transportation distance. Our point cloud annotation toolkit for maize shoots, Label3DMaize, achieves semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations: stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot, and only 10–20% of that time if coarse segmentation alone is required. Fine segmentation is more detailed than coarse segmentation, especially at organ connection regions, and the accuracy of coarse segmentation can reach 97.2% of that of fine segmentation.
Conclusion: Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
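For readers unfamiliar with the optimal transportation distance the segmentation algorithm relies on, the sketch below shows only how such a distance between two point sets can be computed with the POT (Python Optimal Transport) library. It is an illustrative assumption about the underlying metric, not the Label3DMaize segmentation pipeline itself; how the distance drives the top-down stem and organ segmentation is specific to the paper.

```python
# Hedged sketch: earth mover's (optimal transport) distance between two 3D point sets.
import numpy as np
import ot  # POT: Python Optimal Transport

def transport_distance(points_a, points_b):
    """Optimal transport cost between two point sets with uniform mass."""
    a = np.full(len(points_a), 1.0 / len(points_a))      # uniform weights on set A
    b = np.full(len(points_b), 1.0 / len(points_b))      # uniform weights on set B
    M = ot.dist(points_a, points_b, metric="euclidean")  # pairwise ground-cost matrix
    return ot.emd2(a, b, M)                              # exact EMD value
```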


2021 ◽  
Vol 13 (4) ◽  
pp. 803
Author(s):  
Lingchen Lin ◽  
Kunyong Yu ◽  
Xiong Yao ◽  
Yangbo Deng ◽  
Zhenbang Hao ◽  
...  

The Leaf Area Index (LAI) is a key canopy structure parameter, and methods for estimating it have long attracted attention. To explore a low-cost method for estimating forest LAI from 3D point clouds, we acquired drone photographs at different camera tilt angles and defined five schemes (O (0°), T15 (15°), T30 (30°), OT15 (0° and 15°), and OT30 (0° and 30°)), each used to reconstruct a photogrammetric 3D point cloud of the forest canopy. Subsequently, the LAI values and the vertical leaf area distribution derived from the five schemes were calculated from a voxelized model. Our results show that scheme O severely under-represents leaf area in the middle and lower canopy layers, making its LAI estimate inaccurate. For oblique photogrammetry, schemes with 30° photos consistently provided better LAI estimates than schemes with 15° photos (T30 better than T15, OT30 better than OT15), mainly in the lower part of the canopy and particularly in low-LAI areas. The single-tilt-angle schemes (T15, T30) reconstructed a relatively complete overall structure, but their coarse point cloud detail could not capture LAI well. Multi-angle schemes (OT15, OT30) provided excellent leaf area estimation (OT15: R² = 0.8225, RMSE = 0.3334 m²/m²; OT30: R² = 0.9119, RMSE = 0.1790 m²/m²). OT30 provided the best LAI estimation accuracy at a sub-voxel size of 0.09 m and the best checkpoint accuracy (OT30: RMSE [H] = 0.2917 m, RMSE [V] = 0.1797 m). The results highlight that coupling oblique and nadir photography can be an effective solution for estimating forest LAI.
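The voxel-based derivation of the vertical leaf area profile can be indicated with a minimal sketch (an assumption about the general approach, not the study's code): the reconstructed canopy point cloud is binned into cubic voxels and the occupied voxels are counted per height layer. The 0.09 m voxel size follows the abstract; the conversion from occupied voxels to actual leaf area and LAI is study-specific and not reproduced here.

```python
# Hedged sketch: per-layer occupied-voxel counts from a canopy point cloud.
import numpy as np

def vertical_voxel_profile(points, voxel_size=0.09):
    """Return layer heights and the number of occupied voxels per height layer."""
    ijk = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    occupied = np.unique(ijk, axis=0)             # one row per filled voxel
    layers = np.bincount(occupied[:, 2])          # filled-voxel count per z-layer
    heights = (np.arange(len(layers)) + 0.5) * voxel_size  # layer mid-heights above base
    return heights, layers
```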

