Robust Surface Reconstruction of Plant Leaves from 3D Point Clouds

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Ryuhei Ando ◽  
Yuko Ozasa ◽  
Wei Guo

The automation of plant phenotyping using 3D imaging techniques is indispensable. However, conventional methods for reconstructing the leaf surface from 3D point clouds involve a trade-off between the accuracy of leaf surface reconstruction and the method’s robustness against noise and missing points. To mitigate this trade-off, we developed a leaf surface reconstruction method that reduces the effects of noise and missing points while maintaining surface reconstruction accuracy by capturing two components of the leaf (the shape and the distortion of that shape) separately, using leaf-specific properties. This separation simplifies leaf surface reconstruction compared with conventional methods while increasing robustness against noise and missing points. To evaluate the proposed method, we reconstructed leaf surfaces from 3D point clouds of leaves acquired from two crop species (soybean and sugar beet) and compared the results with those of conventional methods. The results showed that the proposed method robustly reconstructed the leaf surfaces for two different leaf shapes, despite noise and missing points. To evaluate the stability of the leaf surface reconstructions, we also calculated the surface areas of the target leaves over 14 consecutive days. The results derived from the proposed method showed less variation and fewer outliers than the conventional methods.
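The idea of separating a leaf into a smooth base shape plus a distortion component can be illustrated with a simple least-squares surface fit. The sketch below is only an illustration of that intuition, not the authors' actual parameterization: a global low-order polynomial fit absorbs noise and tolerates missing points, while the per-point residual plays the role of the distortion term.

```python
import numpy as np

def fit_base_surface(points, order=2):
    """Fit a low-order polynomial z = f(x, y) to leaf points by least squares.

    Returns the coefficient vector (the "shape") and the per-point residual
    (the "distortion"). Because the fit is global, isolated noise and gaps
    in the cloud barely affect the recovered base surface.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Design matrix with all monomials x^i * y^j for i + j <= order
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = z - A @ coeffs
    return coeffs, residual

# Noisy points sampled from a gently curved synthetic "leaf"
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = rng.uniform(-1, 1, 500)
z = 0.3 * x**2 - 0.2 * y**2 + rng.normal(0, 0.01, 500)
pts = np.column_stack([x, y, z])
coeffs, residual = fit_base_surface(pts)
print(np.abs(residual).mean())  # small: the global fit absorbs the noise
```

A surface built from `coeffs` alone already approximates the leaf; the residual field can then be modeled separately at whatever fidelity the noise level permits.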

Author(s):  
S. Song ◽  
R. Qin

Abstract. Image-based 3D modelling is rather mature nowadays for well-acquired images processed through the standard photogrammetric pipeline, but fusing 3D datasets generated from images with different views for surface reconstruction remains a challenge. Meshing algorithms for image-based 3D datasets require visibility information for surfaces, and such information can be difficult to obtain for 3D point clouds generated from images with different views, sources, resolutions and uncertainties. In this paper, we propose a novel multi-source mesh reconstruction and texture mapping pipeline optimized to address this challenge. Our key contributions are: 1) we extend a state-of-the-art image-based surface reconstruction method by incorporating geometric information produced by satellite images to create wide-area surface models; 2) we extend a texture mapping method to accommodate images acquired from different sensors, i.e. side-view perspective images and satellite images. Experiments show that our method creates a conforming surface model from these two sources, as well as consistent and well-balanced textures from images with drastically different radiometry (satellite images vs. street-view level images). We compared the proposed pipeline with a typical fusion pipeline, Poisson reconstruction, and the results show that ours has distinctive advantages.


2021 ◽  
Vol 13 (9) ◽  
pp. 1859
Author(s):  
Xiangyang Liu ◽  
Yaxiong Wang ◽  
Feng Kang ◽  
Yang Yue ◽  
Yongjun Zheng

The characteristic parameters of Citrus grandis var. Longanyou canopies are important when measuring yield and spraying pesticides. However, the feasibility of canopy reconstruction methods based on point clouds has not been confirmed for these canopies. Therefore, LiDAR point cloud data for C. grandis var. Longanyou were obtained to facilitate the management of groves of this species. A cloth simulation filter and a Euclidean clustering algorithm were then used to extract individual canopies. After calculating canopy height and width, canopy reconstruction and volume calculation were performed using six approaches: a manual method and five algorithms based on point clouds (convex hull, CH; convex hull by slices; voxel-based, VB; alpha-shape, AS; alpha-shape by slices, ASBS). ASBS is an innovative algorithm that combines AS with slice-based optimization and can best approximate the actual canopy shape. The R2 values of the VCH, VVB, VAS, and VASBS algorithms were all above 0.87; the ASBS algorithm yielded the most accurate volume, while the CH algorithm had the shortest computation time. In addition, a theoretical but preliminary system suitable for calculating the canopy volume of C. grandis var. Longanyou was developed, providing a theoretical reference for the efficient and accurate realization of future functional modules such as precise plant protection, orchard obstacle avoidance, and biomass estimation.
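The "by slices" variants can be sketched in a few lines: cut the canopy cloud into horizontal slabs, take a 2D hull per slab, and sum slab volumes. The code below shows the convex-hull-by-slices idea only (the paper's ASBS variant uses alpha-shapes, which SciPy does not provide out of the box):

```python
import numpy as np
from scipy.spatial import ConvexHull

def slice_hull_volume(points, n_slices=10):
    """Approximate canopy volume by cutting the cloud into horizontal
    slices and summing (2D convex hull area) x (slice thickness).
    Slicing lets the estimate follow concavities in the vertical
    profile that a single 3D convex hull would bridge over."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    thickness = edges[1] - edges[0]
    volume = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        xy = points[(z >= lo) & (z <= hi)][:, :2]
        if len(xy) >= 3:
            # For 2D input, ConvexHull.volume is the enclosed area
            volume += ConvexHull(xy).volume * thickness
    return volume

# Sanity check: random points filling a unit cube should give ~1
rng = np.random.default_rng(1)
vol = slice_hull_volume(rng.uniform(0, 1, (20000, 3)))
print(round(vol, 2))
```

Replacing the per-slice convex hull with an alpha-shape is what lets ASBS hug concave canopy outlines, at the cost of the longer run time the abstract reports.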


2019 ◽  
Vol 11 (10) ◽  
pp. 1204 ◽  
Author(s):  
Yue Pan ◽  
Yiqing Dong ◽  
Dalei Wang ◽  
Airong Chen ◽  
Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. The widespread use of unmanned aerial vehicles (UAVs) now provides a practical solution for generating 3D point clouds as well as models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we propose a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through a structure-from-motion workflow. A segmentation method is developed based on a supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. A classification tree incorporating bridge geometry is then used to recognize different structural elements among the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments on two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing for the 3D digital documentation of heritage bridges. Using given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on testing data both exceed 0.8, and a recognition accuracy better than 0.8 is achieved.


Author(s):  
Fouad Amer ◽  
Mani Golparvar-Fard

Complete and accurate 3D monitoring of indoor construction progress using visual data is challenging. It requires (a) capturing a large number of overlapping images, which is time-consuming and labor-intensive, and (b) processing them with Structure from Motion (SfM) algorithms, which can be computationally expensive. To address these inefficiencies, this paper proposes a hybrid SfM-SLAM 3D reconstruction algorithm, along with a decentralized data collection workflow, to map indoor construction work locations in 3D at any desired frequency. The hybrid 3D reconstruction method is composed of an SfM pipeline coupled with Multi-View Stereo (MVS) to generate 3D point clouds, and a SLAM (Simultaneous Localization and Mapping) algorithm to register the separately formed models together. Our SfM and SLAM pipelines are both built on binary Oriented FAST and Rotated BRIEF (ORB) descriptors, which tightly couples the two reconstruction workflows and enables fast computation. To elaborate the data capture workflow and validate the proposed method, a case study was conducted on a real-world construction site. Compared to state-of-the-art methods, our preliminary results show a decrease in both registration error and processing time, demonstrating the potential of using daily images captured by different trades, coupled with weekly walkthrough videos captured by a field engineer, for complete 3D visual monitoring of indoor construction operations.
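The reason binary ORB descriptors let the SfM and SLAM front ends share features cheaply is that matching reduces to Hamming distance on packed bits. The sketch below is illustrative only (pure NumPy, not the paper's implementation): each descriptor is 32 bytes, i.e. 256 bits, as in standard ORB.

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Brute-force match binary descriptors (rows of uint8, 32 bytes per
    ORB descriptor) by Hamming distance. Returns, for each row of desc_a,
    the index of the closest row in desc_b. Real pipelines add ratio tests
    and approximate search on top of this metric."""
    # XOR exposes the differing bits; unpack and count them per descriptor
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]        # (na, nb, 32)
    hamming = np.unpackbits(xor, axis=2).sum(axis=2)     # bit-count matrix
    return hamming.argmin(axis=1)

rng = np.random.default_rng(2)
base = rng.integers(0, 256, (5, 32), dtype=np.uint8)
# Flip one bit of each descriptor to simulate viewpoint noise
noisy = base.copy()
noisy[:, 0] ^= 0x01
matches = hamming_match(noisy, base)
print(matches)  # each noisy descriptor matches its source: [0 1 2 3 4]
```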


2017 ◽  
Vol 38 ◽  
pp. 77-89 ◽  
Author(s):  
Florian Bernard ◽  
Luis Salamanca ◽  
Johan Thunberg ◽  
Alexander Tack ◽  
Dennis Jentsch ◽  
...  

PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0247243
Author(s):  
Nived Chebrolu ◽  
Federico Magistri ◽  
Thomas Läbe ◽  
Cyrill Stachniss

Plant phenotyping is a central task in crop science and plant breeding. It involves measuring plant traits to describe the anatomy and physiology of plants and is used for deriving traits and evaluating plant performance. Traditional phenotyping methods are often time-consuming and involve substantial manual labor. The availability of 3D sensor data of plants, obtained from laser scanners or modern depth cameras, offers the potential to automate several of these phenotyping tasks. This automation can scale up the phenotyping measurements and evaluations to a larger number of plant samples and a finer spatial and temporal resolution. In this paper, we investigate the problem of registering 3D point clouds of plants over time and space; that is, we determine correspondences between point clouds of plants taken at different points in time and register them using a new, non-rigid registration approach. This approach has the potential to form the backbone of phenotyping applications aimed at tracking plant traits over time. The registration task involves finding data associations between measurements taken at different times while the plants grow and change their appearance, allowing 3D models taken at different points in time to be compared with each other. Registering plants over time is challenging due to their anisotropic growth, changing topology, and non-rigid motion between measurement times. Thus, we propose a novel approach that first extracts a compact representation of the plant in the form of a skeleton encoding both topology and semantic information, and then uses this skeletal structure to determine correspondences over time and drive the registration process. Through this approach, we can effectively tackle the data association problem for time-series point cloud data of plants. We tested our approach on different datasets acquired over time and successfully registered 3D plant point clouds recorded with a laser scanner. We demonstrate that our method enables automated temporal plant-trait analysis by tracking plant traits at the organ level.
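The core step, matching skeleton nodes across recording days to seed a non-rigid deformation, can be sketched with plain nearest-neighbour search. This is a simplified stand-in for the paper's semantics-aware matching, which also exploits skeleton topology:

```python
import numpy as np
from scipy.spatial import cKDTree

def skeleton_correspondences(skel_t0, skel_t1, max_dist=0.5):
    """Match skeleton nodes of two scans by nearest neighbour, rejecting
    pairs farther apart than max_dist. Returns index pairs and the
    displacement vectors that could seed a non-rigid deformation field."""
    tree = cKDTree(skel_t1)
    dist, idx = tree.query(skel_t0)
    keep = dist <= max_dist
    pairs = np.column_stack([np.nonzero(keep)[0], idx[keep]])
    displacements = skel_t1[idx[keep]] - skel_t0[keep]
    return pairs, displacements

# A toy "stem" skeleton that grew upward between two recording days
t0 = np.column_stack([np.zeros(5), np.zeros(5), np.linspace(0, 1, 5)])
t1 = t0 + np.array([0.0, 0.0, 0.1])    # uniform 0.1 upward growth
pairs, disp = skeleton_correspondences(t0, t1)
print(disp.mean(axis=0))               # recovers the [0, 0, 0.1] growth
```

Matching on the skeleton rather than the raw cloud is what keeps the association problem tractable: a few dozen semantically labeled nodes replace millions of points.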


Author(s):  
D. Craciun ◽  
A. Serna Morales ◽  
J.-E. Deschaud ◽  
B. Marcotegui ◽  
F. Goulette

Currently existing mobile mapping systems equipped with active 3D sensors can acquire the environment at high sampling rates and high vehicle velocities. While providing an effective solution for environment sensing over large-scale distances, such acquisition yields only a discrete representation of the geometry; thus, a continuous map of the underlying surface must be built. Mobile acquisition introduces several constraints for state-of-the-art surface reconstruction algorithms. Smoothing becomes a difficult task: sharp depth features must be recovered while mesh shrinkage is avoided. In addition, interpolation-based techniques are not suitable for the noisy datasets acquired by Mobile Laser Scanning (MLS) systems. Furthermore, scalability is a major concern for enabling real-time rendering over large-scale distances while preserving geometric detail. This paper presents a fully automatic ground surface reconstruction framework capable of dealing with the aforementioned constraints. The proposed method exploits the quasi-flat geometry of the ground through a morphological segmentation algorithm. A planar Delaunay triangulation is then applied to reconstruct the ground surface. A smoothing procedure eliminates high-frequency peaks while preserving geometric details, providing a regular ground surface. Finally, a decimation step is applied to cope with scalability constraints over large-scale distances. Experimental results on real data acquired in large urban environments are presented, and a performance evaluation with respect to ground-truth measurements demonstrates the effectiveness of our method.
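The planar Delaunay step works because the ground is quasi-flat: its xy projection is injective, so a 2D triangulation lifted back to 3D yields a valid surface. A minimal sketch of that step alone (the paper's segmentation, smoothing and decimation stages are omitted):

```python
import numpy as np
from scipy.spatial import Delaunay

def ground_mesh(points):
    """Mesh quasi-flat ground points: triangulate the xy projection and
    keep the measured z values as vertex heights. Valid only when the
    surface has no overhangs, which the ground assumption guarantees."""
    tri = Delaunay(points[:, :2])
    return points, tri.simplices        # vertices, triangle index triples

# Scattered ground samples with a gently undulating height profile
rng = np.random.default_rng(6)
xy = rng.uniform(0, 10, (200, 2))
z = 0.05 * np.sin(xy[:, 0])
pts = np.column_stack([xy, z])
verts, faces = ground_mesh(pts)
print(verts.shape, faces.shape)
```

Smoothing and decimation would then operate on this mesh; the triangulation itself is cheap enough to run incrementally as the vehicle advances.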


2020 ◽  
Vol 36 (12) ◽  
pp. 3949-3950
Author(s):  
Illia Ziamtsov ◽  
Saket Navlakha

Abstract Motivation: Developing methods to efficiently analyze 3D point cloud data of plant architectures remains challenging for many phenotyping applications. Here, we describe a tool that tackles four core phenotyping tasks: classification of cloud points into stem and lamina points, graph skeletonization of the stem points, segmentation of individual laminae, and whole-leaf labeling. These four tasks are critical for numerous downstream phenotyping goals, such as quantifying plant biomass, performing morphological analyses of plant shapes, and uncovering genotype-to-phenotype relationships. The Plant 3D (P3D) tool provides an intuitive graphical user interface, a fast 3D rendering engine for visualizing plants with millions of cloud points, and several graph-theoretic and machine-learning algorithms for 3D architecture analyses. Availability and implementation: P3D is open-source and implemented in C++. Source code and a Windows installer are freely available at https://github.com/iziamtso/P3D/. Contact: [email protected] or [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
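The stem-versus-lamina distinction can be grounded in a simple geometric observation: stem neighbourhoods are locally linear (one dominant covariance eigenvalue), while leaf-blade neighbourhoods are locally planar. The sketch below is a hypothetical geometric stand-in to illustrate that feature, not P3D's actual classifier:

```python
import numpy as np
from scipy.spatial import cKDTree

def label_points(points, k=10):
    """Label each cloud point 'stem' or 'lamina' from the shape of its
    local k-neighbourhood, using the linearity of the local covariance.
    The 0.7 threshold is an illustrative choice, not a tuned value."""
    tree = cKDTree(points)
    _, nn = tree.query(points, k=k)
    labels = []
    for idx in nn:
        nb = points[idx] - points[idx].mean(axis=0)
        lam = np.linalg.eigvalsh(nb.T @ nb)[::-1]   # descending eigenvalues
        linearity = (lam[0] - lam[1]) / lam[0]
        labels.append("stem" if linearity > 0.7 else "lamina")
    return labels

# Synthetic plant: a vertical stem topped by a flat leaf blade
stem = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(0, 5, 50)])
rng = np.random.default_rng(7)
leaf = np.column_stack([rng.uniform(1, 3, 200), rng.uniform(1, 3, 200),
                        np.full(200, 5.0)])
labels = label_points(np.vstack([stem, leaf]))
print(labels[0], sum(l == "lamina" for l in labels[50:]))
```

Real plant clouds are far noisier, which is why a learned classifier over several such features outperforms a single hand-set threshold.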

