Research on Landscape Environment with 3D-Reconstruction and Volume Measurement of Fruit Tree Canopy Based on Kinect

2013 ◽  
Vol 788 ◽  
pp. 480-485 ◽  
Author(s):  
Jun Mei Chen ◽  
Xiong Liu ◽  
Zuo Xi Zhao ◽  
Wei Tao Zhong

With the demand for precision management of orchards, 3-D reconstruction of the fruit tree canopy is receiving more attention. Using depth image data from a Kinect sensor, this article attempts to find the 3-D coordinates of sensed points on the canopy in order to reconstruct the fruit tree canopy quickly. The point cloud is sliced to extract surface-line features, and the outer envelope within each slice is used to reconstruct the fruit tree canopy and calculate the canopy’s volume. Tests show that the measuring error for a regular cuboid’s volume is about 4.2% and the repeated measuring error for a citrus tree’s volume is about 6.9%, indicating that applying Kinect for measuring the volume of a fruit tree canopy has reasonably high accuracy and reliability. The Kinect, as well as this technique, may well be used in landscape measurement and monitoring.
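The slice-and-envelope volume estimate described above can be sketched as follows — a minimal stand-in with hypothetical helper names, approximating each slice’s outer envelope by its 2-D convex hull and summing hull area times slice thickness:

```python
def convex_hull_area(pts2d):
    """Area of the 2-D convex hull (monotone chain + shoelace formula)."""
    pts = sorted(set(pts2d))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def slice_volume(points, dz):
    """Bin (x, y, z) points into horizontal slices of thickness dz and
    sum the outer-envelope (hull) area of each slice times dz."""
    if not points:
        return 0.0
    zmin = min(p[2] for p in points)
    slices = {}
    for x, y, z in points:
        slices.setdefault(int((z - zmin) / dz), []).append((x, y))
    return sum(convex_hull_area(s) * dz for s in slices.values())
```

For a 1×1 cuboid of corner points spanning ten unit-thick slices, the estimate recovers the exact volume; real canopy slices are irregular, which is why the paper uses the envelope per slice rather than a global hull.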

Agronomy ◽  
2019 ◽  
Vol 9 (11) ◽  
pp. 741 ◽  
Author(s):  
Haihui Yang ◽  
Xiaochan Wang ◽  
Guoxiang Sun

Perception of the fruit tree canopy is a vital technology for the intelligent control of a modern standardized orchard. Due to the complex three-dimensional (3D) structure of the fruit tree canopy, morphological parameters extracted from two-dimensional (2D) or single-perspective 3D images are not comprehensive enough. Three-dimensional information from different perspectives must be combined in order to perceive the canopy information efficiently and accurately in complex orchard field environments. The algorithms used for the registration and fusion of data from different perspectives, and for the subsequent extraction of canopy-related parameters, are the keys to the problem. This study proposed a 3D morphological measurement method for a fruit tree canopy based on Kinect sensor self-calibration, including 3D point cloud generation, point cloud registration and canopy information extraction for an apple tree canopy. Using 32 apple trees (Yanfu 3 variety), the morphological parameters of height (H), maximum canopy width (W) and canopy thickness (D) were calculated. The accuracy and applicability of this method for the extraction of morphological parameters were statistically analyzed. The results showed that, on both sides of the fruit trees, the average relative error (ARE) values of the morphological parameters, including the fruit tree height (H), maximum tree width (W) and canopy thickness (D), between the calculated values and measured values were 3.8%, 12.7% and 5.0%, respectively, under the V1 mode; the ARE values under the V2 mode were 3.3%, 9.5% and 4.9%, respectively; and the ARE values under the merged V1 and V2 mode were 2.5%, 3.6% and 3.2%, respectively. The measurement accuracy of the tree width (W) under the double visual angle mode had a significant advantage over that under the single visual angle mode.
The 3D point cloud reconstruction method based on Kinect self-calibration proposed in this study has high precision and stable performance, and the auxiliary calibration objects are readily portable and easy to install. It can be applied to different experimental scenes to extract 3D information of fruit tree canopies and has important implications for achieving the intelligent control of standardized orchards.
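Once the two-view clouds are registered and pose-normalized, the three morphological parameters reduce to axis-aligned extents of the fused cloud. A minimal sketch, assuming the cloud is already aligned so that Z is height, X is width and Y is thickness (the function name is hypothetical):

```python
def canopy_hwd(points):
    """Height (H), maximum width (W) and thickness (D) of a
    pose-normalized point cloud, as axis-aligned extents:
    H = Z extent, W = X extent, D = Y extent."""
    xs, ys, zs = zip(*points)
    return max(zs) - min(zs), max(xs) - min(xs), max(ys) - min(ys)
```

This is why the merged two-view mode wins for W in the results above: a single view truncates one side of the canopy, so the X extent of a one-view cloud underestimates the true width.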


2013 ◽  
Vol 760-762 ◽  
pp. 1556-1561
Author(s):  
Ting Wei Du ◽  
Bo Liu

Indoor scene understanding based on depth image data is a cutting-edge issue in the field of three-dimensional computer vision. Taking into account the layout characteristics of indoor scenes and the many planar features they contain, this paper presents a depth image segmentation method based on Gaussian Mixture Model (GMM) clustering. First, the Kinect depth image data are transformed into a point cloud of discrete three-dimensional points, which is then denoised and down-sampled; second, the normals of all points in the cloud are calculated and clustered using a Gaussian Mixture Model; finally, the segmentation of the entire point cloud is completed with the RANSAC algorithm. Experimental results show that the segmented regions have clear boundaries and above-average segmentation quality, laying a good foundation for object recognition.
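The final RANSAC plane-extraction step of the pipeline can be sketched as follows — a minimal sketch in NumPy of generic RANSAC plane fitting (the GMM normal clustering that precedes it is omitted, and the function name and parameters are assumptions, not the paper’s implementation):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, seed=0):
    """Fit the dominant plane n·p + d = 0 to an (N, 3) array by RANSAC:
    repeatedly fit a plane to 3 random points and keep the plane with
    the most inliers within distance `tol`."""
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(iters):
        i = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[i]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -n.dot(p0)
        mask = np.abs(points @ n + d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask
```

In the paper’s setting, RANSAC would be run per normal cluster rather than on the whole scene, which keeps each fit from mixing points of differently oriented planes.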


Author(s):  
Shenglian Lu ◽  
Guo Li ◽  
Jian Wang

A tree skeleton is useful to agronomy researchers because it describes the shape and topological structure of a tree. Mutual occlusion of organs within a fruit tree canopy is usually severe, which results in a large amount of missing data in 3D point clouds acquired by laser scanning of a fruit tree. Traditional approaches can be ineffective and problematic in extracting the tree skeleton correctly when the point clouds contain occlusions and missing points. To overcome this limitation, we present a method for accurately and quickly extracting the skeleton of a fruit tree from laser-scanned 3D point clouds. The proposed method selects the start point and endpoint of a branch from the point clouds through the user’s manual interaction, then applies a backward search to find a path through the 3D point cloud with a radius parameter as a restriction. Experimental results on several kinds of fruit trees demonstrate that our method can extract the skeleton of a leafy fruit tree with high accuracy.
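The radius-restricted backward search can be sketched as a greedy walk from the branch endpoint toward the start point — a simplified stand-in for the authors’ method, with hypothetical names, where each step considers only unvisited points within the radius and prefers the candidate nearest the start:

```python
import math

def trace_branch(points, start, end, radius):
    """Greedy backward search from `end` toward `start`: each step is
    restricted to unvisited points within `radius` of the current point,
    choosing the candidate closest to the start point."""
    path, current, visited = [end], end, {end}
    while math.dist(current, start) > radius:
        candidates = [p for p in points
                      if p not in visited and math.dist(p, current) <= radius]
        if not candidates:
            break  # gap in the cloud wider than the radius restriction
        current = min(candidates, key=lambda p: math.dist(p, start))
        visited.add(current)
        path.append(current)
    path.append(start)
    return path
```

The radius restriction is what makes the search robust to missing data: it bounds each step so the path cannot jump across an occlusion gap to an unrelated branch.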


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1581
Author(s):  
Xiaolong Chen ◽  
Jian Li ◽  
Shuowen Huang ◽  
Hao Cui ◽  
Peirong Liu ◽  
...  

Cracks are one of the main distresses that occur on concrete surfaces. Traditional methods for detecting cracks based on two-dimensional (2D) images can be hampered by stains, shadows, and other artifacts, while various three-dimensional (3D) crack-detection techniques using point clouds are less affected in this regard but are limited by the measurement accuracy of the 3D laser scanner. In this study, we propose an automatic crack-detection method that fuses 3D point clouds and 2D images based on an improved Otsu algorithm, which consists of the following four major procedures. First, a high-precision registration of 2D images with a depth image projected from the 3D point clouds is performed. Second, pixel-level image fusion is performed, which fuses the depth and gray information. Third, a rough crack image is obtained from the fusion image using the improved Otsu method. Finally, connected-domain labeling and morphological methods are used to finely extract the cracks. Experimentally, the proposed method was tested at multiple scales and with various types of concrete cracks. The results demonstrate that the proposed method can achieve an average precision of 89.0%, recall of 84.8%, and F1 score of 86.7%, performing significantly better than the single-image (average F1 score of 67.6%) and single-point-cloud (average F1 score of 76.0%) methods. Accordingly, the proposed method has high detection accuracy and universality, indicating its wide potential application as an automatic method for concrete-crack detection.
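The baseline that the paper’s “improved Otsu” builds on is the classic Otsu threshold, which picks the gray level maximizing between-class variance. A minimal sketch of that standard baseline (not the paper’s improved variant):

```python
import numpy as np

def otsu_threshold(gray):
    """Classic Otsu threshold for an 8-bit image: scan all candidate
    thresholds t and keep the one maximizing the between-class
    variance w0*w1*(m0 - m1)^2 of the two resulting classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                 # mean of class below threshold
        m1 = (sum_all - sum0) / w1     # mean of class above threshold
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Applied to the fused depth-plus-gray image, a global threshold like this separates dark, recessed crack pixels from the background; the paper’s improvement addresses cases where a single global threshold is insufficient.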


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3848
Author(s):  
Xinyue Zhang ◽  
Gang Liu ◽  
Ling Jing ◽  
Siyao Chen

The heart girth parameter is an important indicator reflecting the growth and development of pigs that provides critical guidance for the optimization of healthy pig breeding. To overcome the heavy workloads and poor adaptability of the traditional measurement methods currently used in pig breeding, this paper proposes an automated pig heart girth measurement method using two Kinect depth sensors. First, a two-view pig depth image acquisition platform is established for data collection; the two-view point clouds, after preprocessing, are registered and fused by a feature-based improved 4-Point Congruent Set (4PCS) method. Second, the fused point cloud is pose-normalized, and the axillary contour is used to automatically extract the heart girth measurement point. Finally, this point is taken as the starting point to intercept, from the pig point cloud, the circumference perpendicular to the ground, and the complete heart girth point cloud is obtained by mirror symmetry. The heart girth is measured along this point cloud using the shortest path method. Using the proposed method, experiments were conducted on two-view data from 26 live pigs. The results showed that the absolute errors of the heart girth measurements were all less than 4.19 cm, and the average relative error was 2.14%, indicating the high accuracy and efficiency of this method.
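The final girth measurement reduces to the length of a path along the cross-section point cloud; with the mirror-symmetry step, only half the section needs to be measured. A minimal sketch under the assumption that the half-section points are already ordered along the contour (so the shortest path is just the polyline through them — hypothetical helper names):

```python
import math

def half_girth_length(half_section):
    """Polyline length through an ordered half cross-section
    of (x, y) points."""
    return sum(math.dist(a, b)
               for a, b in zip(half_section, half_section[1:]))

def heart_girth(half_section):
    """Mirror symmetry: the full girth is twice the half-section length."""
    return 2.0 * half_girth_length(half_section)
```

For a semicircular half-section of radius 1 this recovers the full circumference 2π to within the sampling error of the polyline.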


2018 ◽  
pp. 339-346
Author(s):  
C. Seehuber ◽  
L. Damerow ◽  
A. Solomakhin ◽  
M.M. Blanke

2021 ◽  
Vol 65 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, sensors integrated in the ULS should be small and lightweight, which results in a decrease in the density of the collected scanning points. This affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, wherein they convert the problem of registering point cloud data and image data into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinear equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
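The first step above — producing an intensity image from the point cloud so that registration becomes image-to-image feature matching — can be sketched as a simple rasterization (a minimal sketch with an assumed function name; real pipelines also interpolate empty cells):

```python
import numpy as np

def intensity_image(points, intensities, res):
    """Rasterize an (N, 3) point array into a 2-D grid of cell size
    `res`, writing each point's LiDAR intensity into its cell."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = ((xy - mins) / res).astype(int)
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    img = np.zeros((h, w))
    img[idx[:, 1], idx[:, 0]] = intensities  # later points overwrite earlier
    return img
```

Sparse clouds (the low-density problem the abstract raises) leave many zero cells in this image, which is exactly what degrades feature matching against the optical image.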


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5670
Author(s):  
Hanwen Kang ◽  
Hongyu Zhou ◽  
Xing Wang ◽  
Chao Chen

Robotic harvesting shows promise for the future development of the agricultural industry. However, many challenges remain in the development of a fully functional robotic harvesting system, and vision is one of the most important among them. Traditional vision methods often suffer from limitations in accuracy, robustness, and efficiency in real implementation environments. In this work, a fully deep-learning-based vision method for autonomous apple harvesting is developed and evaluated. The developed method includes a lightweight one-stage detection and segmentation network for fruit recognition, and a PointNet that processes the point clouds and estimates a proper approach pose for each fruit before grasping. The fruit recognition network takes raw inputs from an RGB-D camera and performs fruit detection and instance segmentation on the RGB images. The PointNet grasping network combines the depth information and the fruit recognition results as input and outputs the approach pose of each fruit for robotic arm execution. The developed vision method is evaluated on RGB-D image data collected from both laboratory and orchard environments. Robotic harvesting experiments in both indoor and outdoor conditions are also included to validate the performance of the developed harvesting system. Experimental results show that the developed vision method performs efficiently and accurately in guiding robotic harvesting. Overall, the developed robotic harvesting system achieves a harvesting success rate of 0.8 with a cycle time of 6.5 s.
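The output contract of the grasping stage — a grasp centre plus an approach direction per fruit — can be illustrated with a naive geometric stand-in (this is not the paper’s PointNet; it simply takes the centroid of the segmented fruit’s points and approaches along the camera-to-centroid ray):

```python
import numpy as np

def approach_pose(fruit_points, camera_origin=np.zeros(3)):
    """Naive pose estimate for a segmented fruit's (N, 3) points:
    grasp centre = centroid, approach direction = unit vector from
    the camera origin toward the centroid."""
    c = fruit_points.mean(axis=0)
    v = c - camera_origin
    return c, v / np.linalg.norm(v)
```

A learned estimator such as PointNet improves on this by accounting for occluding branches and the fruit’s attachment direction, which a pure centroid ray ignores.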


Geosciences ◽  
2019 ◽  
Vol 9 (3) ◽  
pp. 117 ◽  
Author(s):  
František Chudý ◽  
Martina Slámová ◽  
Julián Tomaštík ◽  
Roberta Prokešová ◽  
Martin Mokroš

An active gully-related landslide system is located in a deep valley under forest canopy cover. Generally, point clouds from forested areas lack data connectivity, and the optical parameters of scanning cameras lead to varying point cloud densities. Data noise or systematic errors (missing data) make the automatic identification of landforms under tree canopy problematic or impossible. We processed, analyzed, and interpreted data from a large-scale landslide survey acquired by light detection and ranging (LiDAR) technology, a remotely piloted aircraft system (RPAS), and close-range photogrammetry (CRP) using the ‘Structure-from-Motion’ (SfM) method. LAStools is a highly efficient Geographic Information System (GIS) tool for point cloud pre-processing and for creating precise digital elevation models (DEMs). The main landslide body and the landforms indicating landslide activity were detected and delineated in DEM derivatives. Identification of micro-scale landforms in precise, large-scale DEMs allows the monitoring and assessment of those active parts of landslides that are invisible in smaller-scale digital terrain models (obtained from aerial LiDAR or from RPAS) due to insufficient data density or the presence of many data gaps.
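A typical DEM derivative used to delineate such landforms is the slope map. A minimal sketch, assuming the DEM is already a regular NumPy grid (the function name is an assumption, not part of the LAStools workflow):

```python
import numpy as np

def slope_deg(dem, cell=1.0):
    """Slope in degrees from a regular DEM grid: central-difference
    gradients in both axes, combined as arctan of the gradient magnitude."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
```

Scarp edges and gully walls show up as sharp slope breaks in such a map, which is what makes micro-scale landslide landforms detectable in precise DEMs.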

