Mobile LiDAR Scanning System Combined with Canopy Morphology Extracting Methods for Tree Crown Parameters Evaluation in Orchards

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 339
Author(s):  
Kai Wang ◽  
Jun Zhou ◽  
Wenhai Zhang ◽  
Baohua Zhang

To meet the demand for canopy morphological parameter measurements in orchards, a mobile scanning system is designed based on the 3D Simultaneous Localization and Mapping (SLAM) algorithm. The system uses a lightweight LiDAR-Inertial Measurement Unit (LiDAR-IMU) state estimator and a rotation-constrained optimization algorithm to reconstruct a point cloud map of the orchard. Then, Statistical Outlier Removal (SOR) filtering and Euclidean clustering are used to segment the orchard point cloud from which the ground information has been separated, and the k-nearest neighbour (KNN) search algorithm is used to restore the filtered point cloud. Finally, the height of the fruit trees and the volume of the canopy are obtained by a point cloud statistical method and the 3D alpha-shape algorithm. To verify the algorithm, tracked robots equipped with LiDAR and an IMU are used in a standardized orchard. Experiments show that the system can reconstruct the orchard point cloud environment with high accuracy and can obtain the point cloud information of all fruit trees in the orchard. The accuracy of point cloud-based segmentation of fruit trees in the orchard is 95.4%. The R² and Root Mean Square Error (RMSE) values of crown height are 0.93682 and 0.04337, respectively, and the corresponding values of canopy volume are 0.8406 and 1.5738, respectively. In summary, this system achieves a good evaluation of orchard crown information and has important application value in the intelligent measurement of fruit trees.
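The SOR filtering, Euclidean-style clustering, and alpha-shape steps described above can be illustrated with a short sketch. The following minimal example uses the Open3D library and is not the authors' implementation; the parameter values (neighbour count, cluster radius, alpha) are placeholders that would need tuning to the orchard data.

```python
# Minimal sketch of the canopy-parameter pipeline using Open3D (not the
# authors' code): statistical outlier removal, clustering of the
# ground-removed cloud into trees, then per-tree height and alpha-shape volume.
import numpy as np
import open3d as o3d

def canopy_parameters(pcd, alpha=0.5):
    # Statistical Outlier Removal (SOR): drop points far from their neighbours.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Cluster the ground-removed cloud into individual trees (DBSCAN used
    # here as a stand-in for Euclidean clustering).
    labels = np.array(pcd.cluster_dbscan(eps=0.3, min_points=50))

    results = []
    for label in range(labels.max() + 1):
        tree = pcd.select_by_index(np.where(labels == label)[0])
        pts = np.asarray(tree.points)

        # Tree height: vertical extent of the cluster.
        height = pts[:, 2].max() - pts[:, 2].min()

        # Canopy volume from a 3D alpha-shape mesh (alpha is scene dependent).
        mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(
            tree, alpha)
        volume = mesh.get_volume() if mesh.is_watertight() else None
        results.append({"height": height, "volume": volume})
    return results
```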

2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Ming Guo ◽  
Bingnan Yan ◽  
Tengfei Zhou ◽  
Deng Pan ◽  
Guoli Wang

To obtain high-precision measurement data using vehicle-borne light detection and ranging (LiDAR) scanning (VLS) systems, calibration is necessary before a data acquisition mission. Thus, a novel calibration method based on a homemade target ball is proposed to estimate the system mounting parameters, i.e., the rotational and translational offsets between the LiDAR sensor and the inertial measurement unit (IMU). Firstly, the spherical point cloud is fitted to a sphere to extract the coordinates of the centre, each scan line on the sphere is fitted to a section of the sphere to calculate the distance ratio from the centre to the nearest two sections, and the attitude and trajectory parameters of the centre are calculated by linear interpolation. Then, the true coordinates of the centre of the sphere are obtained by measuring the coordinates of the reflector directly above the target ball with a total station. Finally, three rotation parameters and three translation parameters are calculated by two least-squares adjustments. Comparisons of the point cloud before and after calibration, and of the calibrated point cloud with the point cloud obtained by a terrestrial laser scanner, show that the accuracy significantly improved after calibration. The experiment indicates that the calibration method based on the homemade target ball can effectively improve the accuracy of the point cloud, which can promote VLS development and applications.
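The sphere-fitting step can be written as a linear least-squares problem. The sketch below is an assumed implementation, not the authors' code: it fits the scanned target-ball points to x² + y² + z² = 2ax + 2by + 2cz + d and recovers the centre (a, b, c).

```python
# Minimal sketch of sphere fitting by linear least squares to recover the
# target-ball centre from LiDAR returns (an assumption, not the paper's code).
import numpy as np

def fit_sphere(points):
    """points: (N, 3) array of LiDAR returns on the target ball."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    (cx, cy, cz, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    # d = r^2 - cx^2 - cy^2 - cz^2, so the radius follows directly.
    radius = np.sqrt(d + cx**2 + cy**2 + cz**2)
    return np.array([cx, cy, cz]), radius
```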


Author(s):  
Tianmiao Wang ◽  
Chaolei Wang ◽  
Jianhong Liang ◽  
Yicheng Zhang

Purpose – The purpose of this paper is to present a Rao–Blackwellized particle filter (RBPF) approach for the visual simultaneous localization and mapping (SLAM) of small unmanned aerial vehicles (UAVs). Design/methodology/approach – Measurements from an inertial measurement unit, a barometric altimeter and a monocular camera are fused to estimate the state of the vehicle while building a feature map. In this SLAM framework, an extra factorization method is proposed to partition the vehicle model into subspaces, the internal and external states. The internal state is estimated by an extended Kalman filter (EKF), a particle filter is employed for the external state estimation, and parallel EKFs are used for map management. Findings – Simulation results indicate that the proposed approach is more stable and accurate than other existing marginalized particle filter-based SLAM algorithms. Experiments are also carried out to verify the effectiveness of this SLAM method by comparison with a referential global positioning system/inertial navigation system. Originality/value – The main contribution of this paper is the theoretical derivation and experimental application of the Rao–Blackwellized visual SLAM algorithm with vehicle model partition for small UAVs.
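The partition described above (particle filter for the external state, EKF-based map management conditioned on each particle) can be sketched structurally as follows. All model functions here are hypothetical placeholders rather than the paper's equations, and the internal-state EKF is omitted for brevity.

```python
# Structural sketch of a Rao-Blackwellized particle filter step; the motion,
# landmark and likelihood models are hypothetical placeholders.
import copy
import numpy as np

class Particle:
    def __init__(self, pose):
        self.pose = pose          # external state (e.g. position and yaw)
        self.weight = 1.0
        self.landmarks = {}       # landmark id -> (mean, cov): parallel EKFs

def rbpf_step(particles, imu_meas, baro_meas, features, models):
    """features: dict mapping landmark id -> monocular measurement."""
    for p in particles:
        # External state: sample from the motion model (particle filter part).
        p.pose = models.sample_motion(p.pose, imu_meas, baro_meas)
        # Map management: one small EKF per observed landmark, conditioned
        # on this particle's pose (the Rao-Blackwellized part).
        for lid, z in features.items():
            if lid not in p.landmarks:
                p.landmarks[lid] = models.init_landmark(p.pose, z)
            else:
                mean, cov = p.landmarks[lid]
                p.weight *= models.likelihood(mean, cov, p.pose, z)
                p.landmarks[lid] = models.ekf_update(mean, cov, p.pose, z)

    # Normalize weights and resample when the effective sample size drops.
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), size=len(particles), p=w)
        particles = [copy.deepcopy(particles[i]) for i in idx]
        for p in particles:
            p.weight = 1.0
    return particles
```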


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2915 ◽  
Author(s):  
Zhuli Ren ◽  
Liguan Wang ◽  
Lin Bi

Unmanned mining is one of the most effective ways to address mine safety and low efficiency, and accurate localization and mapping in the underground mining environment is key to achieving it. A novel graph simultaneous localization and mapping (SLAM) optimization method is proposed, based on Generalized Iterative Closest Point (GICP) three-dimensional (3D) point cloud registration between consecutive frames, between consecutive key frames and between loop frames, and constrained by the roadway plane and loop closures. GICP-based 3D point cloud registration between consecutive frames and between consecutive key frames is first combined to optimize the laser odometry constraints, without other sensors such as an inertial measurement unit (IMU). According to the characteristics of the roadway, the roadway plane is innovatively extracted as a node constraint in the pose-graph SLAM, and noisy points are automatically removed to further improve the consistency of the underground roadway map. A lightweight and efficient loop detection and optimization scheme based on rules and GICP is designed. Finally, the proposed method was evaluated in four scenes (including an underground mine laboratory) and compared with existing 3D laser SLAM methods such as Lidar Odometry and Mapping (LOAM). The results show that the algorithm achieves low-drift localization and point cloud map construction. This method provides technical support for localization and navigation in underground mining environments.
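A minimal sketch of the frame-to-frame registration that underlies the laser odometry is shown below, using Open3D with point-to-plane ICP standing in for GICP (the authors' GICP implementation and the plane/loop constraints are not reproduced here); the voxel size and correspondence distance are placeholder parameters.

```python
# Frame-to-frame registration sketch with Open3D; accumulating the relative
# transforms gives a simple laser odometry without IMU or other sensors.
import numpy as np
import open3d as o3d

def frame_to_frame_odometry(clouds, voxel=0.2, max_dist=1.0):
    """clouds: list of o3d.geometry.PointCloud scans; returns global poses."""
    poses = [np.eye(4)]
    prev = None
    for scan in clouds:
        down = scan.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=3 * voxel, max_nn=30))
        if prev is not None:
            # Register the current scan (source) against the previous one (target).
            result = o3d.pipelines.registration.registration_icp(
                down, prev, max_dist, np.eye(4),
                o3d.pipelines.registration.TransformationEstimationPointToPlane())
            # New global pose = previous pose composed with the relative motion.
            poses.append(poses[-1] @ result.transformation)
        prev = down
    return poses
```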


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5419 ◽  
Author(s):  
Xiao Liu ◽  
Lei Zhang ◽  
Shengran Qin ◽  
Daji Tian ◽  
Shihan Ouyang ◽  
...  

Reducing the cumulative error in simultaneous localization and mapping (SLAM) has long been a central issue. In this paper, to improve the localization and mapping accuracy of ground vehicles, we propose a novel optimized lidar odometry and mapping method using ground plane constraints and SegMatch-based loop detection. Only the lidar point cloud is used to estimate the pose between consecutive frames, without any other sensors such as the Global Positioning System (GPS) or an Inertial Measurement Unit (IMU). Firstly, the ground plane constraints are used to reduce matching errors. Then, based on the more accurate lidar odometry obtained from lidar odometry and mapping (LOAM), SegMatch performs segment matching and loop detection to optimize the global pose. A neighborhood search is also used to accomplish the loop detection task in case of failure. Finally, the proposed method was evaluated and compared with existing 3D lidar SLAM methods. Experimental results show that the proposed method achieves low-drift localization and dense 3D point cloud map construction.
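The ground-plane extraction step can be sketched with a RANSAC plane fit; the snippet below uses Open3D's segment_plane as an assumed stand-in for the paper's method, with the distance threshold as a placeholder parameter.

```python
# Sketch of splitting a lidar scan into ground and non-ground points with
# RANSAC, so the ground plane can serve as a constraint during matching.
import open3d as o3d

def split_ground(pcd, dist=0.15):
    plane, inliers = pcd.segment_plane(distance_threshold=dist,
                                       ransac_n=3,
                                       num_iterations=200)
    ground = pcd.select_by_index(inliers)
    non_ground = pcd.select_by_index(inliers, invert=True)
    return plane, ground, non_ground   # plane = [a, b, c, d] of ax+by+cz+d=0
```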


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5421
Author(s):  
Yang Li ◽  
Yutong Liu ◽  
Yanping Wang ◽  
Yun Lin ◽  
Wenjie Shen

Compared with the commonly used lidar and visual sensors, millimeter-wave radar offers all-day, all-weather operation and more stable performance across different scenarios. However, using millimeter-wave radar as the Simultaneous Localization and Mapping (SLAM) sensor also brings problems such as small data volume, more outliers, and low precision, which reduce the accuracy of SLAM localization and mapping. This paper proposes a millimeter-wave radar SLAM assisted by the Radar Cross Section (RCS) feature of the target and an Inertial Measurement Unit (IMU). The IMU is used to combine consecutive radar scans into a "multi-scan," which addresses the small data volume. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to filter outliers from the radar data; the clustering considers the RCS feature of the target and uses the Mahalanobis distance to measure the similarity of the radar data. At the same time, to alleviate the lower SLAM positioning accuracy caused by the low precision of millimeter-wave radar data, an improved Correlative Scan Matching (CSM) method is proposed, which matches the radar point cloud against a local submap of the global grid map. This "scan-to-map" point cloud matching achieves tight coupling of localization and mapping. Three groups of real-world data are collected to verify the proposed method both component-wise and as a whole. The experimental results show that the proposed millimeter-wave radar SLAM assisted by the RCS feature of the target and the IMU has better accuracy and robustness across different scenarios.
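The RCS-aware outlier filtering can be sketched as DBSCAN over (x, y, RCS) with a Mahalanobis metric, as below. This is an assumed illustration using scikit-learn, not the authors' code; eps and min_samples are placeholder values.

```python
# Outlier filtering sketch: cluster radar detections on position plus RCS,
# using the Mahalanobis distance to account for the different feature scales,
# and drop points labelled as noise.
import numpy as np
from sklearn.cluster import DBSCAN

def filter_radar_points(points_xy, rcs, eps=2.0, min_samples=5):
    X = np.column_stack([points_xy, rcs])          # features: x, y, RCS
    VI = np.linalg.inv(np.cov(X, rowvar=False))    # inverse covariance matrix
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="mahalanobis",
                    metric_params={"VI": VI}).fit_predict(X)
    keep = labels != -1                            # -1 marks DBSCAN noise points
    return X[keep], labels[keep]
```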


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5288 ◽  
Author(s):  
Yanli Liu ◽  
Heng Zhang ◽  
Chao Huang

In this paper, we present a novel red-green-blue-depth simultaneous localization and mapping (RGB-D SLAM) algorithm based on cloud robotics, which combines RGB-D SLAM with a cloud robot and offloads the back-end process of the RGB-D SLAM algorithm to the cloud. This paper analyzes the front-end and back-end parts of the original RGB-D SLAM algorithm and improves the algorithm in three aspects: feature extraction, point cloud registration, and pose optimization. Experiments show the superiority of the improved algorithm. In addition, taking advantage of cloud robotics, the RGB-D SLAM algorithm is combined with the cloud robot and the computationally intensive back-end part is offloaded to the cloud. Experimental validation compares the cloud-robotics-based RGB-D SLAM algorithm with the local RGB-D SLAM algorithm, and the results demonstrate the superiority of our framework. The combination of cloud robotics and RGB-D SLAM can not only improve the efficiency of SLAM but also reduce the robot's price and size.
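As an illustration of the front-end work that stays on the robot, the sketch below extracts ORB features with OpenCV; the keypoints and descriptors are the kind of data a cloud back-end performing registration and pose optimization would receive. This is an assumed stand-in and does not reproduce the paper's specific feature-extraction improvements.

```python
# Front-end feature extraction sketch with OpenCV ORB (assumed illustration).
import cv2

def extract_features(rgb_image, n_features=1000):
    orb = cv2.ORB_create(nfeatures=n_features)
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    # Keypoints and binary descriptors for one RGB frame.
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```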


Author(s):  
P. Shokrzadeh

Abstract. A 3D representation of the environment is vital information for most engineering sciences. However, providing such information with classical surveying approaches demands a considerable amount of time for localizing the sensor in the desired coordinate frame in order to map the environment. A Simultaneous Localization and Mapping (SLAM) algorithm is capable of localizing the sensor and performing the mapping while the sensor moves through the environment. In this paper, SLAM is applied to the data of a lightweight 3D laser scanner, which we call a semi-sparse point cloud because of its unique specifications arising from the different resolutions in the vertical and horizontal directions. In contrast to most SLAM algorithms, no aiding sensor provides prior motion information. The output of the algorithm is a high-density, geometrically detailed map produced in a short time. The accuracy of the algorithm was estimated in a medium-scale simulated outdoor environment in Gazebo and the Robot Operating System (ROS). Given the Velodyne Puck accuracy of 3 cm, the map was generated with approximately 6 cm accuracy.


Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 235 ◽  
Author(s):  
SeungHwan Lee ◽  
HanJun Kim ◽  
BeomHee Lee

A novel and efficient rescue system with a multi-agent simultaneous localization and mapping (SLAM) framework is proposed to reduce the time needed to rescue people trapped inside a burning building. In this study, the truncated signed distance (TSD) based SLAM algorithm is employed to accurately construct a two-dimensional map of the surroundings. When a new and significantly different scenario is encountered, information is gathered and the generalized iterative closest point (GICP) method is employed directly instead of the conventional TSD-SLAM process. Rescuers can utilize a total map created by merging the individual maps, allowing them to search for victims efficiently. For online map merging, it is essential to determine when the individual maps are merged and, via the weights, the extent to which one map reflects the other. In several experiments, a light detection and ranging system and an inertial measurement unit were integrated into a smart helmet for rescuers. The results indicated that the map was built more accurately than that obtained using the conventional TSD-SLAM. Additionally, the merged map was built more correctly by determining proper parameters for online map merging. Consequently, the accurate merged map allows rescuers to search for victims efficiently.
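The role of the weights in online map merging can be illustrated with a simple sketch: assuming both grids are already aligned and stored in log-odds form, a convex combination controls how strongly one rescuer's map reflects the other's. This is an assumption for illustration, not the paper's merging rule.

```python
# Weighted merge of two aligned occupancy grids in log-odds form
# (an assumed illustration of how a merging weight could act).
import numpy as np

def merge_maps(log_odds_a, log_odds_b, weight=0.5):
    """Both grids share the same frame and resolution; weight is in [0, 1]."""
    return weight * log_odds_a + (1.0 - weight) * log_odds_b
```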


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4973 ◽  
Author(s):  
Dániel Kiss-Illés ◽  
Cristina Barrado ◽  
Esther Salamí

This work presents Global Positioning System-Simultaneous Localization and Mapping (GPS-SLAM), an augmented version of Oriented FAST (Features from accelerated segment test) and Rotated BRIEF (Binary Robust Independent Elementary Features) feature detector (ORB)-SLAM that uses GPS and inertial data to make the algorithm capable of dealing with low frame rate datasets. In general, SLAM systems are successful on datasets with a high frame rate. This work was motivated by a scarce dataset on which ORB-SLAM often loses track because of the lack of continuity. The main work is the determination of the next frame's pose based on the GPS and inertial data. The results show that this additional information makes the algorithm more robust. As many large outdoor unmanned aerial vehicle (UAV) flights record GPS and inertial measurement unit (IMU) data alongside the captured images, this program offers an option to use the SLAM algorithm successfully even if the dataset has a low frame rate.
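The pose-prediction idea can be sketched as seeding the next frame's pose from the GPS displacement and the IMU orientation. The function below is hypothetical and simplified (it ignores the GPS-to-camera lever arm and any frame alignment), not GPS-SLAM's actual code.

```python
# Sketch: predict the next frame's camera pose from GPS and IMU data so
# tracking can continue when consecutive frames share little visual overlap.
import numpy as np

def predict_next_pose(prev_pose, gps_prev_enu, gps_next_enu, imu_rotation):
    """prev_pose and the result are 4x4 camera-to-world matrices;
    imu_rotation is the 3x3 world-frame orientation for the next frame;
    GPS positions are given in ENU metres."""
    pose = np.eye(4)
    pose[:3, :3] = imu_rotation
    pose[:3, 3] = prev_pose[:3, 3] + (gps_next_enu - gps_prev_enu)
    return pose
```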


Robotica ◽  
2021 ◽  
pp. 1-26
Author(s):  
Lhilo Kenye ◽  
Rahul Kala

Summary Most conventional simultaneous localization and mapping (SLAM) approaches assume the working environment to be static. In a highly dynamic environment, this assumption exposes the shortcomings of SLAM algorithms that lack modules which specifically attend to dynamic objects, despite the inclusion of optimization techniques. This work addresses such environments and reduces the effect of dynamic objects on a SLAM algorithm by separating features belonging to dynamic objects from the static background using a generated binary mask image. While the features belonging to the static region are used for performing SLAM, the features belonging to non-static segments are reused instead of being eliminated. The approach employs a deep neural network (DNN)-based object detection module to obtain bounding boxes and then generates a lower-resolution binary mask image using a depth-first search over the detected semantics, characterizing the segmentation of the foreground from the static background. In addition, the features belonging to dynamic objects are tracked across consecutive frames to obtain better masking consistency. The proposed approach is tested on both a publicly available dataset and a self-collected dataset, covering both indoor and outdoor environments. The experimental results show that removing features belonging to dynamic objects in a SLAM algorithm can significantly improve the overall output in a dynamic scene.
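The masking step can be sketched as follows: build a binary mask from the DNN bounding boxes and keep only keypoints that fall on the static background. This simplified example uses raw boxes rather than the paper's depth-first-search refinement over the detected semantics.

```python
# Sketch: binary mask from detected dynamic-object boxes, then split
# keypoints into static-background and dynamic sets.
import numpy as np

def static_keypoints(keypoints, boxes, image_shape):
    """keypoints: list of (x, y); boxes: list of (x1, y1, x2, y2) detections
    of dynamic classes (e.g. people, cars); image_shape: (height, width)."""
    mask = np.ones(image_shape[:2], dtype=np.uint8)   # 1 = static background
    for x1, y1, x2, y2 in boxes:
        mask[int(y1):int(y2), int(x1):int(x2)] = 0    # 0 = dynamic region
    static = [kp for kp in keypoints if mask[int(kp[1]), int(kp[0])] == 1]
    dynamic = [kp for kp in keypoints if mask[int(kp[1]), int(kp[0])] == 0]
    return static, dynamic, mask
```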

