Localization in Unstructured Environments: Towards Autonomous Robots in Forests with Delaunay Triangulation

2020 ◽  
Vol 12 (11) ◽  
pp. 1870 ◽  
Author(s):  
Qingqing Li ◽  
Paavo Nevalainen ◽  
Jorge Peña Queralta ◽  
Jukka Heikkonen ◽  
Tomi Westerlund

Autonomous harvesting and transportation is a long-term goal of the forest industry. One of the main challenges is the accurate localization of both vehicles and trees in a forest. Forests are unstructured environments where it is difficult to find a group of significant landmarks for current fast feature-based place recognition algorithms. This paper proposes a novel approach in which local point clouds are matched to a global tree map using the Delaunay triangulation as the representation format. Instead of point-cloud-based matching methods, we utilize a topology-based method. First, tree trunk positions are registered during a prior run by a forest harvester. Second, the resulting map is Delaunay-triangulated. Third, a local submap of the autonomous robot is registered, triangulated and matched using triangular similarity maximization to estimate the position of the robot. We test our method on a dataset collected at a forestry site in Lieksa, Finland. A total of 200 m of harvester path was recorded by an industrial harvester with a 3D laser scanner and a geolocation unit fixed to its frame. Our experiments show a location accuracy with a standard deviation of 12 cm, with real-time data processing for speeds not exceeding 0.5 m/s. This accuracy and speed limit are realistic for forest operations.
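The triangle-matching step can be sketched roughly as follows (a minimal illustration of the idea, not the authors' implementation; the function names and the sorted-side-length descriptor are assumptions):

```python
# Minimal sketch: Delaunay-triangulate tree-trunk positions and match
# local triangles to global ones by comparing sorted side lengths,
# which are invariant to translation and rotation of the map.
import numpy as np
from scipy.spatial import Delaunay

def triangle_descriptors(points):
    """Delaunay triangulation of 2D trunk positions; one descriptor
    (sorted side lengths) per triangle."""
    tri = Delaunay(points)
    desc = []
    for simplex in tri.simplices:
        p = points[simplex]
        sides = sorted(np.linalg.norm(p[i] - p[(i + 1) % 3]) for i in range(3))
        desc.append(sides)
    return np.array(desc)

def match_triangles(local_desc, global_desc):
    """Index of the most similar global triangle for each local one."""
    d = np.linalg.norm(local_desc[:, None, :] - global_desc[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy data: the local submap is a shifted subset of the global tree map.
rng = np.random.default_rng(0)
global_trees = rng.uniform(0.0, 50.0, size=(40, 2))
local_trees = global_trees[:10] + np.array([0.05, -0.03])
g_desc = triangle_descriptors(global_trees)
l_desc = triangle_descriptors(local_trees)
matches = match_triangles(l_desc, g_desc)
```

A full pose estimate would then be recovered from the trunk positions of the matched triangle pairs.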

Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot’s state estimation. However, most mature SLAM methods work under the assumption that the environment is static, and in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Second, based on this investigation, we propose a novel approach named EMO that eliminates moving objects for SLAM by fusing LiDAR and mmW-radar, aiming to improve the accuracy and robustness of state estimation. The method exploits the complementary characteristics of the two sensors to fuse sensor information at two different resolutions. Moving objects are efficiently detected by the radar via the Doppler effect, accurately segmented and localized by the LiDAR, and then filtered out of the point clouds through data association and accurate synchronization in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets.
The results demonstrate the effectiveness of the method at improving SLAM accuracy (at least a 30% decrease in absolute position error) and robustness in dynamic environments.
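The Doppler-based removal step can be sketched as follows (an illustrative simplification with assumed thresholds; the actual EMO pipeline also performs LiDAR segmentation and careful time/space synchronization):

```python
import numpy as np

def filter_moving_points(lidar_xyz, radar_xy, radar_doppler,
                         v_thresh=0.5, assoc_radius=1.0):
    """Drop LiDAR points that associate (within assoc_radius metres in
    the ground plane) with a radar detection whose absolute Doppler
    velocity exceeds v_thresh m/s; the rest is kept as static scene."""
    moving = radar_xy[np.abs(radar_doppler) > v_thresh]
    if moving.size == 0:
        return lidar_xyz
    d = np.linalg.norm(lidar_xyz[:, None, :2] - moving[None, :, :], axis=2)
    return lidar_xyz[d.min(axis=1) > assoc_radius]

# Toy frame: one moving object detected by radar at (5, 5) with 2 m/s.
lidar = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 1.0], [10.0, 0.0, 0.5]])
static = filter_moving_points(lidar, np.array([[5.0, 5.0]]), np.array([2.0]))
```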


Author(s):  
Jianqing Wu ◽  
Hao Xu ◽  
Yuan Sun ◽  
Jianying Zheng ◽  
Rui Yue

The high-resolution micro traffic data (HRMTD) of all roadway users is important for serving the connected-vehicle system in mixed traffic situations. Roadside LiDAR sensors offer a way to provide HRMTD from real-time 3D point clouds of scanned objects. Background filtering is the preprocessing step for obtaining the HRMTD of different roadway users from roadside LiDAR data; it can significantly reduce the data processing time and improve vehicle/pedestrian identification accuracy. This paper proposes an algorithm, based on the spatial distribution of laser points, that filters both static and moving background efficiently. Different point-density thresholds are applied to exclude background at different distances from the roadside sensor. The case study shows that the algorithm can filter background LiDAR points in different situations (different road geometries, traffic demands, day/night time, and speed limits). Vehicle and pedestrian shapes are retained well after background filtering. The low computational load means this method can be applied to real-time data processing tasks such as vehicle monitoring and pedestrian tracking.
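The distance-dependent density thresholding can be sketched like this (a toy grid-based version under assumed cell sizes and thresholds, not the published algorithm):

```python
import numpy as np

def learn_background(frames, cell=0.2, base_thresh=20.0, sensor=(0.0, 0.0)):
    """Accumulate many frames into a 2D grid; cells whose point count
    exceeds a distance-scaled threshold are labelled background.
    Farther cells get a lower threshold, since beam density drops
    with range."""
    pts = np.vstack(frames)[:, :2]
    cells, counts = np.unique(np.floor(pts / cell).astype(int),
                              axis=0, return_counts=True)
    dist = np.linalg.norm(cells * cell - np.asarray(sensor), axis=1)
    thresh = base_thresh / np.maximum(dist / 10.0, 1.0)
    return {tuple(c) for c, n, t in zip(cells, counts, thresh) if n > t}

def remove_background(frame, bg_cells, cell=0.2):
    """Keep only the points of one frame outside background cells."""
    idx = np.floor(frame[:, :2] / cell).astype(int)
    keep = np.array([tuple(i) not in bg_cells for i in idx])
    return frame[keep]

# Toy data: a static wall seen in every frame, a vehicle in one frame.
wall = np.array([[5.0 + 0.2 * i, 3.0, 0.0] for i in range(5)])
frames = [wall.copy() for _ in range(30)]
frames[-1] = np.vstack([wall, [[1.0, 1.0, 0.0]]])
bg = learn_background(frames)
cleaned = remove_background(frames[-1], bg)
```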


2020 ◽  
Vol 10 (21) ◽  
pp. 7652
Author(s):  
Ľudovít Kovanič ◽  
Peter Blistan ◽  
Rudolf Urban ◽  
Martin Štroner ◽  
Katarína Pukanská ◽  
...  

This research focused on determining a rotary kiln’s (RK) geometric parameters in a non-traditional geodetic way: by deriving them from a survey realized by a terrestrial laser scanner (TLS). The point cloud obtained by the TLS measurement was processed to derive the longitudinal axis of the RK. Subsequently, the geometric parameters of the carrier tires and the shell of the RK during the shutdown were derived. Manual point cloud selection (segmentation) is the baseline method for removing unnecessary points; it is slow but precise and controllable. The proposed analytical solution is based on calculating the distance from each point to the RK’s nominal axis (the local radius). Iteration using a histogram function was applied repeatedly to detect points with the same or similar radii. The most numerous intervals of points were selected and stored in separate files. In the comparison, we present the conformity of the analytically and manually obtained files and the derived geometric values of the RK: the spatial parameters of the radii and the coordinates of the carrier tires’ centers. The horizontal (X and Y directions) and vertical (Z direction) root-mean-square deviation (RMSD) values are up to 2 mm. The RMSD of the cylinder fitting is also up to 2 mm. The centers of the carrier tires define the longitudinal axis of the RK. The analytical segmentation of points was repeated on the remaining point cloud to select the points on the outer shell of the RK. A deformation analysis of the shell of the RK was performed using a cylinder with a nominal radius. Manually and analytically processed point clouds were investigated and mutually compared; the calculated RMSD value is up to 2 mm. Parallel cuts situated perpendicular to the axis of the RK were created, and an analysis of the ovality (flattening) of the shell was performed. Additionally, we present the effect of gradually decreasing the density (number) of points on the carrier tires on the derivation of their centers.
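The histogram-based radius selection can be illustrated as follows (a simplified sketch assuming a known nominal axis; the bin width and names are illustrative):

```python
import numpy as np

def radius_histogram_segment(points, axis_point, axis_dir, bin_width=0.02):
    """Distance of every point from the nominal kiln axis (local
    radius), then selection of the most populated radius bin; those
    points belong to one cylindrical component, e.g. a carrier tire."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    v = points - axis_point
    radial = v - np.outer(v @ axis_dir, axis_dir)   # remove axial part
    r = np.linalg.norm(radial, axis=1)
    bins = np.floor(r / bin_width).astype(int)
    vals, counts = np.unique(bins, return_counts=True)
    mask = bins == vals[counts.argmax()]
    return mask, r

# Toy data: a cylindrical shell of radius 2.03 m along the x-axis,
# plus two stray points that the histogram should reject.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
shell = np.stack([np.linspace(0, 10, 200),
                  2.03 * np.cos(theta), 2.03 * np.sin(theta)], axis=1)
pts = np.vstack([shell, [[5.0, 0.5, 0.5], [2.0, 3.5, 0.0]]])
mask, r = radius_histogram_segment(pts, np.zeros(3), np.array([1.0, 0.0, 0.0]))
```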


Author(s):  
Beril Sirmacek ◽  
Yueqian Shen ◽  
Roderik Lindenbergh ◽  
Sisi Zlatanova ◽  
Abdoulaye Diakite

We present a comparison of point cloud generation and data quality for the Zebedee (Zeb1) and Leica C10 devices used in the same building interior. Both sensor devices come with different practical and technical advantages and, as can be expected, these advantages come with some drawbacks. Therefore, depending on the requirements of the project, it is important to know what to expect from different sensors. In this paper, we provide a detailed analysis of point clouds of the same room interior acquired with the Zeb1 and Leica C10 sensors. First, it is visually assessed how different features appear in both the Zeb1 and Leica C10 point clouds. Next, a quantitative analysis is given by comparing local point density, local noise level and the stability of local normals. Finally, a simple 3D room plan is extracted from both the Zeb1 and the Leica C10 point clouds, and the lengths of constructed line segments connecting corners of the room are compared. The results show that Zeb1 is far superior in ease of data acquisition: no heavy handling, hardly any measurement planning and no point cloud registration is required from the operator. The resulting point cloud has a quality in the order of centimeters, which is fine for generating a 3D interior model of a building. Our results also clearly show that fine details of, for example, ornaments are invisible in the Zeb1 data. If point clouds with a quality in the order of millimeters are required, a high-end laser scanner like the Leica C10 is still needed, in combination with a more sophisticated, time-consuming and elaborate data acquisition and processing approach.
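The local density and noise measures used in such comparisons can be computed along these lines (a generic kNN/PCA sketch, not the paper's exact procedure):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_density_and_noise(points, k=8):
    """Local density from the k-nearest-neighbour radius, and local
    noise as the RMS residual to a PCA-fitted plane through each
    neighbourhood (square root of the smallest covariance eigenvalue)."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)     # first hit is the point itself
    r = dists[:, -1]
    density = k / (4.0 / 3.0 * np.pi * r ** 3)   # points per m^3
    noise = np.empty(len(points))
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        w = np.linalg.eigvalsh(np.cov(q.T))      # ascending eigenvalues
        noise[i] = np.sqrt(max(w[0], 0.0))       # clip tiny negatives
    return density, noise

# Toy data: a perfectly flat 20x20 grid should show zero local noise.
xx, yy = np.meshgrid(np.linspace(0, 1.9, 20), np.linspace(0, 1.9, 20))
grid = np.stack([xx.ravel(), yy.ravel(), np.zeros(400)], axis=1)
density, noise = local_density_and_noise(grid)
```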


2019 ◽  
Vol 8 (12) ◽  
pp. 585 ◽  
Author(s):  
Zahra Hadavandsiri ◽  
Derek D. Lichti ◽  
Adam Jahraus ◽  
David Jarron

This paper presents a novel approach for the automatic, preliminary detection of damage in concrete structures using terrestrial laser scanners. The method is based on the computation of defect-sensitive features such as surface curvature, since surface roughness changes strongly when an area is affected by damage. A robust version of principal component analysis (PCA) classification is proposed to distinguish between structural damage and outliers present in the laser scanning data. Numerical simulations were conducted to develop a systematic point-wise defect classifier that automatically locates superficial damage in the investigated region. The method provides a complete picture of the surface health of concrete structures. It has been tested on two real datasets: a concrete heritage aqueduct in Brooks, Alberta, Canada; and a civil pedestrian concrete structure. The experimental results demonstrate the validity and accuracy of the proposed framework for detecting and localizing areas of damage as small as 1 cm.
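A plain (non-robust) version of the curvature feature behind such a classifier can be sketched as follows (the per-point PCA "surface variation"; the robust PCA and the simulation-tuned thresholds of the paper are omitted):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=20):
    """Per-point curvature proxy l0 / (l0 + l1 + l2) from the local
    covariance eigenvalues; near 0 on smooth concrete, larger on
    rough or damaged patches."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    curv = np.empty(len(points))
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        w = np.linalg.eigvalsh(np.cov(q.T))      # ascending
        curv[i] = w[0] / max(w.sum(), 1e-12)
    return curv

# Toy data: a flat 30x30 patch has (near-)zero surface variation.
xx, yy = np.meshgrid(np.linspace(0, 2.9, 30), np.linspace(0, 2.9, 30))
flat = np.stack([xx.ravel(), yy.ravel(), np.zeros(900)], axis=1)
curv = surface_variation(flat)
```

Damage candidates would then be the points whose surface variation exceeds a threshold calibrated on intact surfaces.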


Author(s):  
S. Schraml ◽  
T. Hinterhofer ◽  
M. Pfennigbauer ◽  
M. Hofstätter

In this work we propose an effective radiation source localization device employing a RIEGL VUX-1UAV laser scanner and a highly sensitive Hotzone Technologies gamma radiation probe mounted on a RiCOPTER UAV, combined with real-time data processing. The on-board processing and radio communication system integrated within the UAV enables instant and continuously updated access to georeferenced 3D lidar point clouds and gamma radiation intensities. Further processing is done fully automatically on the ground. We present a novel combination of the 3D laser data and the gamma readings within an optimization algorithm that can locate the radioactive source in real time. Furthermore, this technique can be used to estimate the on-ground radiation intensity, taking into account the actual topography as well as radiation barriers such as vegetation or buildings. Results from field tests with real radioactive sources show that single sources can be located precisely, even if the source was largely covered. Outcomes are displayed to the person in charge in an intuitive and user-friendly way, e.g. on a tablet. The whole system is designed to operate in real time while the UAV is in the air, resulting in a highly flexible and possibly life-saving asset for first responders in time-critical scenarios.


Author(s):  
M. Peter ◽  
S. R. U. N. Jafri ◽  
G. Vosselman

Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle proves to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and, consequently, the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g. doors), which will cause the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain like Iterative End Point Fit and Line Tracking were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ/√n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.
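The threshold rule can be sketched as follows (an illustrative re-implementation of the σ/√n criterion, not the authors' code; the constant c and the window sizes are assumptions):

```python
import numpy as np

def fit_line_residuals(pts):
    """Orthogonal residuals of 2D points to their PCA-fitted line."""
    q = pts - pts.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(q.T))
    return np.abs(q @ v[:, 0])    # normal = smallest-eigenvalue direction

def segment_scanline(pts, sigma=0.01, c=3.0, n_range=(3, 10)):
    """Grow a segment point by point; open a new segment as soon as
    the average residual of the last n points exceeds c*sigma/sqrt(n)
    for any n in n_range, using the fact that the expected average
    residual of n on-line points is sigma/sqrt(n)."""
    breaks, start = [], 0
    for end in range(3, len(pts) + 1):
        if end - start < 2:
            continue
        res = fit_line_residuals(pts[start:end])
        for n in range(n_range[0], min(n_range[1], end - start) + 1):
            if res[-n:].mean() > c * sigma / np.sqrt(n):
                breaks.append(end - 1)
                start = end - 1
                break
    return breaks

# Toy scanline: a wall segment followed by a second wall at 45 degrees.
a = np.stack([np.arange(10) * 0.1, np.zeros(10)], axis=1)
b = np.stack([1.0 + np.arange(1, 8) * 0.1, np.arange(1, 8) * 0.1], axis=1)
breaks = segment_scanline(np.vstack([a, b]))
```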


2020 ◽  
Vol 08 (03) ◽  
pp. 229-237
Author(s):  
Qingqing Li ◽  
Jorge Peña Queralta ◽  
Tuan Nguyen Gia ◽  
Zhuo Zou ◽  
Tomi Westerlund

The combination of data from multiple sensors, also known as sensor fusion or data fusion, is a key aspect in the design of autonomous robots. In particular, algorithms able to accommodate sensor fusion techniques enable increased accuracy and are more resilient against the malfunction of individual sensors. The development of algorithms for autonomous navigation, mapping and localization has seen major advances over the past two decades. Nonetheless, challenges remain in developing robust solutions for accurate localization in dense urban environments, where so-called last-mile delivery occurs. In these scenarios, local motion estimation is combined with the matching of real-time data against a detailed pre-built map. In this paper, we use data gathered with an autonomous delivery robot to compare different sensor fusion techniques and evaluate which algorithms provide the highest accuracy depending on the environment. The techniques we analyze and propose in this paper utilize 3D lidar data, inertial data, GNSS data and wheel encoder readings. We show how lidar scan matching combined with other sensor data can be used to increase the accuracy of the robot's localization and, in consequence, its navigation. Moreover, we propose a strategy to reduce the impact on navigation performance when a change in the environment renders map data invalid or part of the available map is corrupted.
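At its simplest, fusing a drifting odometry prediction with an absolute fix (scan matching against the map, or GNSS) reduces to a variance-weighted update, sketched here (a scalar illustration, not the paper's full pipeline):

```python
def fuse(pred, pred_var, meas, meas_var):
    """Variance-weighted fusion of a prediction and a measurement:
    the scalar Kalman update."""
    k = pred_var / (pred_var + meas_var)
    return pred + k * (meas - pred), (1.0 - k) * pred_var

# One step along a corridor: wheel odometry predicts 0.50 m with grown
# uncertainty; a lidar scan-match fix of 0.47 m pulls the estimate back.
pos, var = fuse(0.50, 0.05, 0.47, 0.02)
```

The same weighting explains the resilience claim above: when one sensor degrades, its variance grows and its influence on the fused estimate shrinks automatically.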


Agronomy ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 11
Author(s):  
Christyan Cruz Ulloa ◽  
Anne Krus ◽  
Antonio Barrientos ◽  
Jaime Del Cerro ◽  
Constantino Valero

The use of robotic systems in organic farming has taken on a leading role in recent years; the Sureveg CORE Organic Cofund ERA-Net project seeks to evaluate the benefits of strip-cropping for producing organic vegetables. This includes, among other objectives, the development of a robotic tool that facilitates the automation of the fertilisation process, allowing individual treatment at the plant level. In organic production, the slower nutrient release of the fertilisers used poses additional difficulties, as deficiencies detected too late can no longer be corrected. To improve detection, as well as to counter the additional labour stemming from the strip-cropping configuration, an integrated robotic tool is proposed that detects individual crop deficiencies and reacts on a single-crop basis. For this proof of concept, one of the main objectives of this work is to implement a robust localisation method within the vegetative environment based on point clouds, through the generation of general point cloud maps (G-PC) and local point cloud maps (L-PC) of a crop row. The plants' geometric characteristics were extracted from the G-PC as the framework in which the robot's positioning is defined. Through the processing of real-time lidar data, the L-PC is then defined and compared to the previously deduced reference system. Both subsystems are integrated with ROS (Robot Operating System), alongside motion planning, an inverse kinematics CCD (Cyclic Coordinate Descent) solver, and other components. Tests were performed using a simulated environment of the crop row developed in Gazebo, followed by actual measurements in a strip-cropping field. During real-time data acquisition, the localisation error is reduced from 13 mm to 11 mm within the first 120 cm of measurement. The geometric characteristics encountered in real time were found to coincide with those in the G-PC to an extent of 98.6%.
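The L-PC-to-G-PC comparison can be sketched as a nearest-neighbour association of plant positions followed by a mean-offset correction (a one-step-ICP toy version; the project's actual processing is richer):

```python
import numpy as np
from scipy.spatial import cKDTree

def localise_against_gpc(gpc_plants, lpc_plants):
    """Estimate the robot's translation error by matching plant
    positions in the local map (L-PC) to the global reference map
    (G-PC): nearest-neighbour association, then the mean offset."""
    _, idx = cKDTree(gpc_plants).query(lpc_plants)
    return (gpc_plants[idx] - lpc_plants).mean(axis=0)

# Toy crop row: plants on a 1 m grid; the local map is shifted by
# (0.1, -0.05) m, so the recovered correction is (-0.1, 0.05).
gx, gy = np.meshgrid(np.arange(10.0), np.arange(3.0))
gpc = np.stack([gx.ravel(), gy.ravel()], axis=1)
lpc = gpc[:8] + np.array([0.1, -0.05])
correction = localise_against_gpc(gpc, lpc)
```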

