Obstacle detection based on depth fusion of lidar and radar in challenging conditions

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Guotao Xie ◽  
Jing Zhang ◽  
Junfeng Tang ◽  
Hongfei Zhao ◽  
Ning Sun ◽  
...  

Purpose – For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, perception accuracy is closely tied to the performance of the sensors configured on the vehicle. To further enhance sensor performance and thereby improve the accuracy of environmental perception, this paper aims to introduce an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which can reduce the false rate resulting from sensor misdetection. Design/methodology/approach – First, a multi-layer self-calibration method is proposed based on spatial and temporal relationships. Next, a depth fusion model is proposed to improve the performance of obstacle detection in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist. Findings – The experimental tests in challenging conditions demonstrate that the depth fusion model, compared with the use of a single sensor, can filter out radar false alarms and the dust or mist point clouds received by lidar, so the accuracy of object detection is improved under challenging conditions. Originality/value – The multi-layer self-calibration method improves calibration accuracy and reduces the workload of manual calibration. The depth fusion model based on lidar and radar achieves high precision by filtering out radar false alarms and the dust or mist point clouds received by lidar, which improves ICVs’ performance in challenging conditions.
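The paper's depth fusion model is not specified in the abstract; as a rough illustration of the cross-validation idea (radar false alarms and lidar dust returns each filtered against the other sensor), here is a minimal 2D sketch in which the gate radius, point threshold and all data are invented for the example:

```python
import numpy as np

def fuse_detections(radar_targets, lidar_points, gate_radius=1.5, min_points=5):
    """Keep a radar target (x, y) only if enough lidar returns fall
    within gate_radius metres of it: unsupported radar targets are
    treated as false alarms, and lidar-only clusters (e.g. dust or
    mist) are implicitly dropped because no radar target backs them."""
    confirmed = []
    for tx, ty in radar_targets:
        d = np.hypot(lidar_points[:, 0] - tx, lidar_points[:, 1] - ty)
        if np.count_nonzero(d < gate_radius) >= min_points:
            confirmed.append((tx, ty))
    return confirmed

# Synthetic scene: a real obstacle at (10, 0) seen by both sensors,
# a dust cloud at (5, -3) seen only by lidar, a radar ghost at (30, 5).
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal([10.0, 0.0], 0.3, size=(50, 2)),
                   rng.normal([5.0, -3.0], 0.2, size=(20, 2))])
print(fuse_detections([(10.0, 0.0), (30.0, 5.0)], cloud))  # only the obstacle survives
```

The gating-by-mutual-support idea is the simplest possible stand-in for the paper's depth fusion; it already shows why neither sensor alone can reject both failure modes.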

2019 ◽  
Vol 11 (4) ◽  
pp. 442 ◽  
Author(s):  
Zhen Li ◽  
Junxiang Tan ◽  
Hua Liu

Mobile LiDAR Scanning (MLS) systems and UAV LiDAR Scanning (ULS) systems equipped with precise Global Navigation Satellite System (GNSS)/Inertial Measurement Unit (IMU) positioning units and LiDAR sensors are increasingly used to acquire high-density, high-accuracy point clouds because of their safety and efficiency. Without careful calibration of the boresight angles of MLS and ULS systems, the accuracy of the acquired data degrades severely. This paper proposes an automatic boresight self-calibration method for MLS and ULS systems using acquired multi-strip point clouds. The boresight angles are expressed in the direct geo-referencing equation and corrected by minimizing the misalignments between points scanned from different directions and different strips. Two datasets scanned by MLS systems and two scanned by ULS systems were used to verify the proposed boresight calibration method. The experimental results show that the root mean square errors (RMSE) of misalignments between point correspondences of the four datasets after boresight calibration are 2.1 cm, 3.4 cm, 5.4 cm and 6.1 cm, respectively, reductions of 59.6%, 75.4%, 78.0% and 94.8% compared with those before calibration.
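The abstract describes correcting boresight angles by minimizing misalignments between strips. A minimal numpy sketch of that idea, assuming point correspondences between two strips are already known and the angles are small (so the rotation linearizes as p + θ × p; this is not the paper's full direct geo-referencing model):

```python
import numpy as np

def estimate_boresight(src, dst):
    """Least-squares estimate of small boresight angles theta =
    (roll, pitch, yaw) from point correspondences: for small angles a
    rotation acts as p -> p + theta x p, so each pair gives the linear
    equations A_i theta = q_i - p_i, where A_i theta = theta x p_i."""
    A = np.zeros((3 * len(src), 3))
    b = np.zeros(3 * len(src))
    for i, (p, q) in enumerate(zip(src, dst)):
        px, py, pz = p
        A[3 * i:3 * i + 3] = [[0.0,  pz, -py],   # rows encode theta x p
                              [-pz, 0.0,  px],
                              [py, -px,  0.0]]
        b[3 * i:3 * i + 3] = q - p
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta

# Recover invented boresight angles from 100 synthetic correspondences.
rng = np.random.default_rng(1)
pts = rng.uniform(-50.0, 50.0, size=(100, 3))
true_theta = np.array([0.002, -0.001, 0.003])      # radians (invented)
shifted = pts + np.cross(true_theta, pts)          # small-angle rotation
print(estimate_boresight(pts, shifted))
```

The real method iterates this over many strips inside the geo-referencing equation; the sketch only shows the misalignment-minimizing core.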


Author(s):  
Ahmed Joubair ◽  
Long Fei Zhao ◽  
Pascal Bigras ◽  
Ilian Bonev

Purpose – The purpose of this paper is to describe a calibration method developed to improve the accuracy of a six degrees-of-freedom medical robot. The proposed calibration approach aims to enhance the robot’s accuracy in a specific target workspace. A comparison of five observability indices is also performed to choose the most appropriate calibration robot configurations. Design/methodology/approach – The calibration method is based on the forward kinematic approach, which uses a nonlinear optimization model. The experimental data consist of 84 end-effector positions measured with a laser tracker. The calibration configurations are chosen through an observability analysis, while validation after calibration is carried out at 336 positions within the target workspace. Findings – Simulations identified the most appropriate observability index for choosing the optimal calibration configurations. They also showed the ability of the calibration model to identify most of the considered robot parameters, despite measurement errors. Experimental tests confirmed the simulation findings and showed that the robot’s mean position error is reduced from 3.992 mm before calibration to 0.387 mm after, and the maximum error from 5.957 to 0.851 mm. Originality/value – This paper presents a calibration method that accurately identifies the kinematic errors of a novel medical robot. In addition, it compares the five observability indices proposed in the literature. The proposed method can be applied to any industrial or medical robot similar to the robot studied in this paper.
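The full 6-DOF forward-kinematic model is not given in the abstract; as a drastically simplified stand-in, identifying the two link lengths of a planar 2R arm from measured end-effector positions shows the same identify-parameters-from-measured-positions structure (for this toy model the problem even becomes linear, so no nonlinear optimizer is needed; all numbers are invented):

```python
import numpy as np

def identify_link_lengths(joint_angles, measured_xy):
    """Identify the link lengths (l1, l2) of a planar 2R arm from
    measured end-effector positions.  With
        x = l1*cos(q1) + l2*cos(q1 + q2)
        y = l1*sin(q1) + l2*sin(q1 + q2)
    the model is linear in (l1, l2), so least squares suffices."""
    rows, rhs = [], []
    for (q1, q2), (x, y) in zip(joint_angles, measured_xy):
        rows.append([np.cos(q1), np.cos(q1 + q2)])
        rows.append([np.sin(q1), np.sin(q1 + q2)])
        rhs.extend([x, y])
    lengths, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return lengths

# 84 calibration configurations (matching the paper's count) with
# invented link lengths and simulated laser-tracker noise.
rng = np.random.default_rng(2)
qs = rng.uniform(-np.pi, np.pi, size=(84, 2))
true_l = np.array([0.305, 0.248])                  # metres (invented)
xy = np.stack([true_l[0] * np.cos(qs[:, 0]) + true_l[1] * np.cos(qs[:, 0] + qs[:, 1]),
               true_l[0] * np.sin(qs[:, 0]) + true_l[1] * np.sin(qs[:, 0] + qs[:, 1])],
              axis=1) + rng.normal(0.0, 1e-4, size=(84, 2))
print(identify_link_lengths(qs, xy))
```

A real 6-DOF model couples its parameters nonlinearly, which is why the paper needs a nonlinear optimizer and an observability analysis to pick informative configurations.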


2022 ◽  
pp. 1-20
Author(s):  
Shiyu Bai ◽  
Jizhou Lai ◽  
Pin Lyu ◽  
Yiting Cen ◽  
Bingqing Wang ◽  
...  

Determination of calibration parameters is essential for the fusion performance of an inertial measurement unit (IMU) and odometer integrated navigation system. Traditional calibration methods are commonly based on the filter framework, which limits the improvement of calibration accuracy. This paper proposes a graph-optimisation-based self-calibration method for the IMU/odometer using preintegration theory. Unlike existing preintegrations, the complete IMU/odometer preintegration model is derived, which takes into consideration the effects of the scale factor of the odometer and the misalignments in attitude and position between the IMU and odometer. The calibration is then implemented by the graph-optimisation method. Tests on the KITTI dataset and field experiments are carried out to evaluate the effectiveness of the proposed method. The results illustrate that the proposed method outperforms the filter-based calibration method, and that the proposed IMU/odometer preintegration model outperforms traditional preintegration models.
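The complete preintegration model is beyond the abstract's scope; as a one-parameter illustration, estimating only the odometer scale factor from paired odometer and reference increments reduces to a closed-form least-squares fit (the attitude and position misalignments the paper also estimates are omitted; all data are invented):

```python
import numpy as np

def calibrate_odometer_scale(odo_increments, ref_increments):
    """Closed-form least-squares scale factor k minimizing
    sum_i (ref_i - k * odo_i)**2, i.e. k = <odo, ref> / <odo, odo>."""
    odo = np.asarray(odo_increments, dtype=float)
    ref = np.asarray(ref_increments, dtype=float)
    return float(odo @ ref / (odo @ odo))

# Invented data: per-interval odometer distances against noisy
# reference increments generated with a true scale factor of 1.03.
rng = np.random.default_rng(3)
odo = rng.uniform(0.05, 0.2, size=200)
ref = 1.03 * odo + rng.normal(0.0, 1e-3, size=200)
print(calibrate_odometer_scale(odo, ref))
```

In the graph-optimisation setting this scale factor becomes one variable among many, jointly estimated with poses and misalignments rather than in isolation.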


Author(s):  
T. Medić ◽  
H. Kuhlmann ◽  
C. Holst

Terrestrial laser scanner (TLS) measurements are unavoidably affected by systematic influences due to internal misalignments. The magnitude of the resulting errors can exceed the magnitude of random errors, significantly deteriorating the quality of the obtained point clouds. Hence, calibrating TLSs is important for applications with high accuracy demands. In recent years, multiple in-situ self-calibration approaches have been derived that allow the successful estimation of up-to-date calibration parameters. These approaches rely either on manually placed targets or on man-made geometric objects found in the surroundings. Herein, we widen the existing toolbox with an alternative approach for panoramic TLSs, for cases where such prerequisites cannot be met. We build upon the existing target-based two-face calibration method by substituting targets with precisely localized 2D keypoints, i.e. local features, detected in panoramic intensity images using the Förstner operator. To overcome the detrimental effect of perspective change on feature localization accuracy, we estimate the majority of the relevant calibration parameters from a single station. The approach is verified on real data obtained with the Leica ScanStation P20. The results were tested against the established target-based two-face self-calibration. The analysis showed that the estimated calibration parameters are directly comparable in terms of both parameter precision and correlation. Finally, we employ an effective evaluation procedure for testing the impact of the calibration results on point cloud quality.
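As a much-reduced illustration of two-face calibration, the sketch below estimates a single horizontal collimation-type error from paired face-one/face-two horizontal readings of targets near the horizon, under the simplified model hz2 ≈ hz1 + 180° + 2c (the paper estimates a fuller parameter set from Förstner keypoints; all readings here are synthetic):

```python
import numpy as np

def collimation_error(hz_face1_deg, hz_face2_deg):
    """Estimate a horizontal collimation-type error c (degrees) from
    two-face readings of near-horizon targets, assuming
    hz2 ~ hz1 + 180 + 2c: average (hz2 - hz1 - 180)/2 over targets."""
    diff = (np.asarray(hz_face2_deg) - np.asarray(hz_face1_deg) - 180.0) % 360.0
    diff = np.where(diff > 180.0, diff - 360.0, diff)   # wrap to (-180, 180]
    return float(np.mean(diff) / 2.0)

# Synthetic two-face readings with an invented error of 0.01 degrees.
rng = np.random.default_rng(4)
hz1 = rng.uniform(0.0, 360.0, size=40)
hz2 = (hz1 + 180.0 + 2.0 * 0.01 + rng.normal(0.0, 1e-4, size=40)) % 360.0
print(collimation_error(hz1, hz2))
```

Two-face differencing cancels the target coordinates entirely, which is why such errors can be estimated from a single station; the keypoint substitution in the paper supplies the paired observations without physical targets.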


Measurement ◽  
2021 ◽  
Vol 174 ◽  
pp. 109067
Author(s):  
Zhi-Feng Lou ◽  
Li Liu ◽  
Ji-Yun Zhang ◽  
Kuang-chao Fan ◽  
Xiao-Dong Wang

Author(s):  
Suyong Yeon ◽  
ChangHyun Jun ◽  
Hyunga Choi ◽  
Jaehyeon Kang ◽  
Youngmok Yun ◽  
...  

Purpose – The authors aim to propose a novel plane extraction algorithm for geometric 3D indoor mapping with range scan data. Design/methodology/approach – The proposed method uses a divide-and-conquer step to efficiently handle huge amounts of point clouds, not as a whole group but as separate sub-groups with similar plane parameters. The method adopts robust principal component analysis to enhance estimation accuracy. Findings – Experimental results verify that the method not only shows enhanced performance in plane extraction, but also broadens the domain of interest of plane registration to information-poor environments (such as simple indoor corridors), whereas the previous method works adequately only in information-rich environments (such as spaces with many features). Originality/value – The proposed algorithm has three advantages over the current state-of-the-art method: it is fast, it utilizes more inlier sensor data without being contaminated by severe sensor noise, and it extracts more accurate plane parameters.
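The core step of any such plane extraction, fitting a plane to a point sub-group by principal component analysis, can be sketched in a few lines (this is plain PCA, not the robust variant the paper adopts; the test points are synthetic):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit by PCA: the normal is the right singular
    vector of the centred points with the smallest singular value, and
    the plane n . x + d = 0 passes through the centroid."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                 # direction of least variance
    d = -float(normal @ centroid)
    return normal, d

# A synthetic 5x5 grid of points on the plane z = 0.5.
xy = np.mgrid[0:5, 0:5].reshape(2, -1).T.astype(float)
pts = np.column_stack([xy, np.full(len(xy), 0.5)])
n, d = fit_plane(pts)
print(n, d)   # normal is (0, 0, +/-1), |d| = 0.5
```

Robust PCA replaces the plain centroid/covariance with outlier-resistant estimates, which is what lets the paper's method keep more inliers under severe sensor noise.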


Author(s):  
P.M.B. Torres ◽  
P. J. S. Gonçalves ◽  
J.M.M. Martins

Purpose – The purpose of this paper is to present a robotic motion compensation system, using ultrasound images, to assist orthopedic surgery. The robotic system can compensate for femur movements during bone drilling procedures. Although it may have other applications, the system is intended for use in hip resurfacing (HR) prosthesis surgery to implant the initial guide tool. The system requires no fiducial markers implanted in the patient, using only non-invasive ultrasound images. Design/methodology/approach – The femur location in the operating room is obtained by processing ultrasound (USA) and computed tomography (CT) images, obtained, respectively, in the intra-operative and pre-operative scenarios. During surgery, the bone position and orientation are obtained by registration of USA and CT three-dimensional (3D) point clouds, using an optical measurement system and passive markers attached to the USA probe and to the drill. The system description, image processing, calibration procedures and results from simulated and real experiments are presented to illustrate the system in operation. Findings – The robotic system can compensate for femur movements during bone drilling procedures. In most experiments, the update was validated, with errors of 2 mm/4°. Originality/value – The navigation system is based entirely on information extracted from images obtained pre-operatively from CT and intra-operatively from USA. Contrary to current surgical systems, it does not use any type of implant in the bone to track femur movements.
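The registration of USA and CT point clouds is not detailed in the abstract; assuming correspondences are already known, the classical closed-form rigid (Kabsch) alignment sketches the core step (the paper's actual pipeline, with an optical measurement system and passive markers, is more involved; the data below are synthetic):

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form rigid registration (Kabsch): find R, t minimizing
    sum_i ||R @ src_i + t - dst_i||^2 given known correspondences."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflection
    R = Vt.T @ S @ U.T
    t = dst_c - R @ src_c
    return R, t

# Recover an invented rigid transform from 30 synthetic points.
rng = np.random.default_rng(5)
src = rng.normal(size=(30, 3))
a = 0.3                                            # rotation about z (rad)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ R_true.T + t_true
R_est, t_est = rigid_register(src, dst)
print(np.round(R_est, 3), np.round(t_est, 3))
```

Without known correspondences, such a closed-form step is typically wrapped in an iterative scheme (e.g. ICP) that alternates matching and alignment.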

