A Deformable Configuration Planning Framework for a Parallel Wheel-Legged Robot Equipped with Lidar

Sensors, 2020, Vol. 20 (19), 5614
Author(s): Fei Guo, Shoukun Wang, Binkai Yue, Junzheng Wang

The wheel-legged hybrid robot (WLHR) can adapt its height and wheelbase configuration to traverse obstacles or roll through confined spaces. Compared with purely legged or wheeled machines, its enhanced environmental adaptability suits it to more challenging mobile-robot tasks. To make full use of the deformability and traversability of a WLHR with a parallel Stewart mechanism, this paper presents an optimization-driven planning framework that abstracts the robot as a deformable bounding box. The framework improves the obstacle-negotiation ability of this high-degree-of-freedom robot, yielding shorter paths by adjusting the wheelbase of the support polygon or the trunk height, rather than keeping the fixed configuration of a conventional wheeled robot. Within the framework, we first propose a pre-calculated signed distance field (SDF) mapping method based on point cloud data collected from a lidar sensor, together with a KD-tree-based point cloud fusion approach. We then present a covariant gradient optimization method that generates smooth, collision-free, configuration-deforming trajectories in confined, narrow spaces. Finally, with user-defined driving velocity and position as motion inputs, obstacle-avoidance actions, including expanding or shrinking the foothold polygon and lifting the trunk, were verified under realistic conditions, demonstrating the practicality of our methodology. We analyzed the success rate of the proposed framework in four different terrain scenarios, where obstacles are handled by deforming the configuration rather than bypassing them.
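The pre-calculated distance-field step described above can be illustrated with a minimal sketch (function names are ours, not from the paper). A KD-tree over the fused lidar cloud answers nearest-obstacle queries for every voxel of a grid, so the planner later reads distances in O(1); this sketch builds an unsigned distance field, whereas the paper's signed variant would additionally need occupancy or surface-sign information:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_distance_field(points, res=0.1, pad=0.5):
    """Pre-compute distance-to-nearest-obstacle for every voxel of a
    grid covering the fused lidar cloud, via a single KD-tree query."""
    lo = points.min(axis=0) - pad
    hi = points.max(axis=0) + pad
    axes = [np.arange(l, h, res) for l, h in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    dists, _ = cKDTree(points).query(grid.reshape(-1, 3))
    return dists.reshape(grid.shape[:-1]), lo

def query_distance(field, lo, res, p):
    """O(1) lookup of the pre-computed obstacle distance at world point p."""
    idx = np.clip(((p - lo) / res).astype(int), 0, np.array(field.shape) - 1)
    return field[tuple(idx)]
```

A covariant gradient optimizer (e.g. a CHOMP-style method) would repeatedly call `query_distance` on trajectory waypoints to push the deformable bounding box away from obstacles.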

Sensors, 2021, Vol. 21 (12), 4252
Author(s): Chenchen Gu, Changyuan Zhai, Xiu Wang, Songlin Wang

Canopy characterization detection is essential for target-oriented spraying, which minimizes pesticide residues in fruit, pesticide wastage, and pollution. In this study, a novel canopy meshing-profile characterization (CMPC) method based on light detection and ranging (LiDAR) point-cloud data was designed for high-precision canopy volume calculations. First, the accuracy and viability of the method were tested on a simulated canopy. The results show that the CMPC method can accurately characterize the 3D profiles of the simulated canopy. These profiles were similar to those obtained from manual measurements, and the measured canopy volume achieved an accuracy of 93.3%. Second, the feasibility of the method was verified in a field experiment in which the canopy 3D stereogram and cross-sectional profiles were obtained via CMPC. The results show that the 3D stereogram exhibited a high degree of similarity with the tree canopy, although there were some differences at the edges, where the canopy was sparse. The CMPC-derived cross-sectional profiles matched the manually measured results well. The CMPC method achieved an accuracy of 96.3% when the tree canopy was detected by LiDAR at a moving speed of 1.2 m/s, and the accuracy of the LiDAR system was virtually unchanged when the moving speed was reduced to 1 m/s. No detection lag was observed when comparing the start and end positions of the cross-section. Different CMPC grid sizes were also evaluated: small grid sizes (0.01 m × 0.01 m and 0.025 m × 0.025 m) were suitable for characterizing the finer details of a canopy, whereas grid sizes of 0.1 m × 0.1 m or larger can be used to characterize its overall profile and volume. The results of this study can serve as a technical reference for the development of a LiDAR-based target-oriented spray system.
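The grid-size trade-off above can be made concrete with a simplified stand-in for the CMPC volume estimate (not the authors' exact algorithm): voxelize the canopy point cloud at the chosen grid size and count occupied cells, so the volume estimate coarsens as the grid grows:

```python
import numpy as np

def canopy_volume(points, grid=0.1):
    """Estimate canopy volume by voxelizing the point cloud at the
    given grid size and counting occupied cells; finer grids resolve
    more canopy detail, coarser grids smooth it out."""
    idx = np.floor(points / grid).astype(int)   # voxel index per point
    occupied = np.unique(idx, axis=0)           # distinct occupied voxels
    return occupied.shape[0] * grid ** 3        # cells × cell volume
```

For a densely sampled 1 m cube of points, this returns approximately 1 m³ at a 0.1 m grid; sparse canopy edges would be where the voxel-count estimate and the true volume diverge most.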


Author(s): Jiayong Yu, Longchen Ma, Maoyi Tian, Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in the ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points. This, in turn, affects registration between image data and point-cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point-cloud data with image data into one of matching feature points between two images. First, a point cloud is selected and used to produce an intensity image. Subsequently, corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinear equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show the high registration accuracy and fusion speed of the proposed method, demonstrating its effectiveness.
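The intensity-image step, which turns the 3D-to-2D registration problem into 2D feature matching, can be sketched as follows (a minimal rasterizer with hypothetical names, not the paper's implementation): project each lidar return onto a ground-plane grid and average the return intensities per pixel.

```python
import numpy as np

def intensity_image(points, intensity, res=0.05):
    """Rasterize a lidar cloud (x, y, intensity) into a 2-D intensity
    image, so 2-D feature matching can replace 3-D/2-D registration."""
    lo = points[:, :2].min(axis=0)
    ij = ((points[:, :2] - lo) / res).astype(int)  # pixel index per return
    h, w = ij.max(axis=0) + 1
    img = np.zeros((h, w), dtype=np.float64)
    cnt = np.zeros((h, w), dtype=np.int64)
    np.add.at(img, (ij[:, 0], ij[:, 1]), intensity)  # accumulate returns
    np.add.at(cnt, (ij[:, 0], ij[:, 1]), 1)
    img[cnt > 0] /= cnt[cnt > 0]                     # mean intensity per pixel
    return img
```

The resulting image could then be fed to any standard 2-D feature matcher (e.g. ORB or SIFT in OpenCV) against the optical image, after which the exterior orientation parameters are solved as described above.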


Author(s): Keisuke YOSHIDA, Shiro MAENO, Syuhei OGAWA, Sadayuki ISEKI, Ryosuke AKOH

2019
Author(s): Byeongjun Oh, Minju Kim, Chanwoo Lee, Hunhee Cho, Kyung-In Kang
