Enhancing Robot Capabilities of Environmental Perception through Embedded GPU

Author(s):  
Marco Antonio Simoes Teixeira ◽  
Nicolas Dalmedico ◽  
Higor Barbosa Santos ◽  
Andre Schneider De Oliveira ◽  
Lucia Valeria Ramos De Arruda ◽  
...

2021 ◽  
Vol 13 (6) ◽  
pp. 1064
Author(s):  
Zhangjing Wang ◽  
Xianhan Miao ◽  
Zhen Huang ◽  
Haoran Luo

The development of autonomous vehicles and unmanned aerial vehicles has made the environmental perception of automation equipment a current research focus. An unmanned platform senses its surroundings and then makes decisions based on that environmental information. The major challenge of environmental perception is to detect and classify objects precisely; it is therefore necessary to fuse heterogeneous data so that the sensors' complementary advantages can be exploited. In this paper, a robust object detection and classification algorithm based on the fusion of millimeter-wave (MMW) radar and camera data is proposed. Regions of interest (ROIs) are calculated accurately from the approximate target positions detected by the radar and cameras. A joint classification network extracts micro-Doppler features from the time-frequency spectrum and texture features from the images within the ROIs. A radar-camera fusion dataset is established with a fusion data acquisition platform; it covers intersections, highways, roads, and school playgrounds, during the day and at night. The traditional radar signal algorithm, the Faster R-CNN model, and the proposed fusion network, called RCF-Faster R-CNN, are evaluated on this dataset. The experimental results indicate that the mAP (mean Average Precision) of the fusion network is up to 89.42% higher than that of the traditional radar signal algorithm and up to 32.76% higher than that of Faster R-CNN, especially in low light and under strong electromagnetic clutter.
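As a rough illustration of the ROI step described above, the Python sketch below projects a radar detection into the image plane and sizes a square box inversely with range, so nearer targets get larger ROIs. The projection matrix P, the box-sizing heuristic, and the function name radar_to_image_roi are illustrative assumptions for this sketch, not the authors' RCF-Faster R-CNN implementation.

```python
import numpy as np

def radar_to_image_roi(point_cam, P, base_size=80.0, ref_range=10.0):
    """Project a radar detection (x, y, z in metres, already expressed in
    the camera frame) into the image and return a square ROI (u1, v1, u2, v2).
    base_size is the box side in pixels at the reference range ref_range."""
    # Homogeneous pinhole projection: [u*w, v*w, w] = P @ [x, y, z, 1]
    uvw = P @ np.append(point_cam, 1.0)
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    # Scale the ROI inversely with range (a common heuristic, assumed here).
    rng = np.linalg.norm(point_cam)
    half = 0.5 * base_size * ref_range / max(rng, 1e-3)
    return (int(u - half), int(v - half), int(u + half), int(v + half))

# Example: a target 20 m ahead, slightly left, with a toy projection matrix.
P = np.array([[800., 0., 640., 0.],
              [0., 800., 360., 0.],
              [0., 0., 1., 0.]])
print(radar_to_image_roi(np.array([-1.0, 0.5, 20.0]), P))  # ~(580, 360, 619, 399)
```

The image patch cropped from each ROI would then feed the texture branch of the joint classification network, while the radar return in the same gate feeds the micro-Doppler branch.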


2021 ◽  
Vol 1950 (1) ◽  
pp. 012040
Author(s):  
R.V. Shynu ◽  
K.G. Santhosh Kumar ◽  
R.D. Sambath

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Guotao Xie ◽  
Jing Zhang ◽  
Junfeng Tang ◽  
Hongfei Zhao ◽  
Ning Sun ◽  
...  

Purpose: For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, perception accuracy is closely tied to the performance of the sensors configured on the vehicle. To further enhance sensor performance and thereby improve perception accuracy, this paper introduces an obstacle detection method based on the depth fusion of lidar and radar for challenging conditions, which reduces the false-alarm rate caused by sensor misdetection.

Design/methodology/approach: First, a multi-layer self-calibration method is proposed based on the spatial and temporal relationships between the sensors. Next, a depth fusion model is proposed to improve obstacle detection in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface, and an unstructured road with heavy dust or mist.

Findings: The experimental tests demonstrate that, compared with the use of a single sensor, the depth fusion model filters out radar false alarms as well as the dust or mist point clouds received by the lidar, so obstacle detection accuracy improves under challenging conditions.

Originality/value: The multi-layer self-calibration method improves calibration accuracy and reduces the workload of manual calibration. The lidar-radar depth fusion model achieves high precision by filtering out radar false alarms and the dust or mist point clouds received by the lidar, which improves ICVs' performance in challenging conditions.
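To make the cross-validation idea in the Findings concrete, here is a minimal Python sketch that keeps only obstacles confirmed by both sensors within a distance gate: lidar clusters with no radar return (candidate dust or mist) and radar targets with no lidar support (candidate false alarms) are discarded. The gate value, the position averaging, and all names are assumptions for illustration, not the paper's depth fusion model.

```python
import numpy as np

def cross_validate(lidar_clusters, radar_targets, gate=1.5):
    """Keep detections confirmed by both sensors.

    lidar_clusters, radar_targets: (N, 2) arrays of (x, y) positions in a
    common vehicle frame (extrinsic calibration is assumed already done).
    gate: association distance in metres (an assumed tuning parameter).
    """
    if len(radar_targets) == 0:
        return np.empty((0, 2))
    fused = []
    for lc in lidar_clusters:
        d = np.linalg.norm(radar_targets - lc, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate:
            # Simple position average; the paper's fusion model is richer.
            fused.append(0.5 * (lc + radar_targets[j]))
    return np.array(fused)

lidar = np.array([[10.0, 0.2], [6.0, -3.0]])   # second cluster: dust/mist?
radar = np.array([[10.3, 0.0], [25.0, 4.0]])   # second target: clutter?
print(cross_validate(lidar, radar))            # -> [[10.15, 0.1]]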


PLoS ONE ◽  
2013 ◽  
Vol 8 (4) ◽  
pp. e59690 ◽  
Author(s):  
Russell E. Jackson ◽  
Chéla R. Willey ◽  
Lawrence K. Cormack
