A KNN Query Method for Autonomous Driving Sensor Data

Author(s):  
Tang Jie ◽  
Zhang Jiehui ◽  
Zeng Zhixin ◽  
Liu Shaoshan
2020 ◽  
Vol 10 (18) ◽  
pp. 6317 ◽  
Author(s):  
Wilfried Wöber ◽  
Georg Novotny ◽  
Lars Mehnen ◽  
Cristina Olaverri-Monreal

On-board sensory systems in autonomous vehicles make it possible to acquire information about the vehicle itself and about its relevant surroundings. With this information, the vehicle actuators are able to follow the corresponding control commands and behave accordingly. Localization is thus a critical feature in autonomous driving to define trajectories to follow and enable maneuvers. Localization approaches using sensor data are mainly based on Bayes filters. Whitebox models used to this end rely on kinematics and vehicle parameters, such as wheel radii, to infer the vehicle's movement. As a consequence, faulty vehicle parameters lead to poor localization results. Blackbox models, on the other hand, use motion data to model vehicle behavior without relying on vehicle parameters. Due to their high non-linearity, blackbox approaches outperform whitebox models, but faulty behavior such as overfitting is hardly identifiable without intensive experiments. In this paper, we extend blackbox models with kinematics, inferring vehicle parameters and thereby transforming blackbox models into whitebox models. The probabilistic perspective of vehicle movement is extended using random variables that represent vehicle parameters. We validated our approach by acquiring and analyzing simulated noisy movement data from mobile robots and vehicles. The results show that it is possible to estimate vehicle parameters with few kinematic assumptions.
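
The core idea of treating a vehicle parameter as a random variable lends itself to a compact illustration. The sketch below is not the authors' implementation; all values are synthetic and the particle approximation is an assumption chosen for brevity. It estimates an unknown wheel radius from noisy odometry:

```python
import numpy as np

# Minimal sketch, assuming a single unknown wheel radius r: estimate
# p(r | data) from noisy wheel rotations and displacements using a
# simple particle approximation (illustrative, not the paper's method).

rng = np.random.default_rng(0)

true_radius = 0.12                        # metres (unknown to the filter)
ticks = rng.uniform(5.0, 15.0, size=200)  # wheel rotation per step (rad)
displacement = true_radius * ticks + rng.normal(0, 0.01, size=200)

# Particle approximation of the posterior over the radius
particles = rng.uniform(0.05, 0.25, size=5000)
weights = np.ones_like(particles) / len(particles)

for phi, d in zip(ticks, displacement):
    # Likelihood of the observed displacement under each radius hypothesis
    predicted = particles * phi
    weights *= np.exp(-0.5 * ((d - predicted) / 0.01) ** 2)
    weights /= weights.sum()

estimate = np.sum(particles * weights)
print(f"estimated wheel radius: {estimate:.4f} m (true: {true_radius} m)")
```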


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1573 ◽  
Author(s):  
Haojie Liu ◽  
Kang Liao ◽  
Chunyu Lin ◽  
Yao Zhao ◽  
Meiqin Liu

LiDAR sensors can provide dependable 3D spatial information at a low frequency (around 10 Hz) and have been widely applied in autonomous driving and unmanned aerial vehicles (UAVs). However, cameras operate at a higher frequency (around 20 Hz), which has to be decreased to match the LiDAR in a multi-sensor system. In this paper, we propose a novel Pseudo-LiDAR interpolation network (PLIN) to increase the frequency of LiDAR sensor data. PLIN can generate temporally and spatially high-quality point cloud sequences to match the high frequency of cameras. To achieve this goal, we design a coarse interpolation stage guided by consecutive sparse depth maps and motion relationships, followed by a refined interpolation stage guided by the realistic scene. Using this coarse-to-fine cascade structure, our method can progressively perceive multi-modal information and generate accurate intermediate point clouds. To the best of our knowledge, this is the first deep framework for Pseudo-LiDAR point cloud interpolation, which shows appealing applications in navigation systems equipped with LiDAR and cameras. Experimental results demonstrate that PLIN achieves promising performance on the KITTI dataset, significantly outperforming the traditional interpolation method and the state-of-the-art video interpolation technique.
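
To make the interpolation task concrete, the following naive baseline (not the proposed network; the KITTI-like camera intrinsics are illustrative placeholders) linearly blends two consecutive depth maps and back-projects the intermediate map into a pseudo-LiDAR point cloud:

```python
import numpy as np

# Naive baseline for the task PLIN addresses: blend two 10 Hz depth
# frames into a 20 Hz intermediate frame, then lift it to 3D points.
# fx, fy, cx, cy are assumed intrinsics, not values from the paper.

fx, fy, cx, cy = 721.5, 721.5, 609.6, 172.9

def depth_to_point_cloud(depth):
    """Back-project a depth map (H x W, metres) to an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def interpolate_midpoint(depth_t0, depth_t1):
    """Intermediate frame at t = 0.5; keeps only pixels valid in both maps."""
    valid = (depth_t0 > 0) & (depth_t1 > 0)
    mid = np.zeros_like(depth_t0)
    mid[valid] = 0.5 * (depth_t0[valid] + depth_t1[valid])
    return mid

# Two synthetic consecutive depth frames -> one intermediate cloud
d0 = np.random.uniform(5, 50, size=(375, 1242))
d1 = d0 + 0.3   # scene slightly farther in the next frame
cloud = depth_to_point_cloud(interpolate_midpoint(d0, d1))
print(cloud.shape)   # (N, 3) pseudo-LiDAR points
```

PLIN's contribution is precisely to replace this per-pixel blend with a learned coarse-to-fine network that respects scene motion and image guidance.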


Author(s):  
Zhenpei Yang ◽  
Yuning Chai ◽  
Dragomir Anguelov ◽  
Yin Zhou ◽  
Pei Sun ◽  
...  

2021 ◽  
Author(s):  
Yoichi Shiraishi ◽  
Haohao Zhang ◽  
Kazuhiro Motegi

This chapter describes one aspect of the autonomous driving of work vehicles, which consists of work sensing and mobility control. In particular, it focuses on autonomous work sensing and mobility control in a commercial electric robotic lawn mower and proposes an AI-based approach for work vehicles of this kind. The two functions, work sensing and mobility control, are closely correlated: for efficiency, the traveling speed of a lawn mower should be reduced when the workload is high, and vice versa. At the same time, it is important to conserve the battery, which powers both work execution and mobility. Based on these requirements, this chapter focuses on developing a system for estimating lawn grass lengths or ground conditions in a robotic lawn mower. To this end, two AI algorithms, namely random forest (RF) and a shallow neural network (SNN), are developed and evaluated on observation data obtained by fusing ten types of sensor data. In several experiments on real-world lawn grass areas, the RF algorithm achieved a 92.3% correct estimation ratio on the fused sensor data, while the SNN achieved 95.0%. Furthermore, the SNN reached 94.0% accuracy in experiments where sensor data were obtained continuously while the robotic lawn mower was operating. The proposed estimation system is presently being developed by integrating two motor control systems into a robotic lawn mower, one for lawn grass cutting and the other for the robot's mobility.
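
A minimal sketch of the two estimators being compared, using scikit-learn stand-ins on synthetic ten-feature vectors (the chapter's real fused sensor features, labels, and network sizes are not specified here and are assumed for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the fused ten-sensor observation vectors;
# the binary label mimics a short-vs-long grass condition.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))             # 10 fused sensor features
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # illustrative ground condition

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Random forest vs. shallow neural network, as in the chapter's comparison
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
snn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print(f"RF accuracy:  {rf.score(X_te, y_te):.3f}")
print(f"SNN accuracy: {snn.score(X_te, y_te):.3f}")
```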


Author(s):  
Nicholas Merrill ◽  
Azim Eskandarian

Abstract The traditional approaches to autonomous, vision-based vehicle systems are limited by their dependency on robust algorithms, sensor fusion, detailed scene construction, and high-quality maps. End-to-end models offer a means of circumventing these limitations by directly mapping an image input to a steering angle output for lateral control. Existing end-to-end models, however, either fail to capture temporally dynamic information or rely on computationally expensive Recurrent Neural Networks (RNN), which are prone to error accumulation via feedback. This paper proposes a Multi-Task Learning (MTL) network architecture that leverages available dynamic sensor data as a target for auxiliary tasks. This method improves steering angle prediction by facilitating the extraction of temporal dependencies from sequentially stacked image inputs. Evaluations performed on the publicly available Comma.ai dataset show a 28.6% improvement in steering angle prediction over existing end-to-end methods.
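
The MTL idea can be sketched in PyTorch as follows. This is a hedged illustration, not the paper's architecture: a shared backbone encodes a stack of consecutive frames, one head regresses the steering angle, and an auxiliary head regresses a dynamic sensor target such as vehicle speed, whose loss shapes the shared temporal features:

```python
import torch
import torch.nn as nn

class MTLSteeringNet(nn.Module):
    """Illustrative multi-task net: steering (main) + speed (auxiliary)."""
    def __init__(self, n_stacked_frames=4):
        super().__init__()
        # Shared backbone over channel-stacked RGB frames
        self.backbone = nn.Sequential(
            nn.Conv2d(3 * n_stacked_frames, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steering_head = nn.Linear(48, 1)   # main task
        self.speed_head = nn.Linear(48, 1)      # auxiliary sensor target

    def forward(self, x):
        feats = self.backbone(x)
        return self.steering_head(feats), self.speed_head(feats)

model = MTLSteeringNet()
frames = torch.randn(8, 12, 160, 320)   # batch of 4-frame RGB stacks
steer, speed = model(frames)

# Joint loss: the auxiliary term encourages temporally aware features
# (targets here are dummies; the 0.5 weighting is an assumption)
loss = nn.functional.mse_loss(steer, torch.zeros_like(steer)) \
     + 0.5 * nn.functional.mse_loss(speed, torch.zeros_like(speed))
loss.backward()
```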


2017 ◽  
Vol 865 ◽  
pp. 429-433
Author(s):  
Sung Bum Park ◽  
Hyeok Chan Kwon ◽  
Dong Hoon Lee

Autonomous cars recognize their surroundings through multiple sensors and make decisions to control the car so that it arrives at its destination without the driver's intervention. In such an environment, forged sensor data could lead to a critical accident that threatens the life of the driver. This paper discusses a way to obtain accurate driving information through a sensor fusion algorithm that is resilient against data forgery and tampering.
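
One simple instance of forgery-resilient fusion, named plainly as a substitute since the paper's algorithm is not detailed in this abstract, is robust consensus over redundant sensors: readings that deviate too far from the median are discarded before averaging, so a single forged sensor cannot drag the fused estimate.

```python
import numpy as np

def robust_fuse(readings, k=3.0):
    """Median/MAD outlier rejection over redundant readings (illustrative)."""
    readings = np.asarray(readings, dtype=float)
    med = np.median(readings)
    mad = np.median(np.abs(readings - med)) or 1e-9   # avoid zero threshold
    # Keep readings within k robust standard deviations of the median
    inliers = readings[np.abs(readings - med) <= k * 1.4826 * mad]
    return inliers.mean(), inliers

# Three honest speed sensors and one forged reading
fused, kept = robust_fuse([13.9, 14.1, 14.0, 42.0])
print(f"fused speed: {fused:.2f} m/s, kept {len(kept)} of 4 sensors")
```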


2021 ◽  
Vol 17 (5) ◽  
pp. 155014772110183
Author(s):  
Ziyue Li ◽  
Qinghua Zeng ◽  
Yuchao Liu ◽  
Jianye Liu ◽  
Lin Li

Image recognition is susceptible to interference from the external environment, making it challenging to recognize traffic lights accurately and reliably in all-time and all-weather conditions. This article proposes an improved vision-based traffic light recognition algorithm for autonomous driving, integrating deep learning and multi-sensor data fusion assist (MSDA). We introduce a method to dynamically obtain the optimal size of the region of interest (ROI), comprising four aspects. First, based on multi-sensor data (RTK BDS/GPS, IMU, camera, and LiDAR) acquired in a normal environment, we generate a prior map containing sufficient traffic light information. Then, by analyzing the relationship between sensor error and the optimal ROI size, we build an adaptively dynamic adjustment (ADA) model. Furthermore, according to the multi-sensor data fusion positioning and the ADA model, the optimal ROI can be obtained to predict the location of traffic lights. Finally, YOLOv4 is employed to extract and identify the image features. We evaluated our algorithm on a public data set and in an actual city road test at night. The experimental results demonstrate that the proposed algorithm achieves a relatively high accuracy rate in complex scenarios and can promote the engineering application of autonomous driving technology.
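
The dynamic-ROI step can be illustrated as follows. This is a sketch under assumed camera intrinsics and an assumed error-to-pixels mapping, not the authors' ADA model: a traffic light's mapped 3D position is projected into the image, and the ROI half-size grows with the current fused-positioning error so the light stays inside the ROI.

```python
import numpy as np

# Assumed pinhole intrinsics (placeholders, not from the paper)
fx, fy, cx, cy = 1000.0, 1000.0, 640.0, 360.0

def predict_roi(light_cam_xyz, pos_error_m, base_half=40):
    """ROI centre from projection; half-size grows with positioning error."""
    x, y, z = light_cam_xyz                  # light in the camera frame (m)
    u = fx * x / z + cx
    v = fy * y / z + cy
    # Positioning error (metres) mapped to pixels at depth z
    half = base_half + int(fx * pos_error_m / z)
    return (int(u - half), int(v - half), int(u + half), int(v + half))

# Light 40 m ahead and up-left of the camera; 0.8 m positioning error
print(predict_roi((-5.0, -3.0, 40.0), pos_error_m=0.8))
```

The returned box would then be the crop handed to the YOLOv4 detector, keeping the search region as small as the positioning uncertainty allows.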


2020 ◽  
Author(s):  
Huihui Pan ◽  
Weichao Sun ◽  
Qiming Sun ◽  
Huijun Gao

Abstract Environmental perception is one of the key technologies for realizing autonomous vehicles, which are often equipped with multiple sensors that form a multi-source environmental perception system. These sensors are very sensitive to light and background conditions, which introduces a variety of global and local fault signals that pose great safety risks to the autonomous driving system during long-term operation. In this paper, a real-time data fusion network with fault diagnosis and a fault tolerance mechanism is designed. By introducing prior features to make the backbone network lightweight, the features of the input data can be extracted accurately in real time. Through the temporal and spatial correlation between sensor data, sensor redundancy is utilized to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy, and that it can accurately detect the location of the target when some sensors are out of focus or out of order.
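
The confidence-diagnosis idea can be sketched as follows. This is illustrative only; the exponential scoring, weights, and threshold are assumptions, not the paper's network. Each sensor's reading is scored by its temporal consistency with its own recent history and its spatial consistency with the other sensors, and only high-confidence readings enter the fusion:

```python
import numpy as np

def sensor_confidence(history, latest, peers, w_t=0.5, w_s=0.5):
    """Confidence in [0, 1] from temporal and spatial (cross-sensor) terms."""
    temporal = np.exp(-abs(latest - np.mean(history)))   # vs. own history
    spatial = np.exp(-abs(latest - np.median(peers)))    # vs. other sensors
    return w_t * temporal + w_s * spatial

# Distance to a target from three redundant sensors; the camera is faulty
readings = {"lidar": 20.1, "radar": 19.8, "camera": 55.0}
histories = {"lidar": [20.0, 20.2], "radar": [19.9, 19.7],
             "camera": [20.3, 19.9]}

conf = {}
for name, val in readings.items():
    peers = [v for k, v in readings.items() if k != name]
    conf[name] = sensor_confidence(histories[name], val, peers)

# Fuse only sensors above an (assumed) confidence threshold
fused = np.mean([readings[k] for k in conf if conf[k] > 0.4])
print(conf, f"fused distance: {fused:.2f} m")
```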

