vehicle localization
Recently Published Documents

TOTAL DOCUMENTS: 539 (174 in the last five years)
H-INDEX: 30 (7 in the last five years)

2022 · Vol. 22 (1) · pp. 1-23
Author(s): Nan Jiang, Debin Huang, Jing Chen, Jie Wen, Heng Zhang, ...

Precise measurement of vehicle location is a critical task in enhancing autonomous driving in terms of intelligent decision making and safe transportation. The Internet of Vehicles (IoV) is an important infrastructure supporting autonomous driving, allowing real-time exchange and sharing of road information for localizing vehicles. The Global Positioning System (GPS) is widely used in traditional IoV systems, but it cannot meet the key application requirements of autonomous driving due to meter-level error and signal deterioration. In this article, we propose a novel solution, named Semi-Direct Monocular Visual-Inertial Odometry using Point and Line Features (SDMPL-VIO), for precise vehicle localization. Our SDMPL-VIO model takes advantage of a low-cost Inertial Measurement Unit (IMU) and a monocular camera, using them as sensors to acquire surrounding environmental information. The proposed Visual-Inertial Odometry (VIO) takes both point and line features into account and can cope with weakly textured and dynamic environments. A semi-direct method handles keyframes and non-keyframes, respectively, and dual sliding-window mechanisms effectively fuse point-line and IMU information. To evaluate the SDMPL-VIO system, we conduct extensive experiments on an indoor dataset (EuRoC) and an outdoor dataset (KITTI) drawn from real-world applications. The experimental results show that SDMPL-VIO achieves better accuracy than current mainstream VIO systems, and it is particularly robust on weakly textured, fast-moving, and other challenging sequences.
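The keyframe/non-keyframe split and the sliding-window bookkeeping described above can be pictured with a minimal sketch in Python. The Frame fields, thresholds, and window length below are illustrative assumptions, not the authors' SDMPL-VIO implementation: a frame is promoted to a keyframe when parallax grows or feature tracking degrades, and the oldest keyframe falls out of a fixed-size window (a real back end would marginalize its states rather than discard them).

```python
# Minimal sketch of a keyframe sliding window -- assumed thresholds and field
# names, not the authors' SDMPL-VIO code.
from collections import deque
from dataclasses import dataclass


@dataclass
class Frame:
    frame_id: int
    mean_parallax_px: float   # average feature parallax w.r.t. the last keyframe
    tracked_features: int     # point + line features still tracked


WINDOW_SIZE = 10              # assumed sliding-window length
PARALLAX_THRESH_PX = 10.0     # assumed parallax threshold for keyframe promotion
MIN_TRACKED = 50              # assumed minimum number of tracked features


def is_keyframe(frame: Frame) -> bool:
    """Promote a frame to keyframe when parallax is large or tracking degrades."""
    return (frame.mean_parallax_px > PARALLAX_THRESH_PX
            or frame.tracked_features < MIN_TRACKED)


keyframe_window = deque(maxlen=WINDOW_SIZE)


def process_frame(frame: Frame) -> None:
    if is_keyframe(frame):
        # When the window is full, the oldest keyframe drops out automatically;
        # a real system would marginalize its states instead of discarding them.
        keyframe_window.append(frame)
    # Non-keyframes would only be tracked (direct image alignment), not optimized.


process_frame(Frame(frame_id=0, mean_parallax_px=12.3, tracked_features=120))
print(len(keyframe_window))   # 1
```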


Electronics · 2021 · Vol. 10 (24) · pp. 3092
Author(s): Yonghui Liang, Yuqing He, Junkai Yang, Weiqi Jin, Mingqi Liu

Accurate localization of surrounding vehicles helps drivers perceive their environment and can be described by two parameters: depth and direction angle. This research presents a new, efficient monocular-vision-based pipeline for obtaining a vehicle's location. We proposed a plug-and-play convolutional block, combined with a basic target detection algorithm, to improve the accuracy of vehicle bounding boxes. The boxes were then transformed into actual depth and angle through a conversion method derived from monocular imaging geometry and camera parameters. Experimental results on the KITTI dataset showed the high accuracy and efficiency of the proposed method: mAP increased by about 2% with an additional inference time of less than 5 ms, the average depth error was about 4% for near objects and about 7% for far objects, and the average angle error was about two degrees.
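The bounding-box-to-location conversion can be illustrated with a standard flat-ground pinhole model, sketched below in Python. The intrinsics, camera height, and the assumption that the box bottom touches the ground plane are illustrative; the paper's actual conversion method may differ.

```python
# Minimal sketch of a bounding-box -> (depth, direction angle) conversion under
# a flat-ground pinhole model. Intrinsics, camera height, and the ground-contact
# assumption are illustrative, not the paper's exact conversion method.
import math


def locate_from_bbox(u_center: float, v_bottom: float,
                     fx: float, fy: float, cx: float, cy: float,
                     cam_height: float) -> tuple:
    """Return (depth_m, angle_deg) of the box's ground-contact point."""
    if v_bottom <= cy:
        raise ValueError("box bottom must lie below the principal point")
    depth = fy * cam_height / (v_bottom - cy)   # ray-ground intersection
    lateral = (u_center - cx) * depth / fx      # lateral offset in metres
    angle = math.degrees(math.atan2(lateral, depth))
    return depth, angle


# Illustrative KITTI-like intrinsics and camera height (not calibrated values).
print(locate_from_bbox(u_center=800.0, v_bottom=300.0,
                       fx=721.5, fy=721.5, cx=609.6, cy=172.9,
                       cam_height=1.65))
```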


2021 · Vol. 7 (12) · pp. 270
Author(s): Daniel Tøttrup, Stinus Lykke Skovgaard, Jonas le Fevre Sejersen, Rui Pimentel de Figueiredo

In this work we present a novel end-to-end solution for tracking objects (i.e., vessels) in dynamic maritime environments using video streams from aerial drones. Our method relies on deep features, learned from realistic simulation data, for robust object detection, segmentation, and tracking. Furthermore, we propose rotated bounding-box representations, computed by taking advantage of pixel-level object segmentation, which improve tracking accuracy by reducing erroneous data associations when combined with appearance-based features. A thorough set of experiments in a realistic shipyard simulation environment demonstrates that our method can accurately and quickly detect and track dynamic objects seen from a top view.
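A common way to obtain a rotated bounding box from a pixel-level segmentation is a minimum-area rectangle fit over the mask pixels; the sketch below (Python with OpenCV) shows that step in isolation. The synthetic mask and the downstream use in a tracker are assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: rotated bounding box from a binary segmentation mask via
# OpenCV's minimum-area rectangle. The mask source and downstream tracker are
# assumptions, not the paper's exact pipeline.
import cv2
import numpy as np


def rotated_box_from_mask(mask: np.ndarray):
    """mask: HxW binary array (non-zero = object). Returns ((cx, cy), (w, h), angle)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    points = np.stack([xs, ys], axis=1).astype(np.float32)   # (x, y) order
    return cv2.minAreaRect(points)   # ((cx, cy), (w, h), angle in degrees)


# Tiny synthetic example: draw a tilted rectangle into an empty mask.
mask = np.zeros((200, 200), dtype=np.uint8)
corners = cv2.boxPoints(((100.0, 100.0), (80.0, 30.0), 25.0)).astype(np.int32)
cv2.fillPoly(mask, [corners], 1)
print(rotated_box_from_mask(mask))   # roughly recovers the drawn rectangle
# (width/height ordering and angle convention vary with the OpenCV version)
```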


2021 · Vol. 199 · pp. 108422
Author(s): Hua Qin, Weihong Chen, Weimin Chen, Ni Li, Min Zeng, ...

2021
Author(s): Wei Wang, Hu Sun, Yuqiang Jin, Minglei Fu, Kun Li, ...

Author(s): Jeong Sik Kim, Woo Young Choi, Yong Woo Jeong, Chung Choo Chung

Author(s): Matheus M. Dos Santos, Giovanni G. De Giacomo, Paulo L. J. Drews-Jr, Silvia S. C. Botelho, Claudio D. Mello

2021 · pp. 027836492110457
Author(s): Tim Y. Tang, Daniele De Martini, Shangzhe Wu, Paul Newman

Traditional approaches to outdoor vehicle localization assume a reliable prior map is available, typically built using the same sensor suite as the on-board sensors used during localization. This work makes a different assumption: that an overhead image of the workspace is available, and uses it as a map for range-sensor-based vehicle localization, where the range sensors are radars and lidars. Our motivation is simple: off-the-shelf, publicly available overhead imagery such as Google satellite images can be a ubiquitous, cheap, and powerful tool for vehicle localization when a usable prior sensor map is unavailable, inconvenient, or expensive. The challenge is that overhead images are not directly comparable to data from ground range sensors because of their starkly different modalities. We present a learned metric localization method that not only handles the modality difference but is also cheap to train, learning in a self-supervised fashion without requiring metrically accurate ground truth. By evaluating across multiple real-world datasets, we demonstrate the robustness and versatility of our method for various sensor configurations in cross-modality localization, achieving localization errors on par with a prior supervised approach while requiring no pixel-wise aligned ground truth for supervision at training time. We pay particular attention to the use of millimeter-wave radar, which, owing to its complex interaction with the scene and its immunity to weather and lighting conditions, makes for a compelling and valuable use case.
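One way to picture the cross-modality comparison is to rasterize a range scan into a bird's-eye-view image that lives in the same overhead view as a satellite tile. The extent, resolution, random scan, and comparison step in the Python sketch below are illustrative assumptions, not the paper's learned metric-localization pipeline.

```python
# Minimal sketch: rasterizing a lidar scan into a bird's-eye-view occupancy
# image for comparison against an overhead tile. Extent, resolution, and the
# random scan are illustrative assumptions.
import numpy as np


def lidar_to_bev(points_xy: np.ndarray, extent_m: float = 50.0,
                 resolution_m: float = 0.5) -> np.ndarray:
    """points_xy: Nx2 array of (x, y) in metres, with the vehicle at the origin."""
    size = int(2 * extent_m / resolution_m)
    bev = np.zeros((size, size), dtype=np.float32)
    cols = ((points_xy[:, 0] + extent_m) / resolution_m).astype(int)
    rows = ((points_xy[:, 1] + extent_m) / resolution_m).astype(int)
    keep = (cols >= 0) & (cols < size) & (rows >= 0) & (rows < size)
    bev[rows[keep], cols[keep]] = 1.0   # mark occupied cells
    return bev


# A learned embedding would then compare this BEV image against candidate crops
# of the overhead image to estimate the vehicle's pose.
scan = np.random.uniform(-40.0, 40.0, size=(2048, 2))
print(lidar_to_bev(scan).shape)   # (200, 200)
```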

