An Intelligent Networked Car-Hailing System Based on the Multi Sensor Fusion and UWB Positioning Technology under Complex Scenes Condition

2021 ◽  
Vol 12 (3) ◽  
pp. 135
Author(s):  
Zhi Wang ◽  
Liguo Zang ◽  
Yiming Tang ◽  
Yehui Shen ◽  
Zhenxuan Wu

To address the difficulty and long delays of picking up vehicles in complex traffic scenes, this paper proposes an intelligent networked car-hailing system for complex scenes based on multi-sensor fusion and Ultra-Wideband (UWB) technology. UWB positioning is adopted in the system, and the positioning data are optimized with an unscented Kalman filter. Based on multi-sensor environment perception, including machine vision and lidar, an anti-collision warning algorithm is proposed for the car-hailing process, which improves its safety margin. When the owner enters the parking lot, the intelligent vehicle automatically locates the owner's position and drives to the owner without human intervention, which provides a new idea for the development of intelligent networked vehicles and effectively improves their navigation accuracy and intelligence.
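A minimal sketch of how the UWB positioning step might look, assuming range measurements to fixed anchors smoothed by an unscented Kalman filter; the anchor layout, noise values, and use of the filterpy library are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch: smoothing a 2-D UWB tag position with an unscented Kalman
# filter (constant-velocity model, range measurements to fixed anchors).
# Anchor coordinates and noise levels are made up for illustration.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

ANCHORS = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 15.0], [0.0, 15.0]])  # UWB anchors (m)
DT = 0.1  # update period (s)

def fx(x, dt):
    # Constant-velocity motion model: state = [px, py, vx, vy]
    px, py, vx, vy = x
    return np.array([px + vx * dt, py + vy * dt, vx, vy])

def hx(x):
    # Measurement model: ranges from the tag position to each anchor
    return np.linalg.norm(ANCHORS - x[:2], axis=1)

points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=len(ANCHORS), dt=DT, fx=fx, hx=hx, points=points)
ukf.x = np.array([10.0, 7.0, 0.0, 0.0])      # initial guess
ukf.R = np.eye(len(ANCHORS)) * 0.10**2       # ~10 cm UWB ranging noise (assumed)
ukf.Q = np.eye(4) * 0.05                     # process noise (tuning parameter)

def step(range_measurements):
    """One predict/update cycle; returns the filtered tag position."""
    ukf.predict()
    ukf.update(np.asarray(range_measurements))
    return ukf.x[:2]
```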

2014 ◽  
Vol 607 ◽  
pp. 791-794 ◽  
Author(s):  
Wei Kang Tey ◽  
Che Fai Yeong ◽  
Yip Loon Seow ◽  
Eileen Lee Ming Su ◽  
Swee Ho Tang

Omnidirectional mobile robots have gained popularity among researchers. However, they are rarely applied in industry, especially in factories, which are far more dynamic than typical research settings. It is therefore important to have a stable and reliable feedback system that enables a more efficient, better-performing controller on the robot. To ensure reliability, many researchers adopt high-cost feedback solutions, such as a global camera, which raises the setup cost of the robot considerably; such setups are also hard to modify and lack flexibility. In this paper, a novel sensor fusion technique is proposed and its results are discussed.


Author(s):  
Antje Westenberger ◽  
Steffen Waldele ◽  
Balaganesh Dora ◽  
Bharanidhar Duraisamy ◽  
Marc Muntzinger ◽  
...  

Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 56 ◽  
Author(s):  
Bin Chen ◽  
Xiaofei Pei ◽  
Zhenfu Chen

Accurate target detection is the basis of normal driving for intelligent vehicles. However, the sensors currently used for target detection each have defects at the perception level, which can be compensated for by sensor fusion. In this paper, the application of sensor fusion to intelligent-vehicle target detection is studied with a millimeter-wave (MMW) radar and a camera. A target-level fusion hierarchy is adopted, and the fusion algorithm is divided into two tracking processing modules and one fusion center module based on a distributed structure. The measurements output by the two sensors enter the tracking processing modules and, after processing by a multi-target tracking algorithm, local tracks are generated and transmitted to the fusion center. In the fusion center, a two-level association structure is designed based on regional collision association and weighted track association. The association between the two sensors' local tracks is completed, and a non-reset federated filter is used to estimate the state of the fused tracks. The experimental results indicate that the proposed algorithm can complete track association between the MMW radar and the camera, and that the fused track state estimation performs well.
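To make the two-level association concrete, the following is a rough sketch (not the authors' code) of a coarse regional gate followed by a weighted, covariance-scaled assignment between radar and camera local tracks; the gate size, track fields, and use of SciPy's assignment solver are assumptions.

```python
# Illustrative two-stage track-to-track association, loosely following the
# abstract's regional gating plus weighted track association.
import numpy as np
from scipy.optimize import linear_sum_assignment

GATE_XY = np.array([3.0, 1.5])   # coarse "collision region" gate in x/y (m), assumed

def coarse_gate(radar_tracks, camera_tracks):
    """Stage 1: keep only radar/camera track pairs whose positions fall in a common region."""
    pairs = []
    for i, r in enumerate(radar_tracks):
        for j, c in enumerate(camera_tracks):
            if np.all(np.abs(r["pos"] - c["pos"]) <= GATE_XY):
                pairs.append((i, j))
    return pairs

def weighted_association(radar_tracks, camera_tracks, pairs):
    """Stage 2: covariance-weighted distance + optimal assignment on the gated pairs."""
    cost = np.full((len(radar_tracks), len(camera_tracks)), 1e6)
    for i, j in pairs:
        d = radar_tracks[i]["pos"] - camera_tracks[j]["pos"]
        S = radar_tracks[i]["cov"] + camera_tracks[j]["cov"]   # combined position covariance
        cost[i, j] = float(d @ np.linalg.solve(S, d))
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]
```

In the paper's pipeline, the associated track pairs would then feed the non-reset federated filter that maintains the fused track state.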


2013 ◽  
Vol 401-403 ◽  
pp. 1368-1372
Author(s):  
Jian Hua Wang ◽  
Yun Cheng Wang ◽  
Fei Xie ◽  
Hai Feng Ding

Vehicle active anti-collision warning systems often adopt frequency-modulated continuous-wave (FMCW) millimeter-wave radar as the signal acquisition and processing device. Affected by the road environment and the devices themselves, the intermediate-frequency (IF) signal inevitably contains various kinds of interference and noise. This paper proposes an iterative Kalman filter algorithm to remove the noise from the IF signal, and MATLAB is used for the simulation analysis. The results show that the improved algorithm effectively enhances the de-noising precision and meets the requirements of real-time operation and accuracy.
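As a rough illustration of the idea, and not the paper's algorithm, a scalar Kalman filter with a random-walk signal model can be applied repeatedly to a noisy IF record; the signal parameters, noise levels, and number of passes below are assumptions.

```python
# Minimal sketch of iterative Kalman-filter de-noising of a radar IF record.
import numpy as np

def kalman_denoise(z, q=1e-4, r=1e-1):
    """Scalar random-walk Kalman filter run over a 1-D signal z."""
    x, p = float(z[0]), 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                       # predict
        gain = p / (p + r)           # update
        x += gain * (zk - x)
        p *= (1.0 - gain)
        out[k] = x
    return out

def iterative_kalman_denoise(z, passes=3, **kw):
    """Re-apply the filter to its own output a few times ("iterative")."""
    y = np.asarray(z, dtype=float)
    for _ in range(passes):
        y = kalman_denoise(y, **kw)
    return y

# Example: noisy beat-frequency tone as a stand-in for the IF signal
t = np.arange(0, 1e-3, 1e-6)
noisy = np.sin(2 * np.pi * 20e3 * t) + 0.3 * np.random.randn(t.size)
denoised = iterative_kalman_denoise(noisy)
```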


Author(s):  
Barnaba Ubezio ◽  
Shashank Sharma ◽  
Guglielmo Van der Meer ◽  
Michele Taragna

End-effector tracking for a mobile manipulator is achieved through sensor fusion techniques, implemented with a particular visual-inertial sensor suite and an Extended Kalman Filter (EKF). The suite is composed of an OptiTrack motion capture system and a Honeywell HG4930 MEMS IMU, for which a further analysis of the mathematical noise model is reported. The filter is constructed in such a way that its complexity remains constant and independent of the visual algorithm, with the possibility of inserting additional sensors to further improve the estimation accuracy. Real-time experiments have been performed with the 12-DOF KUKA VALERI robot, extracting the position and orientation of the end-effector and comparing their estimates with pure sensor measurements. Along with the physical results, issues related to calibration, working frequency, and physical mounting are described.
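A simplified, linear skeleton of the predict/correct loop is sketched below, assuming orientation is handled separately so that only position and velocity are estimated; the actual EKF additionally estimates orientation and IMU biases, and all noise values here are placeholders.

```python
# Loosely-coupled mocap/inertial fusion skeleton: high-rate IMU prediction,
# low-rate position correction. Six-state model is an assumption for illustration.
import numpy as np

class PoseVelFilter:
    def __init__(self, dt_imu=1/600.0):
        self.x = np.zeros(6)              # [position (3), velocity (3)]
        self.P = np.eye(6) * 0.1
        self.dt = dt_imu
        self.Q = np.eye(6) * 1e-4         # process noise (tuning)
        self.R = np.eye(3) * 1e-6         # mocap position noise, ~1 mm std (assumed)

    def predict(self, accel_world):
        """IMU step: integrate acceleration already rotated into the world frame."""
        F = np.eye(6)
        F[:3, 3:] = np.eye(3) * self.dt
        self.x = F @ self.x
        self.x[3:] += accel_world * self.dt
        self.P = F @ self.P @ F.T + self.Q

    def update_position(self, p_meas):
        """Mocap/vision step: correct with a measured end-effector position."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        y = p_meas - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ y
        self.P = (np.eye(6) - K @ H) @ self.P
```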


Author(s):  
Gang Huang ◽  
Zhaozheng Hu ◽  
Mengchao Mu ◽  
Xianglong Wang ◽  
Fan Zhang

Because of limited access to global positioning system (GPS) signals, accurate and reliable localization for intelligent vehicles in underground parking lots is still an open problem. This paper proposes a multi-view, multi-scale localization method to solve this problem. The method is divided into an offline mapping stage and an online localization stage. In the mapping stage, the offline map is generated by fusing 3-D information, WiFi features, visual features, and the trajectory from visual odometry (VO). In the localization stage, WiFi fingerprint matching is exploited for coarse localization. Based on the coarse result, multi-view localization is exploited for image-level localization. Finally, metric localization refines the result. This multi-scale strategy makes it possible to fuse WiFi and visual localization and to greatly reduce the image-matching effort and error rate. Because it exploits more information, multi-view localization is more robust and accurate than single-view localization. The method is tested in a 2,000 m² underground parking lot, and the results demonstrate that it achieves sub-meter localization on average. The proposed method can supplement existing intelligent-vehicle localization techniques.
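The coarse WiFi stage can be pictured as a k-nearest-neighbour lookup of an observed RSSI vector against the offline fingerprint map, as in the sketch below; the map entries, access-point count, and choice of k are invented for illustration, and the visual refinement stages are omitted.

```python
# Illustrative coarse WiFi-fingerprint matching: k-NN in RSSI space over an
# offline map built during the mapping stage.
import numpy as np

# Offline map: each entry is (map position, mean RSSI per access point)
fingerprint_db = [
    {"pos": np.array([2.0, 5.0]),  "rssi": np.array([-45.0, -70.0, -80.0])},
    {"pos": np.array([10.0, 5.0]), "rssi": np.array([-60.0, -55.0, -75.0])},
    {"pos": np.array([18.0, 5.0]), "rssi": np.array([-78.0, -50.0, -58.0])},
]

def coarse_wifi_localization(rssi_query, k=2):
    """Return the average position of the k fingerprints closest in RSSI space."""
    dists = [np.linalg.norm(entry["rssi"] - rssi_query) for entry in fingerprint_db]
    nearest = np.argsort(dists)[:k]
    return np.mean([fingerprint_db[i]["pos"] for i in nearest], axis=0)

# Example query: an RSSI scan observed online
print(coarse_wifi_localization(np.array([-50.0, -65.0, -79.0])))
```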


2020 ◽  
Vol 4 (4) ◽  
pp. 231
Author(s):  
Agus Mulyanto ◽  
Rohmat Indra Borman ◽  
Purwono Prasetyawan ◽  
A Sumarudin

Advanced driver assistance systems (ADAS) are one approach to protecting people from vehicle collisions. The collision warning system is a very important part of ADAS, protecting people from accidents caused by fatigue, drowsiness, and other human errors. Multiple sensors, such as cameras, radar, and light detection and ranging (LiDAR), have been widely used in ADAS for environment perception, and the relative orientation and translation between two sensors must be considered when performing fusion. This paper discusses a real-time collision warning system that fuses a camera and a 2D LiDAR for environment perception and estimates the distance (depth) and angle of obstacles in front of the vehicle, implemented on an Nvidia Jetson Nano using the Robot Operating System (ROS). A calibration process between the camera and the 2D LiDAR is therefore required, which is presented in Section III. After that, integration and testing are carried out using static and dynamic scenarios in the relevant environment. For fusion, a conversion from angle (degrees) to coordinates is implemented. Based on the experiments, an average of 0.197 meters was obtained.
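A hypothetical sketch of such a fusion step: each LiDAR beam (angle, range) is converted to Cartesian coordinates, projected into the image with assumed extrinsics and intrinsics, and the beams falling inside a detection box yield the obstacle's range and bearing. All calibration values below are placeholders, not the calibration described in the paper.

```python
# Camera / 2D-LiDAR fusion sketch with placeholder calibration values.
import numpy as np

K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])                 # assumed camera intrinsics (pixels)
R_CL = np.array([[0.0, -1.0,  0.0],
                 [0.0,  0.0, -1.0],
                 [1.0,  0.0,  0.0]])                  # LiDAR (x fwd, y left, z up) -> camera (x right, y down, z fwd)
t_CL = np.array([0.0, 0.05, 0.10])                    # LiDAR origin in the camera frame (m), assumed

def scan_to_pixels(angles_deg, ranges_m):
    """Convert a planar LiDAR scan (angle, range) to pixel coordinates plus camera depth."""
    a = np.radians(angles_deg)
    pts_l = np.stack([ranges_m * np.cos(a), ranges_m * np.sin(a), np.zeros_like(a)], axis=1)
    pts_c = pts_l @ R_CL.T + t_CL
    uvw = pts_c @ K.T
    return uvw[:, :2] / uvw[:, 2:3], pts_c[:, 2]       # pixel coords, depth along camera z

def obstacle_range_and_bearing(bbox, angles_deg, ranges_m):
    """Attach the median LiDAR range/bearing of the beams falling inside a detection box."""
    u1, v1, u2, v2 = bbox                              # (left, top, right, bottom) in pixels
    uv, depth = scan_to_pixels(angles_deg, ranges_m)
    mask = (uv[:, 0] >= u1) & (uv[:, 0] <= u2) & (depth > 0)
    if not np.any(mask):
        return None
    return float(np.median(np.asarray(ranges_m)[mask])), float(np.median(np.asarray(angles_deg)[mask]))
```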

