Panoramic Vision System for Autonomous Driving Vehicle

2014 ◽  
Vol 644-650 ◽  
pp. 497-501
Author(s):  
Yu Bin Zhou

A highly effective vision system is important for autonomous driving vehicles. A six-camera panoramic vision system based on an FPGA+DSP architecture for intelligent vehicles is presented in this paper. The system comprises a digital image acquisition module and a high-performance image processing module that operate independently of each other. The latter, which includes two C6416 DSP chips and one high-performance Virtex-4 FPGA, carries out the complex real-time image processing required during autonomous driving, such as cylindrical panoramic image rebuilding and lane detection and tracking. The proposed algorithms were also optimized for the specific characteristics of the hardware: highly parallel processing on the FPGA and pipelined processing on the DSPs.
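
The abstract does not detail the rebuilding algorithm itself; as a rough sketch of the cylindrical reprojection step such a system performs per camera, the following Python/NumPy fragment warps one frame onto a cylindrical surface (the focal length f, nearest-neighbor sampling, and the inverse-mapping formulation are illustrative assumptions, not taken from the paper).

```python
import numpy as np

def cylindrical_warp(img, f):
    """Project a pinhole camera image onto a cylinder of radius f.

    img: H x W x 3 uint8 array; f: assumed focal length in pixels.
    Returns the warped image; out-of-bounds samples stay black.
    """
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    out = np.zeros_like(img)
    # Cylinder coordinates (theta, height) for every output pixel.
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    theta = (xs - cx) / f
    # Back-project each cylinder point onto the original image plane.
    x_src = f * np.tan(theta) + cx
    y_src = (ys - cy) / np.cos(theta) + cy
    valid = (x_src >= 0) & (x_src < w - 1) & (y_src >= 0) & (y_src < h - 1)
    out[ys[valid], xs[valid]] = img[y_src[valid].astype(int),
                                    x_src[valid].astype(int)]
    return out
```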

2011 ◽  
Vol 143-144 ◽  
pp. 737-741 ◽  
Author(s):  
Hai Bo Liu ◽  
Wei Wei Li ◽  
Yu Jie Dong

The vision system is an important part of the whole robot soccer system. In order to win the game, the robot system must be quicker and more accurate. A color image segmentation method using an improved seed-fill algorithm in the YUV color space is introduced in this paper. The new method dramatically reduces the amount of computation and speeds up the image processing. A comparison with the old method based on the RGB color space is presented in the paper. The second step of the vision subsystem is identifying the color blocks separated in the first step, for which the improved seed-fill algorithm is used. The implementation on the MiroSot Soccer Robot System shows that the new method is fast and accurate.
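
As a minimal sketch of seed-fill (flood-fill) segmentation in the YUV color space, the fragment below grows a region from a seed pixel; the tolerance thresholds and the 4-connectivity choice are illustrative assumptions, not the authors' exact parameters.

```python
from collections import deque
import numpy as np

def seed_fill_yuv(yuv, seed, tol=(40, 12, 12)):
    """Grow a region from `seed` over pixels whose Y, U, V values stay
    within `tol` of the seed color (4-connectivity).

    yuv: H x W x 3 array; seed: (row, col). Returns a boolean mask."""
    h, w = yuv.shape[:2]
    sy, sx = seed
    ref = yuv[sy, sx].astype(int)
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([(sy, sx)])
    mask[sy, sx] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if np.all(np.abs(yuv[ny, nx].astype(int) - ref) <= tol):
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```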


Author(s):  
Wael Farag ◽  

In this paper, a real-time road-object detection and tracking (LR_ODT) method for autonomous driving is proposed. The method is based on the fusion of lidar and radar measurement data, with both sensors installed on the ego car, and a customized Unscented Kalman Filter (UKF) is employed for their data fusion. The merits of both devices are combined using the proposed fusion approach to precisely provide both pose and velocity information for objects moving on the roads around the ego car. Unlike other detection and tracking approaches, the balanced treatment of both pose-estimation accuracy and real-time performance is the main contribution of this work. The proposed technique is implemented in the high-performance language C++ and utilizes highly optimized math and optimization libraries for best real-time performance. Simulation studies have been carried out to evaluate the performance of the LR_ODT for tracking bicycles, cars, and pedestrians. Moreover, the performance of the UKF fusion is compared to that of Extended Kalman Filter (EKF) fusion, showing its superiority. The UKF outperformed the EKF on all test cases and at all state-variable levels (-24% average RMSE). The employed fusion technique also shows an outstanding improvement in tracking performance over the use of a single device (-29% RMSE with lidar only and -38% RMSE with radar only).
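
The paper's customized UKF is not detailed in the abstract; the following sketch only illustrates one unscented-transform prediction step under an assumed constant-velocity process model (the state layout, scaling constant, and process-noise approximation are hypothetical choices, not the filter described above).

```python
import numpy as np

def unscented_predict(x, P, dt, q_acc=0.5):
    """One UKF prediction step for a constant-velocity state [px, py, vx, vy].

    x: state mean (4,); P: covariance (4, 4); dt: time step in seconds;
    q_acc: assumed process-noise acceleration standard deviation.
    Returns the predicted mean and covariance via the unscented transform."""
    n = x.size
    lam = 3.0 - n                                   # classic scaling choice
    S = np.linalg.cholesky((n + lam) * P)
    # 2n+1 sigma points around the mean.
    sigmas = np.column_stack([x, x[:, None] + S, x[:, None] - S]).T
    # Propagate each sigma point through the (linear) motion model.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    sigmas_pred = sigmas @ F.T
    # Weighted recombination of the sigma points.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wm[0] = lam / (n + lam)
    x_pred = wm @ sigmas_pred
    diff = sigmas_pred - x_pred
    # Simplified (diagonal) process noise for the assumed CV model.
    Q = q_acc**2 * np.diag([dt**4 / 4, dt**4 / 4, dt**2, dt**2])
    P_pred = diff.T @ (wm[:, None] * diff) + Q
    return x_pred, P_pred
```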


2014 ◽  
Vol 668-669 ◽  
pp. 1098-1101
Author(s):  
Jian Wang ◽  
Zhen Hai Zhang ◽  
Ke Jie Li ◽  
Hai Yan Shao ◽  
Tao Xu ◽  
...  

Catadioptric panoramic vision systems have been widely used in many fields and play an especially important role in the environment perception of unmanned platforms. However, the resolution of such systems is currently not very high, usually less than 5 million pixels. Even when the resolution is high, the unwrapping and rectification of the panoramic video is carried out off-line, and the system is applied only when stationary or moving slowly. This paper proposes an unwrapping and rectification method for a high-resolution catadioptric panoramic vision system used during non-stationary motion. It segments the dynamic circular mark region accurately and obtains the coordinates of the center of the circular image in real time, shortening the image processing time; because the center coordinates and radius of the circular mark region are obtained, the image distortion caused by inaccurate center coordinates is reduced. During image rectification, after obtaining the radial distortion parameters (K1, K2, K3), the decentering distortion parameters (P1, P2), and a correction factor with no physical meaning, these are used to fit the rectification polynomial so that the panoramic video can be rectified without distortion.
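
The abstract names the radial parameters (K1, K2, K3) and decentering parameters (P1, P2); the sketch below shows how these appear in the standard Brown-Conrady distortion model as an illustration only, since the exact rectification polynomial fitted in the paper may differ.

```python
import numpy as np

def brown_conrady_distort(xn, yn, K1, K2, K3, P1, P2):
    """Apply the Brown-Conrady model to normalized image coordinates.

    xn, yn: arrays of normalized (distortion-free) coordinates.
    Returns the distorted coordinates; inverting this mapping
    (e.g. by fixed-point iteration) yields the rectified image."""
    r2 = xn**2 + yn**2
    # Radial term driven by K1, K2, K3.
    radial = 1 + K1 * r2 + K2 * r2**2 + K3 * r2**3
    # Decentering (tangential) term driven by P1, P2.
    xd = xn * radial + 2 * P1 * xn * yn + P2 * (r2 + 2 * xn**2)
    yd = yn * radial + P1 * (r2 + 2 * yn**2) + 2 * P2 * xn * yn
    return xd, yd
```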


Author(s):  
Fuat Coşkun ◽  
Özgür Tuncer ◽  
Elif Karslıgil ◽  
Levent Güvenç

Lane keeping assistance systems help the driver follow the lane centerline. While lane keeping assistance systems are available in some mass production vehicles, they have not found widespread use and are not yet as common as ESP or ACC. Lane keeping assistance systems still need further development: previously available systems have to be continuously adapted to newer vehicle models and fully tested after each adaptation. An image processing algorithm for lane detection and tracking, a lane keeping assistance controller design, and a real-time hardware-in-the-loop (HiL) simulator developed for testing the designed lane keeping assistance system are therefore presented in this paper. The high-fidelity, high-order, realistic, nonlinear vehicle model in Carmaker HiL runs as software in a real-time simulation on a dSpace compact simulator with the DS1005 and DS2210 boards. A PC is used for processing video frames coming from an in-vehicle camera pointed towards the road ahead. Lane detection and tracking computations, including the fitting of composite Bezier curves to curved lanes, are carried out on this PC. In the present setup, the camera is a virtual camera attached to the virtual vehicle in Carmaker and provides video frames from the Carmaker animation screen. A dSpace MicroAutoBox obtains the lane data from the PC and the Carmaker vehicle data from the dSpace compact simulator, calculates the required steering actions, and sends them to the Carmaker vehicle model. The lane keeping controller is designed in the Matlab toolbox COMES using parameter space techniques. The motivation behind this approach is to develop the lane keeping assistance system as far as possible in a laboratory hardware-in-the-loop setting before time-consuming, expensive, and potentially dangerous road testing. Lane detection, tracking, and curved lane fit results, together with hardware-in-the-loop simulation results of the lane keeping controller combined with the image processing system, are used to demonstrate the effectiveness of the proposed method.
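
As an illustrative sketch of fitting composite Bezier curves to curved lanes, the fragment below fits one cubic Bezier segment to ordered lane points by least squares; the chord-length parameterization and the segment-joining convention are assumptions, since the paper's exact fitting procedure is not given in the abstract.

```python
import numpy as np

def fit_cubic_bezier(points):
    """Least-squares fit of one cubic Bezier segment to ordered lane points.

    points: (N, 2) array of lane-marking pixel coordinates, N >= 4.
    Returns the four control points; a composite curve can be built by
    fitting consecutive segments and sharing their end control points."""
    pts = np.asarray(points, dtype=float)
    # Chord-length parameterization mapped to [0, 1].
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    # Cubic Bernstein basis matrix.
    B = np.column_stack([(1 - t)**3,
                         3 * t * (1 - t)**2,
                         3 * t**2 * (1 - t),
                         t**3])
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    return ctrl                       # (4, 2) control points

def eval_bezier(ctrl, t):
    """Evaluate the cubic Bezier at parameter values t in [0, 1]."""
    t = np.atleast_1d(t)[:, None]
    B = np.column_stack([(1 - t)**3, 3 * t * (1 - t)**2,
                         3 * t**2 * (1 - t), t**3])
    return B @ ctrl
```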


Electronics ◽  
2021 ◽  
Vol 10 (19) ◽  
pp. 2429
Author(s):  
Bin Zhang

Grayscale morphology is a powerful tool in image, video, and visual applications. A reconfigurable processor is proposed for grayscale image morphological processing. The architecture of the processor combines a reconfigurable grayscale processing module (RGPM) with peripheral circuits. The RGPM, which consists of four grayscale computing units, performs grayscale morphological operations and implements related algorithms at more than 100 frames/s for a 1024 × 1024 image. The peripheral circuits control the entire image processing and dynamic reconfiguration process. Synthesis results show that the proposed processor can provide 43.12 GOPS and achieve 8.87 GOPS/mm² at a 220-MHz system clock. The simulation and experimental results show that the processor is suitable for high-performance embedded systems.
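As a software reference for the kind of operation a grayscale computing unit evaluates, here is a minimal sketch of grayscale dilation with a flat structuring element (erosion swaps max for min); the RGPM's actual hardware datapath is not described at this level in the abstract.

```python
import numpy as np

def gray_dilate(img, se):
    """Grayscale dilation of a 2-D image by a flat structuring element.

    img: H x W array of intensities; se: boolean mask of the element.
    Border pixels are handled by edge padding."""
    kh, kw = se.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(img)
    # Offsets of the active cells of the structuring element.
    offs = [(dy, dx) for dy in range(kh) for dx in range(kw) if se[dy, dx]]
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = max(padded[y + dy, x + dx] for dy, dx in offs)
    return out
```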

