3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection

2017 · Vol 68 · pp. 14-27
Author(s): Christian Häne, Lionel Heng, Gim Hee Lee, Friedrich Fraundorfer, Paul Furgale, ...
Sensors · 2020 · Vol 20 (8) · pp. 2385
Author(s): George Dimas, Dimitris E. Diamantis, Panagiotis Kalozoumis, Dimitris K. Iakovidis

Every day, visually challenged people (VCP) face mobility restrictions and accessibility limitations. A short walk to a nearby destination, which other individuals take for granted, becomes a challenge. To tackle this problem, we propose a novel visual perception system for outdoor navigation that can evolve into an everyday visual aid for VCP. The proposed methodology is integrated into a wearable visual perception system (VPS). The approach efficiently combines deep-learning object recognition models with an obstacle detection methodology based on human eye-fixation prediction using Generative Adversarial Networks. Uncertainty-aware modeling of obstacle risk assessment and spatial localization, following a fuzzy-logic approach, is employed for robust obstacle detection. This combination can translate the position and type of detected obstacles into descriptive linguistic expressions, allowing users to easily understand where obstacles lie in the environment and avoid them. The performance and capabilities of the proposed method are investigated in the context of safe navigation of VCP in outdoor environments of cultural interest through obstacle recognition and detection. Additionally, the proposed system is compared with relevant state-of-the-art systems for the safe navigation of VCP, with a focus on design and user-requirement satisfaction.
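The fuzzy-logic step described above can be illustrated with a minimal sketch: distance and bearing estimates are mapped through membership functions to linguistic terms, which are then assembled into a spoken-style description. All function names, term sets, and thresholds below are hypothetical, not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): mapping an
# obstacle's estimated distance and bearing to linguistic expressions
# using simple triangular fuzzy membership functions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def describe_obstacle(distance_m, bearing_deg, label):
    """Return a linguistic description such as 'bench very close, ahead'."""
    # Hypothetical term sets; a real system would tune these empirically.
    distance_terms = {
        "very close": tri(distance_m, -1.0, 0.0, 1.5),
        "close":      tri(distance_m, 0.5, 2.0, 3.5),
        "far":        tri(distance_m, 2.5, 5.0, 10.0),
    }
    bearing_terms = {
        "to your left":  tri(bearing_deg, -90.0, -45.0, 0.0),
        "ahead":         tri(bearing_deg, -30.0, 0.0, 30.0),
        "to your right": tri(bearing_deg, 0.0, 45.0, 90.0),
    }
    # Defuzzify by picking the term with the highest membership.
    d = max(distance_terms, key=distance_terms.get)
    b = max(bearing_terms, key=bearing_terms.get)
    return f"{label} {d}, {b}"
```

For example, `describe_obstacle(0.5, 10.0, "bench")` yields "bench very close, ahead", the kind of expression a VCP user can act on directly.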


Author(s): Rashmi Jain, Prachi Tamgade, R. Swaroopa, Pranoti Bhure, Srushti Shahu, ...

Perceiving the surroundings accurately and quickly is one of the most essential and challenging tasks for systems such as self-driving cars. Cameras provide a view of the road that can make the car more informed about its environment than a human driver. To build a fully virtual self-driving car, two components are needed: the self-driving software and the virtual car itself. The self-driving software does two things: based on video input of the road, it determines how to safely and effectively steer the car, and, from the same input, how to safely and effectively use the car's acceleration and braking mechanisms.
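The two decisions named above, steering and acceleration/braking, can be sketched as two small functions. This is a hypothetical stand-in, not the paper's software: a brightness-centroid heuristic replaces the learned vision model, and the controller gain is an assumed value.

```python
# Minimal sketch of the two driving decisions: steering from a road frame,
# and throttle/brake from a speed error. Both are illustrative stand-ins.

def steering_from_frame(frame):
    """Steer toward the horizontal centroid of bright (lane-like) pixels.

    frame: 2D list of grayscale values in [0, 255].
    Returns steering in [-1, 1]; negative = left, positive = right.
    """
    total, weighted = 0.0, 0.0
    width = len(frame[0])
    for row in frame:
        for x, v in enumerate(row):
            total += v
            weighted += v * x
    if total == 0:
        return 0.0  # no signal: hold the wheel straight
    centroid = weighted / total
    center = (width - 1) / 2
    return (centroid - center) / center

def throttle_from_speed(current_speed, target_speed):
    """Proportional throttle/brake: positive accelerates, negative brakes."""
    gain = 0.1  # assumed controller gain
    return max(-1.0, min(1.0, gain * (target_speed - current_speed)))
```

In a virtual car, these two outputs would be fed to the simulator's steering and pedal actuators each frame.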


2021
Author(s): Niclas Zeller

This thesis presents the development of image processing algorithms based on a Microsoft Kinect camera system. The algorithms are applied to the depth image received from Kinect and build a three-dimensional, object-based representation of the recorded scene. The motivation behind this thesis is to develop a system that assists visually impaired people in navigating through unknown environments. The developed system is able to detect obstacles in the recorded scene and to warn about them. Since the goal of the thesis was not to develop a complete real-time system but to devise reliable algorithms for this task, the algorithms were developed in MATLAB. Additionally, control software was developed with which both depth and color images can be received from Kinect. The developed algorithms combine known plane-fitting algorithms with novel approaches: they perform a plane segmentation of the 3D point cloud and model objects from the resulting segments. Each obstacle is represented by a cuboid bounding box and thus can be conveyed easily to the blind person. For plane segmentation, different approaches were compared to find the most suitable one. The first algorithm analyzed in this thesis is a normal-vector-based plane-fitting algorithm; it produces very accurate results but has a high computational cost. The second approach, which was finally implemented, is a gradient-based 2D image segmentation combined with a RANSAC plane segmentation (6) in the 3D point cloud. This approach has the advantage of finding very small edges within the scene while still fitting planes under global constraints. The image processing results presented are very promising, so the algorithm merits further development. The developed algorithm detects very small but significant obstacles, yet keeps the scene representation simple enough that the result can be conveyed accurately to a blind person.
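The RANSAC plane segmentation named in the abstract can be sketched generically (this is a textbook version of the technique, not the thesis's MATLAB code): repeatedly fit a plane to three random points and keep the model with the most inliers.

```python
# Illustrative RANSAC plane segmentation on a 3D point cloud.
# Points are (x, y, z) tuples; a plane is (unit normal n, offset d) with n·p = d.

import random

def fit_plane(p1, p2, p3):
    """Plane through three points, or None if they are collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Return the inlier set of the best plane found in `iters` trials."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        model = fit_plane(*rng.sample(points, 3))
        if model is None:
            continue  # degenerate sample, try again
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

On a Kinect depth image, segments found this way would then be grouped into the cuboid obstacle boxes described above.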


Author(s): Xuan Shao, Xiao Liu, Lin Zhang, Shengjie Zhao, Ying Shen, ...

2011 · Vol 230-232 · pp. 1190-1194
Author(s): Min Kang, Hou Shang Li, Xiu Qing Fu

In order to measure the initial gap between the workpiece and tool-cathode in electrochemical machining, a measurement method based on machine vision was studied in this paper. First, a machine-vision measurement system was established; its hardware consisted of a CCD camera, an image acquisition card, a light source, and a computer, and its software was developed in VC++ 6.0. Then, the original digital image of the electrochemical machining initial gap, collected by the CCD camera system, was converted into an image contour through grayscale conversion, binarization, edge detection, and segmentation. Through system calibration, the physical size of the gap was calculated. Finally, verification experiments were carried out. The experimental results validated the feasibility of measuring the electrochemical machining initial gap based on machine vision.
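The final measurement step can be sketched as follows. This is a hypothetical illustration, not the paper's VC++ code: after binarization and edge detection, the gap appears on a scanline as a run of background pixels between the tool and workpiece edges, and the calibration factor converts its pixel width to a physical size.

```python
# Illustrative gap measurement on one binarized scanline.
# row: list of 0/1 values (1 = material/edge pixel, 0 = gap/background).

def gap_width_mm(row, mm_per_pixel):
    """Return the widest background run on the scanline, in millimetres.

    mm_per_pixel is the scale factor obtained from system calibration.
    """
    best = run = 0
    for v in row:
        run = run + 1 if v == 0 else 0  # extend or reset the background run
        best = max(best, run)
    return best * mm_per_pixel
```

In practice the measurement would be averaged over many scanlines of the contour image to suppress noise.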


Author(s): V. V. Kniaz, V. V. Fedorenko

The growing interest in self-driving cars creates demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearance of pedestrians; poor visibility conditions, such as fog and low light, also significantly decrease the quality of pedestrian detection. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on the idea of simplified Kalman filtering suitable for implementation on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model, while an estimate of the real optical flow is computed from a multispectral image sequence. The difference between the synthetic and real optical flows yields the flow induced by pedestrians, and the final detection is performed by segmenting this difference. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
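The core idea, detection where the measured flow deviates from the flow predicted for an empty scene, can be sketched generically (this is not the BipedDetect implementation; the function name and threshold are assumptions):

```python
# Illustrative flow-difference segmentation: pedestrians show up where the
# real optical flow deviates from the synthetic flow of an empty scene.

def flow_difference_mask(real_flow, synthetic_flow, threshold=1.0):
    """Both flows are 2D grids of (u, v) vectors in pixels.

    Returns a binary mask marking pixels whose flow vectors differ
    by more than `threshold` pixels in magnitude.
    """
    mask = []
    for row_r, row_s in zip(real_flow, synthetic_flow):
        mask_row = []
        for (ur, vr), (us, vs) in zip(row_r, row_s):
            du, dv = ur - us, vr - vs
            mask_row.append(1 if (du * du + dv * dv) ** 0.5 > threshold else 0)
        mask.append(mask_row)
    return mask
```

Connected regions of the mask would then be grouped into pedestrian candidates for the final detection step.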

