lane recognition
Recently Published Documents

TOTAL DOCUMENTS: 79 (FIVE YEARS: 19)
H-INDEX: 11 (FIVE YEARS: 1)

2021 · Vol 11 (22) · pp. 10783
Author(s): Felipe Franco, Max Mauro Dias Santos, Rui Tadashi Yoshino, Leopoldo Rideki Yoshioka, João Francisco Justo

One of the main tasks of the driver is to keep the vehicle within the markings of its road lane, which can be aided by modern driver-assistance systems. Forward-facing digital cameras in vehicles allow computer vision strategies to extract road characteristics in real time to support several features, such as lane departure warning, lane-keeping assist, and traffic sign recognition. The road lane markings therefore need to be recognized through computer vision strategies that provide the information required to decide on the vehicle's drivability. This investigation presents a modular architecture to support algorithms and strategies for lane recognition, with three principal layers defined as pre-processing, processing, and post-processing. Lane-marking recognition is performed through statistical methods, such as buffering and RANSAC (RANdom SAmple Consensus), which select only the objects of interest for detecting and recognizing the lane markings. This methodology could be extended and deployed to detect and recognize other road objects.
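The RANSAC step named in this abstract lends itself to a compact illustration. Below is a minimal Python sketch, not the authors' implementation, of fitting one near-vertical lane line to candidate marking pixels with RANSAC; the function name, tolerance, and iteration count are assumptions.

```python
import numpy as np

def ransac_line_fit(points, n_iters=200, inlier_tol=2.0, rng=None):
    """Fit a line x = a*y + b to candidate lane-marking pixels with RANSAC.

    The line is parameterized in y because lane markings are close to
    vertical in image coordinates. points is an (N, 2) array of (x, y)
    pixels; the model with the largest inlier set is returned as (a, b).
    """
    rng = np.random.default_rng() if rng is None else rng
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        # Sample two distinct points and derive a candidate line through them.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if y1 == y2:
            continue
        a = (x2 - x1) / (y2 - y1)
        b = x1 - a * y1
        # Count points whose horizontal distance to the candidate line is small.
        residuals = np.abs(points[:, 0] - (a * points[:, 1] + b))
        inliers = np.count_nonzero(residuals < inlier_tol)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model
```

In a pipeline like the one described, the post-processing layer would refit the returned model on its inliers and buffer it across frames to smooth the estimate.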


Electronics · 2021 · Vol 10 (17) · pp. 2102
Author(s): Heuijee Yun, Daejin Park

Computer simulation based on digital twin is an essential process when designing self-driving cars. However, designing a simulation program that is exactly equivalent to real phenomena can be arduous and cost-ineffective because too many things must be implemented. In this paper, we propose the method using the online game GTA5 (Grand Theft Auto5), as a groundwork for autonomous vehicle simulation. As GTA5 has a variety of well-implemented objects, people, and roads, it can be considered a suitable tool for simulation. By using OpenCV (Open source computer vision) to capture the GTA5 game screen and analyzing images with YOLO (You Only Look Once) and TensorFlow based on Python, we can build a quite accurate object recognition system. This can lead to writing of algorithms for object avoidance and lane recognition. Once these algorithms have been completed, vehicles in GTA5 can be controlled through codes composed of the basic functions of autonomous driving, such as collision avoidance and lane-departure prevention. In addition, the algorithm tested with GTA5 has been implemented with a programmable RC car (Radio control car), DonkeyCar, to increase reliability. By testing those algorithms, we can ensure that the algorithms can be conducted in real time and they cost low power and low memory size. Therefore, we have found a way to approach digital twin technology one step more easily.
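The capture-and-detect loop described above can be sketched in a few lines. This is a hedged illustration, assuming an mss screen grab and a Darknet-format YOLO model loaded through OpenCV's DNN module; the window region and the model file names (yolov4.cfg, yolov4.weights) are placeholders, not details from the paper.

```python
import cv2
import numpy as np
import mss

# Placeholder model files; the paper does not specify a YOLO version.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
out_layers = net.getUnconnectedOutLayersNames()

# Assumed screen region covering the GTA5 window.
monitor = {"top": 0, "left": 0, "width": 800, "height": 600}

with mss.mss() as sct:
    while True:
        # Grab the game window and drop the alpha channel (BGRA -> BGR).
        frame = np.ascontiguousarray(np.array(sct.grab(monitor))[:, :, :3])
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        detections = net.forward(out_layers)  # per-layer boxes and class scores
        # ... decode the boxes here and feed them to the avoidance /
        # lane-keeping logic that steers the in-game vehicle.
        cv2.imshow("GTA5 capture", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```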


2021 · Vol 11 (13) · pp. 6229
Author(s): Jong-Ho Han, Hyun-Woo Kim

This paper proposes a lane detection algorithm using a laser range finder (LRF) for the autonomous navigation of a mobile robot. Many technologies exist for ensuring vehicle safety, such as airbags, ABS, and EPS, and lane detection is a further fundamental requirement for automotive systems that exploit information about the external environment. Representative lane recognition methods are vision-based and LRF-based systems. A vision-based system recognizes the three-dimensional environment well only under good imaging conditions; unexpected obstacles such as poor illumination, occlusions, vibrations, and thick fog prevent it from satisfying the above fundamental requirement. In this paper, a three-dimensional lane detection algorithm using an LRF, which is highly robust to illumination, is proposed. For three-dimensional lane detection, the difference in laser reflection between the asphalt and the lane marking, which varies with color and distance, is used to extract feature points. Further, a stable tracking algorithm is introduced empirically in this research. The performance of the proposed lane detection and tracking algorithm has been experimentally verified.
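As a rough illustration of reflection-based feature extraction, the sketch below thresholds the distance-compensated intensity of a single LRF scan to pick candidate lane-marking points. The normalization heuristic, threshold, and function name are assumptions made for illustration; the abstract does not specify the paper's actual criterion.

```python
import numpy as np

def extract_lane_points(angles, ranges, intensities, intensity_jump=0.3):
    """Select scan points where reflectance jumps sharply, since painted lane
    markings typically return more energy than asphalt at a given distance.

    angles, ranges, intensities: 1-D arrays from one LRF scan.
    intensity_jump: assumed threshold on the normalized intensity gradient.
    Returns (x, y) coordinates of candidate lane-marking points (sensor frame).
    """
    # Simple range compensation: returned intensity weakens with distance.
    norm = intensities * np.square(ranges)
    norm = norm / (norm.max() + 1e-9)
    # Feature points sit where the normalized reflectance rises or falls sharply.
    jumps = np.abs(np.diff(norm, prepend=norm[0])) > intensity_jump
    x = ranges[jumps] * np.cos(angles[jumps])
    y = ranges[jumps] * np.sin(angles[jumps])
    return np.stack([x, y], axis=1)
```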


Author(s): Shuang Song, Wei Chen, Qianjie Liu, Huosheng Hu, Tengchao Huang, ...

Lane detection algorithms play a key role in Advanced Driver Assistance Systems (ADAS), yet they are often unable to achieve accurate lane recognition in low-light environments. This paper presents a novel deep network structure, namely LLSS-Net (low-light image semantic segmentation), to achieve accurate lane detection in low-light environments. The method integrates a convolutional neural network for low-light image enhancement with a semantic segmentation network for lane detection. Image quality is first improved by the low-light enhancement network, lane features are then extracted using semantic segmentation, and fast lane clustering is finally performed using k-d tree models. The Cityscapes and TuSimple datasets are used to demonstrate the robustness of the proposed method. The experimental results show that the proposed method performs excellently for lane detection on low-light roads.
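The final clustering step can be illustrated with a short sketch that groups segmented lane pixels into lane instances via k-d tree neighbor queries (SciPy's cKDTree). The radius and minimum cluster size are assumed values, not parameters from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_lane_points(points, radius=5.0, min_size=30):
    """Group segmented lane pixels into lane instances by spatial proximity,
    using a k-d tree for fast neighbor lookups.

    points: (N, 2) array of (x, y) pixel coordinates labeled as "lane".
    Returns a list of index arrays, one per detected lane instance.
    """
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        # Region-grow from the seed through the k-d tree neighborhood graph.
        stack, labels[seed] = [seed], current
        while stack:
            idx = stack.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    stack.append(nb)
        current += 1
    return [np.where(labels == c)[0] for c in range(current)
            if np.count_nonzero(labels == c) >= min_size]
```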


2021 · Vol 309 · pp. 01117
Author(s): A. Sai Hanuman, G. Prasanna Kumar

Studies on lane detection, covering lane identification methods, integration, and evaluation strategies, are examined. System integration approaches for building more robust detection systems are then evaluated and analyzed, taking into account the inherent limits of camera-based lane detection systems. Current deep learning approaches to lane detection are essentially CNN-based semantic segmentation networks, in which the segmentation of the roadway and the segmentation of the lane markers are combined using a fusion method. By exploiting a large number of frames from a continuous driving environment, we examine lane detection and propose a hybrid deep architecture that combines a convolutional neural network (CNN) with a recurrent neural network (RNN). Because of the rich information cameras capture and the low cost of camera equipment, a substantial number of existing results concentrate on vision-based lane recognition systems. Extensive tests on two large-scale datasets show that the proposed technique outperforms competing lane detection strategies, particularly in challenging settings. In particular, a CNN block extracts information from each frame, and the CNN outputs of several continuous frames, which carry time-series properties, are then fed to an RNN block for feature learning and lane prediction.
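The frame-by-frame CNN encoding followed by RNN fusion can be sketched compactly. The PyTorch model below is a minimal illustration of that hybrid structure, not the architecture from the paper; all layer sizes and the coarse output mask resolution are assumptions.

```python
import torch
import torch.nn as nn

class CNNRNNLaneNet(nn.Module):
    """Hybrid CNN-RNN sketch: a small CNN encodes each frame, an LSTM fuses
    the per-frame features over time, and a linear decoder predicts a coarse
    lane mask for the most recent frame."""

    def __init__(self, feat_dim=128, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
            nn.Linear(32 * 8 * 8, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, 64 * 64)  # coarse lane-mask logits

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1))  # encode every frame: (B*T, feat_dim)
        out, _ = self.rnn(feats.view(b, t, -1))    # time-series feature learning
        return self.decoder(out[:, -1]).view(b, 1, 64, 64)

# Example: a batch of two 5-frame clips at 128x256 resolution.
mask_logits = CNNRNNLaneNet()(torch.randn(2, 5, 3, 128, 256))
```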


IEEE Access · 2021 · Vol 9 · pp. 42192-42205
Author(s): Youn Joo Lee, Jae Kyu Suhr, Ho Gi Jung

Author(s): Wei Wang, Hui Lin, Junshu Wang

At present, the number of vehicle owners is increasing, and cars with autonomous driving functions have attracted more and more attention. Lane detection combined with cloud computing can effectively overcome the drawbacks of traditional lane detection, which relies on feature extraction and high-definition imagery, but it also faces the problem of excessive computation. At the same time, cloud data processing combined with edge computing can effectively reduce the computing load on central nodes. The traditional lane detection method is improved, and the currently popular convolutional neural network (CNN) is used to build a dual model based on instance segmentation. In image acquisition and processing, the distributed computing architecture provided by edge-cloud computing is used to improve data processing efficiency. The lane fitting process generates a variable matrix to achieve effective detection under slope changes, which improves the real-time performance of lane detection. The method proposed in this paper achieves good recognition results for lanes in different scenarios, and its lane recognition efficiency is much better than that of other lane recognition models.
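One way to read the "variable matrix" idea is as a per-frame perspective warp followed by polynomial fitting of each lane instance. The sketch below illustrates that interpretation only; the warp points, polynomial order, and the fit_lane helper are hypothetical and not taken from the paper.

```python
import cv2
import numpy as np

def fit_lane(lane_pixels, src_pts, dst_pts, order=2):
    """Fit one lane instance in a bird's-eye view obtained from a per-frame
    ("variable") perspective matrix.

    lane_pixels: (N, 2) array of (x, y) pixels for one lane instance.
    src_pts/dst_pts: four corresponding points defining the warp for this frame.
    Returns coefficients of x as a polynomial in y within the warped view.
    """
    # Recompute the warp each frame instead of using a fixed calibration.
    H = cv2.getPerspectiveTransform(src_pts.astype(np.float32),
                                    dst_pts.astype(np.float32))
    pts = cv2.perspectiveTransform(
        lane_pixels.reshape(-1, 1, 2).astype(np.float32), H)
    x, y = pts[:, 0, 0], pts[:, 0, 1]
    return np.polyfit(y, x, order)  # e.g. x = c0*y^2 + c1*y + c2 for order == 2
```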

