Long-Range, High-Resolution Camera Optical Design for Assisted and Autonomous Driving

Photonics ◽  
2019 ◽  
Vol 6 (2) ◽  
pp. 73 ◽  
Author(s):  
Furkan Sahin

High-quality cameras are fundamental sensors in assisted and autonomous driving. In particular, long-range forward-facing cameras can provide vital information about the road ahead, including detection and recognition of objects and early hazard warning. For robust operation, these automotive cameras should provide high-resolution images consistently under the extreme operating conditions of the car. This paper introduces the design of fixed-focus, passively athermalized lenses for next-generation automotive cameras. After an overview of essential and desirable features of automotive cameras and the state of the art, two different camera designs that can achieve traffic sign recognition at a 200 m distance are presented based on these features. These lenses are designed from scratch, with a unique design approach that starts with a graphical lens material selection tool and arrives at an optimized design with optical design software. Optical system analyses are performed to evaluate the lens designs. The lenses are shown to maintain high contrast from −40 °C to 100 °C and allow a 4× increase in the resolution of automotive cameras.
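To make the 200 m requirement concrete, a rough first-order sizing exercise can relate sign distance, pixel count, and focal length via thin-lens magnification. All numbers below (sign size, pixel count, pixel pitch) are illustrative assumptions, not values from the paper:

```python
import math

# Illustrative sizing (assumed numbers, not from the paper): what focal
# length images a 0.6 m traffic sign at 200 m across 30 sensor pixels,
# given a 2.1 um pixel pitch?
sign_size_m = 0.6
distance_m = 200.0
pixels_on_sign = 30
pixel_pitch_m = 2.1e-6

# Size of the sign's image on the sensor
image_size_m = pixels_on_sign * pixel_pitch_m
# Thin-lens magnification: image_size / sign_size = f / distance
focal_length_m = image_size_m * distance_m / sign_size_m
print(round(focal_length_m * 1000, 1), "mm")  # → 21.0 mm
```

Doubling the required pixels on the sign (a "4× resolution" increase in area) doubles the required focal length at fixed pixel pitch, which is why long-range forward cameras trend toward longer, narrower-field lenses.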

Recognition and classification of traffic signs and the numerous other displays on the road are crucial for autonomous driving, navigation, and road safety systems. Machine learning or deep learning methods are generally employed to develop a traffic sign recognition (TSR) system. This paper proposes a novel two-step TSR approach consisting of contrast limited adaptive histogram equalization (CLAHE)-based image enhancement and a convolutional neural network (CNN) as a multiclass classifier. Three CNN architectures, viz. LeNet, VggNet, and ResNet, were employed for classification. All the methods were tested on the German traffic sign recognition benchmark (GTSRB) dataset. The experimental results presented in the paper endorse the capability of the proposed work. Based on the experimental results, it is also illustrated that the proposed novel architecture, consisting of CLAHE-based image enhancement and a ResNet-based classifier, helped obtain better classification accuracy compared to other similar approaches.
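The core idea behind the enhancement step is histogram equalization: remapping intensities via the cumulative histogram so that a low-contrast sign image uses the full grey range. The sketch below shows plain global equalization in numpy; CLAHE extends this by equalizing per tile with a clip limit on the histogram, which is not shown here:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image.

    CLAHE extends this idea: it equalizes small tiles separately and
    clips each tile's histogram to limit noise amplification.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first nonzero CDF value
    # Stretch the CDF to the full 0..255 range and use it as a lookup table
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

Applied to an image whose pixels span only 100–149, the output spans the full 0–255 range, which is what makes dim or washed-out signs easier for a downstream classifier.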


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 686
Author(s):  
Ke Zhou ◽  
Yufei Zhan ◽  
Dongmei Fu

Traffic sign recognition in poor environments has always been a challenge in self-driving. Although a few works have achieved good results in the field of traffic sign recognition, there is currently a lack of traffic sign benchmarks containing many complex factors, and of a robust network. First, we propose an ice-environment traffic sign recognition benchmark (ITSRB) and detection benchmark (ITSDB), annotated in the COCO2017 format. The benchmarks include 5806 images with 43,290 traffic sign instances under different climate, light, time, and occlusion conditions. Second, we tested the robustness of Libra-RCNN and HRNetv2p on the ITSDB compared with Faster-RCNN. Libra-RCNN performed well, confirming that our ITSDB dataset does increase the challenge in this task. Third, we propose a high-resolution parallel fusion attention network for traffic sign classification (PFANet) and conduct ablation research on the designed parallel fusion attention module. Experiments show that our approach reached 93.57% accuracy on the ITSRB and performed as well as the newest and most effective networks on the German traffic sign recognition dataset (GTSRB).
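The abstract does not specify the internals of the parallel fusion attention module, but attention modules of this family typically reweight feature channels by a learned gate. A generic channel-attention (squeeze-and-excitation-style) forward pass, offered only as a hypothetical stand-in for what such a module computes, looks like this in numpy:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Generic channel-attention forward pass (NOT the PFANet module itself).

    feat: feature map of shape (C, H, W); w1, w2: learned weight matrices.
    Squeeze (global average pool) -> excite (tiny MLP) -> rescale channels.
    """
    squeeze = feat.mean(axis=(1, 2))                # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gates in (0, 1)
    return feat * scale[:, None, None]              # reweighted feature map
```

Because the gates lie in (0, 1), the module can only attenuate channels, letting the network emphasize the channels most informative for small, low-contrast signs.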


Author(s):  
Zhenhua Zhang ◽  
Leon Stenneth ◽  
Xiyuan Liu

State-of-the-art traffic sign recognition (TSR) algorithms are designed to recognize the textual information of a traffic sign at over 95% accuracy. Even so, they are still not ready for complex roadworks near ramps. In real-world applications, when vehicles are running on the freeway, they may mistakenly detect traffic signs intended for the ramp, which becomes inaccurate feedback to the autonomous driving applications and results in unexpected speed reduction. This misdetection problem has drawn minimal attention in recent TSR studies. In this paper, it is proposed that existing TSR studies should shift from point-based sign recognition to path-based sign learning. In the proposed pipeline, the confidence of TSR observations from ordinary vehicles is increased by clustering and location adjustment. A supervised learning model is employed to classify the clustered learned signs and complement their path information. Test drives were conducted in 12 European countries to calibrate the models and validate the path information of the learned signs. After model implementation, the path accuracy over 1,000 learned signs increased from 75.04% to 89.80%. This study proves the necessity of path-based TSR studies near freeway ramps, and the proposed pipeline demonstrates good utility and broad applicability for sensor-based autonomous vehicle applications.
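The clustering-and-location-adjustment step can be illustrated with a toy sketch: group repeated sign observations from many vehicles by spatial proximity, take each cluster's centroid as the adjusted sign location, and use the cluster size as a confidence count. The greedy threshold scheme and the 15 m radius below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def cluster_observations(points, eps=15.0):
    """Greedy distance-threshold clustering of sign observations.

    points: array of (x, y) observation positions in meters.
    Returns (centroid, count) pairs: the centroid is the adjusted sign
    location; the count acts as a confidence score for that sign.
    Hypothetical stand-in for the paper's clustering/adjustment step.
    """
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - p) < eps:
                c.append(p)      # close enough: same physical sign
                break
        else:
            clusters.append([p])  # start a new cluster
    return [(np.mean(c, axis=0), len(c)) for c in clusters]
```

A sign seen by three vehicles yields one cluster of size 3 at the averaged position; a path-based classifier can then decide whether that clustered sign applies to the freeway lanes or only to the ramp.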


2018 ◽  
Vol 10 (0) ◽  
pp. 1-5
Author(s):  
Ervin Miloš ◽  
Aliaksei Kolesau ◽  
Dmitrij Šešok

Traffic sign recognition is an important method for improving road safety, and such a system is a further step toward autonomous driving. Nowadays, convolutional neural networks (CNNs) are commonly adopted for the traffic sign recognition problem owing to their high performance, well proven in computer vision applications. This paper proposes histogram equalization preprocessing and a CNN with additional operations: batch normalization, dropout, and data augmentation. Several CNN architectures are compared to determine how each operation affects the accuracy of the CNN model. Experimental results demonstrate the effectiveness of the CNN with the proposed operations.
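Of the operations compared, batch normalization is the easiest to show in isolation: each feature is standardized over the mini-batch and then rescaled by learned parameters. A minimal numpy sketch of the training-mode forward pass (scalar gamma and beta here for brevity; in a real layer they are per-feature vectors):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization forward pass, training mode.

    x: mini-batch of shape (batch, features). Each feature is centered
    and scaled to unit variance over the batch, then affinely remapped
    by the learnable parameters gamma and beta.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # standardized activations
    return gamma * x_hat + beta             # learned rescale and shift
```

Whatever the input statistics, the output has mean beta and standard deviation approximately gamma per feature, which stabilizes training and is why the operation helps the compared CNN architectures converge.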


2020 ◽  
Vol 48 (4) ◽  
pp. 334-340 ◽  
Author(s):  
András Rövid ◽  
Viktor Remeli ◽  
Norbert Paufler ◽  
Henrietta Lengyel ◽  
Máté Zöldy ◽  
...  

Autonomous driving poses numerous challenging problems, one of which is perceiving and understanding the environment. Since self-driving is safety-critical and many actions taken during driving rely on the outcome of various perception algorithms (for instance, all traffic participants and infrastructural objects in the vehicle's surroundings must be reliably recognized and localized), perception can be considered one of the most critical subsystems in an autonomous vehicle. Perception itself can be further decomposed into various sub-problems, such as object detection, lane detection, traffic sign detection, and environment modeling. This paper focuses on fusion models in general (giving support for multisensory data processing) and on related automotive applications such as object detection, traffic sign recognition, end-to-end driving models, and an example of decision making in multi-criteria traffic situations that are complex both for human drivers and for self-driving vehicles.


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 889 ◽  
Author(s):  
Christine Dewi ◽  
Rung-Ching Chen ◽  
Shao-Kuo Tai

Traffic sign recognition (TSR) is a noteworthy issue for real-world applications such as autonomous driving systems, as it plays a main role in guiding the driver. This paper focuses on Taiwan's prohibitory signs, owing to the lack of a database or research system for Taiwan's traffic sign recognition. It investigates the state of the art of various object detection systems (Yolo V3, Resnet 50, Densenet, and Tiny Yolo V3) combined with spatial pyramid pooling (SPP). We adopt the concept of SPP to improve the backbone networks of Yolo V3, Resnet 50, Densenet, and Tiny Yolo V3 for feature extraction. Furthermore, we use spatial pyramid pooling to study multi-scale object features thoroughly. The observation and evaluation of the models include vital metrics, such as mean average precision (mAP), workspace size, detection time, intersection over union (IoU), and the number of billion floating-point operations (BFLOPS). Our findings show that Yolo V3 SPP achieves the best total BFLOPS (65.69) and mAP (98.88%). Moreover, the highest average accuracy is Yolo V3 SPP at 99%, followed by Densenet SPP at 87%, Resnet 50 SPP at 70%, and Tiny Yolo V3 SPP at 50%. Hence, SPP can improve the performance of all models in the experiment.
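The original SPP idea (from SPP-net) pools a feature map over grids of several sizes and concatenates the results, yielding a fixed-length multi-scale descriptor regardless of input size; the Yolo V3-SPP variant instead concatenates max-pooled maps of different kernel sizes. The sketch below shows the grid-based form, as a generic illustration rather than the exact layer used in the paper:

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Grid-based spatial pyramid pooling (SPP-net style).

    feat: feature map of shape (C, H, W). For each pyramid level n, the
    map is split into an n x n grid and max-pooled per cell, then all
    pooled vectors are concatenated into one fixed-length descriptor of
    size C * sum(n*n for n in levels), independent of H and W.
    """
    C, H, W = feat.shape
    out = []
    for n in levels:
        hs = np.linspace(0, H, n + 1).astype(int)   # row bin edges
        ws = np.linspace(0, W, n + 1).astype(int)   # column bin edges
        for i in range(n):
            for j in range(n):
                cell = feat[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                out.append(cell.max(axis=(1, 2)))   # per-channel max pool
    return np.concatenate(out)
```

With levels (1, 2, 4) the descriptor has 1 + 4 + 16 = 21 cells per channel, mixing coarse whole-sign context with finer local detail, which is the multi-scale effect credited for the mAP gains above.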


2013 ◽  
Vol 2013 ◽  
pp. 1-6 ◽  
Author(s):  
Sheila Esmeralda Gonzalez-Reyna ◽  
Juan Gabriel Avina-Cervantes ◽  
Sergio Eduardo Ledesma-Orozco ◽  
Ivan Cruz-Aceves

Traffic sign detection and recognition systems serve a variety of applications, including autonomous driving, road sign inventory, and driver support systems. Machine learning algorithms provide useful tools for traffic sign identification tasks. However, classification algorithms depend on the preprocessing stage to obtain high accuracy rates. This paper proposes a road sign characterization method based on oriented gradient maps and the Karhunen-Loeve transform in order to improve classification performance. Dimensionality reduction may be important for portable applications on resource-constrained devices like FPGAs; therefore, our approach focuses on achieving good classification accuracy using a reduced number of attributes compared to some state-of-the-art methods. The proposed method was tested on the German Traffic Sign Recognition Benchmark (GTSRB), reaching a dimensionality reduction of 99.3% and a classification accuracy of 95.9% with a Multi-Layer Perceptron.
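The Karhunen-Loeve transform is, for finite data, the same operation as PCA: project centered feature vectors onto the top principal directions, keeping most of the variance in far fewer attributes. A minimal numpy sketch via the SVD (a generic illustration of the transform, not the paper's exact pipeline):

```python
import numpy as np

def kl_transform(X, k):
    """Karhunen-Loeve (PCA) projection onto the top k components.

    X: data matrix of shape (samples, features), one descriptor per row.
    The right singular vectors of the centered data are the KL basis;
    projecting onto the first k of them reduces each descriptor from
    X.shape[1] features to k, ordered by explained variance.
    """
    Xc = X - X.mean(axis=0)                          # center the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                             # (samples, k) projection
```

Keeping only a handful of components out of hundreds of oriented-gradient features is what yields the 99.3% dimensionality reduction reported above while leaving enough variance for the MLP to classify signs accurately.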

