Deep Traffic Light Detection for Self-driving Cars from a Large-scale Dataset

Author(s): Jinkyu Kim, Hyunggi Cho, Myung Hwangbo, Jaehyung Choi, John Canny, et al.
2020, Vol. 9 (1), pp. 2698-2704

Advanced driver-assistance systems (ADAS) have seen tremendous growth over the past ten years. Today, luxury cars as well as many newly launched models ship with ADAS features. Since 2014, the inclusion of the AEBS test in the European New Car Assessment Programme (Euro NCAP) [1] has given the introduction of ADAS in Europe strong momentum, and most OEMs and research institutes have already demonstrated self-driving cars [1]. The focus here is on road segmentation: a LiDAR sensor captures the surroundings, and the vehicle determines its drivable path by running a semantic-segmentation convolutional neural network on an FPGA board in 16.9 ms [3]. Further, a traffic light detection model is developed on an NVIDIA Jetson and two FPGA boards, collectively named the 'Driving Brain', which acts as a supercomputer for such networks; high accuracy is obtained by passing the detected traffic light images to a CNN classifier [5]. Overall, this paper gives a brief overview of technical trends in autonomous driving, highlighting the algorithms used in advanced driver-assistance systems for road segmentation and traffic light detection.
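The pipeline described above ends with a CNN classifier applied to detected traffic light crops. The following is a minimal illustrative sketch of such a classifier in PyTorch; the architecture, the 32x32 input size, and the red/yellow/green class set are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation): a small CNN that
# classifies cropped traffic-light detections into red/yellow/green.
import torch
import torch.nn as nn

class TrafficLightClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):  # red, yellow, green (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of 32x32 RGB crops from the detector, shape (N, 3, 32, 32)
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

# Usage: classify a dummy batch of detected traffic-light crops.
model = TrafficLightClassifier()
crops = torch.randn(4, 3, 32, 32)
states = model(crops).argmax(dim=1)  # predicted light state per crop
```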


2017 ◽  
Vol 5 (3) ◽  
pp. 20
Author(s):  
JEBISHA J ◽  
MONISHA V ◽  
JEMI B. FEMILA ◽  
◽  
◽  
...  

Author(s): Jin Zhou, Qing Zhang, Jian-Hao Fan, Wei Sun, Wei-Shi Zheng

Abstract: Recent image aesthetic assessment methods have achieved remarkable progress due to the emergence of deep convolutional neural networks (CNNs). However, these methods focus primarily on predicting the generally perceived preference of an image, which usually limits their practicality, since each user may have completely different preferences for the same image. To address this problem, this paper presents a novel approach for predicting personalized image aesthetics that fit an individual user’s personal taste. We achieve this in a coarse-to-fine manner, by joint regression and learning from pairwise rankings. Specifically, we first collect a small subset of personal images from a user and invite him/her to rank the preference of some randomly sampled image pairs. We then search for the K-nearest neighbors of the personal images within a large-scale dataset labeled with average human aesthetic scores, and use these images as well as the associated scores to train a generic aesthetic assessment model by CNN-based regression. Next, we fine-tune the generic model to accommodate the personal preference by training over the rankings with a pairwise hinge loss. Experiments demonstrate that our method can effectively learn personalized image aesthetic preferences, clearly outperforming state-of-the-art methods. Moreover, we show that the learned personalized image aesthetics benefit a wide variety of applications.
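The fine-tuning stage trains over user-provided rankings with a pairwise hinge loss. Below is a minimal PyTorch sketch of that loss and one personalization step; the margin value, the stand-in regressor, and the optimizer settings are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of pairwise hinge-loss fine-tuning (margin, dummy regressor,
# and optimizer are illustrative assumptions).
import torch

def pairwise_hinge_loss(s_pref: torch.Tensor,
                        s_other: torch.Tensor,
                        margin: float = 1.0) -> torch.Tensor:
    # The image the user preferred should score at least `margin` above
    # the other image of the pair; violations are penalized linearly.
    return torch.clamp(margin - (s_pref - s_other), min=0.0).mean()

def finetune_step(model, optimizer, img_pref, img_other):
    # One step: img_pref was ranked above img_other by the user.
    optimizer.zero_grad()
    loss = pairwise_hinge_loss(model(img_pref).squeeze(-1),
                               model(img_other).squeeze(-1))
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a stand-in for the generic CNN regressor:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss = finetune_step(model, optimizer,
                     torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64))
```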


2021, Vol. 7 (3), pp. 50
Author(s): Anselmo Ferreira, Ehsan Nowroozi, Mauro Barni

The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets to be used for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images of the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods to distinguish natural and synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.
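The reported failure of natural-vs-synthetic detectors after print and scan can be quantified by comparing accuracy on the digital originals against accuracy on their printed-and-scanned counterparts. A hypothetical evaluation sketch follows; `classify` and the two labeled image sets are placeholders, not part of the released dataset's tooling.

```python
# Hypothetical sketch: measuring how a natural-vs-synthetic face classifier
# degrades after print and scan. `classify` and the sample sets are placeholders.
from typing import Any, Callable, Iterable, Tuple

def accuracy(classify: Callable[[Any], int],
             samples: Iterable[Tuple[Any, int]]) -> float:
    # Fraction of (image, label) pairs the classifier gets right.
    samples = list(samples)
    return sum(classify(img) == label for img, label in samples) / len(samples)

# acc_digital = accuracy(classify, digital_faces)          # before print/scan
# acc_scanned = accuracy(classify, printed_scanned_faces)  # after print/scan
# A large drop from acc_digital to acc_scanned reproduces the reported failure.
```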


Author(s): Anil S. Baslamisli, Partha Das, Hoang-An Le, Sezer Karaoglu, Theo Gevers

Abstract: In general, intrinsic image decomposition algorithms interpret shading as one unified component including all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct (illumination) and indirect shading (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground-truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS and SRD datasets.
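Under the common intrinsic-imaging assumption that an image is albedo times shading, the decomposition described above splits shading into direct and indirect subcomponents. The NumPy sketch below illustrates one plausible composition, I = albedo x (direct + indirect); the exact fine-grained shading model used by ShadingNet may differ.

```python
# Sketch of a fine-grained intrinsic model: I = albedo * shading, with shading
# split additively into direct and indirect terms (an assumed formulation).
import numpy as np

def compose(albedo: np.ndarray, direct: np.ndarray,
            indirect: np.ndarray) -> np.ndarray:
    """Reconstruct an image from intrinsic components, all (H, W, 3) in [0, 1]."""
    shading = direct + indirect  # unified shading = direct + indirect subcomponents
    return np.clip(albedo * shading, 0.0, 1.0)

# A reconstruction error against the input image can then supervise the
# predicted components:
#   loss = np.mean((compose(pred_albedo, pred_direct, pred_indirect) - image) ** 2)
```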

