Enhancement and license plate detection in nighttime scenes using LDR images fusion from a single input image

Author(s):  
Soo-Chang Pei ◽  
Chi-Yin Wang

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1074
Author(s):  
Song-Lu Chen ◽  
Qi Liu ◽  
Jia-Wei Ma ◽  
Chun Yang

Because license plates appear at multiple scales and orientations in natural scene images, their detection is challenging in many applications. In this work, a novel network that combines indirect and direct branches is proposed for license plate detection in the wild. The indirect detection branch detects small license plates with high precision in a coarse-to-fine scheme that exploits vehicle–plate relationships. The direct detection branch detects license plates directly in the input image, reducing the false negatives that the indirect branch incurs when vehicles are missed. We also propose a universal refinement method for multidirectional license plates that localizes the four corners of each plate. Finally, we construct an end-to-end trainable network for license plate detection by combining the two branches through post-processing operations. The network can effectively detect small license plates and localize multidirectional license plates in real applications. To our knowledge, the proposed method is the first to combine indirect and direct methods in an end-to-end network for license plate detection. Extensive experiments verify that our method significantly outperforms both indirect and direct methods.
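As an illustration of how the two branches could be fused, the sketch below merges plate candidates from the indirect branch (plates found inside detected vehicles and mapped back to image coordinates) with candidates from the direct branch using a simple IoU-based non-maximum suppression. The function names, box format, and threshold are assumptions for illustration, not the paper's actual post-processing.

```python
# Illustrative fusion of indirect- and direct-branch detections (assumed
# [x1, y1, x2, y2, score] box format), not the authors' implementation.
import numpy as np

def iou_one_to_many(box, boxes):
    """IoU between one [x1, y1, x2, y2] box and an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    areas_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + areas_b - inter + 1e-9)

def merge_branch_detections(indirect, direct, iou_thresh=0.5):
    """Fuse detections from both branches with greedy NMS.

    Each input is an (N, 5) array of [x1, y1, x2, y2, score];
    the kept detections are returned in the same format."""
    dets = np.vstack([np.asarray(indirect, float).reshape(-1, 5),
                      np.asarray(direct, float).reshape(-1, 5)])
    keep = []
    order = np.argsort(-dets[:, 4])            # highest confidence first
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        overlaps = iou_one_to_many(dets[i, :4], dets[rest, :4])
        order = rest[overlaps < iou_thresh]     # drop near-duplicate boxes
    return dets[keep]

# Example: two overlapping candidates collapse to the higher-scoring one.
# merge_branch_detections(np.array([[10, 20, 110, 50, 0.9]]),
#                         np.array([[12, 22, 108, 48, 0.7]]))
```

Corner-based refinement would then run on each kept box to recover the four plate corners for multidirectional localization.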


2021 ◽  
Vol 5 (4) ◽  
pp. 23-36
Author(s):  
J. Andrew Onesimu ◽  
Robin D. Sebastian ◽  
Yuichi Sei ◽  
Lenny Christopher

India has one of the largest automotive sectors in the world, and the number of vehicles traveling by road has increased in recent times. In malls and other crowded places, many vehicles enter and exit the parking area, so it is difficult to manually note down the license plate number of every vehicle passing in and out. Hence, it is necessary to develop an Automatic License Plate Detection and Recognition (ALPDR) model that recognizes vehicle license plate numbers automatically. To automate this process, we propose a three-step pipeline that detects the license plate, segments its characters, and recognizes them. Detection is performed by converting the input image to a bi-level (binary) image. The characters are then segmented from the detected license plate using region properties (regionprops). A two-layer CNN model is developed to recognize the segmented characters. The proposed model automatically updates the database with the details of cars entering and exiting the parking area. The proposed ALPDR model has been tested under several conditions, such as blurred images, different camera distances, and day and night scenes, on stationary vehicles. Experimental results show that the proposed system achieves 91.1%, 96.7%, and 98.8% accuracy on license plate detection, segmentation, and recognition, respectively, which is superior to state-of-the-art models in the literature.
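The three-step pipeline lends itself to a compact sketch. The code below is an illustrative approximation, not the authors' implementation: it assumes OpenCV for Otsu binarization, scikit-image regionprops for character segmentation, and PyTorch for the two-layer CNN; the 36-class alphabet (0-9, A-Z), 32x32 input size, and minimum-area filter are assumptions.

```python
# Sketch of binarization -> regionprops segmentation -> two-layer CNN
# recognition, under assumed tools and parameters.
import cv2
import numpy as np
import torch
import torch.nn as nn
from skimage.measure import label, regionprops

def segment_characters(plate_bgr, min_area=50):
    """Binarize the cropped plate and return character crops, left to right."""
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    regions = regionprops(label(binary > 0))
    boxes = sorted((r.bbox for r in regions if r.area >= min_area),
                   key=lambda b: b[1])                 # sort by column (x)
    return [binary[r0:r1, c0:c1] for r0, c0, r1, c1 in boxes]

class TwoLayerCNN(nn.Module):
    """Two convolutional layers plus a linear classifier over 36 classes."""
    def __init__(self, num_classes=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # 32x32 inputs

    def forward(self, x):                 # x: (N, 1, 32, 32)
        return self.classifier(self.features(x).flatten(1))

def recognize(char_crops, model):
    """Resize each character crop to 32x32 and predict its class index."""
    batch = torch.stack([
        torch.from_numpy(cv2.resize(c, (32, 32)).astype(np.float32) / 255.0)
             .unsqueeze(0)
        for c in char_crops])
    with torch.no_grad():
        return model(batch).argmax(dim=1).tolist()
```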


Author(s):  
Yongjie Zou ◽  
Yongjun Zhang ◽  
Jun Yan ◽  
Xiaoxu Jiang ◽  
Tengjie Huang ◽  
...  

Symmetry ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 38
Author(s):  
Dong Zhao ◽  
Baoqing Ding ◽  
Yulin Wu ◽  
Lei Chen ◽  
Hongchao Zhou

This paper proposes a method for discovering the primary objects in single images by learning from videos in a purely unsupervised manner: the learning process is based on videos, but the resulting network is able to discover objects from a single input image. The underlying idea is that an image typically consists of multiple object instances (such as the foreground and background) that undergo spatial transformations across video frames and can be sparsely represented. By exploiting the sparse representation of a video with a neural network, one can learn the features of each object instance without any labels, and these features can be used to discover, recognize, or distinguish object instances in a single image. In this paper, we consider a relatively simple scenario in which each image roughly consists of a foreground and a background. Our proposed method is based on encoder-decoder structures that sparsely represent the foreground, background, and segmentation mask, which are then combined to reconstruct the original images. We apply the feed-forward network trained on videos to object discovery in single images, in contrast to previous co-segmentation methods that require videos or collections of images as input at inference time. The experimental results on various object segmentation benchmarks demonstrate that the proposed method extracts primary objects accurately and robustly, which suggests that unsupervised image learning tasks can benefit from the sparsity of images and the inter-frame structure of videos.
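The compositional reconstruction described above can be sketched as follows. This is a minimal illustration under assumptions (shared encoder, three lightweight heads, chosen layer widths), not the paper's exact architecture: separate heads produce a foreground image, a background image, and a soft mask, and the input is reconstructed by compositing foreground over background.

```python
# Minimal sketch of a foreground/background/mask encoder-decoder with
# compositional reconstruction; widths and heads are illustrative.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.ReLU(inplace=True))

class FgBgAutoencoder(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(3, width),
                                     conv_block(width, width))
        # Three lightweight heads share the encoder features.
        self.fg_head = nn.Sequential(conv_block(width, width), nn.Conv2d(width, 3, 1))
        self.bg_head = nn.Sequential(conv_block(width, width), nn.Conv2d(width, 3, 1))
        self.mask_head = nn.Sequential(conv_block(width, width), nn.Conv2d(width, 1, 1))

    def forward(self, x):                        # x: (N, 3, H, W) in [0, 1]
        z = self.encoder(x)
        fg = torch.sigmoid(self.fg_head(z))      # foreground appearance
        bg = torch.sigmoid(self.bg_head(z))      # background appearance
        mask = torch.sigmoid(self.mask_head(z))  # soft foreground mask
        recon = mask * fg + (1.0 - mask) * bg    # composite reconstruction
        return recon, fg, bg, mask

# Training would minimize a reconstruction loss plus sparsity terms, e.g.
# loss = mse(recon, x) + lambda_m * mask.mean(), so the mask isolates the
# primary object while the rest of the image is explained by the background.
```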

