Super-Resolved Recognition of License Plate Characters

Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2494
Author(s):  
Sung-Jin Lee ◽  
Seok Bong Yoo

Object detection and recognition are crucial in the field of computer vision and are an active area of research. In practice, however, recognition accuracy is often degraded by resolution mismatches between the training and test image data. To solve this problem, we designed and developed an integrated object recognition and super-resolution framework by proposing an image super-resolution technique that improves object recognition accuracy. Specifically, we collected license plate training images through web crawling and artificial data generation, and trained the image super-resolution neural network with an objective function defined to be robust to image flips. To verify the performance of the proposed algorithm, we applied the trained super-resolution and recognition models to representative test images and confirmed that the proposed super-resolution technique improves character recognition accuracy. For character recognition at 4× magnification, the proposed method increased the mean average precision by a remarkable 49.94% compared to the existing state-of-the-art method.
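Although the paper's exact loss is not reproduced here, a flip-robust objective of the kind described can be sketched as a reconstruction loss that is also enforced on horizontally flipped training pairs. The PyTorch snippet below is a minimal illustration under that assumption; the name `sr_model` and the equal 0.5 weighting of the two terms are hypothetical.

```python
# Minimal sketch of a flip-robust SR training objective (illustrative only;
# the paper's exact loss definition is not reproduced here).
import torch
import torch.nn.functional as F

def flip_robust_loss(sr_model, lr, hr):
    """L1 reconstruction loss averaged over the original and horizontally
    flipped input/target pairs, so training is consistent under flips."""
    sr = sr_model(lr)
    loss_plain = F.l1_loss(sr, hr)

    lr_flip = torch.flip(lr, dims=[-1])   # horizontal flip of the LR input
    hr_flip = torch.flip(hr, dims=[-1])   # matching flip of the HR target
    sr_flip = sr_model(lr_flip)
    loss_flip = F.l1_loss(sr_flip, hr_flip)

    return 0.5 * (loss_plain + loss_flip)  # assumed equal weighting
```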

2020 ◽  
Vol 8 (4) ◽  
pp. 304-310
Author(s):  
Windra Swastika ◽  
Ekky Rino Fajar Sakti ◽  
Mochamad Subianto

Low-resolution images can be reconstructed into high-resolution images using the Super-Resolution Convolutional Neural Network (SRCNN) algorithm. This study aims to improve the recognition accuracy of vehicle license plate numbers by generating high-resolution vehicle images with the SRCNN. Recognition is carried out with two character recognition methods: Tesseract OCR and SPNet. The training data for the SRCNN uses the DIV2K dataset consisting of 900 images, while the training data for character recognition uses the Chars74K dataset. The high-resolution images constructed using the SRCNN increase the average accuracy of vehicle license plate number recognition by 16.9% using Tesseract and 13.8% using SPNet.
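As an illustration of such a pipeline (not the authors' code), an SRCNN-style model can be applied to a bicubically upscaled plate crop before handing the result to Tesseract. The three-layer structure follows the original SRCNN design, while the exact filter sizes, the grayscale input, and the `--psm 7` OCR setting are assumptions.

```python
# Illustrative SRCNN + OCR pipeline sketch; parameters are assumed, not the paper's.
import cv2
import numpy as np
import pytesseract
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Classic three-layer SRCNN: patch extraction, non-linear mapping, reconstruction."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 1),           nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),
        )
    def forward(self, x):
        return self.body(x)

def recognize_plate(img_gray, model, scale=2):
    # SRCNN operates on a bicubically upscaled input of the target size.
    up = cv2.resize(img_gray, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    x = torch.from_numpy(up).float().div(255).unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        sr = model(x).clamp(0, 1).squeeze().numpy()
    sr_img = (sr * 255).astype(np.uint8)
    return pytesseract.image_to_string(sr_img, config="--psm 7")  # single text line
```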


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 13429-13438
Author(s):  
Xiaomin Yang ◽  
Wei Wu ◽  
Kai Liu ◽  
Pyoung Won Kim ◽  
Arun Kumar Sangaiah ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2914
Author(s):  
Hubert Michalak ◽  
Krzysztof Okarma

Image binarization is one of the key operations for reducing the amount of information used in further analysis of image data, and it significantly influences the final results. In some applications, where well-illuminated, high-contrast images can easily be captured, even simple global thresholding may be sufficient; however, there are more challenging cases, e.g., the analysis of natural images or of images with quality degradations, such as historical document images. Considering the variety of image binarization methods, as well as their different applications and image types, one cannot expect a single universal thresholding method to be the best solution for all images. Nevertheless, since one of the most common operations preceded by binarization is Optical Character Recognition (OCR), which may also be applied to non-uniformly illuminated images captured by camera sensors mounted in mobile phones, the development of better binarization methods that maximize OCR accuracy is still expected. Therefore, in this paper, the idea of using robust combined measures is presented, making it possible to bring together the advantages of various methods, including some recently proposed approaches based on entropy filtering and a multi-layered stack of regions. The experimental results, obtained for a dataset of 176 non-uniformly illuminated document images, referred to as the WEZUT OCR Dataset, confirm the validity and usefulness of the proposed approach, leading to a significant increase in recognition accuracy.
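For readers unfamiliar with the baseline operations being combined, the contrast between a single global threshold and a locally adaptive one can be sketched with OpenCV as below; the parameters are illustrative, and the paper's combined-measure method itself is not reproduced.

```python
# Global vs. locally adaptive binarization for non-uniformly illuminated documents
# (illustrative OpenCV sketch; block size and offset are assumed values).
import cv2

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)

# Global Otsu threshold: one cut-off for the whole image, often fails under uneven lighting.
_, global_bin = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Adaptive threshold: computed per 35x35 neighbourhood, more robust to illumination
# gradients and therefore usually a better OCR preprocessing step for such images.
adaptive_bin = cv2.adaptiveThreshold(
    img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 35, 10)
```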


2011 ◽  
Vol 219-220 ◽  
pp. 1411-1414
Author(s):  
En Wei Zheng ◽  
Xian Jun Wang

In this paper, we propose a new super-resolution (SR) reconstruction method to handle license plate numbers of vehicles in real traffic videos. Recently, regularization-based SR reconstruction schemes have been demonstrated to be effective, since SR reconstruction is an ill-posed problem. Working within this promising framework, the residual data (RD) term is weighted according to the differences among the observed low-resolution (LR) images in the SR reconstruction model. Moreover, the L1 norm is used to measure the RD term in order to improve the robustness of our method. Experiments show that the proposed method improves the subjective visual quality of the reconstructed high-resolution images.
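In common regularization-based SR formulations of this kind, the objective can be written roughly as follows; the notation is a generic assumption following standard multi-frame SR models, not the paper's exact formulation.

```latex
\hat{\mathbf{X}} = \arg\min_{\mathbf{X}} \sum_{k=1}^{K} w_k \,\bigl\| \mathbf{D}\mathbf{H}\mathbf{F}_k \mathbf{X} - \mathbf{Y}_k \bigr\|_1 + \lambda \,\Gamma(\mathbf{X})
```

Here the Y_k are the observed LR frames, F_k, H, and D model motion, blur, and downsampling, the weights w_k scale each residual data term according to the differences among the LR observations, and Γ(X) is the regularization term with strength λ.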


2021 ◽  
Vol 11 (14) ◽  
pp. 6292
Author(s):  
Tae-Gu Kim ◽  
Byoung-Ju Yun ◽  
Tae-Hun Kim ◽  
Jae-Young Lee ◽  
Kil-Houm Park ◽  
...  

In this study, we propose an algorithm that solves the problems that occur when a vehicle license plate is recognized through closed-circuit television (CCTV) using a deep learning model trained on a general database. Commonly used deep learning models suffer from low recognition rates on tilted and low-resolution images, as they are trained on images acquired from the front of the license plate. Furthermore, vehicle images acquired by CCTV suffer from limited resolution and perspective distortion, which make it difficult to apply such models directly. To improve the recognition rate, this paper proposes an algorithm that combines a super-resolution generative adversarial network (SRGAN) model with a perspective distortion correction algorithm. The accuracy of the proposed algorithm was verified with the YOLO v2 character recognition algorithm, and the recognition rate on vehicle license plate images improved by 8.8% compared to the original images.
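The perspective correction step can be illustrated with a standard homography-based rectification in OpenCV; this is a hedged sketch, not the paper's implementation, and the corner detection and output plate size are assumed.

```python
# Homography-based rectification of a tilted plate (illustrative sketch only).
import cv2
import numpy as np

def rectify_plate(img, corners, out_w=240, out_h=80):
    """corners: the four plate corners detected in the CCTV frame, ordered
    top-left, top-right, bottom-right, bottom-left (detection not shown)."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)      # 3x3 homography
    return cv2.warpPerspective(img, H, (out_w, out_h))
```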


2021 ◽  
Vol 11 (3) ◽  
pp. 1092
Author(s):  
Seonjae Kim ◽  
Dongsan Jun ◽  
Byung-Gyu Kim ◽  
Hunjoo Lee ◽  
Eunjun Rhee

Many studies in the area of super-resolution seek to enhance a low-resolution image into a high-resolution image. As deep learning technologies have recently shown impressive results in the image interpolation and restoration fields, recent studies are focusing on convolutional neural network (CNN)-based super-resolution schemes to surpass conventional pixel-wise interpolation methods. In this paper, we propose two lightweight neural networks with a hybrid residual and dense connection structure to improve the super-resolution performance. In order to design the proposed networks, we extracted training images from the DIVerse 2K (DIV2K) image dataset and investigated the trade-off between quality enhancement performance and network complexity under the proposed methods. The experimental results show that the proposed methods can significantly reduce both the inference time and the memory required to store parameters and intermediate feature maps, while maintaining image quality similar to that of previous methods.
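A hybrid residual-and-dense building block of the general kind described can be sketched in PyTorch as follows; the channel width, growth rate, and layer count are assumptions rather than the published configuration.

```python
# Schematic residual-dense block: dense concatenation of intermediate features,
# 1x1 fusion, and a local residual connection (illustrative sizes only).
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=32, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            in_ch += growth                      # dense connections widen the input
        self.fuse = nn.Conv2d(in_ch, channels, 1)  # 1x1 fusion back to base width

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual connection
```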


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3351
Author(s):  
Yooho Lee ◽  
Dongsan Jun ◽  
Byung-Gyu Kim ◽  
Hunjoo Lee

Super-resolution (SR) enables the generation of a high-resolution (HR) image from one or more low-resolution (LR) images. Since a variety of CNN models have recently been studied in computer vision, these approaches have been combined with SR to provide higher-quality image restoration. In this paper, we propose a lightweight CNN-based SR method, named the multi-scale channel dense network (MCDN). In order to design the proposed network, we extracted the training images from the DIVerse 2K (DIV2K) dataset and investigated the trade-off between SR accuracy and network complexity. The experimental results show that the proposed method can significantly reduce the network complexity, such as the number of network parameters and total memory capacity, while achieving slightly better or similar perceptual quality compared to previous methods.
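As a rough, purely illustrative sketch of what a multi-scale feature block might look like (the published MCDN architecture may differ), parallel 3×3 and 5×5 branches can be fused by a 1×1 convolution with a residual connection:

```python
# Illustrative multi-scale block: two receptive-field branches fused by 1x1 conv.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)  # 3x3 branch
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)  # 5x5 branch
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.act(self.branch3(x)), self.act(self.branch5(x))], dim=1)
        return x + self.fuse(y)   # residual connection around the fused branches
```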

