Indonesian Plate Number Identification Using YOLACT and Mobilenetv2 in the Parking Management System

2021 ◽  
Vol 9 (1) ◽  
pp. 69
Author(s):  
I Kadek Gunawan ◽  
I Putu Agung Bayupati ◽  
Kadek Suar Wibawa ◽  
I Made Sukarsa ◽  
Laurensius Adi Kurniawan

A vehicle registration plate serves as the vehicle's identity. In recent years, technology to identify plate numbers automatically, known as Automatic License Plate Recognition (ALPR), has grown steadily. A Convolutional Neural Network and YOLACT are used to recognize plate numbers from a video. The recognition process consists of three stages. The first stage locates the number plate area in a video frame using YOLACT. The second stage separates each character inside the plate number using morphological operations, horizontal projection, and topological structural analysis. The third stage recognizes each character candidate using the CNN MobileNetV2. To reduce computation time, frame sampling is performed, so that only some of the video's frames are processed. This experimental study uses the frame sampling interval, YOLACT epochs, MobileNetV2 epochs, and the validation data ratio as parameters. The best result, with 250 ms frame sampling, reduces computation time by up to 78%, while accuracy is determined by the MobileNetV2 model trained for 100 epochs with a validation split ratio of 0.1, which yields an average accuracy of 83.33%. Frame sampling can reduce computation time; however, higher sampling intervals cause the system to fail to obtain the plate region.
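The frame-sampling idea in this abstract can be sketched as follows; a minimal illustration assuming a fixed frame rate and a sampling interval in milliseconds (the function and parameter names are hypothetical, not from the paper):

```python
def sampled_frame_indices(total_frames, fps, interval_ms):
    """Indices of the frames kept when sampling one frame per interval.

    Running detection on every frame is expensive; keeping one frame per
    interval (e.g. 250 ms) cuts computation roughly in proportion to the
    number of frames skipped.
    """
    step = max(1, round(fps * interval_ms / 1000))
    return list(range(0, total_frames, step))

# A 30 fps clip sampled every 250 ms keeps one frame in every 8.
indices = sampled_frame_indices(total_frames=300, fps=30, interval_ms=250)
```

The trade-off noted in the abstract follows directly: a larger interval skips more frames, so a plate that is visible only briefly may fall entirely between sampled frames.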

Author(s):  
Abd Gani S. F. ◽  
Miskon M. F ◽  
Hamzah R. A ◽  
Mohamood N ◽  
...  

Automatic Number Plate Recognition (ANPR) combines electronic hardware with complex computer vision algorithms to recognize the characters on vehicle license plates. Many researchers have proposed and implemented ANPR for applications such as law enforcement and security, access control, border access, tracking stolen vehicles, detecting traffic violations, and parking management systems. This paper discusses a live-video ANPR system using a CNN, developed on an Android smartphone with an embedded camera of limited resolution and limited processing power, based on Malaysian license plate standards. In an ideal outdoor environment with good lighting and a direct or slightly skewed camera angle, recognition works perfectly with a computational time of 0.635 seconds. However, performance is degraded by poor lighting, extremely skewed license plate angles, and fast vehicle movement.


2018 ◽  
Vol 7 (2.7) ◽  
pp. 1008
Author(s):  
M Venkata Srinu ◽  
Venkateswara Rao Morla ◽  
Kali Vara Prasad Baditha ◽  
Vara Kumari. S ◽  
Srinivas Maddimsetti

License plate recognition is an essential task in applications such as urban vehicle management, intelligent transportation systems, traffic surveillance, and parking management systems. In this work, we acquire images using a mobile app and recognize license plate details with the proposed image processing model. The recognized details are displayed on a customized website. The proposed model applies adaptive thresholding to images with a resolution of 1280 × 960. Connected component analysis using bounding boxes is then performed on the thresholded image. The desired plate region is highlighted by creating a region of interest around the maximum-magnitude row of the image based on pixel values; statistical and logical operations are then used to extract the candidate region. After obtaining the candidate region, the license plate number is recognized using template matching. The complete customer details are displayed on the customized website, which is linked to the database via the plate number. The computation time of the proposed method is less than that of existing methods.
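The template-matching step can be illustrated with a normalized cross-correlation over binarized character glyphs; a simplified sketch, not the authors' implementation (the `templates` dictionary, array shapes, and toy glyphs are assumptions):

```python
import numpy as np

def match_character(glyph, templates):
    """Return the template label with the highest normalized correlation."""
    g = (glyph - glyph.mean()) / (glyph.std() + 1e-9)
    best_label, best_score = None, -np.inf
    for label, tmpl in templates.items():
        t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
        score = float((g * t).mean())  # mean of elementwise products of z-scores
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy 3x3 glyphs: a vertical bar ("I") and a ring ("O").
bar = np.array([[0, 1, 0]] * 3, dtype=float)
ring = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
label = match_character(bar, {"I": bar, "O": ring})  # → "I"
```

In practice each segmented character would be resized to the template resolution before matching; the scoring rule above is one common choice among several.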


Author(s):  
Asha Singh ◽  
Prasanth Vaidya

Using image processing, the automated parking management system (APMS) recognizes license plate numbers for efficient management of vehicle parking and billing. It is an independent real-time system that reduces the number of people needed in parking areas. The main aim of the system is automated payment collection. The APMS extracts and recognizes license plate numbers from vehicles; the captured image is processed and used to generate an electronic bill. Parking lots generally involve heavy manual labor; this system reduces labor cost and enhances the performance of the APMS. The system comprises license plate extraction, character segmentation, and character recognition. Proper pre-processing is performed before extracting the license plate, and the system also records the entry and exit times of the vehicle and finally generates the electronic bill.
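The billing step described above can be sketched as a simple fee computation from the recorded entry and exit times; a minimal illustration in which the hourly rate and the bill-by-started-hour rule are assumptions, not details from the paper:

```python
import math
from datetime import datetime

def parking_fee(entry, exit_time, rate_per_hour=2.0):
    """Charge one full rate for each started hour between entry and exit."""
    hours = math.ceil((exit_time - entry).total_seconds() / 3600)
    return max(hours, 1) * rate_per_hour

# A 1.5 h stay is billed as 2 started hours.
fee = parking_fee(datetime(2021, 1, 1, 10, 0), datetime(2021, 1, 1, 11, 30))
```

The recognized plate number would serve as the key linking the entry record to the exit record before this computation runs.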


2022 ◽  
Vol 16 (1) ◽  
pp. 0-0

A secure and efficient authentication mechanism is a major concern in cloud computing because data is shared between the cloud server and the user over the internet. This paper proposes an efficient Hashing, Encryption, and Chebyshev (HEC)-based authentication scheme to secure data communication. Formal and informal security analyses demonstrate that the proposed HEC-based authentication approach provides data security in the cloud more efficiently. The approach addresses the security issues and ensures privacy and data security for the cloud user. Moreover, it makes the system more robust and secure and has been verified under multiple scenarios. The proposed approach also requires less computation time and memory than existing authentication techniques: for a 100 KB data size, it takes 26 ms and 1878 bytes, respectively.
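Chebyshev-based authentication schemes typically rest on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x), which lets two parties holding secret degrees r and s derive the same shared value from a public x. A minimal sketch of the polynomial evaluation; this illustrates only the mathematical building block, not the paper's full HEC protocol:

```python
def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) via the recurrence
    T_n(x) = 2*x*T_{n-1}(x) - T_{n-2}(x), with T_0 = 1, T_1 = x.
    """
    if n == 0:
        return 1.0
    if n == 1:
        return x
    a, b = 1.0, x
    for _ in range(n - 1):
        a, b = b, 2 * x * b - a
    return b

# Semigroup property: composing T_2 after T_3 equals T_6 directly,
# so both parties reach the same shared value.
shared = chebyshev(2, chebyshev(3, 0.4))
```

Practical schemes evaluate an extended variant over a large interval or a finite field, since on [-1, 1] the plain map is vulnerable to known attacks.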


Author(s):  
Srinivasan A ◽  
Sudha S

One of the main causes of blindness is diabetic retinopathy (DR), which can affect people of any age. Nowadays both young and old people are affected by diabetes, and diabetes is the main cause of DR. Hence, an automated system with good accuracy and low computation time is needed to diagnose and treat DR; such a system can simplify the work of ophthalmologists. The objective is to present an overview of recent work on detecting and segmenting the various lesions of DR. Papers were categorized based on the diagnostic tools and the methods used for detecting early- and advanced-stage lesions. The early lesions of DR are microaneurysms, hemorrhages, exudates, and cotton wool spots; in the advanced stage, new and fragile blood vessels can grow. Results have been evaluated in terms of sensitivity, specificity, accuracy, and the receiver operating characteristic curve. This paper analyzes the steps and algorithms used recently for the detection and classification of DR lesions. Performances are compared in terms of sensitivity, specificity, area under the curve, and accuracy. Suggestions, future work, and areas to be improved are also discussed.
Keywords: Diabetic retinopathy, Image processing, Morphological operations, Neural network, Fuzzy logic.


2010 ◽  
Vol 3 (6) ◽  
pp. 1555-1568 ◽  
Author(s):  
B. Mijling ◽  
O. N. E. Tuinder ◽  
R. F. van Oss ◽  
R. J. van der A

Abstract. The Ozone Profile Algorithm (OPERA), developed at KNMI, retrieves the vertical ozone distribution from nadir spectral satellite measurements of backscattered sunlight in the ultraviolet and visible wavelength range. To produce consistent global datasets the algorithm needs good global performance, while short computation time facilitates its use in near-real-time applications. To test the global performance of the algorithm we look at the convergence behaviour as a diagnostic tool for the ozone profile retrievals from the GOME instrument (on board ERS-2) for February and October 1998. In this way, we uncover different classes of retrieval problems, related to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, and the intertropical convergence zone. The influence of the first guess and of the external input data, including the ozone cross-sections and the ozone climatologies, on the retrieval performance is also investigated. By using a priori ozone profiles selected on the expected total ozone column, retrieval problems due to anomalous ozone distributions (such as in the ozone hole) can be avoided. With the algorithm adaptations the convergence statistics improve considerably, not only increasing the number of successful retrievals but also reducing the average computation time, owing to fewer iteration steps per retrieval. For February 1998, non-convergence was brought down from 10.7% to 2.1%, while the mean number of iteration steps (which dominates the computational time) dropped 26%, from 5.11 to 3.79.


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. V1-V9 ◽  
Author(s):  
Zhonghuan Chen ◽  
Sergey Fomel ◽  
Wenkai Lu

When plane-wave destruction (PWD) is implemented by implicit finite differences, the local slope is estimated by an iterative algorithm. We propose an analytical estimator of the local slope that is based on convergence analysis of the iterative algorithm. Using the analytical estimator, we design a noniterative method to estimate slopes by a three-point PWD filter. Compared with the iterative estimation, the proposed method needs only one regularization step, which reduces computation time significantly. With directional decoupling of the plane-wave filter, the proposed algorithm is also applicable to 3D slope estimation. We present synthetic and field experiments to demonstrate that the proposed algorithm can yield a correct estimation result with shorter computational time.


Author(s):  
Jérôme Limido ◽  
Mohamed Trabia ◽  
Shawoon Roy ◽  
Brendan O’Toole ◽  
Richard Jennings ◽  
...  

A series of experiments was performed to study plastic deformation of metallic plates under hypervelocity impact at the University of Nevada, Las Vegas (UNLV) Center for Materials and Structures using a two-stage light gas gun. In these experiments, cylindrical Lexan projectiles were fired at A36 steel target plates at velocities ranging from 4.5 to 6.0 km/s. Experiments were designed to produce a front-side impact crater and a permanent bulging deformation on the back surface of the target without complete perforation of the plates. Free-surface velocities on the back surface of the target plate were measured using the newly developed Multiplexed Photonic Doppler Velocimetry (MPDV) system. To simulate the experiments, a Lagrangian-based smoothed particle hydrodynamics (SPH) method is typically used to avoid the problems associated with mesh instability. Despite their intrinsic capability for simulating violent impacts, particle methods have a few drawbacks that may considerably affect their accuracy and performance, including lack of interpolation completeness, tensile instability, and spurious pressures. Moreover, computation time is a strong limitation that often necessitates the use of reduced 2D axisymmetric models. To address these shortcomings, the IMPETUS Afea Solver® implements a newly developed SPH formulation that solves the problems of spurious pressures and tensile instability. The algorithm takes full advantage of GPU technology for parallelization and opens the door to running large 3D models (20,000,000 particles). The combination of accurate algorithms and drastically reduced computation time now makes it possible to run a high-fidelity hypervelocity impact model.


Jurnal INKOM ◽  
2014 ◽  
Vol 8 (1) ◽  
pp. 29 ◽  
Author(s):  
Arnida Lailatul Latifah ◽  
Adi Nurhadiyatna

This paper proposes parallel algorithms for the precipitation input of flood modelling, specifically the spatial rainfall distribution. As an important input to flood modelling, the spatial distribution of rainfall is always needed as a precondition of the model. Two interpolation methods, inverse distance weighting (IDW) and ordinary kriging (OK), are discussed. Both are developed as parallel algorithms in order to reduce computation time. To measure computational efficiency, the performance of the parallel algorithms is compared to that of the serial algorithms for both methods. Findings indicate that: (1) the computation time of the OK algorithm is up to 23% longer than that of IDW; (2) the computation time of the OK and IDW algorithms increases linearly with the number of cells/points; (3) the computation time of the parallel algorithms for both methods decays exponentially with the number of processors, with a decay factor of 0.52 for IDW and 0.53 for OK; (4) the parallel algorithms achieve near-ideal speed-up.
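The IDW method compared in the paper admits a direct sketch: each target cell is an independent weighted average of the known stations, which is why the work parallelizes cleanly by splitting target cells across processors. A minimal serial version (array shapes and the power parameter are assumptions, not the paper's exact formulation):

```python
import numpy as np

def idw(xy_known, values, xy_targets, power=2.0):
    """Inverse distance weighting: each target is a weighted mean of the
    known values with weights 1/d**power. Targets are independent of one
    another, so a parallel version simply splits xy_targets across workers.
    """
    # Pairwise distances, shape (n_targets, n_known), via broadcasting.
    d = np.linalg.norm(xy_targets[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# A target midway between two stations gets the average of their values.
est = idw(np.array([[0.0, 0.0], [2.0, 0.0]]),
          np.array([0.0, 2.0]),
          np.array([[1.0, 0.0]]))  # → [1.0]
```

Ordinary kriging replaces the fixed 1/d**power weights with weights obtained by solving a linear system per target, which is why its computation time in the paper runs up to 23% longer than IDW's.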

