A Fast Thinning Algorithm of Square Hmong Character Handwriting Using Template Matching Mechanism

Author(s):  
Liangbin Cao ◽  
Xiulai Song ◽  
Yanru Zhong ◽  
Xiaonan Luo ◽  
Ji Li

2020 ◽  
Vol 06 (1) ◽  
pp. 12-21
Author(s):  
Saif Ur Rehman ◽  
Moiz Ahmad ◽  
Asif Nawaz ◽  
Tariq Ali

Introduction: Recognition of Vehicle License Number Plates (VLNP) is an important task. It is valuable in numerous applications, such as entrance admission, security, parking control, road traffic control, and speed control. Automatic Number Plate Recognition (ANPR) is a system in which an image of the vehicle is captured through high-definition cameras. The image is then used to detect the vehicle's type (car, van, bus, truck, bike, etc.), its color (white, black, blue, etc.), and its model (Toyota Corolla, Honda Civic, etc.). Furthermore, the image is processed using segmentation and OCR techniques to obtain the vehicle registration number in the form of characters. Once the required information is extracted from the VLNP, it is sent to the control center for further processing.

Aim: ANPR is a challenging problem, especially when the number plates vary in size, number of lines, font, and background. Different ANPR systems have been suggested for different countries, including Iran, Malaysia, and France; however, only limited work exists for Pakistani vehicles. Therefore, in this study, we aim to propose a novel ANPR framework for Pakistani VLNP recognition.

Methods: The proposed ANPR system operates in three steps: (i) Number Plate Localization (NPL); (ii) Character Segmentation (CS); and (iii) Optical Character Recognition (OCR) based on a template-matching mechanism. The proposed ANPR approach scans the number plate and instantly checks it against database records of vehicles of interest. It can further extract real-time information about the driver and vehicle, for instance, the status of the driver's license and whether the vehicle's token taxes have been paid.

Results: The proposed ANPR system has been evaluated on several real-time images covering the various number-plate formats used in Pakistan. In addition, it has been compared with existing ANPR systems proposed specifically for Pakistani license plates.

Conclusion: The proposed ANPR model offers both time and cost savings for law enforcement agencies and private organizations seeking to improve homeland security. The range of detectable vehicle types (trucks, buses, scooters, bikes) still needs to be expanded. The technology could be further improved to detect a crashed vehicle's number plate after an accident and alert the nearest hospital and police station, thus saving lives.
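
The abstract does not include an implementation, so the following is a minimal Python/OpenCV sketch of the three-stage pipeline it describes (plate localization, character segmentation, template-matching OCR). The contour heuristics, the templates/ directory, and the file names are illustrative assumptions, not details from the study.

```python
import cv2
from pathlib import Path

def localize_plate(bgr):
    """Crude plate localization: pick the largest contour whose bounding box
    has a plate-like aspect ratio (heuristic, not the authors' exact method)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w * h > 2000:      # aspect/area filter
            if best is None or w * h > best[2] * best[3]:
                best = (x, y, w, h)
    if best is None:
        return None
    x, y, w, h = best
    return gray[y:y + h, x:x + w]

def segment_characters(plate_gray):
    """Binarize the plate and return per-character crops, left to right."""
    _, binary = cv2.threshold(plate_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted(cv2.boundingRect(c) for c in contours)
    return [binary[y:y + h, x:x + w] for x, y, w, h in boxes
            if h > 0.4 * plate_gray.shape[0]]                 # drop small noise blobs

def recognize(char_img, templates):
    """Template-matching OCR: resize the character crop to each template's size
    and pick the label with the highest normalized cross-correlation score."""
    best_label, best_score = "?", -1.0
    for label, tmpl in templates.items():
        resized = cv2.resize(char_img, (tmpl.shape[1], tmpl.shape[0]))
        score = cv2.matchTemplate(resized, tmpl, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical usage: templates/ holds one white-on-black binary image per character.
templates = {p.stem: cv2.imread(str(p), cv2.IMREAD_GRAYSCALE)
             for p in Path("templates").glob("*.png")}
plate = localize_plate(cv2.imread("vehicle.jpg"))
if plate is not None:
    print("".join(recognize(ch, templates) for ch in segment_characters(plate)))
```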


2020 ◽  
Vol 8 (4) ◽  
pp. 393
Author(s):  
I Made Pegi Kurnia Amerta ◽  
I Gede Arta Wibawa

A letter-writing game is an attractive learning medium, but each person's handwriting differs, so a data classification method is needed to match the test data against templates of the alphabet letters. This work uses template matching with cross-correlation for classification. Before classification, preprocessing is performed in the form of resizing and thresholding to produce binary images, and a thinning process is applied to thin the letters using the Stentiford algorithm. Accuracy testing yielded an average value of 70.38%, with the characters H, K, M, and Y consistently misclassified.
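
As a rough illustration of the classification step described above, the NumPy sketch below performs the resize-and-threshold preprocessing and a normalized cross-correlation match against letter templates; the image size, threshold value, and template dictionary are assumptions, and the Stentiford thinning stage is omitted.

```python
import numpy as np

def preprocess(gray, size=32, thresh=128):
    """Nearest-neighbour resize and threshold to a binary image in {0, 1}."""
    rows = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, size).astype(int)
    resized = gray[np.ix_(rows, cols)]
    return (resized < thresh).astype(np.float64)   # ink = 1, background = 0

def cross_correlation(a, b):
    """Normalized cross-correlation between two equal-sized binary images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def classify(sample, templates):
    """Return the alphabet label whose template correlates best with the sample.
    Templates are assumed to be preprocessed to the same size as the sample."""
    return max(templates, key=lambda label: cross_correlation(sample, templates[label]))
```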


1992 ◽  
Vol 03 (04) ◽  
pp. 395-404 ◽  
Author(s):  
REI-YAO WU ◽  
WEN-HSIANG TSAI

A single-layer recurrent neural network is proposed to perform thinning of binary images. This network iteratively removes the contour points of an object shape by template matching. The set of templates is specially designed for a one-pass parallel thinning algorithm, and the proposed neural network produces the same results as that algorithm. Neurons in the network compute a sigma-pi function to collect their inputs; to obtain this function, the templates used in the algorithm are transformed into equivalent Boolean expressions. After the neural network converges, a perfectly 8-connected skeleton is derived. Good experimental results show the feasibility of the proposed approach.
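
To make the sigma-pi formulation concrete, the sketch below encodes each template as a product of neighbour literals (x for a required 1, 1 - x for a required 0, nothing for a don't-care) and sums the products over the template set; the two templates listed are placeholders, not the paper's actual one-pass thinning templates.

```python
import numpy as np

# A template assigns 1, 0, or None ("don't care") to each of the 8 neighbours,
# ordered clockwise from the top-left: p1 p2 p3 / p8 . p4 / p7 p6 p5.
# These two entries are placeholders, not the templates of the cited paper.
TEMPLATES = [
    (0, 0, 0, None, 1, 1, 1, None),
    (None, 0, 0, 0, None, 1, 1, 1),
]

def sigma_pi_delete(neigh):
    """Sigma-pi deletion signal for one pixel: sum over templates of the
    product of neighbour literals (x for a required 1, 1 - x for a required 0)."""
    total = 0.0
    for tmpl in TEMPLATES:
        prod = 1.0
        for want, x in zip(tmpl, neigh):
            if want is not None:
                prod *= x if want == 1 else (1.0 - x)
        total += prod
    return total  # > 0 means at least one template matched

def thin_once(img):
    """One parallel pass: every object pixel whose neighbourhood matches a
    template is removed simultaneously, mirroring the recurrent network update."""
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            if img[r, c] == 1:
                neigh = (img[r-1, c-1], img[r-1, c], img[r-1, c+1], img[r, c+1],
                         img[r+1, c+1], img[r+1, c], img[r+1, c-1], img[r, c-1])
                if sigma_pi_delete(neigh) > 0:
                    out[r, c] = 0
    return out
```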


2002 ◽  
Vol 205 (4) ◽  
pp. 549-557 ◽  
Author(s):  
Stefan Schuster ◽  
Silke Amtsfeld

SUMMARY
Several insects use template-matching systems to recognize objects or environmental landmarks by comparing actual and stored retinal images. Such systems are not viewpoint-invariant and are useful only when the location at which an image was stored and the location from which it is later retrieved coincide. Here we report that a vertebrate, the weakly electric fish Gnathonemus petersii, appears to use template matching to recognize visual patterns that it had previously viewed from a fixed vantage point. This fish is nocturnal and uses its electrical sense to find its way in the dark, yet it has functional vision that appears to be well adapted to dim light conditions. We were able to train three fish in a two-alternative forced-choice procedure to discriminate a rewarded from an unrewarded visual pattern. From its daytime shelter, each fish viewed two visual patterns placed at a set distance behind a transparent Plexiglas screen that closed the shelter. When the screen was lifted, the fish swam towards one of the patterns to receive a food reward or to be directed back into its shelter. Successful pattern discrimination was limited to low ambient light intensities of approximately 10 lx and to pattern sizes subtending a visual angle greater than 3°. To analyze the features used by the fish to discriminate the training patterns, we performed transfer tests in which the training patterns were replaced by other patterns. The results of all such transfer tests are best explained by a template-matching mechanism in which the fish stores the view of the rewarded training pattern and, of two other patterns, chooses the one whose retinal appearance best matches the stored view.
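
A schematic way to express the proposed decision rule (not the authors' analysis code) is to store the retinal view of the rewarded pattern and, on each trial, choose the candidate whose current view deviates least from it; the mean-squared-difference measure below is an assumption.

```python
import numpy as np

def mismatch(stored_view, candidate_view):
    """Pixel-wise mismatch between the stored retinal image and an equally
    sized candidate view (mean squared difference)."""
    return np.mean((stored_view.astype(float) - candidate_view.astype(float)) ** 2)

def choose(stored_view, view_a, view_b):
    """Template-matching decision rule: pick the pattern whose current retinal
    appearance best matches the stored view of the rewarded pattern."""
    return "A" if mismatch(stored_view, view_a) < mismatch(stored_view, view_b) else "B"
```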

