character segmentation
Recently Published Documents

Total documents: 398 (last five years: 93)
H-index: 21 (last five years: 3)

2022 · Vol. 12 (2), pp. 853
Author(s): Cheng-Jian Lin, Yu-Cheng Liu, Chin-Ling Lee

In this study, an automatic receipt recognition system (ARRS) is developed. First, a receipt is scanned and converted into a high-resolution image. Receipt characters are automatically placed into two categories according to the receipt characteristics: printed and handwritten characters. Images of receipts with these characters are preprocessed separately. For handwritten characters, template matching and the fixed features of the receipts are used for text positioning, projection is applied for character segmentation, and a convolutional neural network is used for character recognition. For printed characters, a modified You Only Look Once (version 4) model (YOLOv4-s) performs precise text positioning and character recognition. The proposed YOLOv4-s model reduces downsampling, thereby enhancing small-object recognition. Finally, the system produces recognition results in a tax declaration format, which can be uploaded to a tax declaration system. Experimental results revealed that the recognition accuracy of the proposed system was 80.93% for handwritten characters. Moreover, the YOLOv4-s model had a 99.39% accuracy rate for printed characters; only 33 characters were misjudged. The recognition accuracy of the YOLOv4-s model was 20.57% higher than that of the traditional YOLOv4 model. Therefore, the proposed ARRS can considerably improve the efficiency of tax declaration, reduce labor costs, and simplify operating procedures.
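
A minimal sketch of the projection-based character segmentation step described above, assuming a binarized grayscale line image. The file name, threshold choice, and minimum run width are illustrative assumptions, not the paper's published parameters.

```python
import cv2
import numpy as np

def segment_characters(line_img, min_width=3):
    """Split a grayscale text line into character crops via vertical projection."""
    # Binarize: dark ink on light paper -> white foreground on black background.
    _, binary = cv2.threshold(line_img, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Column-wise ink counts; zero columns (gaps) separate characters.
    profile = binary.sum(axis=0)
    chars, start = [], None
    for x, v in enumerate(profile):
        if v > 0 and start is None:
            start = x                      # entering an ink run
        elif v == 0 and start is not None:
            if x - start >= min_width:     # ignore specks narrower than min_width
                chars.append(binary[:, start:x])
            start = None
    if start is not None:
        chars.append(binary[:, start:])
    return chars

line = cv2.imread("receipt_line.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
glyphs = segment_characters(line)
```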


Author(s): Saurabh Ravindra Nikam

Abstract: Segmentation is one of the most important processes in determining the success of a character recognition method. Segmentation decomposes an image of a sequence of characters into sub-images of individual symbols by segmenting lines and words; in segmentation, the image is partitioned into multiple parts. Segmenting handwritten words into characters is a critical task because of the complexity of structural features and the variation in writing styles. Without segmenting touching characters, it is difficult to recognize the individual characters, hence the need for segmentation of touching characters in a word. Here we consider Marathi words and Marathi numbers for segmentation. The algorithm segments lines and then characters, and the segmented characters are stored in a result variable. It first separates the lines and then separates the characters from the input image; this procedure is repeated until the end of the file. Keywords: Image Segmentation, Handwritten Marathi Characters, Marathi Numbers, OCR.
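
The line-then-character segmentation the abstract describes can be sketched with horizontal and vertical projection profiles. Note that in Marathi (Devanagari) script the headline stroke (shirorekha) joins characters, which is exactly why touching-character segmentation is hard; this sketch covers only the non-touching case, and the file name and thresholds are assumptions.

```python
import cv2
import numpy as np

def split_on_gaps(profile, min_size=2):
    """Return (start, end) index pairs of runs where the profile is non-zero."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i
        elif v == 0 and start is not None:
            if i - start >= min_size:
                runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(profile)))
    return runs

page = cv2.imread("marathi_page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, bw = cv2.threshold(page, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

result = []  # segmented characters, line by line, as the abstract describes
for top, bottom in split_on_gaps(bw.sum(axis=1)):          # lines: row profile
    line = bw[top:bottom]
    for left, right in split_on_gaps(line.sum(axis=0)):    # chars: column profile
        result.append(line[:, left:right])
```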


Author(s): Manoj Kumar Dixit

Text detected in video frames provides highly condensed information about the content of the video and is useful for video seeking, browsing, retrieval, and understanding of video text in large video databases. In this paper, we propose a hybrid method that automatically detects, segments, and recognizes the text present in a video. Detection is done using a Laplacian method based on wavelet and color features. Segmentation of the detected text is divided into two modules: line segmentation and character segmentation. Line segmentation is done using a mathematical statistical method based on projection profile analysis, in which multiple lines of text in the video frame obtained from text detection are segmented into single lines. Character segmentation is done using Connected Component Analysis (CCA) and vertical projection profile analysis; its input is the line of text obtained from line segmentation, in which all the characters are segmented separately for recognition. Optical character recognition is performed using a template matching and correlation technique. Template matching compares an input character with a set of templates, and each comparison yields a similarity measure between the input character and a template. After all templates have been compared with the observed character image, the character's identity is assigned to the most similar template based on correlation. Eventually, the text in the video frame is detected, segmented, and passed to OCR for recognition.
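
A minimal sketch of the template-matching-with-correlation recognition step: each segmented character is compared against every template and labeled with the best-correlating one. The template dictionary, image size, and the use of OpenCV's normalized correlation are illustrative assumptions.

```python
import cv2
import numpy as np

def recognize(char_img, templates, size=(32, 32)):
    """Return the label whose template correlates best with char_img."""
    probe = cv2.resize(char_img, size).astype(np.float32)
    best_label, best_score = None, -1.0
    for label, tmpl in templates.items():
        ref = cv2.resize(tmpl, size).astype(np.float32)
        # Same-size normalized cross-correlation yields a single score.
        score = cv2.matchTemplate(probe, ref, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# templates: {"A": imgA, "B": imgB, ...} loaded from a hypothetical template set
```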


Author(s): Tarun Kumar

Automatic Number Plate Recognition (ANPR) is an image processing technique used to extract the symbols (characters and digits) embedded on the number (license) plate to identify vehicles. A large number of ANPR techniques have been proposed by various researchers in the past. Most ANPR techniques are designed for restricted conditions due to the diversity of license plate styles, environmental conditions, etc.; no single technique is suited to all conditions. In general, the ANPR technique comprises three stages: license plate detection (LPD), character segmentation, and character recognition. A wide variety of techniques exists for carrying out each step of ANPR, some scoring over others. This paper presents a state-of-the-art survey of the leading LPD techniques that exist today and summarizes their pros, cons, and limitations. Each technique is classified based on the features used at each stage of LPD. This survey should help provide future direction for the development of efficient and accurate ANPR techniques, and should assist in identifying and shortlisting the methodologies best suited to a particular problem domain.
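
The three-stage pipeline named in the survey can be expressed as a skeleton; the stage bodies are deliberately left unimplemented, since the survey compares many alternative techniques for each stage rather than prescribing one.

```python
from dataclasses import dataclass

@dataclass
class PlateResult:
    text: str
    bbox: tuple  # (x, y, w, h) of the detected plate

def detect_plate(frame):
    """Stage 1: license plate detection (LPD); returns a plate crop and bbox."""
    raise NotImplementedError("edge-, color-, texture-, or learning-based, per survey")

def segment_characters(plate_img):
    """Stage 2: split the plate crop into per-character images."""
    raise NotImplementedError

def recognize_characters(char_imgs):
    """Stage 3: classify each character image and join into the plate string."""
    raise NotImplementedError

def anpr(frame):
    plate_img, bbox = detect_plate(frame)
    chars = segment_characters(plate_img)
    return PlateResult(recognize_characters(chars), bbox)
```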


Vehicles · 2021 · Vol. 3 (4), pp. 646-660
Author(s): Mduduzi Manana, Chunling Tu, Pius Adewale Owolawi

This paper presents licence-plate recognition for identifying vehicles with similar licence plates. The method uses a modified licence-plate recognition pipeline in which licence-plate template matching replaces character segmentation and recognition. Only edge detection is used, combined with a method for calculating a line ratio to locate and extract licence plates. The extracted licence-plate templates are then compared for licence-plate matching. The results show that the method performs well under differing circumstances and is computationally cost-effective. They also show that licence-plate template matching is a reliable way of identifying similar vehicles and has a lower computational cost than character segmentation and recognition.
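
A sketch in the spirit of the paper's edge-only localization: Canny edges, contour candidates, then filtering by a width-to-height ratio. The paper's own line-ratio calculation is not reproduced here, so the ratio bounds and Canny thresholds below are assumptions.

```python
import cv2

def locate_plate(frame, ratio_range=(2.0, 6.0), min_area=1000):
    """Return a candidate licence-plate crop found by edge/ratio filtering."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for cnt in sorted(contours, key=cv2.contourArea, reverse=True):
        if cv2.contourArea(cnt) < min_area:
            break  # contours are sorted; the rest are smaller still
        x, y, w, h = cv2.boundingRect(cnt)
        # Plates are wide and short: keep boxes within the expected ratio.
        if ratio_range[0] <= w / h <= ratio_range[1]:
            return frame[y:y + h, x:x + w]
    return None

# The extracted crops are then compared directly (e.g. by correlation) for
# plate-to-plate matching, replacing per-character segmentation/recognition.
```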


2021 · Vol. 13 (1), pp. 89-99
Author(s): Marcelo Eidi Imamura, Francisco Assis da Silva, Leandro Luiz de Almeida, Danillo Roberto Pereira, Almir Olivette Artero, et al.

Brazil has a large fleet of vehicles running daily along urban roads and highways, which requires computational solutions to assist in control and management. In this work we developed an application to detect and recognize license plates in real time, with various application possibilities. The methodology has three main stages: plate detection, character segmentation, and recognition. For the detection step we used the YOLO library, which applies machine learning techniques to detect objects in real time; YOLO was trained on a dataset of plate images from different environments. In the segmentation stage, the individual characters contained in the plate were separated using image processing methods. In the last stage, character recognition was performed using two convolutional neural networks, obtaining a hit rate of 83.33%.
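
A minimal sketch of the segmentation stage using classical image processing: connected components on a binarized plate crop, sorted left to right. The size filters are illustrative assumptions; the paper does not publish its exact parameters.

```python
import cv2

def segment_plate(plate_bgr):
    """Split a detected plate crop into per-character binary images."""
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(bw)
    h_img = bw.shape[0]
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        # Keep components tall enough to be characters, not noise or the frame.
        if 0.3 * h_img < h < 0.95 * h_img and area > 50:
            boxes.append((x, y, w, h))
    boxes.sort(key=lambda b: b[0])  # left-to-right reading order
    return [bw[y:y + h, x:x + w] for x, y, w, h in boxes]
```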


2021 · Vol. 14 (4), pp. 11
Author(s): Kayode David Adedayo, Ayomide Oluwaseyi Agunloye

License plate detection and recognition are critical components of a connected intelligent transportation system, but they are underused in developing countries because of the associated costs. Existing high-accuracy license plate detection and recognition systems require Graphical Processing Units (GPUs), which may be difficult to come by in developing nations. Single-stage detectors and commercial optical character recognition engines, on the other hand, are less computationally expensive and can achieve acceptable detection and recognition accuracy without a GPU. In this work, a pretrained SSD model and a Tesseract tessdata-fast traineddata file were fine-tuned on a dataset of more than 2,000 images of vehicles with license plates. These models were combined with a unique image preprocessing algorithm for character segmentation and tested on a general-purpose personal computer with a new collection of 200 photos of automobiles with license plates. On this testing set, the plate detection system achieved a detection accuracy of 99.5% at an IoU threshold of 0.45, while the OCR engine correctly recognized all characters on 150 license plates, misread one character on 24 license plates, and misread two or more characters on 26 license plates. The detection procedure took an average of 80 milliseconds, and the character segmentation and identification stages took an average of 95 milliseconds, for an average processing time of 175 milliseconds per image, or about 6 photos per second. These results are suitable for real-time traffic applications.
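
A minimal sketch of the recognition step with Tesseract in single-line mode, assuming the pytesseract wrapper. The paper's unique preprocessing algorithm is not reproduced; the upscaling, binarization, and character whitelist below are assumptions.

```python
import cv2
import pytesseract

def read_plate(plate_bgr):
    """OCR a detected plate crop with a tessdata-fast Tesseract model."""
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    # Upscale and binarize: fast models favor clean, high-contrast input.
    gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    config = ("--psm 7 "  # treat the image as a single text line
              "-c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
    return pytesseract.image_to_string(bw, config=config).strip()
```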


Author(s): Pratik U Patil, Zaid Z, Kajol N Khot, Sikandar N, Seema G

Handwritten Mathematical Expression (HME) recognition and grading is a challenging task in the field of pattern recognition. Many researchers have already worked on HME recognition using various classifiers; in the past, the Convolutional Neural Network (CNN) has been widely used for recognizing patterns. In this paper, we propose an approach to recognize and evaluate HMEs offline, using a CNN for classification. The steps are as follows: first, the worksheet is scanned and sent to the workspace-detection module, which returns all the rectangular workspaces in the worksheet; the workspaces are then sent to the line extraction module to extract all the lines. The extracted lines are passed to the character segmentation module, which segments the characters, and the characters are classified using a deep learning model, DCCNN. Finally, the evaluation module assesses each line and draws a green or red bounding box depending on whether the line is correct.
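
A minimal stand-in for the classification stage: a small CNN over segmented symbol images. The paper's DCCNN architecture is not reproduced; the layer sizes, the 32x32 input shape, and the class count below are assumptions.

```python
import tensorflow as tf

NUM_CLASSES = 82  # hypothetical: digits, letters, and math operators

# A small CNN classifier for individual segmented symbols.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),           # one segmented symbol
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(symbol_images, labels, epochs=...)  # trained on the symbol dataset
```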

