A Prototype for Brazilian Bankcheck Recognition

Author(s):  
Luan L. Lee ◽  
Miguel G. Lizarraga ◽  
Natanael R. Gomes ◽  
Alessandro L. Koerich

This paper describes a prototype for Brazilian bankcheck recognition. The description is divided into three topics: bankcheck information extraction, digit amount recognition, and signature verification. In bankcheck information extraction, our algorithms provide signature and digit amount images free of background patterns and bankcheck printed information. In digit amount recognition, we deal with digit amount segmentation and the implementation of a complete numeral character recognition system involving image processing, feature extraction, and neural classification. In signature verification, we designed and implemented a static signature verification system suitable for banking and commercial applications. Our signature verification algorithm is capable of detecting simple, random, and skilled forgeries. The proposed automatic bankcheck recognition prototype was tested intensively on real bankcheck data as well as simulated data, yielding the following performance results: for skilled forgeries, a 4.7% equal error rate; for random forgeries, zero Type I error and 7.3% Type II error; for bankcheck numerals, a 92.7% correct recognition rate.
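
The equal error rate cited for skilled forgeries is the operating point at which the false acceptance and false rejection rates coincide. A minimal sketch of how such a point can be located from verifier scores (the score distributions below are illustrative, not the authors' data):

```python
import numpy as np

def equal_error_rate(genuine, forgery):
    """Sweep candidate thresholds and return the point where the
    false acceptance rate (FAR) and false rejection rate (FRR)
    are closest, together with that threshold."""
    thresholds = np.unique(np.concatenate([genuine, forgery]))
    far = np.array([np.mean(forgery >= t) for t in thresholds])  # forgeries accepted
    frr = np.array([np.mean(genuine < t) for t in thresholds])   # genuines rejected
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2, thresholds[i]

# Illustrative similarity scores (higher = more likely genuine);
# real inputs would come from the signature verifier.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 500)
forgery = rng.normal(0.5, 0.15, 500)
eer, threshold = equal_error_rate(genuine, forgery)
print(f"EER ~ {eer:.3f} at threshold {threshold:.3f}")
```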

Author(s):  
Binod Kumar Prasad

Purpose of the study: The purpose of this work is to present an offline Optical Character Recognition system that recognises handwritten English numerals to help automate document reading. It avoids the tedious and time-consuming manual typing otherwise needed to key important information into a computer system for long-term preservation. Methodology: This work applies curvature features of English numeral images by encoding them in terms of distance and slope. The finer local details of the images are extracted using zonal features. The feature vectors obtained by combining these features are fed to a KNN classifier. The whole work has been executed using the MatLab Image Processing Toolbox. Main Findings: The system produces an average recognition rate of 96.67% with K=1, whereas with K=3 the rate increases to 97%, with corresponding error rates of 3.33% and 3%, respectively. Among the ten numerals, some, like ‘3’ and ‘8’, show comparatively lower recognition rates because of the similarity between their structures. Applications of this study: The proposed work concerns the recognition of English numerals. The model can be applied widely to other pattern recognition tasks such as signature verification, face recognition, and character or word recognition in other languages under Natural Language Processing. Novelty/Originality of this study: The novelty of the work lies in the process of feature extraction. Curves present in the structure of a numeral sample are encoded in terms of distance and slope, yielding distance features and slope features. Vertical Delta Distance Coding (VDDC) and Horizontal Delta Distance Coding (HDDC) encode a curve from the vertical and horizontal directions to reveal concavity and convexity from different angles.
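
As a rough illustration of the zonal part of such a feature set, the sketch below divides a numeral image into a grid of cells and uses each cell's ink density as a feature before KNN classification. It is a Python analogue of the MatLab pipeline described above; the 4x4 grid, the density statistic, and the random stand-in images are assumptions, and the curvature (VDDC/HDDC) features are omitted:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def zonal_features(img, zones=4):
    """Split a binary numeral image into zones x zones cells and
    return the ink density of each cell as a feature vector."""
    h, w = img.shape
    feats = []
    for i in range(zones):
        for j in range(zones):
            cell = img[i * h // zones:(i + 1) * h // zones,
                       j * w // zones:(j + 1) * w // zones]
            feats.append(cell.mean())  # fraction of foreground pixels
    return np.array(feats)

# Illustrative usage on random "images"; a real pipeline would use
# preprocessed handwritten numeral images and true digit labels.
rng = np.random.default_rng(1)
X = np.array([zonal_features(rng.random((32, 32)) > 0.5) for _ in range(100)])
y = rng.integers(0, 10, 100)               # digit labels 0-9
knn = KNeighborsClassifier(n_neighbors=3)  # K=3, as in the reported results
knn.fit(X[:80], y[:80])
print(knn.score(X[80:], y[80:]))
```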


Author(s):  
Manish M. Kayasth ◽  
Bharat C. Patel

The entire character recognition system is logically divided into different stages: scanning, pre-processing, classification, processing, and post-processing. In the targeted system, the scanned image is first passed through pre-processing modules, then feature extraction and classification, in order to achieve a high recognition rate. This paper focuses mainly on feature extraction and classification techniques, the methodologies that play an important role in identifying offline handwritten characters, specifically in the Gujarati language. Feature extraction provides methods by which characters can be identified uniquely and with a high degree of accuracy; it helps to find the shape contained in the pattern. Several techniques are available for feature extraction and classification; however, selecting a technique appropriate to its input determines the degree of recognition accuracy.


Author(s):  
Youssef Ouadid ◽  
Abderrahmane Elbalaoui ◽  
Mehdi Boutaounte ◽  
Mohamed Fakir ◽  
Brahim Minaoui

In this paper, a graph-based handwritten Tifinagh character recognition system is presented. In preprocessing, the Zhang-Suen thinning algorithm is enhanced. In feature extraction, a novel key point extraction algorithm is presented; images are then represented by adjacency matrices defining graphs whose nodes are the extracted feature points. These graphs are classified using a graph matching method. Experimental results are obtained on two databases to test the effectiveness of the system, which shows good results in terms of recognition rate.
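
A minimal sketch of the graph representation described above: key points become nodes and an adjacency matrix records which points are linked. Here a distance threshold stands in for the paper's skeleton-based linking, and an entry-wise matrix difference stands in for its graph matching method, so both are assumptions rather than the authors' algorithms:

```python
import numpy as np
from scipy.spatial.distance import cdist

def adjacency_from_keypoints(points, link_threshold=10.0):
    """Build a binary adjacency matrix over skeleton key points.
    Two nodes are linked here when they lie close together; the
    paper instead links points joined by skeleton branches."""
    d = cdist(points, points)
    adj = (d < link_threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def graph_distance(a, b):
    """Crude graph dissimilarity: pad to a common size and compare
    adjacency matrices entry-wise (a stand-in for real graph
    matching; note it is not permutation-invariant)."""
    n = max(len(a), len(b))
    pa = np.zeros((n, n), int); pa[:len(a), :len(a)] = a
    pb = np.zeros((n, n), int); pb[:len(b), :len(b)] = b
    return np.abs(pa - pb).sum()

# Illustrative key points from two hypothetical characters.
g1 = adjacency_from_keypoints(np.array([[0, 0], [5, 5], [20, 20]]))
g2 = adjacency_from_keypoints(np.array([[0, 0], [6, 5], [22, 19]]))
print(graph_distance(g1, g2))  # smaller = more similar
```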


2018 ◽  
Vol 7 (3.4) ◽  
pp. 90 ◽  
Author(s):  
Mandeep Singh ◽  
Karun Verma ◽  
Bob Gill ◽  
Ramandeep Kaur

Online handwritten character recognition is gaining attention from researchers across the world because, with the advent of touch-based devices, a more natural way of communication is being explored. A stroke-based online recognition system is proposed in this paper for the highly complex Gurmukhi script. In this effort, recognition of the 35 basic characters of the Gurmukhi script has been implemented on a dataset of 2019 Gurmukhi samples. For this purpose, 32 stroke classes have been considered. Three types of features have been extracted, and a hybrid of these features is proposed in this paper to train the classification models. For stroke classification, three different classifiers, namely KNN, MLP, and SVM, are used and compared to evaluate the effectiveness of these models. Very promising stroke recognition rates of 94% with KNN, 95.04% with MLP, and 95.04% with SVM have been obtained.
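
A hedged scikit-learn analogue of the three-classifier comparison: synthetic vectors stand in for the hybrid stroke features, and 32 classes mirror the stroke classes considered; none of the hyperparameters below are from the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the hybrid stroke feature vectors;
# 32 classes mirror the paper's stroke classes.
X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=30, n_classes=32,
                           n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                         random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 4))
```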


Author(s):  
A. K. Sampath ◽  
N. Gomathi

Handwritten character recognition is crucial to many applications, such as forensic search, searching historical manuscripts, mail sorting, bank check reading, tax form processing, and the transcription of books and handwritten notes. The difficulty of recognition arises mainly from variation in writing style, variation in size (length and height), orientation angle, etc. In this paper, a probabilistic-model-based hybrid classifier combining neural network and decision tree classifiers is proposed for character recognition. In addition to the local gradient features, i.e., the histogram-of-oriented-gradients feature and the grid-level feature, an additional feature called the GLCM (gray-level co-occurrence matrix) feature is extracted from the input image in the proposed recognition system; the features are concatenated for the image recognition procedure to encode color, shape, texture, and local as well as statistical information. The extracted features are given to the hybrid classifier, which recognises the character. On the test set, a recognition accuracy of 95% is achieved. The proposed probabilistic-model-based hybrid classifier contributes a more accurate character recognition rate than existing character recognition systems.
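
As a sketch of the GLCM feature named above, the snippet below computes gray-level co-occurrence statistics with scikit-image; the chosen distances, angles, and properties are illustrative assumptions, and a real pipeline would concatenate these with the gradient and grid-level features before classification:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img, distances=(1,), angles=(0, np.pi / 2)):
    """Texture statistics from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Illustrative 8-bit "character image"; a real input would be a
# preprocessed handwritten character.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(img))
```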


2020 ◽  
Vol 13 (2) ◽  
pp. 225-232 ◽  
Author(s):  
Mieczysław Szyszkowicz

In this work, a new technique is proposed to study short-term exposure and adverse health effects. The presented approach uses hierarchical clusters with the following structure: each pair of two sequential days in a year is embedded in that year, giving 183 clusters per year with the embedded structure <year:2 days>. Time-series analysis is conducted using a conditional Poisson regression with the constructed clusters as strata. Unmeasured confounders such as seasonal and long-term trends are not modelled but are controlled by the structure of the clusters. The proposed technique is illustrated using four freely accessible databases, which contain complex simulated data; these data are available as compressed R workspace files. Results based on the simulated data were very close to the truth under the presented methodology. In addition, the case-crossover method with 1-month and 2-week windows, and a conditional Poisson regression on 3-day clusters as strata, were also applied to the simulated data. Difficulties (a high type I error rate) were observed for the case-crossover method in the presence of high concurvity in the simulated data. The proposed methods, using various forms of strata, were further applied to the Chicago mortality data. The considered methods often give different qualitative and quantitative estimates.
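
A minimal sketch of the clustering scheme on simulated data: each year is cut into 183 strata of two sequential days, and a Poisson regression with stratum fixed effects stands in for the conditional Poisson model (for Poisson models the fixed-effects and conditional estimates of the exposure coefficient coincide). The data, variable names, and effect below are invented for illustration; the paper's analyses use R:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Daily counts and exposure for one illustrative leap year
# (366 days = exactly 183 two-day clusters); simulated data only.
rng = np.random.default_rng(0)
days = pd.date_range("2020-01-01", periods=366, freq="D")
df = pd.DataFrame({
    "deaths": rng.poisson(20, len(days)),
    "pollutant": rng.normal(10, 3, len(days)),
    "year": days.year,
    "doy": days.dayofyear,
})
# Each pair of sequential days shares a stratum: <year:2 days>.
df["stratum"] = df["year"].astype(str) + ":" + ((df["doy"] - 1) // 2).astype(str)

# Stratum fixed effects absorb season and long-term trend, so
# they are controlled by design rather than modelled.
fit = smf.poisson("deaths ~ pollutant + C(stratum)", data=df).fit(disp=0)
print(fit.params["pollutant"])
```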


Author(s):  
Y. S. Huang ◽  
K. Liu ◽  
C. Y. Suen ◽  
Y. Y. Tang

This paper proposes a novel method that enables a Chinese character recognition system to obtain reliable recognition. In this method, two thresholds, i.e. a class region threshold Rk and a disambiguity threshold Ak, are used for each Chinese character k when the classifier is designed based on the nearest neighbor rule, where Rk defines the pattern distribution region of character k, and Ak prevents samples not belonging to character k from being ambiguously recognized as character k. A novel algorithm to derive the appropriate thresholds Ak and Rk is developed so that better recognition reliability can be obtained through iterative learning. Experiments performed on the ITRI printed Chinese character database have achieved highly reliable recognition performance (such as 0.999 reliability with a 95.14% recognition rate), which shows the feasibility and effectiveness of the proposed method.
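
One plausible reading of the two-threshold rule, sketched below: a sample is assigned to the nearest class k only when it falls within that class's region threshold Rk and the runner-up class is not ambiguously close, with Ak interpreted here as a distance margin. The margin interpretation of Ak and all numbers are assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def reliable_nn_classify(x, prototypes, labels, R, A):
    """Nearest-neighbour decision with per-class thresholds: accept
    the nearest class k only when x lies inside its region
    (distance <= R[k]) and the runner-up is not ambiguously close
    (margin >= A[k]); otherwise reject for the sake of reliability."""
    d = np.linalg.norm(prototypes - x, axis=1)
    order = np.argsort(d)
    k = labels[order[0]]
    margin = d[order[1]] - d[order[0]]
    if d[order[0]] <= R[k] and margin >= A[k]:
        return k
    return None  # rejected as unreliable

# Illustrative prototypes for three "characters".
prototypes = np.array([[0.0, 0.0], [5.0, 5.0], [5.0, 0.0]])
labels = np.array([0, 1, 2])
R = {0: 2.0, 1: 2.0, 2: 2.0}   # class region thresholds
A = {0: 1.0, 1: 1.0, 2: 1.0}   # disambiguity thresholds
print(reliable_nn_classify(np.array([0.5, 0.2]), prototypes, labels, R, A))
print(reliable_nn_classify(np.array([2.5, 2.5]), prototypes, labels, R, A))
```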


Mathematics ◽  
2018 ◽  
Vol 6 (11) ◽  
pp. 269 ◽  
Author(s):  
Sergio Camiz ◽  
Valério Pillar

The identification of a reduced dimensional representation of the data is among the main issues of exploratory multidimensional data analysis, and several solutions have been proposed in the literature, varying with the method. Principal Component Analysis (PCA) is the method that has received the most attention thus far, and several identification methods, the so-called stopping rules, have been proposed, giving very different results in practice; some comparative studies have also been carried out. Inconsistencies in the previous studies led us to try to pin down the distinction between signal and noise in PCA, and its limits, and to propose a new testing method. This consists in producing simulated data according to a predefined eigenvalue structure, including zero eigenvalues. From random populations built according to several such structures, reduced-size samples were extracted, and different levels of random normal noise were added to them. This controlled introduction of noise allows a clear distinction between expected signal and noise, the latter relegated to the non-zero sample eigenvalues corresponding to zero eigenvalues in the population. With this new method, we tested the performance of ten different stopping rules. For every method, structure, and noise level, both power (the ability to correctly identify the expected dimension) and type-I error (the detection of a dimension composed only of noise) were measured, by counting the relative frequency with which the smallest non-zero eigenvalue in the population was recognized as signal in the samples and that with which the largest zero eigenvalue was recognized as noise, respectively. This way, the behaviour of the examined methods is clear and their comparison and evaluation are possible. The reported results show that both Rencher's generalization of Bartlett's test and Pillar's bootstrap method perform much better than all the others: both show reasonable power, decreasing with noise, and a very good type-I error rate. Thus, these methods, more than the others, deserve to be adopted.
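
A minimal sketch of one cell of such a simulation: a population covariance is built from a predefined eigenvalue spectrum that includes zeros, a reduced-size sample is drawn, normal noise is added, and a stopping rule (here the simple Kaiser rule, as a stand-in for the ten rules actually tested) is checked against the smallest signal and largest null dimensions. All sizes and levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Predefined population eigenvalue structure: three signal
# dimensions plus five zero-eigenvalue (null) dimensions.
eigvals = np.array([4.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
p, n_signal = len(eigvals), 3
Q, _ = np.linalg.qr(rng.normal(size=(p, p)))    # random orthonormal basis
cov = Q @ np.diag(eigvals) @ Q.T

# Reduced-size sample plus controlled random normal noise.
n, noise_sd = 50, 0.3
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
X += rng.normal(0.0, noise_sd, X.shape)

# Sample eigenvalues of the correlation matrix, largest first.
sample_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(X.T)))[::-1]

# Kaiser rule (eigenvalue > 1) as a stand-in stopping rule:
# power hit = smallest true signal dimension kept as signal;
# type-I hit = largest null dimension wrongly kept as signal.
kept = sample_eigs > 1.0
print(sample_eigs.round(2), kept[n_signal - 1], kept[n_signal])
```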


2019 ◽  
pp. 014544551986021 ◽  
Author(s):  
Antonia R. Giannakakos ◽  
Marc J. Lanovaz

Single-case experimental designs often require extended baselines or the withdrawal of treatment, which may not be feasible or ethical in some practical settings. The quasi-experimental AB design is a potential alternative, but more research is needed on its validity. The purpose of our study was to examine, using simulated data, the validity of nonoverlap measures of effect size for detecting changes in AB designs. In our analyses, we determined thresholds for three effect size measures beyond which the type I error rate would remain below 0.05 and then examined whether using these thresholds would provide sufficient power. Overall, our analyses show that some effect size measures may provide adequate control over the type I error rate and sufficient power when analyzing data from AB designs. In sum, our results suggest that practitioners may use quasi-experimental AB designs in combination with effect sizes to rigorously assess progress in practice.
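
As a sketch of the kind of analysis described, the snippet below computes one common nonoverlap measure, Nonoverlap of All Pairs (NAP), on simulated AB data and applies a hypothetical decision threshold; the study's actual measures and derived thresholds may differ:

```python
import numpy as np

def nap(baseline, treatment, larger_is_better=True):
    """Nonoverlap of All Pairs: the share of (baseline, treatment)
    pairs in which the treatment point improves on the baseline
    point, with ties counted as half."""
    a = np.asarray(baseline, float)
    b = np.asarray(treatment, float)
    diff = (b[None, :] - a[:, None]) if larger_is_better else (a[:, None] - b[None, :])
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# Simulated AB data: baseline phase (A), then intervention phase (B).
rng = np.random.default_rng(0)
A = rng.normal(10, 2, size=8)
B = rng.normal(14, 2, size=12)
score = nap(A, B)
# A hypothetical decision threshold of the kind the study derives;
# real thresholds depend on the measure and series characteristics.
print(score, "change detected" if score > 0.90 else "no decision")
```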

