Offline Signature Identification and Verification Based on Capsule Representations

2020 ◽  
Vol 20 (5) ◽  
pp. 60-67
Author(s):  
Dilara Gumusbas ◽  
Tulay Yildirim

Offline signature is one of the most frequently used biometric traits in daily life, yet skilled forgeries pose a great challenge for offline signature verification. To differentiate forgeries, a variety of hand-crafted feature extraction methods has been studied. Recently, however, these methods have been set aside in favor of automatic feature extraction methods such as Convolutional Neural Networks (CNNs). Although CNN-based algorithms often achieve satisfying results, they require either many training samples or pre-trained network weights. The Capsule Network was recently proposed to model with fewer data, using convolutional layers for automatic feature extraction while representing features as vectors rather than the scalar activations of a CNN, thereby preserving orientation information. Since signature samples per user are limited and feature orientations in signature samples are highly informative, this paper first evaluates the capability of the Capsule Network for signature identification on three benchmark databases. The Capsule Network achieves 97 and 96%, 94 and 89%, and 95 and 91% accuracy on the CEDAR, GPDS-100 and MCYT databases for 64×64 and the lower 32×32 resolution, respectively. The second aim of the paper is to generalize the capability of the Capsule Network to the verification task, where it achieves average accuracies of 91, 86 and 89% on the CEDAR, GPDS-100 and MCYT databases for 64×64 resolution, respectively. Through this evaluation, the capability of the Capsule Network is shown for offline identification and verification tasks.
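The capsule representation the abstract refers to replaces a CNN's scalar activations with vectors whose length encodes detection probability and whose direction encodes pose. As a minimal NumPy sketch (the standard "squash" nonlinearity from the original Capsule Network literature, not code from this paper):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity: shrinks each capsule's output
    vector so its length lies in [0, 1) while preserving its direction.
    Short vectors are pushed toward 0, long vectors toward length 1."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    norm = np.sqrt(sq_norm + eps)  # eps avoids division by zero
    return (sq_norm / (1.0 + sq_norm)) * (s / norm)

# A toy batch of 3 capsules with 4-dimensional pose vectors.
caps = np.array([[0.1, 0.0, 0.0, 0.0],
                 [3.0, 4.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0]])
out = squash(caps)
lengths = np.linalg.norm(out, axis=-1)  # capsule activations in [0, 1)
```

The vector length then serves as the class probability during identification, while the vector's orientation carries the pose information the abstract highlights as valuable for signatures.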

2018 ◽  
Vol 7 (4.12) ◽  
pp. 69 ◽  
Author(s):  
Kamlesh Kumari ◽  
Sanjeev Rana

Signature verification is important for security in banking, legal and financial transactions. Offline signature verification is a complex task because dynamic, i.e. temporal, information is missing from the static image. There is no standard feature extraction method for offline signature identification, unlike other behavioral modalities, e.g. LPCC (Linear Predictive Cepstral Coefficients) in automatic speech recognition. Our research presents an intelligent algorithm for feature extraction based on the image difference between a genuine signature image and a questioned signature image. Six features are analyzed: average object area, entropy, standard deviation, mean, Euler number, and area. The best results are reported using the combination of average object area, mean, Euler number, and area. The CEDAR (Center of Excellence for Document Analysis and Recognition) database, consisting of static signature samples from 55 users, is used for offline signature verification. The proposed algorithm is quite efficient, as it is computationally inexpensive. Experiments are performed with both the Writer-Independent (WI) and Writer-Dependent (WD) models.
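As a rough illustration (not the authors' code), the four best-performing features named above can be computed from the binary difference of two signature images with plain NumPy. The Euler number here uses Gray's quad-counting formula for 4-connectivity, and connected components are labelled with a simple BFS; the binarization convention is an assumption:

```python
import numpy as np
from collections import deque

def euler_number_4(img):
    """Euler number (4-connectivity) of a binary image via Gray's
    quad-count formula: E4 = (Q1 - Q3 + 2*Qd) / 4."""
    p = np.pad(img.astype(np.uint8), 1)
    q1 = q3 = qd = 0
    for r in range(p.shape[0] - 1):
        for c in range(p.shape[1] - 1):
            quad = p[r:r + 2, c:c + 2]
            s = int(quad.sum())
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad[0, 0] == quad[1, 1]:
                qd += 1  # diagonal pair of foreground pixels
    return (q1 - q3 + 2 * qd) // 4

def average_object_area(img):
    """Mean area of 4-connected foreground components (BFS labelling)."""
    img = img.astype(bool)
    seen = np.zeros_like(img, dtype=bool)
    areas = []
    for r, c in zip(*np.nonzero(img)):
        if seen[r, c]:
            continue
        area, queue = 0, deque([(r, c)])
        seen[r, c] = True
        while queue:
            y, x = queue.popleft()
            area += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and img[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        areas.append(area)
    return float(np.mean(areas)) if areas else 0.0

def difference_features(genuine, questioned):
    """Best-performing feature subset from the abstract, computed on
    the difference (here: XOR) of two binarized signature images."""
    diff = genuine.astype(bool) ^ questioned.astype(bool)
    return {
        "area": int(diff.sum()),
        "mean": float(diff.mean()),
        "euler": euler_number_4(diff),
        "avg_object_area": average_object_area(diff),
    }
```

A genuine/questioned pair that differs only in a small stroke region would then yield a compact four-number descriptor for the classifier.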


2016 ◽  
Vol 78 (8-2) ◽  
Author(s):  
Aini Najwa Azmi ◽  
Dewi Nasien ◽  
Azurah Abu Samah

Over recent years, there has been an explosive growth of interest in pattern recognition. The handwritten signature, for example, is a human biometric that can be used in many areas of access control and security. However, the handwritten signature is not a uniform characteristic like a fingerprint, iris or vein pattern: it may change due to several factors, such as mood, environment and age. A Signature Verification System (SVS), a part of pattern recognition, can be a solution for such situations. The system can be decomposed into three stages: data acquisition and preprocessing, feature extraction, and verification. This paper presents techniques for an SVS that uses the Freeman chain code (FCC) as its data representation. In the first part of the feature extraction stage, the FCC was extracted in a boundary-based style from the largest contiguous part of each signature image, and the extracted FCC was divided into four, eight or sixteen equal parts. In the second part of feature extraction, six global features were calculated. Finally, verification used a k-Nearest Neighbour (k-NN) classifier to test the performance. The MCYT bimodal database was used in every stage of the system. The best result achieved was a False Rejection Rate (FRR) of 14.67%, a False Acceptance Rate (FAR) of 15.83% and an Equal Error Rate (EER) of 0.43%, with the shortest computation time, 7.53 seconds, and 47 features.
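The FCC encoding described above can be sketched as follows. This is an illustrative version, not the authors' implementation: the boundary-tracing step that produces the ordered pixel list is assumed, and the conventional 8-direction code (0 = east, increasing counter-clockwise, image rows growing downward) is used:

```python
import numpy as np

# Freeman 8-direction codes keyed by (d_row, d_col) between
# consecutive boundary pixels; rows increase downward.
FCC = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
       (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Encode an ordered boundary (list of (row, col) pixels, each
    step 8-connected) as a Freeman chain code sequence."""
    return [FCC[(r1 - r0, c1 - c0)]
            for (r0, c0), (r1, c1) in zip(boundary, boundary[1:])]

def split_equal(codes, parts):
    """Divide the code string into near-equal parts, as in the
    paper's 4/8/16-way split before computing per-part features."""
    return [list(chunk) for chunk in np.array_split(codes, parts)]
```

Global features (e.g. code histograms or lengths per part) would then be computed from each split before the k-NN stage.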


Author(s):  
Kennedy Gyimah ◽  
Justice Kwame Appati ◽  
Kwaku Darkwah ◽  
Kwabena Ansah

In the field of pattern recognition, automatic handwritten signature verification is of the essence. The uniqueness of each person's signature makes it a preferred choice of human biometric. However, the unavoidable side effect is that signatures can be misused to feign data authenticity. In this paper, we present an improved feature extraction vector for an offline signature verification system by combining grey level co-occurrence matrix (GLCM) features with properties of image regions. In evaluating the performance of the proposed scheme, the resultant feature vector is tested on a support vector machine (SVM) with varying kernel functions. To keep the parameters of the kernel functions optimized, sequential minimal optimization (SMO) and the least squares method were used. Results of the study show that the radial basis function (RBF) kernel coupled with SMO best supports the proposed improved feature vector.
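A minimal, illustrative GLCM computation (not the authors' implementation) shows the kind of texture features that can be combined with region properties; the grey-level count, the single (0, 1) offset, and the chosen Haralick-style scalars are assumptions for the sketch:

```python
import numpy as np

def glcm(img, levels, dr=0, dc=1, symmetric=True):
    """Grey-level co-occurrence matrix of an integer image for one
    pixel offset (dr, dc), normalized to joint probabilities."""
    m = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r, c], img[r2, c2]] += 1
    if symmetric:
        m = m + m.T  # count each pair in both directions
    return m / m.sum()

def glcm_features(p):
    """Classic Haralick-style scalars from a normalized GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "energy": float(np.sum(p ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
    }
```

Scalars like these, concatenated with region properties (area, eccentricity, etc.), would form the feature vector fed to the SVM.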


2006 ◽  
Vol 24 (2) ◽  
pp. 189-200 ◽  
Author(s):  
Geoff Luck ◽  
Petri Toiviainen

Previous work suggests that the perception of a visual beat in conductors’ gestures is related to certain physical characteristics of the movements they produce, most notably to periods of negative acceleration, and low position in the vertical axis. These findings are based on studies that have presented participants with somewhat simple gestures, and in which participants have been required to simply tap in time with the beat. Thus, it is not clear how generalizable these findings are to real-world conducting situations, in which a conductor uses considerably more complex gestures to direct an ensemble of musicians playing actual instruments. The aims of the present study were to examine the features of conductors’ gestures with which ensemble musicians synchronize their performance in an ecologically valid setting and to develop automatic feature extraction methods for the analysis of audio and movement data. An optical motion capture system was used to record the gestures of an expert conductor directing an ensemble of expert musicians over a 20-minute period. A simultaneous audio recording of the performance of the ensemble was also made and synchronized with the motion capture data. Four short excerpts were selected for analysis, two in which the conductor communicated the beat with high clarity, and two in which the beat was communicated with low clarity. Twelve movement variables were computationally extracted from the movement data and cross-correlated with the pulse of the ensemble’s performance, the latter based on the spectral flux of the audio signal. Results of the analysis indicated that the ensemble’s performance tended to be most highly synchronized with periods of maximal deceleration along the trajectory, followed by periods of high vertical velocity (a higher correlation than deceleration but a longer delay).
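The cross-correlation analysis described above amounts to finding, for each movement variable, the lag at which its correlation with the ensemble's pulse peaks. A NumPy sketch of that step (signal names and the lag search range are illustrative, not the authors' pipeline):

```python
import numpy as np

def best_lag(movement, pulse, max_lag):
    """Return (lag, r): the lag in [-max_lag, max_lag] at which the
    Pearson correlation between two equal-length 1-D arrays peaks.
    A positive lag means `movement` leads `pulse` by that many frames."""
    best = (0, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = movement[:movement.size - lag], pulse[lag:]
        else:
            a, b = movement[-lag:], pulse[:pulse.size + lag]
        if a.size > 1 and np.std(a) > 0 and np.std(b) > 0:
            r = np.corrcoef(a, b)[0, 1]
            if r > best[1]:
                best = (lag, r)
    return best
```

Applied to each of the twelve movement variables against the spectral-flux pulse, this yields both the strength of synchronization and the delay reported in the study.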

