Cross-Sensor Fingerprint Matching Using Siamese Network and Adversarial Learning

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3657
Author(s):  
Adhwa Alrashidi ◽  
Ashwaq Alotaibi ◽  
Muhammad Hussain ◽  
Helala AlShehri ◽  
Hatim A. AboAlSamh ◽  
...  

The fingerprint is one of the leading biometric modalities used worldwide for authenticating the identity of persons. Over time, a great deal of research has been conducted to develop automatic fingerprint verification techniques. However, because different authentication needs lead to the use of different sensors, fingerprint verification systems encounter cross-sensor matching, or sensor interoperability, challenges, where different sensors are used for the enrollment and query phases. The challenge is to develop an efficient, robust and automatic system for cross-sensor matching. This paper proposes a new cross-sensor matching system (SiameseFinger) based on a Siamese network that takes as input features extracted using the Gabor-HoG descriptor. The proposed Siamese network is trained using adversarial learning. SiameseFinger was evaluated on two public benchmark datasets, FingerPass and MOLF. The results of the experiments presented in this paper indicate that SiameseFinger achieves performance comparable to that of state-of-the-art methods.
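As a rough illustration of the matching architecture described above, the sketch below pairs a weight-shared embedding network with a contrastive loss over precomputed Gabor-HoG feature vectors. The feature dimension, layer sizes and the loss choice are assumptions for illustration; the paper's adversarial training stage is not reproduced.

```python
# Minimal PyTorch sketch: a weight-shared embedder over precomputed
# Gabor-HoG feature vectors with a contrastive loss (dimensions and
# loss choice are assumptions; adversarial training is omitted).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbedder(nn.Module):
    def __init__(self, in_dim=3780, emb_dim=128):  # 3780 = typical HoG length, assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, emb_dim),
        )

    def forward(self, x1, x2):
        # Both inputs pass through the same encoder (shared weights).
        return self.net(x1), self.net(x2)

def contrastive_loss(e1, e2, label, margin=1.0):
    # label = 1 for the same finger, 0 for different fingers/sensors.
    d = F.pairwise_distance(e1, e2)
    return (label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2)).mean()

model = SiameseEmbedder()
x1, x2 = torch.randn(8, 3780), torch.randn(8, 3780)   # placeholder descriptors
label = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(*model(x1, x2), label)
loss.backward()
```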

2018 ◽  
Vol 30 (03) ◽  
pp. 1850019
Author(s):  
Fatemeh Alimardani ◽  
Reza Boostani

Fingerprint verification systems have attracted much attention in secure organizations; however, conventional methods still suffer from unconvincing recognition rates on noisy fingerprint images. To design a robust verification system, this paper suggests the wavelet and contourlet transforms as efficient feature extraction techniques to elicit a comprehensive set of descriptive features characterizing fingerprint images. Contourlet coefficients capture the smooth contours of fingerprints, while wavelet coefficients reveal their rough details. Due to the high dimensionality of the elicited features, across-group variance (AGV), greedy overall relevancy (GOR) and Davis–Bouldin fast feature reduction (DB-FFR) methods were adopted to remove the redundant features. These features were applied to three different classifiers: Boosting Direct Linear Discriminant Analysis (BDLDA), Support Vector Machine (SVM) and Modified Nearest Neighbor (MNN). The proposed method, along with state-of-the-art methods, was evaluated over the FVC2004 dataset in terms of genuine acceptance rate (GAR), false acceptance rate (FAR) and equal error rate (EER). The features selected by AGV were the most significant and provided 95.12% GAR. Applying the features selected by the GOR method to the modified nearest neighbor resulted in an average EER of [Formula: see text]%, which outperformed the compared methods. The comparative results imply the statistical superiority ([Formula: see text]) of the proposed approach over its counterparts.
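For orientation, the following sketch shows one way such transform-domain features might feed a classifier, using subband energies from a wavelet decomposition and an SVM. The subband-energy descriptor, the placeholder data and the classifier settings are assumptions; the contourlet transform and the AGV/GOR/DB-FFR selection steps are not reproduced.

```python
# Sketch of transform-domain features feeding a classifier: wavelet
# subband energies + SVM (pywt / scikit-learn). Contourlet features and
# the AGV/GOR/DB-FFR selection steps are not reproduced here.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(img, wavelet="db4", level=3):
    # Mean absolute value of each subband as a compact descriptor.
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]                  # approximation band
    for cH, cV, cD in coeffs[1:]:                         # detail bands
        feats += [np.mean(np.abs(cH)), np.mean(np.abs(cV)), np.mean(np.abs(cD))]
    return np.array(feats)

# Placeholder data standing in for fingerprint images and identity labels.
X = np.stack([wavelet_features(np.random.rand(128, 128)) for _ in range(20)])
y = np.repeat(np.arange(10), 2)
clf = SVC(kernel="rbf").fit(X, y)
```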


DYNA ◽  
2020 ◽  
Vol 87 (213) ◽  
pp. 9-16
Author(s):  
Franklin Alexander Sepulveda Sepulveda ◽  
Dagoberto Porras-Plata ◽  
Milton Sarria-Paja

Current state-of-the-art speaker verification (SV) systems are known to be strongly affected by unexpected variability during testing, such as environmental noise or changes in vocal effort. In this work, we analyze and evaluate articulatory information about tongue movement as a means to improve the performance of speaker verification systems. We use a Spanish-language database that, besides the speech signals, also includes articulatory information acquired with an ultrasound system. Two groups of features are proposed to represent the articulatory information, and the resulting performance is compared to an SV system trained only with acoustic information. Our results show that the proposed features contain highly discriminative information related to speaker identity; furthermore, they can be used to complement and improve existing systems by combining such information with cepstral coefficients at the feature level.
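A minimal sketch of feature-level fusion in this spirit: frame-level cepstral coefficients concatenated with frame-aligned articulatory features before speaker modeling. The crude frame alignment, dimensions and function names are illustrative assumptions, not the authors' pipeline.

```python
# Sketch of feature-level fusion: MFCCs concatenated with frame-aligned
# articulatory (tongue-movement) features; the alignment and dimensions
# are illustrative, not the authors' pipeline.
import numpy as np
import librosa

def fused_features(wav_path, articulatory):
    # articulatory: (T, D) array of per-frame ultrasound-derived features.
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).T   # (frames, 20)
    T = min(len(mfcc), len(articulatory))                  # crude frame alignment
    return np.hstack([mfcc[:T], articulatory[:T]])         # (T, 20 + D)
```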


2021 ◽  
Vol 27 (3) ◽  
pp. 57-70
Author(s):  
Damjan M. Rakanovic ◽  
Vuk Vranjkovic ◽  
Rastislav J. R. Struharik

This paper proposes a two-step Convolutional Neural Network (CNN) pruning algorithm and a resource-efficient Field-Programmable Gate Array (FPGA) CNN accelerator named “Argus”. The proposed CNN pruning algorithm first combines similar kernels into clusters, which are then pruned using the same regular pruning pattern. The pruning algorithm is carefully tailored to FPGAs, considering their resource characteristics. Regular sparsity results in high Multiply-Accumulate (MAC) efficiency, reducing the amount of logic required to balance workloads among different MAC units. As a result, the Argus accelerator requires about 170 Look-Up Tables (LUTs) per Digital Signal Processing (DSP) block. This number is close to the average LUT/DSP ratio of various FPGA families, enabling balanced resource utilization when implementing Argus. Benchmarks conducted on a Xilinx Zynq UltraScale+ Multi-Processor System-on-Chip (MPSoC) indicate that Argus achieves up to 25 times more frames per second than NullHop, 2 and 2.5 times more than NEURAghe and Snowflake, respectively, and 2 times more than NVDLA. Argus shows performance comparable to MIT’s Eyeriss v2 and Caffeine while requiring up to 3 times less memory bandwidth and utilizing 4 times fewer DSP blocks, respectively. Beyond absolute performance, Argus has at least 1.3 and 2 times better GOP/s/DSP and GOP/s/Block-RAM (BRAM) ratios, while being competitive in terms of GOP/s/LUT, compared to some state-of-the-art solutions.
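The clustering-then-shared-mask idea can be sketched as follows; the cluster count, the magnitude-based criterion and the keep ratio are assumptions, not the paper's exact pruning rule.

```python
# Sketch of the two-step idea: cluster similar kernels, then give every
# kernel in a cluster the same regular sparsity mask (cluster count and
# the magnitude-based criterion are assumptions).
import torch
from sklearn.cluster import KMeans

def cluster_prune(conv_weight, n_clusters=8, keep_ratio=0.5):
    # conv_weight: (out_ch, in_ch, k, k) tensor on CPU.
    flat = conv_weight.detach().reshape(conv_weight.shape[0], -1).numpy()
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(flat)
    pruned = conv_weight.detach().clone()
    for c in range(n_clusters):
        idx = torch.tensor(labels == c)
        if idx.sum() == 0:
            continue
        # One shared mask per cluster, from the cluster's mean magnitude.
        mean_mag = pruned[idx].abs().mean(dim=0)           # (in_ch, k, k)
        thresh = torch.quantile(mean_mag, 1.0 - keep_ratio)
        mask = (mean_mag >= thresh).float()
        pruned[idx] = pruned[idx] * mask
    return pruned
```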


2020 ◽  
Vol 17 (3) ◽  
pp. 849-865
Author(s):  
Zhongqin Bi ◽  
Shuming Dou ◽  
Zhe Liu ◽  
Yongbin Li

Neural network methods have been shown to learn user/product representations from textual reviews satisfactorily. A representation can be considered a multiaspect attention weight vector. However, several existing methods assume that the user representation remains unchanged even when the user interacts with products having diverse characteristics, which leads to inaccurate recommendations. To overcome this limitation, this paper proposes a novel model to capture the varying attention of a user for different products by using a multilayer attention framework. First, two individual hierarchical attention networks are used to encode the users and products to learn the user preferences and product characteristics from review texts. Then, we design an attention network to reflect the adaptive change in the user preferences for each aspect of the targeted product in terms of the rating and review. The results of experiments performed on three public datasets demonstrate that the proposed model notably outperforms the other state-of-the-art baselines, thereby validating the effectiveness of the proposed approach.
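The product-conditioned attention step might look roughly like the sketch below, where a user's aspect vectors are re-weighted by the target product representation; the dimensions and the scoring layer are illustrative assumptions.

```python
# Sketch of product-conditioned attention: a user's aspect vectors are
# re-weighted by the target product representation (sizes are illustrative).
import torch
import torch.nn as nn

class ProductConditionedAttention(nn.Module):
    def __init__(self, dim=100):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, user_aspects, product_vec):
        # user_aspects: (batch, n_aspects, dim); product_vec: (batch, dim)
        p = product_vec.unsqueeze(1).expand_as(user_aspects)
        alpha = torch.softmax(self.score(torch.cat([user_aspects, p], dim=-1)), dim=1)
        return (alpha * user_aspects).sum(dim=1)   # product-adapted user vector
```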


Author(s):  
Yizhen Chen ◽  
Haifeng Hu

Most existing segmentation networks are built upon a “U-shaped” encoder–decoder structure, where the multi-level features extracted by the encoder are gradually aggregated by the decoder. Although this structure has been proven effective in improving segmentation performance, there are two main drawbacks. On the one hand, the introduction of low-level features brings a significant increase in calculations without an obvious performance gain. On the other hand, general feature aggregation strategies such as addition and concatenation fuse features without considering the usefulness of each feature vector, which mixes the useful information with massive noise. In this article, we abandon the traditional “U-shaped” architecture and propose Y-Net, a dual-branch joint network for accurate semantic segmentation. Specifically, it aggregates only the high-level, low-resolution features and utilizes the global context guidance generated by the first branch to refine the second branch. The dual branches are effectively connected through a Semantic Enhancing Module, which can be regarded as a combination of spatial attention and channel attention. We also design a novel Channel-Selective Decoder (CSD) to adaptively integrate features from different receptive fields by assigning specific channelwise weights, where the weights are input-dependent. Our Y-Net is capable of breaking through the limits of a single-branch network and attaining higher performance with less computational cost than the “U-shaped” structure. The proposed CSD can better integrate useful information and suppress interference noise. Comprehensive experiments are carried out on three public datasets to evaluate the effectiveness of our method. Eventually, our Y-Net achieves state-of-the-art performance on the PASCAL VOC 2012, PASCAL Person-Part, and ADE20K datasets without pre-training on extra datasets.
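As a simplified sketch of input-dependent channelwise weighting in the spirit of the CSD, the block below predicts per-channel weights from globally pooled features; it is an assumption-level simplification, not the exact CSD design.

```python
# Simplified sketch of input-dependent channelwise weighting: per-channel
# weights are predicted from globally pooled features before fusion
# (a simplification, not the exact CSD).
import torch
import torch.nn as nn

class ChannelSelect(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # global average pooling -> (B, C)
        return x * w.unsqueeze(-1).unsqueeze(-1) # channelwise re-weighting
```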


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3603
Author(s):  
Dasol Jeong ◽  
Hasil Park ◽  
Joongchol Shin ◽  
Donggoo Kang ◽  
Joonki Paik

Person re-identification (Re-ID) faces problems that make learning difficult, such as misalignment and occlusion. To solve these problems, it is important to focus on robust features under intra-class variation. Existing attention-based Re-ID methods focus only on common features without considering distinctive features. In this paper, we present a novel attentive learning-based Siamese network for person Re-ID. Unlike existing methods, we designed an attention module and an attention loss using the properties of the Siamese network to concentrate attention on both common and distinctive features. The attention module consists of channel attention to select important channels and encoder-decoder attention to observe the whole body shape. We modified the triplet loss into an attention loss, called the uniformity loss. The uniformity loss generates a unique attention map, which focuses on both common and discriminative features. Extensive experiments show that the proposed network compares favorably to state-of-the-art methods on three large-scale benchmarks: Market-1501, CUHK03 and DukeMTMC-ReID.
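One plausible reading of turning a triplet loss into an attention loss is sketched below: attention maps of an anchor/positive pair are pulled together and pushed away from the negative's map. This is an illustrative assumption, not the paper's exact uniformity-loss definition.

```python
# Illustrative triplet-style "attention" loss: anchor/positive attention
# maps are pulled together, the negative's map is pushed away (an assumed
# reading, not the paper's exact uniformity loss).
import torch
import torch.nn.functional as F

def attention_triplet_loss(att_a, att_p, att_n, margin=0.3):
    # att_*: (B, H*W) flattened attention maps.
    att_a, att_p, att_n = (F.normalize(a, dim=1) for a in (att_a, att_p, att_n))
    d_ap = (att_a - att_p).pow(2).sum(dim=1)
    d_an = (att_a - att_n).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + margin).mean()
```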


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4723
Author(s):  
Patrícia Bota ◽  
Chen Wang ◽  
Ana Fred ◽  
Hugo Silva

Emotion recognition based on physiological data classification has been a topic of growing interest for more than a decade. However, the literature lacks a systematic analysis regarding the selection of classifiers, sensor modalities, features and the range of expected accuracy, to name a few limitations. In this work, we evaluate emotion in terms of low/high arousal and valence classification through Supervised Learning (SL), Decision Fusion (DF) and Feature Fusion (FF) techniques using multimodal physiological data, namely Electrocardiography (ECG), Electrodermal Activity (EDA), Respiration (RESP) and Blood Volume Pulse (BVP). The main contribution of our work is a systematic study across five public datasets commonly used in the Emotion Recognition (ER) state-of-the-art, namely: (1) classification performance analysis of ER benchmarking datasets in the arousal/valence space; (2) summarising the ranges of classification accuracy reported in the existing literature; (3) characterising the results for diverse classifiers, sensor modalities and feature-set combinations for ER using accuracy and F1-score; (4) exploration of an extended feature set for each modality; (5) systematic analysis of multimodal classification in DF and FF approaches. The experimental results showed that FF is the most competitive technique in terms of classification accuracy and computational complexity. We obtain results superior or comparable to those reported in the state-of-the-art for the selected datasets.
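To make the DF/FF distinction concrete, the sketch below contrasts feature fusion (concatenate per-modality features and train one classifier) with decision fusion (one classifier per modality with averaged probabilities); the classifier choice and inputs are placeholders, not the study's configuration.

```python
# Sketch contrasting feature fusion (concatenate per-modality features,
# one classifier) with decision fusion (one classifier per modality,
# averaged probabilities); classifier and inputs are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feature_fusion(feats_by_modality, y):
    X = np.hstack(feats_by_modality)             # e.g. [ECG, EDA, RESP, BVP] features
    return RandomForestClassifier().fit(X, y)

def decision_fusion(feats_by_modality, y):
    clfs = [RandomForestClassifier().fit(X, y) for X in feats_by_modality]
    def predict(test_feats_by_modality):
        probs = np.mean([c.predict_proba(X)
                         for c, X in zip(clfs, test_feats_by_modality)], axis=0)
        return probs.argmax(axis=1)              # average of per-modality votes
    return predict
```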


2015 ◽  
Vol 3 (23) ◽  
pp. 12159-12162 ◽  
Author(s):  
M. L. Petrus ◽  
T. Bein ◽  
T. J. Dingemans ◽  
P. Docampo

EDOT-OMeTPA was prepared via a simple condensation reaction. When applied to perovskite solar cells, the new hole transporter shows performance comparable to that of state-of-the-art Spiro-OMeTAD; however, its estimated cost contribution is two orders of magnitude lower.


2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Ruoxi Qin ◽  
Linyuan Wang ◽  
Wanting Yu ◽  
...  

In deep-learning-based image classification, adversarial examples, inputs with intentionally added small-magnitude perturbations, can mislead deep neural networks (DNNs) into incorrect results, which shows that DNNs are vulnerable to them. Various attack and defense strategies have been proposed to better understand the mechanisms of deep learning. However, these studies typically address only one side, either attack or defense, so improvements in offensive and defensive performance are decoupled and difficult to promote jointly within the same framework. In this paper, we propose a Cycle-Consistent Adversarial GAN (CycleAdvGAN) to generate adversarial examples, which learns to approximate the distributions of both the original instances and the adversarial examples, allowing attackers and defenders to confront each other and improve their abilities together. For CycleAdvGAN, once the generators GA and GD are trained, GA can efficiently generate adversarial perturbations for any instance, improving the performance of existing attack methods, and GD can recover adversarial examples back to clean instances, defending against existing attack methods. We apply CycleAdvGAN under semiwhite-box and black-box settings on two public datasets, MNIST and CIFAR-10. Extensive experiments show that our method achieves state-of-the-art adversarial attack performance and also efficiently improves defense ability, integrating adversarial attack and defense in one framework. In addition, it improves the attack effect even when trained only on an adversarial dataset generated by a single kind of adversarial attack.
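The cycle-consistency idea can be sketched as below, with GA mapping clean images toward adversarial ones and GD mapping adversarial images back to clean ones; the generator architectures, loss weights and the omitted GAN terms are assumptions.

```python
# Sketch of the cycle-consistency idea: GA maps clean images toward
# adversarial ones, GD maps adversarial images back to clean ones, and an
# L1 cycle term ties the round trip to the input (weights are placeholders).
import torch.nn as nn

def cycle_loss(GA, GD, x_clean, x_adv, lam=10.0):
    l1 = nn.L1Loss()
    adv_from_clean = GA(x_clean)        # attack direction
    clean_from_adv = GD(x_adv)          # defense / recovery direction
    cycle = l1(GD(adv_from_clean), x_clean) + l1(GA(clean_from_adv), x_adv)
    return lam * cycle                  # added to the usual GAN / adversarial terms
```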

