A multimodal architecture using Adapt-HKFCT segmentation and feature-based chaos integrated deep neural networks (Chaos-DNN-SPOA) for a contactless biometric palm vein recognition system

Author(s):  
M. Rajalakshmi ◽  
K. Annapurani Panaiyappan
Author(s):  
S. A. Sakulin ◽  
A. N. Alfimtsev ◽  
D. A. Loktev ◽  
A. O. Kovalenko ◽  
V. V. Devyatkov

Recently, human recognition systems based on deep machine learning, in particular on deep neural networks, have become widespread. Research on protection against recognition by such systems has therefore become relevant. This article proposes a method of designing a specially selected type of camouflage, applied to clothing, that protects a person both from recognition by a human observer and from a deep-neural-network recognition system. The camouflage is constructed from adversarial examples generated by a deep neural network. The article describes experiments on protecting a person from recognition by the Faster R-CNN (Region-based Convolutional Neural Network) Inception V2 and Faster R-CNN ResNet101 systems. The camouflage is considered at two levels: a macro level, which assesses the combination of the camouflage and the background, and a micro level, which analyzes the relationship between the properties of individual camouflage regions and those of adjacent regions, with constraints on their continuity, smoothness, closure, and asymmetry. The dependence of camouflage characteristics on the observation conditions and the environment is also considered: the transparency of the atmosphere, the pixel intensity of the sky horizon and the background, the contrast level between the background and the camouflaged object, and the distance to the object. As an example of a possible attack, a "black box" attack is considered, in which generated adversarial examples are tested in advance against a target recognition system without knowledge of its internal structure. The results of these experiments showed the high efficiency of the proposed method in the virtual world, where there is access to every pixel of the image supplied to the system's input.
In the real world the results are less impressive, which can be explained by color distortion when the pattern is printed on fabric and by the insufficient spatial resolution of the print.
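The adversarial-example generation that underlies such camouflage can be illustrated with a minimal sketch. This is not the authors' method: it shows the standard fast gradient sign method (FGSM) applied to a toy logistic-regression "recognizer" with a known weight vector; all names and parameter values here are assumptions for illustration.

```python
import numpy as np

def fgsm_perturb(x, w, b, label, eps):
    """One FGSM step against a logistic model sigmoid(w.x + b).

    Moves x in the direction of the sign of the cross-entropy loss
    gradient, which pushes the model's score for the true class down.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad = (p - label) * w          # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad)  # bounded (L-infinity) perturbation

rng = np.random.default_rng(0)
w = rng.normal(size=8)             # toy recognizer weights (illustrative)
b = 0.0
x = rng.normal(size=8)             # "clean" input
x_adv = fgsm_perturb(x, w, b, label=1.0, eps=0.3)

# The adversarial input scores strictly lower for the true class.
print((w @ x_adv + b) < (w @ x + b))  # True
```

In the "black box" setting discussed above, the gradient of the target system is not available; adversarial examples are instead generated against a substitute model and then tested against the target, relying on the transferability of such perturbations.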


Author(s):  
Ke Zhang ◽  
Yu Su ◽  
Jingyu Wang ◽  
Sanyu Wang ◽  
Yanhua Zhang

At present, environmental sound classification and recognition (ESCR) systems mainly identify environmental sounds with deep neural networks and a wide variety of auditory features. It is therefore necessary to analyze which auditory features are best suited to DNN-based ESCR systems. In this paper, we chose three sound features based on two widely used filter banks: the Mel and Gammatone filter banks. Subsequently, the hybrid feature MGCC is presented. Finally, a deep convolutional neural network is proposed to verify which features are better suited to environmental sound classification and recognition tasks. The experimental results show that signal-processing features outperform spectrogram features in DNN-based environmental sound recognition systems. Among all the acoustic features, the MGCC feature achieves the best performance. Finally, the MGCC-CNN model proposed in this paper is compared with state-of-the-art environmental sound classification models on the UrbanSound8K dataset; the results show that the proposed model has the best classification accuracy.
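The Mel filter bank mentioned above is the common building block of features such as MFCCs. The following is a minimal sketch of constructing triangular Mel filters, not the paper's exact pipeline; the function names and parameter values are assumptions for illustration.

```python
import numpy as np

def hz_to_mel(f):
    """Convert frequency in Hz to the Mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Convert a Mel-scale value back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_bank(n_filters, n_fft, sample_rate):
    """Return an (n_filters, n_fft//2 + 1) matrix of triangular Mel filters."""
    # Filter edges are equally spaced on the Mel scale, then mapped back to Hz.
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0),
                             n_filters + 2)
    hz_points = mel_to_hz(mel_points)
    bins = np.floor((n_fft + 1) * hz_points / sample_rate).astype(int)

    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):                     # rising edge
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):                    # falling edge
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

fb = mel_filter_bank(n_filters=26, n_fft=512, sample_rate=16000)
# Applying fb to a frame's power spectrum, then taking the log (and
# optionally a DCT), yields log-Mel / MFCC-style features.
```

A Gammatone filter bank plays the analogous role for Gammatone-based cepstral features, with filter shapes modeled on the human auditory system rather than triangles on the Mel scale.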

