IoT Cloud-Based Framework for Face Spoofing Detection with Deep Multicolor Feature Learning Model

2021, Vol 2021, pp. 1-18
Author(s): Sajad Einy, Cemil Oz, Yahya Dorostkar Navaei

Face-based authentication has become an important topic in various fields of IoT applications such as identity validation for social care, crime detection, ATM access, and computer security. However, these authentication systems are vulnerable to different attacks. Presentation attacks have become a clear threat to facial biometric-based authentication and security applications. To address this issue, we propose a deep learning approach for face spoofing detection in an IoT cloud-based environment. The approach extracts features from multiple color spaces to obtain more information from the input face image regarding luminance and chrominance. These features are combined and selected by the Minimum Redundancy Maximum Relevance (mRMR) algorithm to provide an efficient and discriminative feature set. Finally, the extracted deep color-based features of the face image are used for face spoofing detection in a cloud environment. The proposed method achieves stable results with less training data than conventional deep learning methods, which reduces processing time in the training phase and optimizes resource management when storing training data on the cloud. The proposed system was tested and evaluated on two challenging publicly available face spoofing databases, Replay-Attack and ROSE-Youtu. The experimental results showed that the proposed method achieved satisfactory results compared to state-of-the-art methods, with equal error rates (EER) of 0.2% and 3.8% on the Replay-Attack and ROSE-Youtu databases, respectively.
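A minimal sketch of the two stages described above: deep features are computed in several color spaces and concatenated, then a greedy mRMR pass keeps the most relevant, least redundant dimensions. The color-space choices, the `extractor` callable (any CNN returning a 1-D feature vector), and the use of correlation as the redundancy proxy are assumptions for illustration, not the authors' exact pipeline.

```python
# Illustrative multicolor feature extraction + greedy mRMR selection (sketch only).
import cv2
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def multicolor_features(bgr_image, extractor):
    """Concatenate deep features computed in RGB, HSV and YCbCr color spaces."""
    spaces = [
        cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB),
        cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV),
        cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb),
    ]
    return np.concatenate([extractor(img) for img in spaces])

def mrmr_select(X, y, k):
    """Greedy mRMR: maximise relevance (mutual information with the label),
    minimise redundancy (mean absolute correlation with already-selected features)."""
    relevance = mutual_info_classif(X, y)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        scores = []
        for j in remaining:
            redundancy = (
                np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
                if selected else 0.0
            )
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
```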

2020, Vol 8 (5), pp. 3309-3314

Nowadays, face biometric-based access control systems are becoming ubiquitous in daily life, yet they remain vulnerable to spoofing attacks. Developing robust and reliable methods to prevent such fraud is therefore essential. As deep learning techniques have achieved satisfactory performance in computer vision, they have also been applied to face spoofing detection. However, the numerous parameters in these deep learning-based detection methods cannot be tuned to their optimum with limited data. In this paper, a highly accurate face spoofing detection system using multiple features and deep learning is proposed. The input video is broken into frames using content-based frame extraction, and the face of the person is cropped from each frame. From the cropped images, multiple features, namely Histogram of Gradients (HoG), Local Binary Pattern (LBP), Center-Symmetric LBP (CS-LBP), and Gray-Level Co-occurrence Matrix (GLCM), are extracted to train a Convolutional Neural Network (CNN). Training and testing are performed separately using the collected sample data. In experiments on the standard Replay-Attack spoofing database, the proposed system outperforms other state-of-the-art techniques, delivering strong attack detection results.
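A hedged sketch of the hand-crafted descriptors described above, computed from one cropped grayscale face and concatenated before being fed to a classifier. CS-LBP is omitted for brevity; cell sizes, LBP radius, and GLCM offsets are assumed values (the paper does not specify them here), and the function names follow recent scikit-image releases.

```python
# Illustrative HoG / LBP / GLCM descriptor for a single cropped grayscale face.
import numpy as np
from skimage.feature import hog, local_binary_pattern, graycomatrix, graycoprops

def frame_features(gray_face):
    """Combined HoG, uniform-LBP histogram and GLCM statistics (CS-LBP omitted)."""
    h = hog(gray_face, orientations=9, pixels_per_cell=(16, 16),
            cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(gray_face.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity",
                                      "energy", "correlation")])
    return np.concatenate([h, lbp_hist, glcm_feats])
```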


2020, Vol 34 (07), pp. 11029-11036
Author(s): Jiabo Huang, Qi Dong, Shaogang Gong, Xiatian Zhu

Convolutional neural networks (CNNs) have achieved unprecedented success in a variety of computer vision tasks. However, they usually rely on supervised model learning with the need for massive labelled training data, dramatically limiting their usability and deployability in real-world scenarios without any labelling budget. In this work, we introduce a general-purpose unsupervised deep learning approach to deriving discriminative feature representations. It is based on self-discovering semantically consistent groups of unlabelled training samples with the same class concepts through a progressive affinity diffusion process. Extensive experiments on object image classification and clustering show the performance superiority of the proposed method over state-of-the-art unsupervised learning models on six common image recognition benchmarks: MNIST, SVHN, STL10, CIFAR10, CIFAR100 and ImageNet.
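To make the grouping idea concrete, the sketch below diffuses pairwise affinities over a kNN graph built from CNN features, so that samples of the same class concept end up strongly connected. This is a simplified illustration of affinity diffusion in general, not a reimplementation of the paper's progressive scheme.

```python
# Simplified affinity diffusion over a kNN graph of L2-normalised features.
import numpy as np

def diffuse_affinity(features, k=10, alpha=0.9, iters=20):
    """Build a sparse kNN affinity matrix and diffuse it:
    W <- alpha * S @ W @ S.T + (1 - alpha) * I."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    S = np.zeros_like(sim)
    for i, row in enumerate(sim):
        idx = np.argsort(row)[::-1][1:k + 1]   # top-k neighbours, excluding self
        S[i, idx] = row[idx]
    S = (S + S.T) / 2
    S = S / (S.sum(axis=1, keepdims=True) + 1e-8)   # row-normalise
    W = np.eye(len(f))
    for _ in range(iters):
        W = alpha * S @ W @ S.T + (1 - alpha) * np.eye(len(f))
    return W   # high W[i, j] suggests i and j share a class concept
```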


2020, Vol 12 (7), pp. 1092
Author(s): David Browne, Michael Giering, Steven Prestwich

Scene classification is an important aspect of image/video understanding and segmentation. However, remote-sensing scene classification is a challenging image recognition task, partly due to the limited training data, which causes deep-learning Convolutional Neural Networks (CNNs) to overfit. Another difficulty is that images often have very different scales and orientation (viewing angle). Yet another is that the resulting networks may be very large, again making them prone to overfitting and unsuitable for deployment on memory- and energy-limited devices. We propose an efficient deep-learning approach to tackle these problems. We use transfer learning to compensate for the lack of data, and data augmentation to tackle varying scale and orientation. To reduce network size, we use a novel unsupervised learning approach based on k-means clustering, applied to all parts of the network: most network reduction methods use computationally expensive supervised learning methods, and apply only to the convolutional or fully connected layers, but not both. In experiments, we set new standards in classification accuracy on four remote-sensing and two scene-recognition image datasets.
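As an illustration of the k-means-based network reduction idea, the sketch below clusters the filters of a single convolutional layer and keeps one centroid per cluster. The keep ratio, the PyTorch setting, and the function name are assumptions for illustration; in a full pipeline the following layer's input channels would also have to be adjusted.

```python
# Hedged sketch: compress one Conv2d layer by k-means clustering of its filters.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def reduce_conv_layer(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Replace a Conv2d's filters with the k-means centroids of the originals."""
    w = conv.weight.detach().cpu().numpy()            # (out_ch, in_ch, kh, kw)
    out_ch = w.shape[0]
    k = max(1, int(out_ch * keep_ratio))
    km = KMeans(n_clusters=k, n_init=10).fit(w.reshape(out_ch, -1))
    new_w = km.cluster_centers_.reshape(k, *w.shape[1:])
    reduced = nn.Conv2d(conv.in_channels, k, conv.kernel_size,
                        stride=conv.stride, padding=conv.padding,
                        bias=conv.bias is not None)
    reduced.weight.data = torch.from_numpy(new_w).float()
    return reduced                                    # downstream layers must be re-wired
```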


The wide-scale use of facial recognition systems has raised concerns about spoofing attacks. Security is an essential requirement for a face recognition system if it is to provide reliable protection against such attacks. Spoofing occurs when an impostor attempts to pass as an authorized user in order to gain illicit access to the protected system. To identify spoofing attacks, face spoofing detection approaches have been developed. Traditional face spoofing detection techniques are often inadequate because most of them rely only on grayscale information and discard the color information. Here, a face spoofing detection approach based on color texture and edge analysis is presented. To investigate the texture of the input images, the Local Binary Pattern (LBP) and the Edge Histogram Descriptor (EHD) are employed. Experiments on a publicly available dataset, Replay-Attack, showed excellent results compared to existing works.
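A rough sketch of combining color texture with edge analysis: per-channel LBP histograms in the YCbCr space plus a gradient-orientation histogram standing in for the Edge Histogram Descriptor. All parameter values here (radius, bin counts, color space) are assumptions rather than the paper's settings.

```python
# Illustrative color-texture + edge descriptor for a cropped BGR face image.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def color_texture_edge_descriptor(bgr_face):
    ycrcb = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2YCrCb)
    feats = []
    for ch in cv2.split(ycrcb):                       # texture per color channel
        lbp = local_binary_pattern(ch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    gray = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx)                       # edge orientations
    edge_hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi),
                                weights=np.hypot(gx, gy), density=True)
    feats.append(edge_hist)
    return np.concatenate(feats)
```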


Author(s): Azar Abid Salih, Siddeeq Y. Ameen, Subhi R. M. Zeebaree, Mohammed A. M. Sadeeq, Shakir Fattah Kak, ...

Recently, computer networks have faced a major challenge: various malicious attacks are growing daily. Intrusion detection is one of the leading research problems in network and computer security. This paper investigates and presents Deep Learning (DL) techniques for improving the Intrusion Detection System (IDS). Moreover, it provides a detailed comparison of these techniques, covering their evaluated performance, the deep learning algorithms used for attack detection, feature learning, and the datasets employed, in order to identify the advantages of applying deep learning to network intrusion detection.


2020, Vol 34 (07), pp. 12394-12401
Author(s): Mingda Wu, Di Huang, Yuanfang Guo, Yunhong Wang

Recently, Human Attribute Recognition (HAR) has become a hot topic due to its scientific challenges and application potentials, where localizing attributes is a crucial stage but not well handled. In this paper, we propose a novel deep learning approach to HAR, namely Distraction-aware HAR (Da-HAR). It enhances deep CNN feature learning by improving attribute localization through a coarse-to-fine attention mechanism. At the coarse step, a self-mask block is built to roughly discriminate and reduce distractions, while at the fine step, a masked attention branch is applied to further eliminate irrelevant regions. Thanks to this mechanism, feature learning is more accurate, especially when heavy occlusions and complex backgrounds exist. Extensive experiments are conducted on the WIDER-Attribute and RAP databases, and state-of-the-art results are achieved, demonstrating the effectiveness of the proposed approach.
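To illustrate the coarse masking step, the sketch below shows a generic self-mask block: a small 1x1-convolution head predicts a spatial confidence map that down-weights distracting regions of the feature map. This is an illustrative attention gate under assumed channel sizes, not the authors' exact Da-HAR module.

```python
# Generic self-mask (spatial attention) block in PyTorch.
import torch
import torch.nn as nn

class SelfMaskBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),                 # per-location confidence in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = self.mask_head(x)          # (N, 1, H, W) spatial mask
        return x * mask + x               # residual keeps gradients flowing

# usage: feat = SelfMaskBlock(256)(torch.randn(2, 256, 14, 14))
```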


PLoS ONE, 2021, Vol 16 (9), pp. e0256782
Author(s): Yiting Tsai, Susan A. Baldwin, Bhushan Gopaluni

Much of the current research on supervised modelling is focused on maximizing outcome prediction accuracy. However, in engineering disciplines, an arguably more important goal is that of feature extraction, the identification of relevant features associated with the various outcomes. For instance, in microbial communities, the identification of keystone species can often lead to improved prediction of future behavioral shifts. This paper proposes a novel feature extractor based on Deep Learning, which is largely agnostic to underlying assumptions regarding the training data. Starting from a collection of microbial species abundance counts, the Deep Learning model first trains itself to classify the selected distinct habitats. It then identifies indicator species associated with the habitats. The results are then compared and contrasted with those obtained by traditional statistical techniques. The indicator species are similar when compared at top taxonomic levels such as Domain and Phylum, despite visible differences in lower levels such as Class and Order. More importantly, when our estimated indicators are used to predict final habitat labels using simpler models (such as Support Vector Machines and traditional Artificial Neural Networks), the prediction accuracy is improved. Overall, this study serves as a preliminary step that bridges modern, black-box Machine Learning models with traditional, domain expertise-rich techniques.
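A hedged sketch of the workflow: fit a classifier on species abundance counts and rank species by permutation importance to obtain candidate indicator species. An MLPClassifier stands in here for the paper's Deep Learning model, and the function name and parameters are placeholders, not the authors' implementation.

```python
# Illustrative indicator-species ranking via permutation importance.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

def indicator_species(abundance, habitat_labels, species_names, top_n=20):
    """Rank species by how much shuffling them degrades habitat prediction."""
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                          random_state=0).fit(abundance, habitat_labels)
    result = permutation_importance(model, abundance, habitat_labels,
                                    n_repeats=10, random_state=0)
    ranked = np.argsort(result.importances_mean)[::-1][:top_n]
    return [(species_names[i], result.importances_mean[i]) for i in ranked]
```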

