Gender Recognition from Facial Images using Hybrid Classical-Quantum Neural Network

Author(s):  
Shubham Mittal ◽  
Soven K. Dana
2010 ◽  
Vol 20-23 ◽  
pp. 612-617 ◽  
Author(s):  
Wei Sun ◽  
Yu Jun He ◽  
Ming Meng

The paper presents a novel quantum neural network (QNN) model with variable selection for short-term load forecasting. In the proposed QNN model, a combination of maximum conditional entropy theory and principal component analysis is first used to select the main influential factors with the highest correlation to the power load index, yielding an effective set of input variables. The quantum neural network forecasting model is then constructed. The proposed QNN forecasting model is tested on load data from a certain province. The experiments and the performance of the QNN model are reported, and the results show that the method provides a satisfactory improvement in forecasting accuracy compared with a traditional BP network model.
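The variable-selection step described above can be sketched in a few lines. The snippet below is an illustrative stand-in only: it ranks candidate factors with scikit-learn's mutual-information estimator (used here in place of the paper's maximum-conditional-entropy criterion, which is not reproduced) and then applies PCA to obtain a compact input set for a forecasting model; the toy data and the choice of five factors and three components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression

# Toy data: 200 days, 10 candidate influencing factors (temperature, humidity, ...),
# and the load index to be forecast. Replace with real province load data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                                   # candidate input factors
y = 2.0 * X[:, 0] + X[:, 3] + rng.normal(scale=0.1, size=200)    # load index

# Step 1 (stand-in for the maximum-conditional-entropy ranking): score each factor
# by its mutual information with the load index and keep the strongest ones.
mi = mutual_info_regression(X, y, random_state=0)
top = np.argsort(mi)[::-1][:5]            # keep the 5 most informative factors
X_sel = X[:, top]

# Step 2: principal component analysis on the selected factors to obtain a compact,
# decorrelated input set for the forecasting network.
pca = PCA(n_components=3)
X_inputs = pca.fit_transform(X_sel)

print("selected factor indices:", top)
print("explained variance ratio:", pca.explained_variance_ratio_)
# X_inputs would then be fed to the QNN (or BP) forecasting model.
```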


2021 ◽  
Vol 2021 (4) ◽  
Author(s):  
Sayantan Choudhury ◽  
Ankan Dutta ◽  
Debisree Ray

Abstract In this work, our prime objective is to study the phenomena of quantum chaos and complexity in the machine learning dynamics of a Quantum Neural Network (QNN). A Parameterized Quantum Circuit (PQC) in the hybrid quantum-classical framework is introduced as a universal function approximator to perform optimization with Stochastic Gradient Descent (SGD). We employ a statistical and differential geometric approach to study the learning theory of the QNN. The evolution of the parametrized unitary operators is correlated with the trajectory of the parameters in the Diffusion metric. We establish parametrized versions of Quantum Complexity and Quantum Chaos in terms of physically relevant quantities, which are essential not only in determining the stability but also in providing a very significant lower bound on the generalization capability of the QNN. We explicitly prove that when the system executes limit cycles or oscillations in phase space, the generalization capability of the QNN is maximized. Finally, we determine the bound on the generalization capability in terms of the variance of the parameters of the QNN in a steady-state condition using the Cauchy-Schwarz inequality.
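As a rough illustration of the hybrid quantum-classical setup, the sketch below trains a tiny parameterized quantum circuit as a function approximator with stochastic (minibatch) gradient descent using PennyLane; the library, circuit layout, target function and hyperparameters are assumptions, not the configuration studied in the paper.

```python
import numpy as np
import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(params, x):
    qml.RY(x, wires=0)             # data-encoding rotation
    qml.RZ(params[0], wires=0)     # trainable rotations (the PQC parameters)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

def cost(params, xs, ys):
    # Mean squared error between circuit outputs and target values.
    loss = 0.0
    for x, y in zip(xs, ys):
        loss = loss + (circuit(params, x) - y) ** 2
    return loss / len(xs)

xs = pnp.linspace(0.0, np.pi, 12)
ys = pnp.sin(xs)                   # toy target function to approximate
params = pnp.array([0.1, 0.1], requires_grad=True)

opt = qml.GradientDescentOptimizer(stepsize=0.2)
rng = np.random.default_rng(0)
for step in range(200):
    idx = rng.choice(len(xs), size=4, replace=False)   # minibatch -> stochastic GD
    params = opt.step(lambda p: cost(p, xs[idx], ys[idx]), params)

print("trained parameters:", params)
```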


2018 ◽  
Vol 16 (08) ◽  
pp. 1840001 ◽
Author(s):  
Johannes Bausch

The goal of this work is to define a notion of a “quantum neural network” to classify data, which exploits the low-energy spectrum of a local Hamiltonian. As a concrete application, we build a binary classifier, train it on some actual data and then test its performance on a simple classification task. More specifically, we use Microsoft’s quantum simulator, LIQUi|⟩, to construct local Hamiltonians that can encode trained classifier functions in their ground space, and which can be probed by measuring the overlap with test states corresponding to the data to be classified. To obtain such a classifier Hamiltonian, we further propose a training scheme based on quantum annealing which is completely closed-off to the environment and which does not depend on external measurements until the very end, avoiding unnecessary decoherence during the annealing procedure. For a network of size [Formula: see text], the trained network can be stored as a list of [Formula: see text] coupling strengths. We address the question of which interactions are most suitable for a given classification task, and develop a qubit-saving optimization for the training procedure on a simulated annealing device. Furthermore, a small neural network to classify colors into red versus blue is trained and tested, and benchmarked against the annealing parameters.
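A heavily simplified, purely classical analogue of the annealing-based training loop is sketched below: the "network" is just a list of coupling strengths, and simulated annealing proposes random perturbations that are accepted with a Metropolis rule. The toy dataset, cooling schedule and energy function are assumptions and do not reproduce the Hamiltonian ground-space construction or the LIQUi|⟩ simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification task ("red" vs. "blue" encoded as +/-1 labels).
X = rng.normal(size=(60, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]))    # ground-truth labelling rule

def predict(J, X):
    # The classifier is a list of coupling strengths J; the predicted label is the
    # sign of the input "spins" coupled through J (a crude stand-in for reading out
    # the ground space of a classifier Hamiltonian).
    return np.sign(X @ J)

def energy(J, X, y):
    # Training energy: number of misclassified points.
    return np.sum(predict(J, X) != y)

# Simulated annealing over the coupling strengths.
J = rng.normal(size=3)
T = 2.0
for step in range(2000):
    J_new = J + rng.normal(scale=0.1, size=3)   # propose a perturbation
    dE = energy(J_new, X, y) - energy(J, X, y)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        J = J_new                               # Metropolis acceptance
    T *= 0.999                                  # cooling schedule

print("final couplings:", J, "training errors:", energy(J, X, y))
```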


2019 ◽  
Author(s):  
Elizabeth Behrman ◽  
Nam Nguyen ◽  
James Steck

Noise and decoherence are two major obstacles to the implementation of large-scale quantum computing. Because of the no-cloning theorem, which says we cannot make an exact copy of an arbitrary quantum state, simple redundancy will not work in a quantum context, and unwanted interactions with the environment can destroy coherence and thus the quantum nature of the computation. Because of the parallel and distributed nature of classical neural networks, they have long been successfully used to deal with incomplete or damaged data. In this work, we show that our model of a quantum neural network (QNN) is similarly robust to noise, and that, in addition, it is robust to decoherence. Moreover, robustness to noise and decoherence is not only maintained but improved as the size of the system is increased. Noise and decoherence may even be of advantage in training, as it helps correct for overfitting. We demonstrate the robustness using entanglement as a means for pattern storage in a qubit array. Our results provide evidence that machine learning approaches can obviate otherwise recalcitrant problems in quantum computing.
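To make the idea of entanglement-based pattern storage under noise concrete, the toy NumPy sketch below stores a pattern as a two-qubit Bell state and tracks an entanglement monotone (the negativity) as depolarizing noise is applied; this is an illustrative, assumption-laden stand-in, not the authors' QNN or their noise model.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density matrix: the "stored pattern".
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_bell = np.outer(phi, phi)

def depolarize(rho, p):
    """Two-qubit depolarizing noise: mix the state with the maximally mixed state."""
    return (1 - p) * rho + p * np.eye(4) / 4

def negativity(rho):
    """Entanglement monotone: sum of |negative eigenvalues| of the partial transpose."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    eigs = np.linalg.eigvalsh(rho_pt)
    return -eigs[eigs < 0].sum()

for p in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(f"noise p={p:.1f}  negativity={negativity(depolarize(rho_bell, p)):.3f}")
# The entanglement degrades gradually with noise rather than vanishing immediately,
# which is the kind of graceful degradation the abstract refers to.
```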


2021 ◽  
Author(s):  
Ghazaala Yasmin ◽  
ASIT KUMAR DAS ◽  
Janmenjoy Nayak ◽  
S Vimal ◽  
Soumi Dutta

Abstract Speech is one of the most delicate media through which the gender of a speaker can easily be identified. Although related research has shown very good progress with machine learning, deep learning has recently opened a promising research direction for addressing the shortcomings of gender discrimination based on traditional machine learning techniques. In deep learning techniques, speech features are generated automatically from the raw data and have more discriminating power than human-engineered features. But in some practical situations, such as gender recognition, it is observed that a combination of both types of features sometimes provides comparatively better performance. In the proposed work, we initially extract and select some informative and precise acoustic features relevant to gender recognition using entropy-based information theory and Rough Set Theory (RST). Next, the audio speech signals are fed directly into a deep neural network model consisting of a Convolutional Neural Network (CNN) and a Gated Recurrent Unit network (GRUN) to extract features useful for gender recognition. The RST selects precise and informative features, the CNN extracts locally encoded important features, and the GRUN mitigates the vanishing and exploding gradient problems. Finally, a hybrid gender recognition system is developed by combining both generated feature vectors. The developed model has been tested on five benchmark datasets and one simulated dataset to evaluate its performance, and it is observed that the combined feature vector provides a more effective gender recognition system, especially when transgender is considered as a gender type together with male and female.
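A minimal PyTorch sketch of such a hybrid architecture is given below: a CNN front end over the raw waveform feeds a GRU, and the recurrent state is concatenated with a vector of handcrafted acoustic features before classification into three gender classes. The layer sizes, the 1 s of 16 kHz input and the 20-dimensional acoustic vector (standing in for the RST-selected features) are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class HybridGenderNet(nn.Module):
    """CNN + GRU branch on raw audio, fused with handcrafted acoustic features."""
    def __init__(self, n_acoustic: int = 20, n_classes: int = 3):
        super().__init__()
        # CNN branch: locally encoded features from the raw waveform.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.BatchNorm1d(16), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.BatchNorm1d(32), nn.ReLU(),
        )
        # GRU branch: sequence modelling over the CNN feature maps.
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        # Classifier over the concatenation of learned and handcrafted features.
        self.fc = nn.Linear(64 + n_acoustic, n_classes)

    def forward(self, wave, acoustic):
        # wave: (batch, 1, samples); acoustic: (batch, n_acoustic)
        z = self.cnn(wave)              # (batch, 32, frames)
        z = z.transpose(1, 2)           # (batch, frames, 32) for the GRU
        _, h = self.gru(z)              # h: (1, batch, 64)
        fused = torch.cat([h[-1], acoustic], dim=1)
        return self.fc(fused)

model = HybridGenderNet()
logits = model(torch.randn(4, 1, 16000), torch.randn(4, 20))   # 1 s of 16 kHz audio
print(logits.shape)                    # torch.Size([4, 3]): male / female / transgender
```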


Author(s):  
Fadhlan Hafizhelmi Kamaru Zaman

Gender classification has demonstrated high accuracy in many previous works. However, it does not generalize very well in unconstrained settings and environments. Furthermore, many proposed Convolutional Neural Network (CNN) based solutions vary significantly in their characteristics and architectures, which calls for an optimal CNN architecture for this specific task. In this work, a hand-crafted, custom CNN architecture is proposed to distinguish between male and female facial images. This custom CNN requires smaller input image resolutions and significantly fewer trainable parameters than some popular state-of-the-art networks such as GoogLeNet and AlexNet. It also employs batch normalization layers, which results in better computational efficiency. In experiments on publicly available datasets such as LFW, CelebA and IMDB-WIKI, the proposed custom CNN delivered the fastest inference time in all tests, needing only 0.92 ms to classify 1200 images on GPU, 1.79 ms on CPU, and 2.51 ms on VPU. The custom CNN also delivers performance on par with the state of the art and even surpasses these methods on CelebA gender classification, where it delivered the best result at 96% accuracy. Moreover, in a more challenging cross-dataset evaluation, the custom CNN trained on the CelebA dataset gives the best gender classification accuracy on the IMDB and WIKI datasets, at 97% and 96% respectively.
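The sketch below shows what a compact, batch-normalized CNN of this kind might look like in PyTorch; the specific layer widths, the 64x64 input resolution and the resulting parameter count are illustrative assumptions rather than the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class CompactGenderCNN(nn.Module):
    """A small gender classifier with batch normalization, in the spirit of the
    custom CNN described above (layer sizes here are illustrative assumptions)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # male / female

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = CompactGenderCNN()
print(sum(p.numel() for p in model.parameters()))   # tens of thousands of parameters,
                                                    # far fewer than AlexNet or GoogLeNet
print(model(torch.randn(8, 3, 64, 64)).shape)       # small 64x64 input resolution
```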

