Investigating the Generalization Ability of Parameterized Quantum Circuits with Hierarchical Structures

2021 ◽  
pp. 11-22
Author(s):  
Runheng Ran ◽  
Haozhen Situ

Quantum computing offers prospects for improving machine learning in two main ways: by accelerating computation and by improving model performance. Generalization ability, a key property of machine learning models, characterizes a model's capacity to predict unseen data. To address the question of whether quantum machine learning models provide reliable generalization, we explore quantum circuits with hierarchical structures for classifying both classical data and quantum state data. We also compare three derivative-free optimization methods: Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Constrained Optimization by Linear Approximation (COBYLA), and Powell's method. Numerical results show that these quantum circuits perform well in terms of both trainability and generalization ability.
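As a rough illustration of how such a comparison can be set up, the sketch below runs COBYLA, Powell, and CMA-ES on a toy stand-in for a circuit loss. The cost function, parameter count, and use of the pycma package are illustrative assumptions, not the authors' code; in practice the objective would be the measured loss of the parameterized circuit.

```python
# Minimal sketch: comparing three derivative-free optimizers on a toy
# surrogate for a parameterized-circuit loss (hypothetical objective).
import numpy as np
from scipy.optimize import minimize
import cma  # pycma package: pip install cma

rng = np.random.default_rng(0)

def cost(theta):
    # Stand-in loss landscape; replace with the circuit's empirical loss.
    return 1.0 - np.prod(np.cos(theta / 2.0)) ** 2

theta0 = rng.uniform(0.0, 2.0 * np.pi, size=8)  # 8 circuit parameters (assumed)

res_cobyla = minimize(cost, theta0, method="COBYLA")
res_powell = minimize(cost, theta0, method="Powell")
xbest, es = cma.fmin2(cost, theta0, 0.5)  # CMA-ES with initial step size 0.5

print("COBYLA:", res_cobyla.fun)
print("Powell:", res_powell.fun)
print("CMA-ES:", cost(xbest))
```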

2012 ◽  
Vol 215-216 ◽  
pp. 133-137
Author(s):  
Guo Shao Su ◽  
Yan Zhang ◽  
Zhen Xing Wu ◽  
Liu Bin Yan

The covariance matrix adaptation evolution strategy (CMA-ES) is a relatively new evolutionary algorithm that has become a powerful tool for solving highly nonlinear, multimodal optimization problems. Many real-world optimization problems require locating multiple optima in a search space, and evaluating candidate solutions can involve thousands of fitness-function evaluations, a time-consuming and expensive process. Conventional stochastic optimization methods therefore face a particular challenge when the number of function evaluations is very large. To overcome the high computational cost of stochastic optimization methods, a truss optimization method based on the CMA-ES algorithm is proposed and applied to the sectional and shape optimization of trusses. The results show that the method is feasible and offers high accuracy, high efficiency, and easy implementation.
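A minimal sketch of a CMA-ES section-optimization loop in the spirit of this approach is shown below: minimize truss weight with a penalty for stress-constraint violation. The two-member geometry, loads, material constants, and penalty weight are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: CMA-ES minimizing penalized truss mass over member areas.
import numpy as np
import cma  # pycma package: pip install cma

LENGTHS = np.array([1.0, 1.4])   # member lengths in m (assumed)
FORCES = np.array([5e4, 7e4])    # member axial forces in N (assumed)
DENSITY = 7850.0                 # steel density, kg/m^3
STRESS_LIMIT = 250e6             # allowable stress in Pa (assumed)

def weight_with_penalty(areas):
    areas = np.maximum(areas, 1e-8)              # keep cross sections positive
    weight = DENSITY * np.sum(areas * LENGTHS)   # objective: total mass (kg)
    stress = FORCES / areas
    violation = np.maximum(stress / STRESS_LIMIT - 1.0, 0.0)
    return weight * (1.0 + 1e3 * np.sum(violation))  # penalized objective

x0 = np.full(2, 1e-3)  # initial cross-sectional areas (m^2)
best, es = cma.fmin2(weight_with_penalty, x0, 5e-4)
print("areas:", best, "mass:", DENSITY * np.sum(best * LENGTHS))
```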


2022 ◽  
Vol 32 (1) ◽  
Author(s):  
ShiJie Wei ◽  
YanHu Chen ◽  
ZengRong Zhou ◽  
GuiLu Long

Abstract: Quantum machine learning is one of the most promising applications of quantum computing in the noisy intermediate-scale quantum (NISQ) era. We propose a quantum convolutional neural network (QCNN) inspired by classical convolutional neural networks (CNNs), which greatly reduces computational complexity compared with its classical counterparts, requiring $O((\log_2 M)^6)$ basic gates and $O(m^2 + e)$ variational parameters, where $M$ is the input data size, $m$ is the filter mask size, and $e$ is the number of parameters in a Hamiltonian. Our model is robust to certain noise in image recognition tasks, and its parameters are independent of the input size, making it friendly to near-term quantum devices. We demonstrate QCNN with two explicit examples. First, QCNN is applied to image processing, with numerical simulations of three types of spatial filtering: image smoothing, sharpening, and edge detection. Second, we demonstrate QCNN on image recognition, namely the recognition of handwritten digits. Compared with previous work, this machine learning model provides implementable quantum circuits that correspond exactly to a specific classical convolutional kernel. It offers an efficient avenue for transforming a CNN directly into a QCNN and opens up the prospect of exploiting quantum power to process information in the era of big data.
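For context, the sketch below shows standard classical 3x3 kernels for the three filtering tasks named in the abstract; a QCNN encodes such a kernel in a quantum circuit, but here only the classical counterpart is sketched, and the specific kernel values are common textbook choices rather than ones taken from the paper.

```python
# Classical counterparts of the three spatial-filtering tasks:
# smoothing, sharpening, and edge detection with 3x3 kernels.
import numpy as np
from scipy.signal import convolve2d

SMOOTH = np.full((3, 3), 1.0 / 9.0)                          # mean filter
SHARPEN = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])    # unsharp-style
EDGE = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]])   # Laplacian-style

image = np.random.default_rng(1).random((28, 28))  # stand-in for a digit image
for name, kernel in [("smooth", SMOOTH), ("sharpen", SHARPEN), ("edge", EDGE)]:
    out = convolve2d(image, kernel, mode="same", boundary="symm")
    print(name, out.shape)
```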


2008 ◽  
Vol 31 (5) ◽  
pp. 743-757 ◽  
Author(s):  
K.R. Fowler ◽  
J.P. Reese ◽  
C.E. Kees ◽  
J.E. Dennis ◽  
C.T. Kelley ◽  
...  

Author(s):  
Yi-Qi Hu ◽  
Yang Yu ◽  
Wei-Wei Tu ◽  
Qiang Yang ◽  
Yuqiang Chen ◽  
...  

Automatic machine learning (AutoML) aims to automatically choose the best configuration for machine learning tasks. However, evaluating a configuration can be very time consuming, particularly on learning tasks with large datasets. This limitation usually prevents derivative-free optimization from releasing its full power for a fine configuration search using many evaluations. To alleviate this limitation, we propose a derivative-free optimization framework for AutoML based on multi-fidelity evaluations. It uses many low-fidelity evaluations on small data subsets and very few high-fidelity evaluations on the full dataset. The low-fidelity evaluations, however, can be badly biased and need to be corrected at very low cost. We therefore propose the Transfer Series Expansion (TSE), which learns a low-fidelity correction predictor efficiently by linearly combining a set of base predictors. The base predictors can be obtained cheaply from down-scaled, previously experienced tasks. Experimental results on real-world AutoML problems verify that the proposed framework significantly accelerates derivative-free configuration search by making use of multi-fidelity evaluations.
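A minimal sketch of the correction idea described in the abstract is given below: fit a linear combination of base predictors so that cheap low-fidelity scores predict expensive high-fidelity ones. The base predictors, data shapes, and synthetic scores are all invented for illustration.

```python
# Hedged sketch of a TSE-style linear correction of low-fidelity evaluations.
import numpy as np

rng = np.random.default_rng(0)

# Base predictors: each maps a low-fidelity score to a predicted high-fidelity
# score. Here simple fixed transforms stand in for predictors that would be
# learned on down-scaled, previously experienced tasks.
base_predictors = [
    lambda s: s,            # identity
    lambda s: s ** 2,       # assumes bias grows with the score
    lambda s: np.log1p(s),  # assumes diminishing bias
]

# A handful of paired (low-fidelity, high-fidelity) evaluations on the target
# task; high-fidelity runs are expensive, so only a few pairs are available.
low = rng.random(5)
high = 0.8 * low + 0.1 + 0.02 * rng.standard_normal(5)  # synthetic targets

# Fit the combination weights by least squares (the cheap correction step).
Phi = np.column_stack([p(low) for p in base_predictors])
w, *_ = np.linalg.lstsq(Phi, high, rcond=None)

def corrected(s):
    """Predict high-fidelity scores from cheap low-fidelity scores s."""
    return np.column_stack([p(s) for p in base_predictors]) @ w

print(corrected(np.array([0.3, 0.6])))
```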


SPE Journal ◽  
2019 ◽  
Vol 25 (01) ◽  
pp. 081-104 ◽  
Author(s):  
Yimin Liu ◽  
Louis J. Durlofsky

Summary: In this study, we explore using multilevel derivative-free optimization (DFO) for history matching, with model properties described using principal-component-analysis (PCA) -based parameterization techniques. The parameterizations applied in this work are optimization-based PCA (O-PCA) and convolutional-neural-network (CNN) -based PCA (CNN-PCA). The latter, which derives from recent developments in deep learning, is able to accurately represent models characterized by multipoint spatial statistics. Mesh adaptive direct search (MADS), a pattern-search method that parallelizes naturally, is applied for the optimizations required to generate posterior (history-matched) models. The use of PCA-based parameterization considerably reduces the number of variables that must be determined during history matching (because the dimension of the parameterization is much smaller than the number of gridblocks in the model), but the optimization problem can still be computationally demanding. The multilevel strategy introduced here addresses this issue by reducing the number of simulations that must be performed at each MADS iteration. Specifically, the PCA coefficients (which are the optimization variables after parameterization) are determined in groups, at multiple levels, rather than all at once. Numerical results are presented for 2D cases, involving channelized systems (with binary and bimodal permeability distributions) and a deltaic-fan system using O-PCA and CNN-PCA parameterizations. O-PCA is effective when sufficient conditioning (hard) data are available, but it can lead to geomodels that are inconsistent with the training image when these data are scarce or nonexistent. CNN-PCA, by contrast, can provide accurate geomodels that contain realistic features even in the absence of hard data. History-matching results demonstrate that substantial uncertainty reduction is achieved in all cases considered, and that the multilevel strategy is effective in reducing the number of simulations required. It is important to note that the parameterizations discussed here can be used with a wide range of history-matching procedures (including ensemble methods), and that other derivative-free optimization methods can be readily applied within the multilevel framework.
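The group-wise structure of the multilevel strategy can be sketched as below: optimize one group of PCA coefficients per level while holding the others fixed. MADS itself is not available in SciPy, so Powell's method stands in for the pattern search here, and the misfit function and group sizes are placeholders; in practice the objective would invoke a reservoir simulation.

```python
# Sketch of multilevel, group-wise derivative-free history matching.
import numpy as np
from scipy.optimize import minimize

def misfit(xi):
    # Placeholder data-mismatch objective over PCA coefficients xi;
    # a real implementation would run a flow simulation here.
    return np.sum((xi - np.linspace(0.5, -0.5, xi.size)) ** 2)

xi = np.zeros(12)                                   # all PCA coefficients
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]   # three levels (assumed)

for level, grp in enumerate(groups):
    # Optimize only the current group's coefficients, rest held fixed.
    def partial(z, grp=grp):
        trial = xi.copy()
        trial[grp] = z
        return misfit(trial)

    res = minimize(partial, xi[grp], method="Powell")  # stand-in for MADS
    xi[grp] = res.x
    print(f"level {level}: misfit = {misfit(xi):.3e}")
```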


2008 ◽  
Vol 191 (3) ◽  
pp. 855-863 ◽  
Author(s):  
Ö. Uğur ◽  
B. Karasözen ◽  
M. Schäfer ◽  
K. Yapıcı
