Neuromorphic Adiabatic Quantum Computation

2009 ◽  
pp. 352-375
Author(s):  
Shigeo Sato ◽  
Mitsunaga Kinjo

The advantage of quantum mechanical dynamics in information processing has attracted much interest, and dedicated studies on quantum computation algorithms indicate that a quantum computer has remarkable computational power for certain tasks. Quantum properties such as superposition and tunneling are worth studying because they may overcome the weaknesses of the gradient descent method in classical neural networks. Conversely, techniques established for neural networks can be useful for developing quantum algorithms. In this chapter, the authors first show the effectiveness of incorporating quantum dynamics and then propose a neuromorphic adiabatic quantum computation algorithm based on the adiabatic change of a Hamiltonian. The proposed method can be viewed as a kind of complex-valued neural network because a qubit operates like a neuron. Next, the performance of the proposed algorithm is studied by applying it to a combinatorial optimization problem. Finally, the authors discuss its learning ability and hardware implementation.
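
As a concrete illustration of the adiabatic change of Hamiltonian described above, the following minimal Python sketch interpolates H(s) = (1 - s)H_B + s H_P for a few qubits and tracks the instantaneous ground state. The transverse-field driver, the toy cost function, and the linear schedule are illustrative assumptions, not the authors' actual construction.

```python
import numpy as np
from functools import reduce

n = 3
sx = np.array([[0, 1], [1, 0]])
I2 = np.eye(2)

def op_on(qubit, op):
    """Embed a single-qubit operator at position `qubit` of an n-qubit register."""
    mats = [op if k == qubit else I2 for k in range(n)]
    return reduce(np.kron, mats)

H_B = -sum(op_on(k, sx) for k in range(n))            # driver: transverse field
cost = np.array([bin(b).count("1") for b in range(2 ** n)], float)
H_P = np.diag(cost)                                    # toy problem Hamiltonian

# Follow the instantaneous ground state along the adiabatic path.
for s in np.linspace(0.0, 1.0, 5):
    H = (1 - s) * H_B + s * H_P
    evals, evecs = np.linalg.eigh(H)
    probs = np.abs(evecs[:, 0]) ** 2                   # ground-state amplitudes
    idx = int(np.argmax(probs))
    print(f"s={s:.2f}  most likely state = {idx:03b}  gap = {evals[1] - evals[0]:.3f}")
```

At s = 0 the ground state is the uniform superposition over all bitstrings; as s approaches 1 it concentrates on the minimizer of the cost, which is the sense in which each qubit behaves like a neuron relaxing toward a solution.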

2007 ◽  
Vol 05 (01n02) ◽  
pp. 223-228 ◽  
Author(s):  
ANNALISA MARZUOLI ◽  
MARIO RASETTI

We resort to considerations based on topological quantum field theory to outline the development of a possible quantum algorithm for the evaluation of the permanent of a 0-1 matrix. Such an algorithm might represent a breakthrough for quantum computation, since computing the permanent is considered a "universal problem", namely one among the hardest problems that a quantum computer can handle efficiently.
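
For reference, the best-known classical approaches are exponential: Ryser's inclusion-exclusion formula, sketched below in this naive form, evaluates the permanent of an n x n matrix in O(2^n n^2) arithmetic operations, which is what makes an efficient quantum algorithm attractive. The example matrix is arbitrary.

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Ryser's inclusion-exclusion formula (naive subset enumeration)."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]], float)
print(permanent(A))   # 3.0; for a 0-1 matrix this counts perfect matchings
```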


2014 ◽  
pp. 99-106
Author(s):  
Leonid Makhnist ◽  
Nikolaj Maniakov

Two new techniques for training multilayer neural networks are proposed. Their basic concept is based on the gradient descent method. For each technique, formulas for calculating the adaptive training steps are given. Matrix formulations are presented for both techniques, which are very helpful for their software implementation.
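
The paper's specific step-size formulas are not reproduced in this abstract, so the sketch below substitutes a standard adaptive-step scheme (backtracking line search with the Armijo condition) on a toy quadratic loss. Treat it as a generic stand-in for gradient descent with adaptive training steps, not the authors' method.

```python
import numpy as np

def backtracking_gd(loss, grad, w, step0=1.0, beta=0.5, c=1e-4, iters=50):
    """Gradient descent where the step is adapted at every iteration."""
    for _ in range(iters):
        g = grad(w)
        step = step0
        # Shrink the step until the Armijo sufficient-decrease test passes.
        while loss(w - step * g) > loss(w) - c * step * g @ g:
            step *= beta
        w = w - step * g
    return w

A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
loss = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b
print(backtracking_gd(loss, grad, np.zeros(2)))   # converges to A^{-1} b
```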


Author(s):  
Stefan Balluff ◽  
Jörg Bendfeld ◽  
Stefan Krauter

Gathering knowledge not only of the current but also of the upcoming wind speed is becoming more and more important as experience in operating and maintaining wind turbines grows. This matters not only for operation and maintenance tasks such as gearbox and generator checks, but also because energy providers have to sell the right amount of their converted energy on the European energy markets: knowledge of the wind, and hence of the electrical power of the next day, is of key importance. Selling more energy than has been offered is penalized, as is delivering less energy than contractually promised. In addition, the price per offered kWh decreases in the case of a surplus of energy. Various methods from computer science are available for such forecasts: fuzzy logic, linear prediction, and neural networks. This paper presents current results of wind speed forecasts using recurrent neural networks (RNN) and the gradient descent method together with a backpropagation learning algorithm. The data used have been extracted from NASA's Modern-Era Retrospective analysis for Research and Applications (MERRA), which is calculated by a GEOS-5 Earth System Modeling and Data Assimilation system. The presented results show that wind speed can be forecasted using historical data for training the RNN. Nevertheless, the current setup lacks robustness and can be improved further with regard to accuracy.
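
A minimal sketch of the overall pipeline follows: a small Elman-style RNN trained by gradient descent with one-step truncated backpropagation through time for one-step-ahead forecasting. The random `series` array is a hypothetical placeholder for normalized MERRA wind speeds, and the network size and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.random(500)       # placeholder for normalized MERRA wind speeds

H, T, lr = 16, len(series) - 1, 0.05
Wxh = rng.normal(0, 0.1, (H, 1))   # input-to-hidden weights
Whh = rng.normal(0, 0.1, (H, H))   # recurrent weights
Why = rng.normal(0, 0.1, (1, H))   # hidden-to-output weights

for epoch in range(20):
    h = np.zeros((H, 1))
    hs = [h]
    grads = [np.zeros_like(W) for W in (Wxh, Whh, Why)]
    loss = 0.0
    for t in range(T):
        x = np.array([[series[t]]])
        h = np.tanh(Wxh @ x + Whh @ hs[-1])
        hs.append(h)
        e = Why @ h - series[t + 1]          # one-step-ahead prediction error
        loss += (e ** 2).item()
        # One-step truncated backpropagation through time: cheap, biased gradients.
        dh = (Why.T @ e) * (1 - h ** 2)
        grads[0] += dh @ x.T
        grads[1] += dh @ hs[-2].T
        grads[2] += e @ h.T
    for W, g in zip((Wxh, Whh, Why), grads):
        W -= lr * g / T                      # plain gradient descent step
    print(epoch, loss / T)
```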


Author(s):  
WANG XIANGDONG ◽  
WANG SHOUJUE

In this paper, we present a neural-network-based manufacturing process control system for semiconductor factories to improve the die yield. A model based on neural networks is proposed to simulate the Very Large-Scale Integration (VLSI) manufacturing process. Learning from historical processing lists with Radial Basis Function (RBF) networks, we model the functional relationship between the wafer probing parameters and the die yield. We then use a gradient-descent method to search for a set of 'optimal' parameters that leads to the maximum yield of the model. Finally, we adjust the specification in the practical semiconductor manufacturing process accordingly. The average die yield increased from 51.7% to 57.5% after the system had been applied at Huajing Corporation.
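
A hedged sketch of the two-stage idea, fitting an RBF model of yield as a function of probing parameters and then gradient-ascending the model's inputs, is given below. The synthetic data, centers, and width are illustrative assumptions, not Huajing production data.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 2))                        # historical probing parameters
y = np.exp(-8 * ((X - 0.6) ** 2).sum(axis=1))   # synthetic yields, peak at (0.6, 0.6)

C, sigma = X[:30], 0.2                          # RBF centers and width (assumed)
def phi(x):
    """Gaussian basis activations for a single input point."""
    return np.exp(-((x - C) ** 2).sum(axis=1) / (2 * sigma ** 2))

Phi = np.stack([phi(x) for x in X])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # fit output weights to the data

x = np.array([0.2, 0.9])                        # start from the current recipe
for _ in range(200):                            # gradient ascent on the inputs
    a = phi(x)
    grad = (w * a) @ ((C - x) / sigma ** 2)     # d(yhat)/dx for Gaussian RBFs
    x += 0.05 * grad
print(x)                                        # climbs toward the model's high-yield region
```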


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1273
Author(s):  
Ivan Horváth ◽  
Robert Mendris

Quantum physics frequently involves a need to count the states, subspaces, measurement outcomes, and other elements of quantum dynamics. However, with quantum mechanics assigning probabilities to such objects, it is often desirable to work with the notion of a "total" that takes into account their varied relevance. For example, such an effective count of position states available to a lattice electron could characterize its localization properties. Similarly, the effective total of outcomes in the measurement step of a quantum computation relates to the efficiency of the quantum algorithm. Despite a broad need for effective counting, a well-founded prescription has not been formulated. Instead, assignments that do not respect the measure-like nature of the concept, such as versions of the participation number or exponentiated entropies, are used in some areas. Here, we develop the additive theory of effective number functions (ENFs), namely functions assigning consistent totals to collections of objects endowed with probability weights. Our analysis reveals the existence of a minimal total, realized by the unique ENF, which leads to effective counting with absolute meaning. Touching upon the nature of the measure, our results may find applications not only in quantum physics, but also in other quantitative sciences.
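
The two assignments criticized in the abstract can be compared directly; the sketch below evaluates the participation number and the exponentiated Shannon entropy on the same probability weights and shows that they disagree. The last quantity is our reading of the paper's minimal ENF and should be treated as an assumption.

```python
import numpy as np

p = np.array([0.70, 0.15, 0.10, 0.05])          # probability weights of four states

participation = 1.0 / np.sum(p ** 2)            # inverse participation ratio
exp_entropy = np.exp(-np.sum(p * np.log(p)))    # exponentiated Shannon entropy
# Assumed form of the paper's minimal ENF (our reading, not verified here):
minimal = np.sum(np.minimum(len(p) * p, 1.0))

print(participation, exp_entropy, minimal)      # ~1.90, ~2.49, 2.20 "effective states"
```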


2019 ◽  
Vol 9 (21) ◽  
pp. 4568
Author(s):  
Hyeyoung Park ◽  
Kwanyong Lee

The gradient descent method is an essential algorithm for the learning of neural networks. Among the diverse variations of gradient descent that have been developed to accelerate learning, natural gradient learning is based on the theory of information geometry on the stochastic neuromanifold and is known to have ideal convergence properties. Despite its theoretical advantages, the pure natural gradient has some limitations that prevent its practical usage. To obtain the explicit value of the natural gradient, one needs to know the true probability distribution of the input variables and to invert a matrix whose size is the square of the number of parameters. Although an adaptive estimation of the natural gradient has been proposed as a solution, it was originally developed for the online learning mode, which is computationally inefficient for learning from large data sets. In this paper, we propose a novel adaptive natural gradient estimation for the mini-batch learning mode, which is commonly adopted for big data analysis. For two representative stochastic neural network models, we present explicit parameter update rules and the learning algorithm. Through experiments on three benchmark problems, we confirm that the proposed method has convergence properties superior to those of conventional methods.
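
The flavor of the approach can be sketched as follows: keep a running estimate of the inverse Fisher matrix and precondition each mini-batch gradient with it. The recursion below is the known online adaptive form (Amari, Park, and Fukumizu); the paper's exact mini-batch update rules may differ, and the least-squares model is a toy assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=1000)    # synthetic regression data

w = np.zeros(3)
Ginv = np.eye(3)                 # running estimate of the inverse Fisher matrix
lr, eps, batch = 0.05, 0.01, 32
for step in range(300):
    idx = rng.integers(0, len(X), batch)
    Xb, yb = X[idx], y[idx]
    g = Xb.T @ (Xb @ w - yb) / batch            # mini-batch gradient
    # Adaptive inverse-Fisher recursion, applied to the averaged gradient.
    v = Ginv @ g
    Ginv = (1 + eps) * Ginv - eps * np.outer(v, v)
    w -= lr * Ginv @ g                          # natural gradient step
print(w)                                        # approaches w_true
```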


Author(s):  
Arnošt Veselý

This chapter deals with applications of artificial neural networks to classification and regression problems. Based on theoretical analysis, it demonstrates that in classification problems one should use the cross-entropy error function rather than the usual sum-of-squares error function. Using the gradient descent method to find the minimum of the cross-entropy error function leads to the well-known backpropagation-of-error scheme of gradient calculation if neurons with logistic or softmax output functions are used at the output layer of the neural network. The author believes that understanding the underlying theory presented in this chapter will help researchers in medical informatics choose more suitable network architectures for medical applications and carry out the network training more effectively.
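
The key computational consequence can be verified numerically: for a softmax output layer with cross-entropy loss, the gradient of the loss with respect to the output pre-activations reduces to y - t. The sketch below checks this against finite differences on arbitrary values.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())       # shift for numerical stability
    return e / e.sum()

z = np.array([0.2, -1.0, 0.7])    # pre-activations of the output layer
t = np.array([0.0, 0.0, 1.0])     # one-hot target
y = softmax(z)

analytic = y - t                  # claimed gradient dL/dz
numeric = np.zeros_like(z)        # finite-difference check
h = 1e-6
ce = lambda zz: -np.sum(t * np.log(softmax(zz)))
for i in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[i] += h
    zm[i] -= h
    numeric[i] = (ce(zp) - ce(zm)) / (2 * h)
print(np.allclose(analytic, numeric, atol=1e-5))   # True
```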


1998 ◽  
Vol 35 (02) ◽  
pp. 395-406 ◽  
Author(s):  
Jürgen Dippon

A stochastic gradient descent method is combined with a consistent auxiliary estimate to achieve global convergence of the recursion. Using step lengths converging to zero more slowly than 1/n and averaging the trajectories yields the optimal convergence rate of 1/√n and the optimal variance of the asymptotic distribution. Possible applications can be found in maximum likelihood estimation, regression analysis, training of artificial neural networks, and stochastic optimization.
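
A minimal sketch of the averaging idea on a one-dimensional stochastic quadratic is shown below: step lengths decay like n^(-2/3), i.e. slower than 1/n, and the running average of the trajectory (Polyak-Ruppert averaging) is reported alongside the raw iterate. The specific exponent and noise model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_star = 2.0                                  # minimizer of 0.5 * (x - 2)^2
theta, avg = 0.0, 0.0
for n in range(1, 100001):
    grad = (theta - theta_star) + rng.normal()    # noisy gradient observation
    theta -= n ** (-2 / 3) * grad                 # step length slower than 1/n
    avg += (theta - avg) / n                      # running trajectory average
print(theta, avg)   # the average is typically much closer to 2 than the raw iterate
```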

