Complex-Valued Neural Networks
Published by IGI Global
ISBN: 9781605662145, 9781605662152
Total documents: 16 (five years: 0); H-index: 4 (five years: 0)

Author(s): Teijiro Isokawa, Nobuyuki Matsui, Haruhiko Nishimura

Quaternions are a class of hypercomplex number systems: a four-dimensional extension of the complex numbers, used extensively in fields such as modern physics and computer graphics. Although neural networks employing quaternions have so far seen fewer applications than complex-valued neural networks, their number has been increasing recently. In this chapter, the authors describe two types of quaternionic neural network model. The first is a multilayer perceptron based on 3D geometrical affine transformations by quaternions; the operations this network can perform are translation, dilatation, and spatial rotation in three-dimensional space. Several examples are provided to demonstrate the utility of this network. The second is a Hopfield-type recurrent network whose parameters are directly encoded as quaternions. The stability of this network is demonstrated by proving that its energy decreases monotonically as the neuron states change. The fundamental properties of this network are illustrated with a three-neuron network.
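The chapter itself gives no code, but the affine operations named above (translation, dilatation, spatial rotation) can be sketched with plain quaternion algebra. A minimal NumPy illustration, with quaternions stored as `(w, x, y, z)` arrays (names and conventions are this sketch's, not the authors'):

```python
import numpy as np

def q_mult(a, b):
    # Hamilton product of two quaternions (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def affine(q, v, t, s=1.0):
    # 3D affine map built from quaternions: v' = s * (q v q*) + t,
    # i.e. spatial rotation (unit quaternion q), dilatation s, translation t.
    p = np.array([0.0, *v])                    # embed v as a pure quaternion
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    rotated = q_mult(q_mult(q, p), q_conj)[1:]
    return s * rotated + t

# Rotate (1,0,0) by 90 degrees about the z-axis, then translate by (0,0,1)
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
out = affine(q, np.array([1.0, 0.0, 0.0]), t=np.array([0.0, 0.0, 1.0]))
print(out)   # approximately (0, 1, 1)
```

A rotation by angle theta about a unit axis corresponds to the unit quaternion (cos(theta/2), sin(theta/2)·axis), which is why the half-angle appears in `q`.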


2009, pp. 376-410
Author(s): G.G. Rigatos, S.G. Tzafestas

Neural computation based on principles of quantum mechanics can provide improved models of memory processes and brain functioning, and is of primary importance for the realization of quantum computing machines. To this end, this chapter studies neural structures with weights that follow the model of the quantum harmonic oscillator. The proposed neural networks have stochastic weights calculated from the solution of Schrödinger's equation under the assumption of a parabolic (harmonic) potential. These weights correspond to diffusing particles, which interact with each other as the theory of Brownian motion (the Wiener process) predicts. The learning of the stochastic weights (the convergence of the diffusing particles to an equilibrium) is analyzed. In the case of associative memories, the proposed neural model yields an exponential increase in pattern storage capacity (the number of attractors). It is also shown that conventional neural networks and learning algorithms based on the error gradient can be conceived as a subset of the proposed quantum neural structures. Thus, the complementarity between classical and quantum physics is also validated in the field of neural computation.
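As a loose numerical intuition for "weights as diffusing particles in a parabolic potential" (this is a classical Langevin sketch, not the chapter's quantum derivation): particles obeying dw = -k·w·dt + sigma·dW relax to a Gaussian stationary density, the classical counterpart of the harmonic oscillator's Gaussian ground state. All parameter names here are this sketch's assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffuse_weights(w0, k=1.0, sigma=0.5, dt=1e-3, steps=10000):
    # Euler-Maruyama simulation of Langevin dynamics in the parabolic
    # potential V(w) = k*w^2/2 (an Ornstein-Uhlenbeck process).
    # Stationary density: Gaussian with mean 0, variance sigma^2/(2k).
    w = w0.copy()
    for _ in range(steps):
        w += -k * w * dt + sigma * np.sqrt(dt) * rng.standard_normal(w.shape)
    return w

# 5000 "weight particles", all started far from equilibrium at w = 3
w = diffuse_weights(np.full(5000, 3.0))
print(round(w.mean(), 2), round(w.var(), 2))  # mean near 0, variance near 0.125
```

The empirical variance converging to sigma^2/(2k) = 0.125 is the "convergence of the diffusing particles to an equilibrium" referred to in the abstract, in its simplest classical form.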


2009, pp. 325-351
Author(s): Nobuyuki Matsui, Haruhiko Nishimura, Teijiro Isokawa

Recently, quantum neural networks have been explored as candidates for improving the computational efficiency of neural networks. In this chapter, after a brief review of quantum computing, the authors introduce their qubit neural network, a multilayered network composed of quantum-bit neurons. Describing it requires the complex-valued representation, which is based on the concept of the quantum bit (qubit). Through simulations on parity-check problems as a benchmark, the authors show that the computational power of the qubit neural network is superior to that of conventional complex-valued and real-valued neural networks. Furthermore, they explore applications such as image processing and pattern recognition, clarifying that this model outperforms conventional neural networks.
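Why parity is a natural benchmark for phase-based neurons can be seen in a toy example (this is an illustration of phase encoding in general, not the authors' qubit-neuron model): encode each bit as a phase rotation of pi, so that parity, which is nonlinearly separable in the real domain, becomes a simple sum of phases.

```python
import numpy as np

def phase_neuron_parity(bits):
    # Encode bit b as the phase factor exp(i*pi*b); multiplying the
    # factors adds the phases, so the product is +1 for even parity
    # and -1 for odd parity.
    state = np.prod([np.exp(1j * np.pi * b) for b in bits])
    return int(state.real < 0)

for bits in [(0, 0), (0, 1), (1, 1), (1, 0, 1), (1, 1, 1)]:
    print(bits, phase_neuron_parity(bits))   # parity: 0, 1, 0, 0, 1
```

A single real-valued threshold neuron cannot compute parity of two or more bits, which is why benchmarks of this kind separate complex/qubit representations from real-valued ones.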


2009, pp. 104-122
Author(s): Mitsuo Yoshida, Takehiro Mori

Global stability analysis of complex-valued artificial recurrent neural networks remains a largely unexplored topic in information science. This chapter presents global stability conditions for discrete-time and continuous-time complex-valued recurrent neural networks, regarded as nonlinear dynamical systems. Global asymptotic stability conditions for these networks are derived through suitable choices of activation function. Under these conditions, there are classes of discrete-time and continuous-time complex-valued recurrent neural networks whose equilibrium point is globally asymptotically stable. Furthermore, the conditions are shown to apply successfully to solving convex programming problems, for which solution methods over the real field are generally tedious.
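One sufficient condition in this spirit (a generic contraction-mapping argument, not necessarily the chapter's exact conditions) is a 1-Lipschitz activation combined with a weight matrix of spectral norm below 1: every trajectory of the discrete-time network then converges to the same unique equilibrium. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def act(z):
    # Amplitude-phase activation: tanh on the modulus, phase preserved.
    # It is 1-Lipschitz (radial slope <= 1, tangential scaling <= 1).
    return np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))

n = 4
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)     # spectral norm 0.9 < 1 => contraction
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def run(z, steps=200):
    # Discrete-time complex-valued recurrent dynamics z <- act(Wz + b)
    for _ in range(steps):
        z = act(W @ z + b)
    return z

# Two far-apart initial states end up at the same equilibrium
z1 = run(10 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
z2 = run(10 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
print(np.max(np.abs(z1 - z2)))      # essentially zero
```

Each iteration shrinks the distance between any two states by at least the factor 0.9, so after 200 steps the trajectories are indistinguishable: global asymptotic stability of the unique fixed point.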


Author(s): Takehiko Ogawa

Network inversion solves inverse problems, estimating cause from result, using a multilayer neural network. The original network inversion was applied to ordinary multilayer neural networks with real-valued inputs and outputs; solving general inverse problems involving complex numbers requires a network with complex-valued inputs and outputs. In this chapter, the author introduces the complex-valued network inversion method for such problems. Inverse problems are, in general, ill-posed, and regularization addresses this by imposing additional conditions on the solution. This chapter also explains regularization for complex-valued network inversion.
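The idea can be sketched in miniature (a hand-made linear "network" and Tikhonov regularization; the chapter's method applies to trained multilayer networks, and all names below are this sketch's assumptions): freeze the forward map, then do gradient descent on the *input* so that the output matches a target, with a penalty term fixing the ill-posedness of the underdetermined map.

```python
import numpy as np

# A fixed "trained" complex-valued forward model (weights frozen);
# 3 inputs -> 2 outputs, so inversion is underdetermined (ill-posed).
W = np.array([[1.0, 0.0, 0.5j],
              [0.0, 1.0j, 0.5]])

def forward(x):
    return W @ x

def invert(target, lam=1e-3, lr=0.5, steps=300):
    # Minimize |forward(x) - target|^2 + lam*|x|^2 over the input x.
    # The lam term is Tikhonov regularization: it selects the
    # minimum-norm cause among the many consistent ones.
    x = np.zeros(3, dtype=complex)
    for _ in range(steps):
        r = forward(x) - target
        grad = W.conj().T @ r + lam * x    # Wirtinger gradient w.r.t. conj(x)
        x -= lr * grad
    return x

t = np.array([1.0 + 1.0j, -2.0j])
x = invert(t)
print(np.abs(forward(x) - t).max())   # residual on the order of the regularization bias
```

Without the `lam` term the estimated input can drift arbitrarily along the null space of `W`; with it, the iteration converges to a unique, stable solution.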


2009, pp. 284-323
Author(s): Michele Scarpiniti, Daniele Vigliano, Raffaele Parisi, Aurelio Uncini

This chapter introduces an Independent Component Analysis (ICA) approach to the separation of linear and nonlinear mixtures in the complex domain. Source separation is performed by an extension of the INFOMAX approach to the complex environment. The neural network approach is based on an adaptive activation function whose shape is modified appropriately during learning. Different models have been used to realize complex nonlinear functions in the linear and nonlinear settings. In the nonlinear setting, the nonlinear functions involved in learning are implemented by so-called splitting functions, which operate separately on the real and imaginary parts of the signal. In the linear setting, a generalized splitting function, which gives a more complete representation of a complex function, is used instead. Moreover, a simple adaptation algorithm is derived, and several experimental results demonstrate the effectiveness of the proposed method.
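The splitting construction itself is simple to state in code (a fixed-shape example; the chapter's functions are adaptive, learned during training): apply a real nonlinearity to the real and imaginary parts independently.

```python
import numpy as np

def splitting_tanh(z):
    # "Splitting" nonlinearity: a bounded real function applied
    # separately to Re(z) and Im(z). A bounded, everywhere-analytic
    # complex function cannot exist (Liouville's theorem), which is
    # one motivation for splitting the two parts.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

z = np.array([0.5 + 2.0j, -1.0 - 0.5j])
print(splitting_tanh(z))
```

The generalized splitting function mentioned in the abstract replaces the two independent scalar maps with a richer joint map of (Re z, Im z); the fixed `tanh` pair above is the simplest member of that family.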


2009, pp. 256-283
Author(s): Naoyuki Morita

The author proposes a method for automatically estimating the nuclear magnetic resonance (NMR) spectra of metabolites in the living body by magnetic resonance spectroscopy (MRS), without human intervention or complicated calculations. In this method, the problem of NMR spectrum estimation is transformed into estimating the parameters of a mathematical model of the NMR signal. To estimate these parameters, the author designed a complex-valued Hopfield neural network, noting that NMR signals are essentially complex-valued. In addition, he devised a technique called sequential extension of section (SES) that takes the decay of the NMR signal into account. The method's performance is evaluated using simulations, which show that the estimation precision of the spectrum improves when SES is used in combination with the neural network, and that SES can avoid local-minimum solutions in Hopfield neural networks.
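The standard mathematical model behind such parameter estimation (the usual free-induction-decay form; the exact parameterization in the chapter may differ) is a sum of exponentially damped complex sinusoids, whose parameters determine the spectrum:

```python
import numpy as np

def fid(t, amps, freqs, decays, phases):
    # Free-induction-decay model: s(t) = sum_k a_k * exp((i*2*pi*f_k - d_k)*t + i*p_k).
    # Estimating (a_k, f_k, d_k, p_k) is equivalent to estimating the
    # NMR spectrum, since each term gives one Lorentzian line.
    s = np.zeros_like(t, dtype=complex)
    for a, f, d, p in zip(amps, freqs, decays, phases):
        s += a * np.exp((1j * 2 * np.pi * f - d) * t + 1j * p)
    return s

# Two metabolite lines: 50 Hz (strong) and 120 Hz (weaker), 1 kHz sampling
t = np.arange(0, 1, 1e-3)
signal = fid(t, amps=[1.0, 0.5], freqs=[50.0, 120.0],
             decays=[5.0, 8.0], phases=[0.0, 0.3])
spectrum = np.fft.fft(signal)
peak = int(np.argmax(np.abs(spectrum[:500])))   # 1 Hz per bin
print(peak)   # dominant line near 50 Hz
```

Note that the signal is inherently complex-valued, which is the abstract's motivation for a complex-valued Hopfield network rather than a real-valued one.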


2009, pp. 236-255
Author(s): Donq-Liang Lee

New design methods for complex-valued multistate Hopfield associative memories (CVHAMs) are presented. The author shows that the well-known projection rule can be generalized to the complex domain, so that the weight matrix of a CVHAM can be designed using the generalized-inverse technique. The stability of the presented CVHAM is analyzed with an energy-function approach, which shows that in synchronous update mode a CVHAM is guaranteed to converge to a fixed point from any initial state. Moreover, the projection geometry of the generalized projection rule is discussed. To enhance the recall capability, a strategy for eliminating spurious memories is reported. Next, a generalized intraconnected bidirectional associative memory (GIBAM) is introduced; a GIBAM is a complex generalization of the intraconnected BAM (IBAM), and its design can likewise be accomplished with the generalized-inverse technique. Finally, the validity and performance of the introduced methods are investigated by computer simulation.
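The generalized-inverse design can be sketched directly (a minimal version of the projection rule; multistate activation and pattern layout are this sketch's choices): store patterns as columns of X, set W = X X^+ so that W projects onto the span of the patterns, making every stored pattern an exact fixed point.

```python
import numpy as np

rng = np.random.default_rng(3)
K, n, m = 4, 16, 3          # phase levels, neurons, stored patterns

def csign(z):
    # Multistate activation: snap each component's phase to the
    # nearest of K equally spaced points on the unit circle.
    k = np.round(np.angle(z) * K / (2 * np.pi)) % K
    return np.exp(2j * np.pi * k / K)

# Stored patterns: columns whose entries are random K-state phases
X = np.exp(2j * np.pi * rng.integers(0, K, size=(n, m)) / K)

# Generalized projection rule: W = X X^+ (Moore-Penrose pseudoinverse)
# projects onto span(X), so W @ X[:, p] == X[:, p] for every pattern p.
W = X @ np.linalg.pinv(X)

recalled = csign(W @ X[:, 0])
print(np.allclose(recalled, X[:, 0]))   # True: stored pattern is a fixed point
```

Unlike the Hebbian rule, the projection rule removes crosstalk between stored patterns (as long as they are linearly independent), which is the point of generalizing it to the complex domain.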


Author(s): V. Srinivasa Chakravarthy

This chapter describes the Complex Hopfield Neural Network (CHNN), a complex-variable version of the Hopfield neural network that can operate in both fixed-point and oscillatory modes. Memories can be stored by a complex version of Hebb's rule. In the fixed-point mode, the CHNN is similar to a continuous-time Hopfield network. In the oscillatory mode, when multiple patterns are stored, the network wanders chaotically among them; the presence of chaos in this mode is verified by appropriate time-series analysis. It is shown that adaptive connections can be used to control the chaos and increase memory capacity. An electronic realization of the network in the oscillatory mode, with fixed and adaptive connections, shows an interesting trade-off between energy expenditure and retrieval performance. It is also shown how the intrinsic chaos in the CHNN can serve as an annealing mechanism when the network is used to solve quadratic optimization problems. Finally, the network's applicability to chaotic synchronization is described.
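The complex Hebb rule mentioned above has a compact form: W = (1/n) * sum over patterns of the conjugate outer product. A small fixed-point-mode sketch (the update below is a simple phase-only iteration chosen for illustration; the chapter's CHNN uses continuous-time dynamics):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 32, 2

# Patterns with unit-modulus complex entries (random phases)
xi = np.exp(1j * 2 * np.pi * rng.random((m, n)))

# Complex Hebb rule: W_ij = (1/n) * sum_p xi_i^p * conj(xi_j^p)
W = sum(np.outer(p, p.conj()) for p in xi) / n

def update(z, steps=50):
    # Phase-only dynamics: weighted sum, then renormalize each neuron
    # back onto the unit circle.
    for _ in range(steps):
        u = W @ z
        z = u / np.abs(u)
    return z

# Recall pattern 0 from a phase-perturbed version of it
noisy = xi[0] * np.exp(1j * 0.3 * rng.standard_normal(n))
out = update(noisy)
overlap = abs(np.vdot(xi[0], out)) / n
print(round(overlap, 3))   # overlap close to 1: pattern recovered
```

With only two patterns in 32 neurons the Hebbian crosstalk is small, so the perturbed state is pulled back to (a global phase rotation of) the stored pattern; the overlap measure is invariant to that global phase.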


2009, pp. 352-375
Author(s): Shigeo Sato, Mitsunaga Kinjo

The advantage of quantum-mechanical dynamics in information processing has attracted much interest, and dedicated studies of quantum computation algorithms indicate that a quantum computer has remarkable computational power for certain tasks. Quantum properties such as superposition and tunneling are worth studying because they may overcome the weakness of the gradient-descent method in classical neural networks; conversely, techniques established for neural networks can be useful in developing quantum algorithms. In this chapter, the authors first show the effectiveness of incorporating quantum dynamics and then propose a neuromorphic adiabatic quantum computation algorithm based on the adiabatic change of a Hamiltonian. The proposed method can be viewed as a kind of complex-valued neural network, because a qubit operates like a neuron. Next, the performance of the algorithm is studied by applying it to a combinatorial optimization problem. Finally, the authors discuss its learning ability and hardware implementation.
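The adiabatic scheme can be illustrated in the ideal (infinitely slow) limit with a toy two-bit cost (the Hamiltonians and costs below are this sketch's assumptions, not the chapter's problem): interpolate H(s) = (1 - s)·Hb + s·Hp from a mixing Hamiltonian Hb to a cost Hamiltonian Hp, and track the instantaneous ground state, which at s = 1 encodes the minimum-cost configuration.

```python
import numpy as np

# Cost Hamiltonian: diagonal, listing the costs of the 4 basis states
# of 2 bits; the minimum cost sits at basis state index 2.
costs = np.array([3.0, 1.0, 0.0, 2.0])
Hp = np.diag(costs)

# Driver Hamiltonian: uniform off-diagonal mixing of all basis states
Hb = -(np.ones((4, 4)) - np.eye(4))

def adiabatic_ground_state(steps=100):
    # Ideal adiabatic limit: the system stays in the instantaneous
    # ground state of H(s) = (1-s)*Hb + s*Hp as s sweeps 0 -> 1,
    # so we diagonalize at each s and keep the lowest eigenvector.
    for s in np.linspace(0.0, 1.0, steps):
        H = (1 - s) * Hb + s * Hp
        vals, vecs = np.linalg.eigh(H)
        ground = vecs[:, 0]
    return ground

g = adiabatic_ground_state()
print(int(np.argmax(np.abs(g) ** 2)))   # -> 2, the minimum-cost configuration
```

In a real device the sweep must be slow relative to the minimum spectral gap of H(s); the neuromorphic variant in the chapter shapes the Hamiltonian using neural-network techniques rather than the generic form used here.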

