Complex-Valued Neural Network and Inverse Problems

Author(s):  
Takehiko Ogawa

Network inversion solves inverse problems, estimating the cause from the result, using a multilayer neural network. The original network inversion has been applied to conventional multilayer neural networks with real-valued inputs and outputs. For general inverse problems involving complex numbers, a solution by a neural network with complex-valued inputs and outputs is necessary. In this chapter, we introduce the complex-valued network inversion method to solve inverse problems with complex numbers. In general, difficulties attributable to the ill-posedness of inverse problems appear. Regularization resolves this ill-posedness by imposing additional conditions on the solution. In this chapter, we also explain regularization for complex-valued network inversion.
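As a concrete illustration of the idea, the sketch below inverts a small complex-valued network by gradient descent on the input while the trained weights stay fixed. The 2-4-1 architecture, the random stand-in weights, and the split-type activation are assumptions made for the sketch, not the chapter's actual model; note that without a regularization term, many different inputs can reproduce the same target output, which is precisely the ill-posedness discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "trained" complex weights for a 2-4-1 network (hypothetical;
# a real application would use weights learned by complex-valued training).
W1 = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
W2 = rng.standard_normal((1, 4)) + 1j * rng.standard_normal((1, 4))

def act(u):
    # Split-type complex activation: tanh on real and imaginary parts.
    return np.tanh(u.real) + 1j * np.tanh(u.imag)

def forward(z):
    return W2 @ act(W1 @ z)

def error(z, y_target):
    return 0.5 * np.sum(np.abs(forward(z) - y_target) ** 2)

def invert(y_target, steps=3000, lr=0.05, eps=1e-6):
    # Network inversion: the weights stay fixed and the *input* z is updated
    # by gradient descent on the output error. Finite differences over the
    # real and imaginary parts keep the sketch short; an implementation
    # would backpropagate these gradients analytically.
    z = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    for _ in range(steps):
        grad = np.zeros_like(z)
        base = error(z, y_target)
        for k in range(z.size):
            for direction in (1.0, 1j):       # probe Re(z_k) and Im(z_k)
                zp = z.copy()
                zp[k] += eps * direction
                grad[k] += (error(zp, y_target) - base) / eps * direction
        z -= lr * grad
    return z

y_star = forward(rng.standard_normal(2) + 1j * rng.standard_normal(2))
z_hat = invert(y_star)
print("output residual:", error(z_hat, y_star))   # small after convergence
```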

2005, Vol. 15 (01n02), pp. 129-135
Author(s):  
MITSUO YOSHIDA ◽  
YASUAKI KUROE ◽  
TAKEHIRO MORI

Recently, models of neural networks that can directly deal with complex numbers, complex-valued neural networks, have been proposed, and several studies on their information-processing abilities have been carried out. Furthermore, models of neural networks that can deal with quaternions, an extension of complex numbers, have also been proposed; however, they are all multilayer quaternion neural networks. This paper proposes models of fully connected recurrent quaternion neural networks, Hopfield-type quaternion neural networks. Since quaternion multiplication is non-commutative, several different models can be considered. We investigate the dynamics of the proposed models from the viewpoint of the existence of an energy function and derive conditions for its existence.
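The non-commutativity that motivates the multiple models can be seen directly from the Hamilton product. The minimal sketch below (quaternions as 4-vectors, an assumed representation) shows that multiplying by a weight on the left and on the right gives different results, so a weighted sum with the weights on the left and one with the weights on the right define genuinely different Hopfield-type networks.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as arrays [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
print(qmul(i, j))   # i*j =  k -> [0. 0. 0.  1.]
print(qmul(j, i))   # j*i = -k -> [0. 0. 0. -1.]
# Hence a weighted sum of qmul(w_ij, s_j) and one of qmul(s_j, w_ij)
# define different network models, each with its own energy conditions.
```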


1996, Vol. 8 (5), pp. 939-949
Author(s):  
G. Dündar ◽  
F-C. Hsu ◽  
K. Rose

The problems arising from the use of nonlinear multipliers in multilayer neural network synapse structures are discussed. The errors arising from the neglect of nonlinearities are shown and the effect of training in eliminating these errors is discussed. A method for predicting the final errors resulting from nonlinearities is described. Our approximate results are compared with the results from circuit simulations of an actual multiplier circuit.
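As a rough sketch of where such errors come from, the snippet below compares an ideal synapse product with a generic tanh-saturating multiplier (a hypothetical model, not the circuit analyzed in the paper): the two agree for small signals and diverge as the operands approach the saturation range.

```python
import numpy as np

def ideal_synapse(w, x):
    return w * x

def saturating_synapse(w, x, a=1.5):
    # Generic tanh-type saturation; for small signals tanh(u/a) ~ u/a,
    # so a*a*tanh(w/a)*tanh(x/a) ~ w*x and the ideal product is recovered.
    return a * a * np.tanh(w / a) * np.tanh(x / a)

w = np.linspace(-2.0, 2.0, 9)
x = 0.8
err = saturating_synapse(w, x) - ideal_synapse(w, x)
print(np.round(err, 4))   # the error grows with |w|, i.e., toward saturation
```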


2011, Vol. 90-93, pp. 337-341
Author(s):  
Ran Gang Yu ◽  
Yong Tian

Aiming at the disadvantages of the traditional BP neural network inversion method, namely its tendency to fall into local minima and its slow convergence, this paper proposes combining a genetic algorithm with neural networks, greatly improving the convergence rate. Finally, the feasibility and superiority of the above method are verified through the successful inversion of the initial ground stress of an actual project.
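A minimal sketch of the hybrid scheme follows, on a toy regression problem standing in for the ground-stress inversion (all sizes and data here are hypothetical): a genetic algorithm searches the weight space globally, and the best individual would then be handed to gradient-based BP for fine-tuning, avoiding the poor random initializations that trap plain BP in local minima.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data standing in for the actual inversion problem (hypothetical).
X = rng.uniform(-1, 1, (64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def predict(theta, X):
    # A 2-4-1 network with all weights packed into a flat 17-vector.
    W1 = theta[:8].reshape(4, 2); b1 = theta[8:12]
    W2 = theta[12:16];            b2 = theta[16]
    return np.tanh(X @ W1.T + b1) @ W2 + b2

def mse(theta):
    return np.mean((predict(theta, X) - y) ** 2)

# Genetic algorithm: keep the fittest weight vectors (selection) and
# perturb them with Gaussian noise (mutation), giving BP a good starting
# point instead of a random one.
pop = rng.standard_normal((40, 17))
for generation in range(200):
    order = np.argsort([mse(t) for t in pop])
    elite = pop[order[:10]]
    children = elite[rng.integers(0, 10, 30)] + 0.1 * rng.standard_normal((30, 17))
    pop = np.vstack([elite, children])

best = pop[np.argmin([mse(t) for t in pop])]
print("loss after GA:", mse(best))   # gradient-based fine-tuning would follow
```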


2008, Vol. 18 (02), pp. 123-134
Author(s):  
TOHRU NITTA

This paper proves the uniqueness theorem for 3-layered complex-valued neural networks in which the threshold parameters of the hidden neurons can take nonzero values. That is, if a 3-layered complex-valued neural network is irreducible, then the 3-layered complex-valued neural network that approximates a given complex-valued function is uniquely determined up to a finite group of transformations of its learnable parameters.
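To see why the result is stated "up to a finite group," note that the network function, written here in an assumed notation with one complex input and output for brevity, is invariant at least under relabelings of the hidden neurons (permutations are one family of such transformations; the full group in the theorem acts on all learnable parameters):

```latex
f(z) \;=\; \sum_{j=1}^{n} c_j \,\varphi\!\left(w_j z + \theta_j\right),
\qquad z,\; w_j,\; \theta_j,\; c_j \in \mathbb{C},
\\[4pt]
(w_j,\ \theta_j,\ c_j) \;\longmapsto\; \bigl(w_{\sigma(j)},\ \theta_{\sigma(j)},\ c_{\sigma(j)}\bigr)
\quad \text{leaves } f \text{ unchanged for every permutation } \sigma .
```

Irreducibility excludes degenerate networks, e.g., hidden neurons whose contributions could be merged or removed, and the theorem asserts that such finitely many parameter transformations are the only remaining ambiguity.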


2021, Vol. 7 (9), p. 173
Author(s):  
Eduardo Paluzo-Hidalgo ◽  
Rocio Gonzalez-Diaz ◽  
Miguel A. Gutiérrez-Naranjo ◽  
Jónathan Heras

Simplicial-map neural networks are a recent neural network architecture induced by simplicial maps defined between simplicial complexes. It has been proved that simplicial-map neural networks are universal approximators and that they can be refined to be robust to adversarial attacks. In this paper, the refinement toward robustness is optimized by reducing the number of simplices (i.e., nodes) needed. We have shown experimentally that such a refined neural network is equivalent to the original network as a classification tool but requires much less storage.
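The core computation of a simplicial map can be sketched briefly: a point is expressed in barycentric coordinates with respect to the simplex containing it, and the map sends it to the same convex combination of the image vertices. The construction below is a generic illustration of that building block, not the authors' exact architecture.

```python
import numpy as np

def barycentric(p, V):
    """Barycentric coordinates of point p in the simplex with vertex rows V.
    This is the basic computation a simplicial map evaluates: p is sent to
    the same convex combination of the image vertices."""
    A = np.vstack([V.T, np.ones(len(V))])   # affine system: coords + sum-to-1
    b = np.append(p, 1.0)
    return np.linalg.solve(A, b)

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # a 2-simplex (triangle)
lam = barycentric(np.array([0.25, 0.25]), V)
W = np.array([[0.0], [1.0], [0.0]])                  # image vertices (labels)
print(lam, lam @ W)   # coordinates and the induced simplicial-map output
```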


Author(s):  
Tohru Nitta

The ability of the 1-n-1 complex-valued neural network to learn 2D affine transformations has been applied to the estimation of optical flow and the generation of fractal images. The complex-valued neural network has adaptability and generalization ability as inherent properties. This is the main point of difference between the 1-n-1 complex-valued neural network's ability to learn 2D affine transformations and standard techniques for 2D affine transformations such as the Fourier descriptor. It is important to clarify the properties of complex-valued neural networks in order to further accelerate their practical application. In this paper, first, the generalization ability of the 1-n-1 complex-valued neural network that has learned complicated rotations on a 2D plane is examined experimentally and analytically. Next, the behavior of the 1-n-1 complex-valued neural network that has learned a transformation on the Steiner circles is demonstrated, and the relationship between the values of the complex-valued weights after training and a linear transformation related to the Steiner circles is clarified via computer simulations. Furthermore, the relationship between the weight values of a 1-n-1 complex-valued neural network that has learned 2D affine transformations and the learning patterns used is elucidated. These results make it possible to solve complicated problems more simply and efficiently with 1-n-1 complex-valued neural networks. As a matter of fact, an application of the 1-n-1 complex-valued neural network to an associative memory is presented.
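The essence of why a complex-valued network captures 2D affine transformations is that a single complex multiplication is already a rotation plus scaling of the plane. The sketch below fits one complex weight to a few input-output pairs of a 60-degree rotation and checks generalization to unseen points; it is a deliberately minimal one-parameter stand-in for the 1-n-1 network, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Training pairs sampled from a rotation by 60 degrees.
theta = np.pi / 3
z = rng.standard_normal(8) + 1j * rng.standard_normal(8)
t = np.exp(1j * theta) * z

# Least-squares fit of a single complex weight w with t ~ w * z;
# w plays the role the complex-valued weights play in the 1-n-1 network.
w = np.vdot(z, t) / np.vdot(z, z)

# Generalization: points never seen in training are rotated correctly too.
z_new = np.array([2.0 + 0.5j, -1.0 + 1.0j])
print(np.allclose(w * z_new, np.exp(1j * theta) * z_new))   # True
```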


2021, pp. 1-15
Author(s):  
Masaki Kobayashi

A complex-valued Hopfield neural network (CHNN) is a multistate Hopfield model. A quaternion-valued Hopfield neural network (QHNN) with a twin-multistate activation function was proposed to reduce the number of weight parameters of a CHNN. Dual connections (DCs) are introduced into QHNNs to improve noise tolerance. The DCs take advantage of the noncommutativity of quaternions and consist of two weights between neurons. A QHNN with DCs provides much better noise tolerance than a CHNN. Although a CHNN and a QHNN with DCs have the same number of weight parameters, the storage capacity of the projection rule for QHNNs with DCs is half of that for CHNNs and equals that of conventional QHNNs. The small storage capacity of QHNNs with DCs is caused by the projection rule, not by the architecture. In this work, the Hebbian rule is introduced, and it is proved by stochastic analysis that the storage capacity of a QHNN with DCs is 0.8 times that of a CHNN.
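For reference, the CHNN baseline with Hebbian (outer-product) weights can be sketched in a few lines; the sizes below are illustrative rather than those of the paper's experiments, and the DC-QHNN itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
K, N, P = 4, 50, 2                        # phase states, neurons, patterns
phases = np.exp(2j * np.pi * np.arange(K) / K)

xi = phases[rng.integers(0, K, (P, N))]   # stored multistate patterns
W = xi.T @ xi.conj() / N                  # Hebbian rule: W_ij = (1/N) sum_p xi_i conj(xi_j)
np.fill_diagonal(W, 0)

def csign(u):
    # Quantize each activation to the nearest of the K phase states.
    return phases[np.argmax((phases.conj()[None, :] * u[:, None]).real, axis=1)]

s = xi[0].copy()                          # probe: pattern 0 with 20% noise
noisy = rng.random(N) < 0.2
s[noisy] = phases[rng.integers(0, K, noisy.sum())]
for _ in range(10):
    s = csign(W @ s)                      # synchronous multistate recall
print("overlap:", abs(np.vdot(xi[0], s)) / N)   # near 1.0 on successful recall
```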


Geophysics, 2022, pp. 1-44
Author(s):  
Yuhang Sun ◽  
Yang Liu ◽  
Mi Zhang ◽  
Haoran Zhang

AVO (amplitude variation with offset) inversion and neural networks are widely used to invert elastic parameters. With more constraints from well-log data, neural-network-based inversion may estimate elastic parameters with greater precision and resolution than traditional AVO inversion; however, neural network approaches require a massive number of reliable training samples. Furthermore, because the lack of low-frequency information in seismic gathers leads to multiple solutions of the inverse problem, both inversions rely heavily on proper low-frequency initial models. To mitigate the dependence of inversion on accurate training samples and initial models, we propose solving inverse problems with the recently developed invertible neural networks (INNs). Unlike conventional neural networks, which address the ambiguous inverse problem directly, INNs learn the well-defined forward modeling and use additional latent variables to increase the uniqueness of solutions. Motivated by these newly developed networks, we propose an INN-based AVO inversion method, which can reliably invert low- to medium-frequency velocities and densities with randomly generated, easy-to-access datasets rather than trustworthy training samples or well-prepared initial models. Tests on synthetic and field data show that our method is feasible, noise-resistant, and practicable.
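The mechanism that makes INNs attractive here is exact invertibility by construction. The sketch below implements one affine coupling block, the standard INN building unit (a generic RealNVP-style layer with stand-in subnetworks, not the authors' architecture): half of the variables pass through unchanged and parameterize an affine map of the other half, so the inverse is available in closed form even before any training.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in "scale" and "shift" subnetworks (untrained, hypothetical).
W_s = 0.1 * rng.standard_normal((2, 2))
W_t = 0.1 * rng.standard_normal((2, 2))

def s_net(u): return np.tanh(u @ W_s.T)
def t_net(u): return np.tanh(u @ W_t.T)

def forward(x):
    x1, x2 = x[:2], x[2:]
    y2 = x2 * np.exp(s_net(x1)) + t_net(x1)     # only x2 is transformed
    return np.concatenate([x1, y2])

def inverse(y):
    y1, y2 = y[:2], y[2:]
    x2 = (y2 - t_net(y1)) * np.exp(-s_net(y1))  # exact algebraic inverse
    return np.concatenate([y1, x2])

x = rng.standard_normal(4)
print(np.allclose(inverse(forward(x)), x))      # True: bijective by design
```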


2004, Vol. 16 (1), pp. 73-97
Author(s):  
Tohru Nitta

This letter presents results of an analysis of the decision boundaries of complex-valued neural networks whose weights, threshold values, and input and output signals are all complex numbers. The main results may be summarized as follows. (1) The decision boundary of a single complex-valued neuron consists of two hypersurfaces that intersect orthogonally and divide the decision region into four equal sections. The XOR problem and the detection-of-symmetry problem, which cannot be solved with two-layered real-valued neural networks, can be solved by two-layered complex-valued neural networks with orthogonal decision boundaries, which reveals the potent computational power of complex-valued neural nets. Furthermore, the fading equalization problem can be successfully solved by a two-layered complex-valued neural network with the highest generalization ability. (2) The decision boundary of a three-layered complex-valued neural network has the orthogonal property as a basic structure, and its two hypersurfaces approach orthogonality as all the net inputs to each hidden neuron grow. In particular, most of the decision boundaries in the three-layered complex-valued neural network intersect orthogonally when the network is trained using the Complex-BP algorithm. As a result, the orthogonality of the decision boundaries improves the generalization ability. (3) The average learning speed of Complex-BP is several times faster than that of Real-BP, and the standard deviation of its learning speed is smaller. For these reasons, the complex-valued neural network and the associated algorithm appear natural for learning complex-valued patterns.
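Result (1) can be checked directly. Writing the net input of a single complex-valued neuron as u = w*z + theta and classifying by the signs of Re(u) and Im(u) (a split-type convention, assumed here for the sketch), the two boundary lines in the (Re z, Im z) plane always have orthogonal normals:

```python
import numpy as np

w, theta = 1.3 - 0.7j, 0.2 + 0.4j   # arbitrary example parameters

# With w = a + bi and z = x + iy, the two decision boundaries are
#   Re(u) = a*x - b*y + Re(theta) = 0   with normal (a, -b)
#   Im(u) = b*x + a*y + Im(theta) = 0   with normal (b,  a)
a, b = w.real, w.imag
n_re = np.array([a, -b])
n_im = np.array([b, a])
print(n_re @ n_im)   # 0.0 for any w: the hypersurfaces intersect orthogonally
```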

