Learning Transformations with Complex-Valued Neurocomputing

Author(s):  
Tohru Nitta

The ability of the 1-n-1 complex-valued neural network to learn 2D affine transformations has been applied to the estimation of optical flows and the generation of fractal images. The complex-valued neural network possesses adaptability and generalization ability as inherent properties. This is the principal difference between the ability of the 1-n-1 complex-valued neural network to learn 2D affine transformations and standard techniques for 2D affine transformations such as the Fourier descriptor. Clarifying the properties of complex-valued neural networks is important for accelerating their practical application. In this paper, the generalization ability of the 1-n-1 complex-valued neural network that has learned complicated rotations on a 2D plane is first examined experimentally and analytically. Next, the behavior of the 1-n-1 complex-valued neural network that has learned a transformation on the Steiner circles is demonstrated, and the relationship between the values of the complex-valued weights after training and a linear transformation related to the Steiner circles is clarified via computer simulations. Furthermore, the relationship between the weight values of a 1-n-1 complex-valued neural network that has learned 2D affine transformations and the learning patterns used is elucidated. These results make it possible to solve complicated problems more simply and efficiently with 1-n-1 complex-valued neural networks. Indeed, an application of the 1-n-1 complex-valued neural network to an associative memory is presented.
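
Why complex weights are a natural fit for 2D transformations can be seen from a minimal sketch: multiplying a point encoded as a complex number by a complex weight is a rotation plus scaling, and adding a complex bias is a translation. The function name and the omission of the hidden layer and the complex activation used in the actual 1-n-1 network are simplifications for illustration only.

```python
import numpy as np

# A single complex weight acts on a 2D point (encoded as x + iy) as a
# rotation-plus-scaling; a complex bias would add a translation.  This is
# why a 1-n-1 complex-valued network lends itself to 2D similarity-type
# transformations.  Sketch only; the paper's network also has a hidden
# layer and a complex activation, which are not shown here.

def rotate_scale(z, angle_deg, scale):
    """Target transformation: rotate by angle_deg and scale about the origin."""
    w = scale * np.exp(1j * np.deg2rad(angle_deg))
    return w * z

points = np.array([1 + 0j, 0 + 1j, -1 - 1j])   # three 2D points as complex numbers
print(rotate_scale(points, 90, 2.0))            # each point rotated 90 deg and doubled
```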

Author(s):  
Tohru Nitta

The ability of the 1-n-1 complex-valued neural network to learn 2D affine transformations has been applied to the estimation of optical flows and the generation of fractal images. The complex-valued neural network possesses adaptability and generalization ability as inherent properties. This is the principal difference between the ability of the 1-n-1 complex-valued neural network to learn 2D affine transformations and standard techniques for 2D affine transformations such as the Fourier descriptor. Clarifying the properties of complex-valued neural networks is important for accelerating their practical application. In this chapter, the behavior of the 1-n-1 complex-valued neural network that has learned a transformation on the Steiner circles is demonstrated, and the relationship between the values of the complex-valued weights after training and a linear transformation related to the Steiner circles is clarified via computer simulations. Furthermore, the relationship between the weight values of a 1-n-1 complex-valued neural network that has learned 2D affine transformations and the learning patterns used is elucidated. These results make it possible to solve complicated problems more simply and efficiently with 1-n-1 complex-valued neural networks. Indeed, an application of the 1-n-1 complex-valued neural network to an associative memory is presented.


2004 ◽  
Vol 16 (1) ◽  
pp. 73-97 ◽  
Author(s):  
Tohru Nitta

This letter presents results of an analysis of the decision boundaries of complex-valued neural networks whose weights, threshold values, and input and output signals are all complex numbers. The main results may be summarized as follows. (1) The decision boundary of a single complex-valued neuron consists of two hypersurfaces that intersect orthogonally and divide the decision region into four equal sections. The XOR problem and the detection-of-symmetry problem, which cannot be solved with two-layered real-valued neural networks, can be solved by two-layered complex-valued neural networks with such orthogonal decision boundaries, which reveals the potent computational power of complex-valued neural nets. Furthermore, the fading equalization problem can be successfully solved by the two-layered complex-valued neural network with the highest generalization ability. (2) The decision boundary of a three-layered complex-valued neural network has this orthogonal property as a basic structure, and its two hypersurfaces approach orthogonality as the net inputs to each hidden neuron grow. In particular, most of the decision boundaries in a three-layered complex-valued neural network intersect orthogonally when the network is trained with the Complex-BP algorithm. As a result, the orthogonality of the decision boundaries improves the generalization ability. (3) The average learning speed of the Complex-BP is several times faster than that of the Real-BP, and the standard deviation of the learning speed of the Complex-BP is smaller than that of the Real-BP. For these reasons, the complex-valued neural network and the related algorithm appear natural for learning complex-valued patterns.
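
The orthogonality in result (1) can be checked directly. Assuming the split activation in which the real and imaginary parts of the weighted sum are thresholded separately, the two boundary surfaces of a single complex neuron with weight w = a + ib are Re(wz + t) = 0 and Im(wz + t) = 0, whose normal vectors are (a, -b) and (b, a). The sketch below only verifies that these normals are always orthogonal; it is not the letter's full construction.

```python
import numpy as np

# For one complex input z = x + iy, weight w = a + ib and threshold t:
#   Re(w z + t) = 0  ->  a x - b y + Re(t) = 0   (normal (a, -b))
#   Im(w z + t) = 0  ->  b x + a y + Im(t) = 0   (normal (b,  a))
# The two normals are orthogonal for any w, so the boundaries cross at 90
# degrees and split the input plane into four sections.

rng = np.random.default_rng(0)
for _ in range(5):
    a, b = rng.normal(size=2)
    n_re = np.array([a, -b])      # normal of the real-part boundary
    n_im = np.array([b, a])       # normal of the imaginary-part boundary
    print(np.dot(n_re, n_im))     # always 0 (up to rounding)
```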


2015 ◽  
Vol 713-715 ◽  
pp. 1716-1720
Author(s):  
Dai Yuan Zhang ◽  
Lei Lei Wang

In order to describe the generalization ability, this paper discusses the error analysis of neural networks with multiple neurons that use rational spline weight functions. We use cubic numerator polynomials and linear denominator polynomials as the rational splines for the weight functions. We derive the error formula for the approximation; the results can be used in algorithms for training such neural networks.
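
For concreteness, one piece of the weight function described in the abstract is a rational function with a cubic numerator and a linear denominator. The sketch below merely evaluates such a piece; the coefficient names are placeholders, and how the pieces are fitted and combined into a spline network is not taken from the paper.

```python
import numpy as np

def rational_weight(x, num_coef, den_coef):
    """One piece of a rational spline weight function:
    cubic numerator / linear denominator, as named in the abstract.
    num_coef = (c0, c1, c2, c3) and den_coef = (d0, d1) are placeholders."""
    num = np.polyval(num_coef[::-1], x)   # c0 + c1*x + c2*x^2 + c3*x^3
    den = np.polyval(den_coef[::-1], x)   # d0 + d1*x
    return num / den

x = np.linspace(0.0, 1.0, 5)
print(rational_weight(x, (0.1, 0.5, -0.2, 0.05), (1.0, 0.3)))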


2019 ◽  
Vol 9 (16) ◽  
pp. 3391 ◽  
Author(s):  
Santiago Pascual ◽  
Joan Serrà ◽  
Antonio Bonafonte

Conversion from text to speech relies on the accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and study their performance–speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only on feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, designed without recurrent connections but emulating them with attention and positioning codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, whilst being significantly faster in terms of CPU and GPU inference time. The best performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the recurrent neural network based model with a speedup of 11.2× on CPU and 3.3× on GPU.
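
The efficiency argument for the first mechanism can be sketched in a few lines. In a quasi-recurrent layer, the candidate values and gates are produced by feed-forward (e.g., convolutional) transforms over the whole sequence, so the only operation left inside the time loop is an element-wise recurrence. The pooling form below follows the generic quasi-recurrent formulation; the layer sizes and gate choice are assumptions, not the paper's exact configuration.

```python
import numpy as np

def qrnn_f_pooling(Z, F):
    """Element-wise recurrent pooling: h_t = f_t * h_{t-1} + (1 - f_t) * z_t.
    Z, F have shape (T, hidden).  Z are candidate values and F forget gates,
    both already produced by feed-forward transforms, so no affine
    transformation appears inside the time loop."""
    T, H = Z.shape
    h = np.zeros(H)
    outputs = np.empty_like(Z)
    for t in range(T):
        h = F[t] * h + (1.0 - F[t]) * Z[t]
        outputs[t] = h
    return outputs

T, H = 6, 4
rng = np.random.default_rng(1)
Z = np.tanh(rng.normal(size=(T, H)))                  # candidate activations
F = 1.0 / (1.0 + np.exp(-rng.normal(size=(T, H))))    # sigmoid forget gates
print(qrnn_f_pooling(Z, F).shape)                      # (6, 4)
```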


2012 ◽  
Vol 263-266 ◽  
pp. 3374-3377
Author(s):  
Hua Liang Wu ◽  
Zhen Dong Mu ◽  
Jian Feng Hu

Neural networks are often used as classification tools. In this paper, a neural network is applied to the analysis of motor imagery EEG: the EEG is first subjected to a Hjorth transformation, the signal is then converted into the frequency domain, and finally the Fisher distance is used for feature extraction. The recognition rate was 97.86% on the training samples and 80% on the test samples.
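
Assuming the "Hjorth conversion" refers to the standard Hjorth descriptors and the Fisher distance to the usual two-class Fisher criterion, the two building blocks can be sketched as below. The toy data, the choice of mobility as the ranked feature, and the pipeline around them are illustrative only and do not reproduce the paper's results.

```python
import numpy as np

def hjorth_parameters(x):
    """Standard Hjorth descriptors of a 1-D signal: activity, mobility, complexity."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def fisher_distance(f1, f2):
    """Two-class Fisher criterion for one feature: separation of the class
    means relative to the within-class variances."""
    return (np.mean(f1) - np.mean(f2)) ** 2 / (np.var(f1) + np.var(f2))

rng = np.random.default_rng(2)
left  = rng.normal(0.0, 1.0, size=(20, 256))   # toy "left-hand imagery" trials
right = rng.normal(0.3, 1.0, size=(20, 256))   # toy "right-hand imagery" trials
mob_l = np.array([hjorth_parameters(t)[1] for t in left])
mob_r = np.array([hjorth_parameters(t)[1] for t in right])
print(fisher_distance(mob_l, mob_r))           # ranking score for this feature
```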


2008 ◽  
Vol 18 (02) ◽  
pp. 123-134 ◽  
Author(s):  
TOHRU NITTA

This paper proves a uniqueness theorem for 3-layered complex-valued neural networks in which the threshold parameters of the hidden neurons can take nonzero values: if a 3-layered complex-valued neural network is irreducible, then the 3-layered complex-valued neural network that approximates a given complex-valued function is uniquely determined up to a finite group of transformations of its learnable parameters.
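
One easy-to-check member of such a group of parameter transformations, for any network, is the permutation of hidden neurons together with their incoming and outgoing weights. The sketch below verifies this numerically for a small 1-n-1 complex network; the split-sigmoid activation is an assumption for illustration, not a restatement of the theorem's hypotheses.

```python
import numpy as np

def split_sigmoid(u):
    """Activation applied separately to real and imaginary parts
    (an illustrative choice, not necessarily the paper's)."""
    s = lambda v: 1.0 / (1.0 + np.exp(-v))
    return s(u.real) + 1j * s(u.imag)

def forward(z, w_in, b_hid, w_out, b_out):
    """1-n-1 complex network: complex scalar input, n hidden units, one output."""
    h = split_sigmoid(w_in * z + b_hid)
    return np.dot(w_out, h) + b_out

rng = np.random.default_rng(3)
n = 4
w_in  = rng.normal(size=n) + 1j * rng.normal(size=n)
b_hid = rng.normal(size=n) + 1j * rng.normal(size=n)
w_out = rng.normal(size=n) + 1j * rng.normal(size=n)
b_out = rng.normal() + 1j * rng.normal()

perm = rng.permutation(n)                    # relabel the hidden neurons
z = 0.3 - 0.7j
y1 = forward(z, w_in, b_hid, w_out, b_out)
y2 = forward(z, w_in[perm], b_hid[perm], w_out[perm], b_out)
print(np.isclose(y1, y2))                    # True: same function, different parameters
```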


2021 ◽  
Vol 12 (4) ◽  
pp. 256
Author(s):  
Yi Wu ◽  
Wei Li

Accurate capacity estimation can ensure the safe and reliable operation of lithium-ion batteries in practical applications. Recently, deep learning-based capacity estimation methods have demonstrated impressive advances. However, such methods suffer from limited labeled training data, i.e., capacity ground truth for lithium-ion batteries. Here, a capacity estimation method based on a semi-supervised convolutional neural network (SS-CNN) is proposed. This method automatically extracts features from battery partial-charge information for capacity estimation. Furthermore, a semi-supervised training strategy is developed to take advantage of extra unlabeled samples, which improves the generalization of the model and the accuracy of capacity estimation even with limited labeled data. Compared with artificial neural networks and convolutional neural networks, the proposed method is demonstrated to improve capacity estimation accuracy.
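
The abstract does not spell out the SS-CNN training strategy, so the sketch below stands in with a generic semi-supervised recipe: a supervised regression loss on labeled partial-charge curves plus a consistency loss on unlabeled ones. The layer sizes, curve length, noise level, and loss weighting are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class CapacityCNN(nn.Module):
    """Minimal 1-D CNN mapping a partial-charge curve to a capacity estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16), nn.Flatten(),
            nn.Linear(8 * 16, 1),
        )

    def forward(self, x):                 # x: (batch, 1, curve_length)
        return self.net(x).squeeze(-1)    # predicted capacity, (batch,)

model = CapacityCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

x_lab = torch.randn(16, 1, 128)            # labeled partial-charge curves (toy data)
y_lab = torch.rand(16)                     # capacity ground truth (toy data)
x_unl = torch.randn(64, 1, 128)            # unlabeled curves (toy data)

for step in range(10):
    opt.zero_grad()
    sup = mse(model(x_lab), y_lab)                                  # supervised term
    noisy = x_unl + 0.01 * torch.randn_like(x_unl)
    cons = mse(model(noisy), model(x_unl).detach())                 # unlabeled consistency term
    (sup + 0.1 * cons).backward()
    opt.step()
```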


2021 ◽  
pp. 1-15
Author(s):  
Masaki Kobayashi

A complex-valued Hopfield neural network (CHNN) is a multistate Hopfield model. A quaternion-valued Hopfield neural network (QHNN) with a twin-multistate activation function was proposed to reduce the number of weight parameters of a CHNN. Dual connections (DCs) were introduced into QHNNs to improve noise tolerance: they exploit the noncommutativity of quaternions and consist of two weights between neurons. A QHNN with DCs provides much better noise tolerance than a CHNN. Although a CHNN and a QHNN with DCs have the same number of weight parameters, the storage capacity of the projection rule for QHNNs with DCs is half that for CHNNs and equals that of conventional QHNNs. The small storage capacity of QHNNs with DCs is thus caused by the projection rule, not by the architecture. In this work, the Hebbian rule is introduced, and it is proved by stochastic analysis that the storage capacity of a QHNN with DCs is 0.8 times that of a CHNN.
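
As a point of reference for the comparison, the baseline CHNN can be sketched as a multistate Hopfield network whose neurons take values among the K-th roots of unity and whose activation maps each weighted sum to the nearest phase state. The quaternion dual-connection model itself is not reproduced here; the single-pattern Hebbian-style storage below is only a toy illustration.

```python
import numpy as np

K = 8                                            # number of states per neuron
states = np.exp(2j * np.pi * np.arange(K) / K)   # K-th roots of unity

def csign(u):
    """Multistate activation: map each weighted sum to the nearest phase state."""
    idx = np.argmin(np.abs(states[None, :] - u[:, None]), axis=1)
    return states[idx]

def hopfield_step(x, W):
    """One synchronous update of a multistate complex-valued Hopfield network."""
    return csign(W @ x)

rng = np.random.default_rng(4)
n = 6
pattern = states[rng.integers(0, K, size=n)]
W = np.outer(pattern, np.conj(pattern)) / n      # Hebbian-style storage of one pattern
np.fill_diagonal(W, 0)
x = csign(pattern + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n)))  # noisy probe
print(np.allclose(hopfield_step(x, W), pattern))  # often recovers the stored pattern
```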


2016 ◽  
Vol 28 (7) ◽  
pp. 851-861 ◽  
Author(s):  
Ziemowit Dworakowski ◽  
Krzysztof Dragan ◽  
Tadeusz Stepinski

Neural networks are commonly recognized tools for the classification of multidimensional data obtained in structural health monitoring (SHM) systems. Their configuration for a given scenario is, however, a challenging task, which limits their practical applications. In this article the authors propose using the neural network ensemble approach for the classification of SHM data generated by guided wave sensor networks. The overproduce-and-choose strategy is used for designing ensembles containing neural networks of different types and sizes. The proposed method allows for a significant increase in state assessment reliability, which is illustrated by results obtained from a practical industrial case: a full-scale aircraft test. The method is verified in the process of detecting fatigue cracks propagating in the aircraft's load-carrying structure. The long-term experiments were performed under variable environmental conditions with a network of structure-embedded piezoelectric sensors.
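
The overproduce-and-choose idea can be sketched generically: train a pool of candidate networks of different sizes, then keep only the subset whose combined vote performs best on held-out data. The synthetic data, scikit-learn classifiers, and greedy selection rule below are stand-ins, not the paper's actual pool or criterion.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pool = []
for size in [(8,), (16,), (32,), (8, 8), (16, 8)]:                 # overproduce
    clf = MLPClassifier(hidden_layer_sizes=size, max_iter=500, random_state=0)
    pool.append(clf.fit(X_tr, y_tr))

def vote_accuracy(members):
    """Validation accuracy of the ensemble's majority vote."""
    votes = np.mean([m.predict(X_val) for m in members], axis=0)
    return np.mean((votes >= 0.5).astype(int) == y_val)

chosen = []
for m in sorted(pool, key=lambda m: -m.score(X_val, y_val)):        # choose greedily
    if not chosen or vote_accuracy(chosen + [m]) >= vote_accuracy(chosen):
        chosen.append(m)

print(len(chosen), vote_accuracy(chosen))
```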


Symmetry ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 511
Author(s):  
Shuohao Li ◽  
Min Tang ◽  
Jun Zhang ◽  
Lincheng Jiang

An image scene graph is a semantic, structural representation that shows not only what objects are in an image but also the relationships and interactions among them. Despite recent success in object detection using deep neural networks, automatically recognizing social relations of objects in images remains a challenging task due to the significant gap between the domains of visual content and social relations. In this work, we translate the scene graph into an Attentive Gated Graph Neural Network that propagates messages through visual relationship embeddings. More specifically, nodes in the gated graph neural network represent objects in the image, and edges can be regarded as relationships among objects. In this network, an attention mechanism measures the strength of the relationship between objects; it increases the accuracy of object classification and reduces the complexity of relationship classification. Extensive experiments on the widely adopted Visual Genome Dataset show the effectiveness of the proposed method.
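
A single propagation step of the general idea (attention-weighted messages along edges, followed by a gated update of node states) can be sketched as below. The dot-product attention, the GRU-style gate, the layer sizes, and the toy adjacency matrix are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes, d = 4, 8
H = rng.normal(size=(n_nodes, d))            # node states (e.g., object features)
A = np.array([[0, 1, 1, 0],                  # adjacency: which objects interact
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
W_msg = rng.normal(size=(d, d)) / np.sqrt(d)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Attention over neighbours: dot-product scores, masked by the adjacency matrix.
scores = H @ H.T / np.sqrt(d)
scores = np.where(A > 0, scores, -1e9)
alpha = softmax(scores) * (A > 0)            # attention strength per edge

messages = alpha @ (H @ W_msg)               # attention-weighted neighbour messages

# GRU-style gated update of each node state from its aggregated message.
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
W_z, U_z = rng.normal(size=(d, d)) / np.sqrt(d), rng.normal(size=(d, d)) / np.sqrt(d)
W_r, U_r = rng.normal(size=(d, d)) / np.sqrt(d), rng.normal(size=(d, d)) / np.sqrt(d)
W_h, U_h = rng.normal(size=(d, d)) / np.sqrt(d), rng.normal(size=(d, d)) / np.sqrt(d)
z = sigmoid(messages @ W_z + H @ U_z)        # update gate
r = sigmoid(messages @ W_r + H @ U_r)        # reset gate
h_tilde = np.tanh(messages @ W_h + (r * H) @ U_h)
H_new = (1 - z) * H + z * h_tilde
print(H_new.shape)                           # (4, 8): updated node representations
```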

