Decorrelated Hebbian Learning for Clustering and Function Approximation

1995 ◽  
Vol 7 (2) ◽  
pp. 338-348 ◽  
Author(s):  
G. Deco ◽  
D. Obradovic

This paper presents a new learning paradigm that combines Hebbian and anti-Hebbian learning. A layer of radial basis functions is adapted in an unsupervised fashion by minimizing a two-element cost function. The first element maximizes the output of each Gaussian neuron and can be seen as an implementation of the traditional Hebbian learning law. The second element reinforces competitive learning by penalizing the correlation between the nodes. Consequently, the second term has an “anti-Hebbian” effect that is learned by the Gaussian neurons without implementing lateral inhibition synapses. Decorrelated Hebbian learning (DHL) therefore performs clustering in the input space while avoiding the “nonbiological” winner-take-all rule. In addition to the standard clustering problem, this paper also presents an application of DHL to function approximation. A scaled piecewise-linear approximation of a function is obtained in a supervised fashion within the local regions of its domain determined by the DHL. For comparison, a standard single-hidden-layer Gaussian network is optimized with initial centers corresponding to the DHL. The efficiency of the algorithm is demonstrated on the chaotic Mackey-Glass time series.
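
A minimal sketch of such a two-term cost, assuming Gaussian activations g_i(x) = exp(-||x - c_i||^2 / (2σ^2)) and gradient descent on the centers; the function name, the relative weighting `lam` of the Hebbian and decorrelation terms, and the shared width `sigma` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dhl_step(X, centers, sigma=0.5, lr=0.05, lam=0.1):
    """One decorrelated-Hebbian-learning step (illustrative sketch).

    Cost to minimize: -sum_i mean(g_i) + lam * sum_{i!=j} mean(g_i * g_j),
    i.e. maximize each Gaussian node's output (Hebbian term) while
    penalizing correlation between nodes (anti-Hebbian term).
    """
    # Gaussian activations: g[n, i] = exp(-||x_n - c_i||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    g = np.exp(-d2 / (2.0 * sigma ** 2))

    # dJ/dg: the Hebbian term pushes each activation up, while the
    # decorrelation term pushes down activations that co-fire with others.
    dJ_dg = -1.0 + 2.0 * lam * (g.sum(axis=1, keepdims=True) - g)

    # Chain rule through the Gaussian: dg_i/dc_i = g_i * (x - c_i) / sigma^2
    grad_c = np.einsum('ni,nij->ij', dJ_dg * g,
                       X[:, None, :] - centers[None, :, :]) / sigma ** 2
    centers = centers - lr * grad_c / X.shape[0]
    return centers, g
```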

1992 ◽  
Vol 03 (04) ◽  
pp. 323-350 ◽  
Author(s):  
JOYDEEP GHOSH ◽  
YOAN SHIN

This paper introduces a class of higher-order networks called pi-sigma networks (PSNs). PSNs are feedforward networks with a single “hidden” layer of linear summing units and with product units in the output layer. A PSN uses these product units to indirectly incorporate the capabilities of higher-order networks while greatly reducing network complexity. PSNs have only one layer of adjustable weights and exhibit fast learning. A PSN with K summing units provides a constrained Kth-order approximation of a continuous function. A generalization of the PSN is presented that can uniformly approximate any continuous function defined on a compact set. The use of linear hidden units makes it possible to mathematically study the convergence properties of various LMS-type learning algorithms for PSNs. We show that it is desirable to update only a partial set of weights at a time rather than synchronously updating all the weights. Bounds for learning rates which guarantee convergence are derived. Several simulation results on pattern classification and function approximation problems highlight the capabilities of the PSN. Extensive comparisons are made with other higher-order networks and with multilayered perceptrons. The neurobiological plausibility of PSN-type networks is also discussed.
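
A minimal sketch of a pi-sigma forward pass and an asynchronous LMS-style update under the description above (one layer of adjustable weights into K linear summing units whose outputs are multiplied at the output node); the class name, the sigmoid output squashing, and the specific learning rate are illustrative assumptions.

```python
import numpy as np

class PiSigmaNetwork:
    """Kth-order pi-sigma network: sigma (summing) layer, then a product unit."""

    def __init__(self, n_inputs, k, rng=np.random.default_rng(0)):
        # Only the input-to-summing-unit weights (plus biases) are adjustable.
        self.W = rng.normal(scale=0.1, size=(k, n_inputs))
        self.b = np.zeros(k)

    def forward(self, x):
        h = self.W @ x + self.b          # K linear summing units
        y = np.prod(h)                   # product unit in the output layer
        return 1.0 / (1.0 + np.exp(-y))  # optional sigmoid for classification

    def asynchronous_update(self, x, target, lr=0.05):
        """Update a single randomly chosen summing unit, reflecting the
        finding that updating a partial set of weights at a time is
        preferable to a fully synchronous update."""
        h = self.W @ x + self.b
        y = np.prod(h)
        out = 1.0 / (1.0 + np.exp(-y))
        err = out - target
        j = np.random.randint(len(h))
        # gradient of the product w.r.t. unit j is the product of the others
        others = np.prod(np.delete(h, j))
        delta = err * out * (1.0 - out) * others
        self.W[j] -= lr * delta * x
        self.b[j] -= lr * delta
```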


2013 ◽  
Vol 457-458 ◽  
pp. 1102-1106
Author(s):  
Hao Teng ◽  
Shu Hui Liu ◽  
Yue Hui Chen

The Flexible Neural Tree uses a tree-structure encoding and has excellent prediction and function approximation capabilities. Building on it, a quantum neural tree model is presented that combines the multi-level transfer function quantum neural network with the Flexible Neural Tree. In the new model, which keeps the structure of the Flexible Neural Tree, the transfer function of the hidden-layer quantum neurons is replaced by a multi-level superposition of the traditional transfer function, giving the model a kind of inherent fuzziness. This paper uses the improved neural tree as the prediction model, particle swarm optimization to optimize the parameters of the neural tree, and probabilistic incremental program evolution to optimize its structure. Experimental results on stock index prediction show that the new method improves the predictive accuracy.
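
As a rough illustration of the multi-level (“superposed”) transfer function described above, a hidden quantum neuron can be modelled as an average of sigmoids shifted by different quantum levels; the function name, the slope `beta`, and the specific levels below are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def quantum_neuron(x, theta, beta=5.0):
    """Multi-level transfer function: an average of sigmoids whose
    thresholds (quantum levels `theta`) differ, producing a graded,
    fuzzy step instead of a single sharp sigmoid."""
    theta = np.asarray(theta)
    return np.mean(1.0 / (1.0 + np.exp(-beta * (x - theta))))

# Example: three quantum levels produce a staircase-like activation.
levels = [-1.0, 0.0, 1.0]
print([round(quantum_neuron(v, levels), 3) for v in (-2.0, 0.0, 2.0)])
```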


2021 ◽  
Vol 11 (2) ◽  
pp. 850
Author(s):  
Dokkyun Yi ◽  
Sangmin Ji ◽  
Jieun Park

Artificial intelligence (AI) is achieved by optimizing a cost function constructed from learning data. Changing the parameters of the cost function is the AI learning process (or AI learning for convenience). If AI learning is performed well, the value of the cost function reaches its global minimum. For learning to be well performed, the parameters should stop changing once the cost function reaches its global minimum. One useful optimization method is the momentum method; however, the momentum method has difficulty stopping the parameters when the cost function reaches its global minimum (the non-stop problem). The proposed method is based on the momentum method. In order to solve the non-stop problem of the momentum method, we incorporate the value of the cost function into the update. Therefore, as learning proceeds, this mechanism reduces the amount of change in the parameters according to the value of the cost function. We verify the method through a proof of convergence and through numerical experiments against existing methods to ensure that learning works well.
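
The abstract does not give the exact update rule, but the idea of shrinking the momentum step with the current cost value can be sketched as follows; the function names and the specific form of the damping factor (a direct multiplication by the cost) are assumptions for illustration only.

```python
import numpy as np

def cost_damped_momentum(grad_fn, cost_fn, w, lr=0.1, beta=0.9, steps=200):
    """Momentum whose effective step shrinks with the cost value, so the
    parameters stop moving as the cost approaches its minimum
    (illustrative sketch of a fix to the 'non-stop problem')."""
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        v = beta * v + g
        # Scale the step by the cost: near the minimum the cost approaches
        # zero and the update vanishes, unlike plain momentum which coasts.
        w = w - lr * cost_fn(w) * v
    return w

# Toy quadratic example: f(w) = ||w||^2 has its global minimum at the origin;
# the step size shrinks as the cost falls.
w_final = cost_damped_momentum(lambda w: 2 * w, lambda w: np.sum(w ** 2),
                               np.array([2.0, -1.5]))
print(w_final)
```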


2020 ◽  
Vol 18 (02) ◽  
pp. 2050006 ◽  
Author(s):  
Alexsandro Oliveira Alexandrino ◽  
Carla Negri Lintzmayer ◽  
Zanoni Dias

One of the main problems in Computational Biology is to find the evolutionary distance among species. In most approaches, such distance only involves rearrangements, which are mutations that alter large pieces of the species’ genome. When we represent genomes as permutations, the problem of transforming one genome into another is equivalent to the problem of Sorting Permutations by Rearrangement Operations. The traditional approach is to consider that any rearrangement has the same probability to happen, and so, the goal is to find a minimum sequence of operations which sorts the permutation. However, studies have shown that some rearrangements are more likely to happen than others, and so a weighted approach is more realistic. In a weighted approach, the goal is to find a sequence which sorts the permutations, such that the cost of that sequence is minimum. This work introduces a new type of cost function, which is related to the amount of fragmentation caused by a rearrangement. We present some results about the lower and upper bounds for the fragmentation-weighted problems and the relation between the unweighted and the fragmentation-weighted approach. Our main results are 2-approximation algorithms for five versions of this problem involving reversals and transpositions. We also give bounds for the diameters concerning these problems and provide an improved approximation factor for simple permutations considering transpositions.
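
The abstract defines the cost of an operation by the amount of fragmentation it causes without giving the formula; the sketch below applies a reversal and a transposition to a permutation and uses the classical breakpoint count as a stand-in fragmentation measure, which is an assumption for illustration only.

```python
def reversal(pi, i, j):
    """Reverse the segment pi[i..j] (0-indexed, inclusive)."""
    return pi[:i] + pi[i:j + 1][::-1] + pi[j + 1:]

def transposition(pi, i, j, k):
    """Move the block pi[i..j-1] to just before position k (i < j <= k)."""
    return pi[:i] + pi[j:k] + pi[i:j] + pi[k:]

def breakpoints(pi):
    """Number of adjacent pairs that are not consecutive values; a common
    proxy for how 'fragmented' a permutation is."""
    ext = [0] + list(pi) + [len(pi) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if b - a != 1)

pi = (4, 2, 1, 3)
# Reversing the segment (2, 1) removes breakpoints: 5 -> 3
print(breakpoints(pi), breakpoints(reversal(pi, 1, 2)))
```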


2005 ◽  
Vol 133 (6) ◽  
pp. 1710-1726 ◽  
Author(s):  
Milija Zupanski

Abstract A new ensemble-based data assimilation method, named the maximum likelihood ensemble filter (MLEF), is presented. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that depends on a general nonlinear observation operator. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g., analysis error covariance). In addition to the common use of ensembles in calculation of the forecast error covariance, the ensembles in MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. A sufficient number of iterative minimization steps is 2–3, because of superior Hessian preconditioning. The MLEF method is well suited for use with highly nonlinear observation operators, for a small additional computational cost of minimization. The consistent treatment of nonlinear observation operators through optimization is an advantage of the MLEF over other ensemble data assimilation algorithms. The cost of MLEF is comparable to the cost of existing ensemble Kalman filter algorithms. The method is directly applicable to most complex forecast models and observation operators. In this paper, the MLEF method is applied to data assimilation with the one-dimensional Korteweg–de Vries–Burgers equation. The tested observation operator is quadratic, in order to make the assimilation problem more challenging. The results illustrate the stability of the MLEF performance, as well as the benefit of the cost function minimization. The improvement is noted in terms of the rms error, as well as the analysis error covariance. The statistics of innovation vectors (observation minus forecast) also indicate a stable performance of the MLEF algorithm. Additional experiments suggest the amplified benefit of targeted observations in ensemble data assimilation.
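
The Gaussian-framework cost function referred to above has the familiar variational form (notation assumed here: x_b is the background/forecast state, P_f the ensemble-spanned forecast error covariance, y the observations, H the possibly nonlinear observation operator, and R the observation error covariance):

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\,\mathbf{P}_f^{-1}\,(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\,\bigl[\mathbf{y}-H(\mathbf{x})\bigr]^{\mathsf T}\,\mathbf{R}^{-1}\,\bigl[\mathbf{y}-H(\mathbf{x})\bigr]
```

In the MLEF, the ensemble is used both to represent P_f and to precondition the iterative minimization of J, which is why only a few minimization steps are needed.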


2006 ◽  
Vol 290 (2) ◽  
pp. H894-H903 ◽  
Author(s):  
Ghassan S. Kassab

The branching pattern and vascular geometry of biological tree structure are complex. Here we show that the design of all vascular trees for which there exist morphometric data in the literature (e.g., coronary, pulmonary; vessels of various skeletal muscles, mesentery, omentum, and conjunctiva) obeys a set of scaling laws that are based on the hypothesis that the cost of construction of the tree structure and operation of fluid conduction is minimized. The laws consist of scaling relationships between 1) length and vascular volume of the tree, 2) lumen diameter and blood flow rate in each branch, and 3) diameter and length of vessel branches. The exponent of the diameter-flow rate relation is not necessarily equal to 3.0 as required by Murray's law but depends on the ratio of metabolic to viscous power dissipation of the tree of interest. The major significance of the present analysis is to show that the design of various vascular trees of different organs and species can be deduced on the basis of the minimum energy hypothesis and conservation of energy under steady-state conditions. The present study reveals the similarity of nature's scaling laws that dictate the design of various vascular trees and the underlying physical and physiological principles.
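
The diameter-flow relation mentioned above can be written compactly as a power law; the symbol choices below are assumed for illustration, with the exponent value 3 recovering Murray's law and, per the abstract, the general exponent depending on the ratio of metabolic to viscous power dissipation.

```latex
Q \;\propto\; D^{\,\epsilon}, \qquad \epsilon = 3 \ \text{(Murray's law)}, \qquad
\epsilon \neq 3 \ \text{in general, set by the metabolic-to-viscous power ratio.}
```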


2000 ◽  
Vol 25 (2) ◽  
pp. 209-227 ◽  
Author(s):  
Keith R. McLaren ◽  
Peter D. Rossitter ◽  
Alan A. Powell

2021 ◽  
pp. 107754632110324
Author(s):  
Berk Altıner ◽  
Bilal Erol ◽  
Akın Delibaşı

Adaptive optics systems are powerful tools implemented to mitigate the effects of wavefront aberrations. In this article, the optimal actuator placement problem is addressed to improve the disturbance attenuation capability of adaptive optics systems, since actuator placement is directly related to system performance. For this purpose, a linear-quadratic cost function is chosen, so that optimized actuator layouts can be specialized according to the type of wavefront aberration. The placement is then treated as a convex optimization problem, and the cost function is formulated for the disturbance attenuation case. The success of the presented method is demonstrated by simulation results.
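
A toy sketch of the linear-quadratic cost being compared across candidate actuator layouts; the paper formulates placement as a convex optimization, so the brute-force comparison below, along with the toy system matrices, is only an illustration of the cost function involved.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lq_cost(A, B, Q, R, x0):
    """Optimal infinite-horizon LQ cost x0' P x0 for a candidate actuator
    layout B, via the continuous-time algebraic Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return float(x0 @ P @ x0)

# Toy example: pick the better of two single-actuator placements.
A = np.array([[0.0, 1.0], [-2.0, -0.1]])
Q = np.eye(2); R = np.eye(1); x0 = np.array([1.0, 0.0])
layouts = {"actuator on state 1": np.array([[1.0], [0.0]]),
           "actuator on state 2": np.array([[0.0], [1.0]])}
best = min(layouts, key=lambda name: lq_cost(A, layouts[name], Q, R, x0))
print(best)
```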


2014 ◽  
Vol 665 ◽  
pp. 643-646
Author(s):  
Ying Liu ◽  
Yan Ye ◽  
Chun Guang Li

A metalearning algorithm learns the base learning algorithm, with the aim of improving the performance of the learning system. The incremental delta-bar-delta (IDBD) algorithm is such a metalearning algorithm. On the other hand, sparse algorithms are gaining popularity due to their good performance and wide applications. In this paper, we propose a sparse IDBD algorithm by taking the sparsity of the system into account. A sparsity-promoting norm penalty is added to the cost function of the standard IDBD, which is equivalent to adding a zero attractor to the iterations and thus can speed up convergence if the system of interest is indeed sparse. Simulations demonstrate that the proposed algorithm is superior to competing algorithms in sparse system identification.
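
A minimal sketch of an IDBD-style filter with a zero attractor added to the weight update, as the abstract describes; the function name, the strength `rho`, and the exact placement of the attractor term are assumptions for illustration, not the paper's precise algorithm.

```python
import numpy as np

def sparse_idbd(X, d, theta=0.01, beta0=np.log(0.05), rho=1e-4):
    """IDBD with per-weight adaptive step sizes plus a zero attractor
    that nudges small weights toward zero (sparsity-promoting sketch)."""
    n, m = X.shape
    w = np.zeros(m)
    beta = np.full(m, beta0)   # log step sizes (one per weight)
    h = np.zeros(m)            # memory trace used by IDBD's meta-update
    for t in range(n):
        x = X[t]
        err = d[t] - x @ w
        beta += theta * err * x * h          # meta-learning of step sizes
        alpha = np.exp(beta)
        w += alpha * err * x - rho * np.sign(w)   # zero attractor term
        h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * err * x
    return w
```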

