Estimating Functions of Independent Component Analysis for Temporally Correlated Signals

2000 ◽  
Vol 12 (9) ◽  
pp. 2083-2107 ◽  
Author(s):  
Shun-ichi Amari

This article studies a general theory of estimating functions of independent component analysis when the independent source signals are temporally correlated. Estimating functions are used for deriving both batch and on-line learning algorithms, and they are applicable to blind cases where the spatial and temporal probability structures of the sources are unknown. Most algorithms proposed so far can be analyzed in the framework of estimating functions. An admissible class of estimating functions is derived, and related efficient on-line learning algorithms are introduced. We analyze dynamical stability and statistical efficiency of these algorithms. Unlike in the independently and identically distributed case, the algorithms work even when only the second-order moments are used. The method of simultaneous diagonalization of cross-covariance matrices is also studied from the point of view of estimating functions.
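As a rough illustration of the second-order idea mentioned above (separating temporally correlated sources by diagonalizing time-lagged cross-covariance matrices), the following minimal AMUSE-style sketch in Python/NumPy may help; it is not the estimating-function algorithm of the article, and the function name amuse_separate, the single lag, and the whitening steps are illustrative assumptions.

import numpy as np

def amuse_separate(x, lag=1):
    # Illustrative AMUSE-style separation, not the article's estimating-function algorithm.
    # x: array of shape (n_channels, n_samples), assumed zero-mean.
    n_samples = x.shape[1]

    # Whiten the data by diagonalizing the zero-lag covariance.
    c0 = x @ x.T / n_samples
    d, e = np.linalg.eigh(c0)
    whitener = e @ np.diag(1.0 / np.sqrt(d)) @ e.T
    z = whitener @ x

    # Symmetrized time-lagged covariance of the whitened data.
    c_lag = z[:, :-lag] @ z[:, lag:].T / (n_samples - lag)
    c_lag = 0.5 * (c_lag + c_lag.T)

    # Its eigenvectors supply the remaining rotation, provided the sources
    # have distinct autocovariances at this lag.
    _, rotation = np.linalg.eigh(c_lag)
    w = rotation.T @ whitener
    return w, w @ x

More robust variants such as SOBI jointly diagonalize covariance matrices at several lags, which is closer in spirit to the simultaneous-diagonalization method discussed in the article.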

2002 ◽  
Vol 14 (2) ◽  
pp. 421-435 ◽  
Author(s):  
Magnus Rattray

Previous analytical studies of on-line independent component analysis (ICA) learning rules have focused on asymptotic stability and efficiency. In practice, the transient stages of learning are often more significant in determining the success of an algorithm. This is demonstrated here with an analysis of a Hebbian ICA algorithm, which can find a small number of nongaussian components given data composed of a linear mixture of independent source signals. An idealized data model is considered in which the sources comprise a number of nongaussian and gaussian sources, and a solution to the dynamics is obtained in the limit where the number of gaussian sources is infinite. Previous stability results are confirmed by expanding around optimal fixed points, where a closed-form solution to the learning dynamics is obtained. However, stochastic effects are shown to stabilize otherwise unstable suboptimal fixed points. Conditions required to destabilize one such fixed point are obtained for the case of a single nongaussian component, indicating that the initial learning rate η required to escape successfully is very low (η = O(N⁻²), where N is the data dimension), resulting in very slow learning that typically requires O(N³) iterations. Simulations confirm that this picture holds for a finite system.
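To make the scaling claims concrete, here is a hedged toy sketch (assumed for illustration, not taken from the paper; the data model, the cubic nonlinearity, and all names are invented) that runs a simple on-line Hebbian rule on a mixture of one Laplacian source and N−1 gaussian sources, with the learning rate set to order N⁻² and the iteration count to order N³ as the abstract suggests.

import numpy as np

def escape_demo(N=20, c=0.5, seed=0):
    # Illustrative toy experiment: one nongaussian (Laplacian) source mixed
    # orthogonally with N-1 gaussian sources, learned on-line with a Hebbian
    # rule using a cubic nonlinearity.
    rng = np.random.default_rng(seed)
    a = np.linalg.qr(rng.standard_normal((N, N)))[0]   # orthonormal mixing matrix

    eta = c / N ** 2        # learning rate of order N^-2
    n_iter = 10 * N ** 3    # escaping the suboptimal fixed point takes of order N^3 updates

    w = rng.standard_normal(N)
    w /= np.linalg.norm(w)

    for _ in range(n_iter):
        s = rng.standard_normal(N)
        s[0] = rng.laplace(scale=1.0 / np.sqrt(2.0))    # unit-variance Laplacian source
        x = a @ s                                       # sphered observation
        y = w @ x
        w += eta * x * (y ** 3 - 3.0 * y)               # Hebbian update, cubic nonlinearity
        w /= np.linalg.norm(w)                          # keep the weight vector normalized

    # Overlap with the nongaussian mixing direction; a value near 1 means the
    # transient escaped the trap and the component was found.
    return abs(w @ a[:, 0])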


1996 ◽  
Vol 7 (6) ◽ 
pp. 671-687 ◽  
Author(s):  
Aapo Hyvärinen ◽  
Erkki Oja

Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.
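As a hedged sketch of the single-neuron idea (assumed for illustration, not the exact rules whose convergence is proven in the paper), the update below takes stochastic gradient steps on the kurtosis of the projection y = wᵀz of sphered data, with kurt_sign = +1 or −1 choosing whether a positive- or negative-kurtosis component is sought; the two-unit system, the non-sphered generalization, and the negative-feedback network for several components are not shown.

import numpy as np

def one_unit_kurtosis_rule(z, kurt_sign=+1, eta=0.005, n_epochs=10, seed=0):
    # Illustrative single-neuron rule on sphered data z of shape (n_samples, N).
    rng = np.random.default_rng(seed)
    n_samples, N = z.shape
    w = rng.standard_normal(N)
    w /= np.linalg.norm(w)

    for _ in range(n_epochs):
        for i in rng.permutation(n_samples):
            y = w @ z[i]
            # Stochastic gradient of the kurtosis of y (up to a constant factor);
            # the sign selects super- or sub-gaussian components.
            w += eta * kurt_sign * (z[i] * y ** 3 - 3.0 * w)
            w /= np.linalg.norm(w)    # constrain the weight vector to the unit sphere
    return w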


2020 ◽  
Vol 537 ◽  
pp. 425-451 ◽  
Author(s):  
Edwin Lughofer ◽  
Alexandru-Ciprian Zavoianu ◽  
Robert Pollak ◽  
Mahardhika Pratama ◽  
Pauline Meyer-Heye ◽  
...  
