A Generalized Contrast Function and Stability Analysis for Overdetermined Blind Separation of Instantaneous Mixtures

2006 ◽  
Vol 18 (3) ◽  
pp. 709-728 ◽  
Author(s):  
Xiao-Long Zhu ◽  
Xian-Da Zhang ◽  
Ji-Min Ye

In this letter, the problem of blind separation of n independent sources from their m linear instantaneous mixtures is considered. First, a generalized contrast function is defined as a valuable extension of the existing classical and nonsymmetrical contrast functions. It is applicable to overdetermined blind separation (m > n) with an unknown number of sources, because not only independent components but also redundant ones are allowed in the outputs of the separation system. Second, a natural gradient learning algorithm developed primarily for the complete case (m = n) is shown to work as well with an n × m or m × m separating matrix, since each optimizes a certain mutual-information contrast function. Finally, we present a stability analysis for a newly proposed generalized orthogonal natural gradient algorithm (which can perform overdetermined blind separation when n is unknown), obtaining the expected result that its local stability conditions are slightly stricter than those of the conventional natural gradient algorithm using an invertible mixing matrix (m = n).

2004 ◽  
Vol 16 (8) ◽  
pp. 1641-1660 ◽  
Author(s):  
Ji-Min Ye ◽  
Xiao-Long Zhu ◽  
Xian-Da Zhang

The blind source separation (BSS) problem with an unknown number of sources is an important practical issue that is usually skipped by assuming that the source number n is known and equal to the number m of sensors. This letter studies the general BSS problem satisfying m ≥ n. First, it is shown that the mutual information of the outputs of the separation network is a cost function for BSS, provided that the mixing matrix is of full column rank and the m × m separating matrix is nonsingular. The mutual information reaches its local minima at the separation points, where the m outputs consist of the n desired source signals and m − n redundant signals. Second, it is proved that the natural gradient algorithm proposed primarily for complete BSS (m = n) can be generalized to deal with the overdetermined BSS problem (m > n), but that it inevitably diverges because it lacks a stationary point. To overcome this shortcoming, we present a modified algorithm, which can perform BSS steadily and provide the desired source signals at specified channels, provided a certain matrix in the algorithm is designed properly. Finally, the validity of the proposed algorithm is confirmed by computer simulations on artificially synthesized data.
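A minimal numerical sketch of the overdetermined setting described above (not the letter's modified algorithm, whose stabilizing matrix is unspecified here): with m = 3 sensors and n = 2 sources, the mixture covariance has rank n, so an m × m separating matrix must produce m − n redundant outputs. The tanh nonlinearity is an assumed surrogate score function for super-Gaussian sources.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
n, m = 2, 3                         # n sources, m > n sensors (overdetermined)
S = rng.laplace(size=(n, N))        # super-Gaussian sources
A = rng.standard_normal((m, n))     # full-column-rank mixing matrix
X = A @ S                           # m mixtures spanning an n-dim signal subspace

# With an m x m nonsingular separating matrix, the m outputs can contain at
# most n independent components; the remaining m - n are redundant, because
# the mixture covariance already has rank n.
cov_rank = np.linalg.matrix_rank(X @ X.T / N)

W = np.eye(m)
lr = 0.01
for _ in range(5):
    Y = W @ X
    # The square-case natural-gradient update applied verbatim; per the
    # letter, it lacks a stationary point when m > n, so it must be
    # modified before it can converge.
    W += lr * (np.eye(m) - np.tanh(Y) @ Y.T / N) @ W
```

The rank check makes the redundancy argument concrete: no nonsingular W can turn a rank-n signal subspace into m independent outputs.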


1997 ◽  
Vol 9 (7) ◽  
pp. 1457-1482 ◽  
Author(s):  
Howard Hua Yang ◽  
Shun-ichi Amari

There are two major approaches to blind separation: maximum entropy (ME) and minimum mutual information (MMI). Both can be implemented by the stochastic gradient descent method to obtain the demixing matrix. The mutual information is the contrast function for blind separation; the entropy is not. To justify the ME, the relation between ME and MMI is first elucidated by calculating the first derivative of the entropy and proving that mean subtraction is necessary in applying the ME, and that at the solution points determined by the MMI, the ME does not update the demixing matrix in directions that increase cross-talk. Second, the natural gradient, instead of the ordinary gradient, is introduced to obtain efficient algorithms, because the parameter space is a Riemannian space consisting of matrices. The mutual information is calculated by applying the Gram-Charlier expansion to approximate the probability density functions of the outputs. Finally, we propose an efficient learning algorithm that incorporates an adaptive method of estimating the unknown cumulants. Computer simulations show that the convergence of the stochastic descent algorithms is improved by using the natural gradient and the adaptively estimated cumulants.
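The natural-gradient update at the heart of this line of work can be sketched in a few lines. This is a batch toy demonstration, not the paper's adaptive cumulant-estimation algorithm: a fixed tanh activation stands in for the Gram-Charlier-derived score function, which is a common assumption for super-Gaussian sources.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
# Two independent super-Gaussian (Laplacian) sources.
S = rng.laplace(size=(2, N))
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])          # unknown mixing matrix
X = A @ S                           # observed mixtures

W = np.eye(2)                       # demixing matrix
lr = 0.05
for _ in range(300):
    Y = W @ X
    # Natural-gradient update dW = (I - phi(y) y^T) W, averaged over the
    # batch, with phi(y) = tanh(y) as a surrogate score function.
    W += lr * (np.eye(2) - np.tanh(Y) @ Y.T / N) @ W

G = W @ A                           # global system; ideally a scaled permutation
```

At a separation point, each row of G has a single dominant entry, reflecting the scaling and permutation ambiguity inherent to blind separation.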


2008 ◽  
Vol 88 (3) ◽  
pp. 761-766 ◽  
Author(s):  
S. Squartini ◽  
A. Arcangeli ◽  
F. Piazza

1999 ◽  
Vol 11 (8) ◽  
pp. 1875-1883 ◽  
Author(s):  
Shun-ichi Amari

Independent component analysis, or blind source separation, is a technique for extracting independent signals from their mixtures. It is applicable even when the number of independent sources is unknown and is larger or smaller than the number of observed mixture signals. This article extends the natural gradient learning algorithm to these overcomplete and undercomplete cases. The observed signals are assumed to be whitened by preprocessing, so that the natural Riemannian gradient in Stiefel manifolds can be used.
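A hedged sketch of the whitening-plus-Stiefel idea, assuming a tanh surrogate score and a QR retraction (the retraction is a practical addition, not something specified by the article): after whitening, an n × m separating matrix with orthonormal rows is updated along the Riemannian gradient G − W Gᵀ W of the Stiefel manifold {W : W Wᵀ = I}.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, n = 4000, 3, 2                # extract n components from m whitened signals
S = rng.laplace(size=(m, N))
A = rng.standard_normal((m, m))
X = A @ S

# Whitening as preprocessing, so the separating matrix can stay orthonormal.
C = X @ X.T / N
d, E = np.linalg.eigh(C)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X      # whitened observations

W = np.linalg.qr(rng.standard_normal((m, n)))[0].T   # n x m, orthonormal rows
lr = 0.1
for _ in range(200):
    Y = W @ Z
    G = np.tanh(Y) @ Z.T / N                 # Euclidean gradient direction
    # Natural Riemannian gradient on the Stiefel manifold {W : W W^T = I_n}.
    W -= lr * (G - W @ G.T @ W)
    # QR retraction pulls W back onto the manifold after each step.
    W = np.linalg.qr(W.T)[0].T
```

The point of the manifold update is that the row-orthonormality constraint is maintained throughout learning rather than re-imposed by a separate decorrelation stage.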


2011 ◽  
Vol 2011 ◽  
pp. 1-9 ◽  
Author(s):  
Michael R. Bastian ◽  
Jacob H. Gunther ◽  
Todd K. Moon

Adaptive natural gradient learning avoids singularities in the parameter space of multilayer perceptrons, but it requires many more parameters than ordinary backpropagation, in the form of the Fisher information matrix. This paper describes a new approach to natural gradient learning that uses a smaller Fisher information matrix, together with a prior distribution on the neural network parameters and an annealed learning rate. Although computationally simpler, the new approach performs comparably to adaptive natural gradient learning.
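The Fisher-preconditioned update underlying natural gradient learning can be illustrated on a model small enough to invert the Fisher matrix exactly. This toy logistic-regression example (not the paper's perceptron algorithm) uses the Bernoulli Fisher information with a small damping term, an assumed practical safeguard against ill-conditioning.

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 500, 3
X = rng.standard_normal((N, d))
true_w = np.array([1.5, -2.0, 0.5])
y = (rng.random(N) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll(w):
    p = sigmoid(X @ w)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

w = np.zeros(d)
losses = [nll(w)]
for _ in range(20):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / N
    # Fisher information of the Bernoulli likelihood, damped so the matrix
    # stays invertible even when predictions saturate.
    F = (X * (p * (1 - p))[:, None]).T @ X / N + 1e-6 * np.eye(d)
    w -= np.linalg.solve(F, grad)            # natural-gradient (Fisher scoring) step
    losses.append(nll(w))
```

Preconditioning by the inverse Fisher matrix makes the step size roughly uniform across parameter directions, which is what lets natural gradient descend smoothly through regions where the ordinary gradient is badly scaled.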

