Adaptive Neural Algorithms for PCA and ICA

Author(s):  
Radu Mutihac

Artificial neural networks (ANNs) (McCulloch & Pitts, 1943; Haykin, 1999) were developed as models of their biological counterparts, aiming to emulate real neural systems and to mimic the structural organization and function of the human brain. Their applications rest on the ability to self-design a solution to a problem by learning it from data. A comparative study of neural implementations of principal component analysis (PCA) and independent component analysis (ICA) was carried out. Artificially generated data, additively corrupted with white noise to enforce randomness, were employed to critically evaluate and assess the reliability of the data projections. Analysis in both the time and frequency domains showed the superiority of the estimated independent components (ICs) over the principal components (PCs) in faithfully retrieving the genuine (latent) source signals.

Neural computation is a branch of information processing dealing with adaptive, parallel, and distributed (localized) signal processing. A common task in data analysis is finding an adequate subspace of multivariate data for subsequent processing and interpretation. Linear transforms are frequently employed in data model selection owing to their computational and conceptual simplicity. Common linear transforms are PCA (Hotelling, 1933), factor analysis (FA), projection pursuit (PP), and, more recently, ICA (Comon, 1994). The latter emerged as an extension of nonlinear PCA and developed in the context of blind source separation (BSS) (Cardoso, 1998) in signal and array processing. ICA is also related to recent theories of the visual brain (Barlow, 1991), which assume that consecutive processing steps lead to a progressive reduction in the redundancy of the representation (Olshausen & Field, 1996). This contribution is an overview of PCA and ICA neuromorphic architectures and their associated algorithmic implementations, increasingly used as exploratory techniques. The discussion is conducted on artificially generated sub- and super-Gaussian source signals.
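The setup described above can be sketched in a few lines of numpy. This is a minimal illustration, not the chapter's implementation: the mixing matrix, noise level, and source choices (one Laplacian, hence super-Gaussian; one uniform, hence sub-Gaussian) are all assumptions. PCA is done by eigendecomposition of the sample covariance, and ICA by a compact symmetric FastICA iteration with the tanh contrast applied to the whitened data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Latent sources: one super-Gaussian (Laplacian) and one sub-Gaussian (uniform)
s = np.vstack([rng.laplace(size=n),
               rng.uniform(-1.0, 1.0, size=n)])
s = (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                      # hypothetical mixing matrix
x = A @ s + 0.05 * rng.standard_normal((2, n))  # noisy linear mixtures

# --- PCA: eigendecomposition of the sample covariance ---
xc = x - x.mean(axis=1, keepdims=True)
evals, evecs = np.linalg.eigh(xc @ xc.T / n)
order = np.argsort(evals)[::-1]                 # sort by decreasing variance
evals, evecs = evals[order], evecs[:, order]
pcs = evecs.T @ xc                              # principal components

# Whitening (decorrelated, unit-variance data) precedes ICA
z = np.diag(evals ** -0.5) @ pcs

# --- ICA: symmetric FastICA with the tanh contrast ---
W = rng.standard_normal((2, 2))
for _ in range(200):
    g = np.tanh(W @ z)
    W_new = g @ z.T / n - np.diag((1.0 - g**2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)             # symmetric decorrelation
    W = u @ vt
ics = W @ z                                     # estimated independent components
```

Up to sign and ordering, the rows of `ics` align with the latent sources far more closely than the principal components do, which is the comparison the study draws.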

2016 ◽  
Vol 13 (10) ◽  
pp. 7676-7679 ◽  
Author(s):  
Yi Liu

WeChat is an important social tool in modern society. This paper examines the network impact of WeChat along ten dimensions: popularity, attention, video observability, network reputation, function usability, speed of information dissemination, transmission ratio of positive energy, and the impact of WeChat on the network economy, politics, and culture. Questionnaires on these ten influence factors were distributed to college students. Principal component analysis was applied to the survey results to extract the principal components of the ten factors. The results show that WeChat popularity, attention, video observability, network reputation, and function usability are the main components, with popularity, attention, and video observability having the greatest impact. The paper then derives the functional relationship between the main principal components of the WeChat network impact index and the ten influence factors, and uses it to evaluate WeChat's network impact index.
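The extraction step described above lends itself to a short sketch. This is a hypothetical numpy version, not the paper's code: simulated Likert-style responses driven by two latent traits stand in for the actual questionnaire data, and the sample size, loading ranges, and Kaiser eigenvalue-greater-than-one cutoff are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_resp, n_items = 300, 10   # hypothetical: 300 students, 10 influence factors

# Simulated questionnaire responses driven by two latent traits
latent = rng.standard_normal((n_resp, 2))
loadings = rng.uniform(0.3, 0.9, (2, n_items))
X = latent @ loadings + 0.5 * rng.standard_normal((n_resp, n_items))

# Standardize each item, then eigendecompose the correlation matrix
Z = (X - X.mean(0)) / X.std(0)
R = Z.T @ Z / n_resp
evals, evecs = np.linalg.eigh(R)
order = np.argsort(evals)[::-1]                 # decreasing eigenvalue order
evals, evecs = evals[order], evecs[:, order]

# Retain components with eigenvalue > 1 (Kaiser criterion, a common cutoff)
k = int((evals > 1).sum())
scores = Z @ evecs[:, :k]                       # respondents' component scores

# Each retained component PC_j is a linear function of the ten factors:
#   PC_j = sum_i evecs[i, j] * z_i, with z_i the standardized factor i
explained = evals[:k].sum() / evals.sum()       # variance accounted for
```

The loading matrix `evecs[:, :k]` is what expresses each principal component as a linear function of the ten influence factors, the form of relationship the paper reports.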


2013 ◽  
Vol 397-400 ◽  
pp. 42-46
Author(s):  
Nan Zhao ◽  
Hong Yu Shao

Under the cloud manufacturing environment, design knowledge in SMEs is currently unorganized and disorderly, and innovation capability is weak. Aiming to organize design knowledge into ordered knowledge resource series, a service ability assessment model for knowledge resources was proposed, and a Projection Pursuit-Principal Component Analysis (PP-PCA) algorithm for service ability assessment was designed. This study contributes to the effectiveness and accuracy of knowledge push services, which is significant for improving the reuse efficiency of knowledge resources and the satisfaction of knowledge services under the cloud manufacturing environment.


2020 ◽  
Author(s):  
Y-h. Taguchi ◽  
Turki Turki

Identifying differentially expressed genes is difficult because of the small number of available samples compared with the large number of genes. Conventional gene selection methods employing statistical tests have the critical problem that P-values depend heavily on sample size. Although the recently proposed principal component analysis (PCA)- and tensor decomposition (TD)-based unsupervised feature extraction (FE) has often outperformed these statistical test-based methods, the reason why it works so well has been unclear. In this study, we seek to understand this reason in the context of projection pursuit, which was proposed long ago to address the difficulty of high-dimensional problems: the space spanned by the singular value vectors can be related to the space spanned by the optimal cluster centroids obtained from K-means. The success of PCA- and TD-based unsupervised FE can thus be understood through this equivalence. In addition, an empirical threshold of adjusted P-values of 0.01, under the null hypothesis that the singular value vector components attributed to genes obey a Gaussian distribution, corresponds to a threshold of adjusted P-values of 0.1 when the null distribution is instead generated by shuffling gene order. These findings rationalize, for the first time, the success of PCA- and TD-based unsupervised FE.
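The claimed relation between singular value vectors and K-means centroids can be illustrated numerically. In this hypothetical numpy sketch (not the paper's analysis), known cluster memberships stand in for the K-means output to keep the example deterministic, and the principal angles between the top-singular-vector subspace and the centroid-indicator subspace are computed; cosines near one mean the two spaces nearly coincide. The matrix sizes and separation scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "expression" matrix: 3 well-separated sample clusters, 50 genes
centers = rng.standard_normal((3, 50)) * 4.0
labels = np.repeat(np.arange(3), 40)            # 120 samples, 40 per cluster
X = centers[labels] + rng.standard_normal((120, 50))
Xc = X - X.mean(0)

# Space spanned by the top singular value vectors (PCA scores)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_space = U[:, :2]                             # k clusters span k-1 = 2 dims

# Space spanned by centered cluster-indicator vectors (centroid subspace);
# here the true labels stand in for K-means cluster assignments
H = np.zeros((120, 3))
H[np.arange(120), labels] = 1.0
Hc = H - H.mean(0)                              # centered indicators, rank 2
Q, _ = np.linalg.qr(Hc)
ind_space = Q[:, :2]

# Cosines of the principal angles between the two subspaces:
# values near 1 mean the singular vectors span the centroid subspace
cosines = np.linalg.svd(pc_space.T @ ind_space, compute_uv=False)
```

When the clusters are well separated, both cosines are close to one, which is the subspace equivalence the abstract appeals to.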


1993 ◽  
Vol 7 (6) ◽  
pp. 527-541 ◽  
Author(s):  
Yu-Long Xie ◽  
Ji-Hong Wang ◽  
Yi-Zeng Liang ◽  
Li-Xian Sun ◽  
Xin-Hua Song ◽  
...  

BioTechniques ◽  
2013 ◽  
Vol 54 (3) ◽  
Author(s):  
Bobbie-Jo M. Webb-Robertson ◽  
Melissa M. Matzke ◽  
Thomas O. Metz ◽  
Jason E. McDermott ◽  
Hyunjoo Walker ◽  
...  
