Information Processing with Combination Method for Dependent Evidences Based on Principal Component Analysis

2014 ◽  
Vol 1022 ◽  
pp. 304-310
Author(s):  
Jie Yao ◽  
Tao Hu ◽  
Jian Jun Yang

To satisfy the requirement of D-S evidence theory that evidences be independent before combination, the dependence among evidences must be eliminated during information processing; therefore, a new combination method for dependent evidences based on Principal Component Analysis (PCA) is presented. Following PCA, the high-dimensional dependent evidences are replaced by new low-dimensional independent evidences, and the probabilities under the new evidences are then calculated. The new independent evidences are combined using the combination rules of D-S evidence theory. Compared with existing methods, the dependence in the initial evidences is eliminated and the number of evidences is reduced, which simplifies the evidence combination process. Finally, an example verifies the feasibility and effectiveness of the proposed approach.
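The two stages the abstract describes, decorrelating the evidences with PCA and then fusing them with the D-S combination rule, can be sketched as follows. This is an illustrative reading rather than the authors' implementation; the helper names are hypothetical, and the Dempster step is restricted to Bayesian (singleton-hypothesis) mass functions for brevity.

```python
import numpy as np

def pca_reduce(evidence, k):
    """Project rows of `evidence` onto the top-k principal components,
    yielding decorrelated (independent, in the PCA sense) evidence vectors."""
    centered = evidence - evidence.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]   # top-k eigenvectors
    return centered @ vecs[:, order]

def dempster_combine(m1, m2):
    """Dempster's rule for Bayesian mass functions (singleton hypotheses):
    element-wise product renormalized to remove the conflict mass."""
    joint = m1 * m2
    k = joint.sum()          # equals 1 - K, where K is the conflict
    if k == 0:
        raise ValueError("total conflict; masses cannot be combined")
    return joint / k
```

For Bayesian masses, Dempster's rule reduces to this normalized product; general body-of-evidence masses over subsets of the frame would need the full intersection sum.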

2019 ◽  
Vol 8 (S3) ◽  
pp. 66-71
Author(s):  
T. Sudha ◽  
P. Nagendra Kumar

Data mining is one of the major areas of research, and clustering is one of its main functionalities. High dimensionality is one of the main issues in clustering, and dimensionality reduction can be used as a solution to this problem. The present work makes a comparative study of dimensionality reduction techniques, namely t-distributed stochastic neighbour embedding (t-SNE) and probabilistic principal component analysis (PPCA), in the context of clustering. High-dimensional data were reduced to low-dimensional data using each of the two techniques, and cluster analysis was performed on the high-dimensional data as well as on the low-dimensional data sets obtained through t-SNE and PPCA, with a varying number of clusters. Mean squared error, time, and space were considered as parameters for comparison. The results show that the time taken to convert the high-dimensional data into low-dimensional data using PPCA is higher than the time taken using t-SNE, while the storage space required by the data set reduced through PPCA is less than that required by the data set reduced through t-SNE.
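Of the two techniques compared, PPCA is the one with a closed-form maximum-likelihood solution (Tipping and Bishop). A minimal numpy sketch of that reduction step follows, as one plausible implementation rather than the one benchmarked in the paper:

```python
import numpy as np

def ppca_reduce(X, k):
    """Closed-form maximum-likelihood probabilistic PCA.
    Returns the k-dimensional posterior latent means E[z|x] for each row of X."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1]               # eigenvalues, descending
    vals, vecs = vals[order], vecs[:, order]
    sigma2 = vals[k:].mean()                     # ML estimate of noise variance
    W = vecs[:, :k] * np.sqrt(np.maximum(vals[:k] - sigma2, 0.0))
    M = W.T @ W + sigma2 * np.eye(k)
    return np.linalg.solve(M, W.T @ Xc.T).T      # posterior latent means
```

Unlike plain PCA, the projection is shrunk by the estimated noise variance `sigma2`, which is what makes the model probabilistic.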


Author(s):  
Di Wang ◽  
Jinhui Xu

In this paper, we study the Principal Component Analysis (PCA) problem under the (distributed) non-interactive local differential privacy model. For the low-dimensional case, we show the optimal rate for the private minimax risk of k-dimensional PCA, using the squared subspace distance as the measure. For the high-dimensional row-sparse case, we first give a lower bound on the private minimax risk, and then provide an efficient algorithm that achieves a near-optimal upper bound. Experiments on both synthetic and real-world datasets confirm the theoretical guarantees of our algorithms.
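The abstract does not spell out the mechanism, but a standard way to realize non-interactive local DP for PCA is for each user to release a noise-perturbed outer product of their sample, which the server averages and eigendecomposes. A hypothetical sketch along those lines (the noise scale `sigma` would be calibrated to the privacy budget, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def ldp_pca(X, k, sigma):
    """Non-interactive local-DP PCA sketch: each user perturbs the outer
    product of their (bounded-norm) sample with symmetric Gaussian noise;
    the server averages the reports and takes the top-k eigenvectors."""
    n, d = X.shape
    reports = []
    for x in X:
        noise = rng.normal(scale=sigma, size=(d, d))
        noise = (noise + noise.T) / 2           # keep each report symmetric
        reports.append(np.outer(x, x) + noise)
    cov_est = np.mean(reports, axis=0)          # noise averages out over users
    vals, vecs = np.linalg.eigh(cov_est)
    return vecs[:, np.argsort(vals)[::-1][:k]]
```

Because each user randomizes locally and sends a single report, no interaction with the server is needed, matching the non-interactive model studied in the paper.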


2020 ◽  
Vol 152 (23) ◽  
pp. 234103
Author(s):  
Bastien Casier ◽  
Stéphane Carniato ◽  
Tsveta Miteva ◽  
Nathalie Capron ◽  
Nicolas Sisourat

2013 ◽  
Vol 303-306 ◽  
pp. 1101-1104 ◽  
Author(s):  
Yong De Hu ◽  
Jing Chang Pan ◽  
Xin Tan

Kernel entropy component analysis (KECA) reveals the structure of the original data through the kernel matrix; this structure is related to the Renyi entropy of the data. KECA preserves the structure of the original data by keeping its Renyi entropy unchanged. In this paper, the original data are described by several components for the purpose of dimension reduction. KECA is then applied to celestial spectra reduction and compared with Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) in experiments. Experimental results show that KECA is a good method for high-dimensional data reduction.
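KECA differs from KPCA only in how it ranks kernel eigen-directions: by their contribution to a Renyi entropy estimate, lambda_i * (1^T e_i)^2, instead of by eigenvalue alone. A minimal sketch with an RBF kernel (illustrative; the function name and its `gamma` parameter are not from the paper):

```python
import numpy as np

def keca(X, k, gamma=1.0):
    """Kernel entropy component analysis sketch: build an RBF kernel
    matrix, eigendecompose it, and keep the k axes that contribute most
    to the Renyi entropy estimate lambda_i * (1^T e_i)^2."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    vals, vecs = np.linalg.eigh(K)
    entropy = vals * (vecs.sum(axis=0) ** 2)     # per-axis entropy terms
    top = np.argsort(entropy)[::-1][:k]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

The returned scores are the kernel-feature-space projections; swapping the entropy ranking for a plain eigenvalue ranking would recover KPCA.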


2011 ◽  
Vol 341-342 ◽  
pp. 790-797 ◽  
Author(s):  
Zhi Yan Xiang ◽  
Tie Yong Cao ◽  
Peng Zhang ◽  
Tao Zhu ◽  
Jing Feng Pan

In this paper, an object tracking approach for color video sequences is introduced. The approach integrates color distributions and probabilistic principal component analysis (PPCA) into a particle filtering framework. Color distributions are robust to partial occlusion, are rotation and scale invariant, and can be calculated efficiently. Principal Component Analysis (PCA) is used to update the eigenbasis and the mean, which reflect the appearance changes of the tracked object, and a low-dimensional PPCA subspace representation efficiently adapts to these changes in the target object's appearance. At the same time, a forgetting factor is incorporated into the updating process, which economizes on processing time and enhances the efficiency of object tracking. Computer simulation experiments demonstrate the effectiveness and robustness of the proposed tracking algorithm when the target object undergoes pose and scale changes, occlusion, and complex backgrounds.
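The particle-filtering skeleton that the color and PPCA likelihoods plug into can be sketched as a predict/weight/resample step; the random-walk motion model and the observation function below are placeholders, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, observe, motion_std=1.0):
    """One bootstrap particle-filter step (predict, weight, resample).
    `observe` maps particle states to observation likelihoods; in the
    paper's setting it would score color-distribution / PPCA similarity."""
    # predict: propagate particles through a random-walk motion model
    particles = particles + rng.normal(scale=motion_std, size=particles.shape)
    # update: reweight each particle by its observation likelihood
    weights = weights * observe(particles)
    weights = weights / weights.sum()
    # resample: draw particles proportionally to weight to fight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

A forgetting factor, as in the paper, would additionally down-weight old frames when the appearance subspace is updated; that update is outside this skeleton.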


2014 ◽  
Vol 571-572 ◽  
pp. 753-756
Author(s):  
Wei Li Li ◽  
Xiao Qing Yin ◽  
Bin Wang ◽  
Mao Jun Zhang ◽  
Ke Tan

Denoising is an important issue for laser active images. This paper attempts to process laser active images in a low-dimensional sub-space. We adopt the principal component analysis with local pixel grouping (LPG-PCA) denoising method proposed by Zhang [1] and compare it with conventional denoising methods for laser active images, such as wavelet filtering, wavelet soft-threshold filtering, and median filtering. Experimental results show that the image denoised by LPG-PCA has a higher BIQI value than the other images; most of the speckle noise is reduced, and the detailed structure information is well preserved. The low-dimensional sub-space idea is a new direction for laser active image denoising.
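The core of LPG-PCA is PCA shrinkage within a group of similar patches: project onto the few components where the signal concentrates and drop the noise-dominated rest. A minimal sketch of that step (the local pixel grouping itself is omitted):

```python
import numpy as np

def pca_denoise_patches(patches, keep):
    """PCA shrinkage over a group of similar patches (the transform step
    of LPG-PCA): project onto the top `keep` principal directions, where
    the shared signal concentrates, and discard the noisy remainder."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:keep]                   # top-`keep` principal directions
    return centered @ basis.T @ basis + mean
```

Grouping similar patches first is what makes the retained components signal-dominated; applying the same shrinkage to dissimilar patches would blur structure instead.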


2021 ◽  
pp. 1321-1333
Author(s):  
Ghadeer JM Mahdi ◽  
Bayda A. Kalaf ◽  
Mundher A. Khaleel

In this paper, a new hybridization of supervised principal component analysis (SPCA) and stochastic gradient descent, called SGD-SPCA, is proposed for real large datasets that have a small number of samples in a high-dimensional space. SGD-SPCA is proposed as an important tool that can be used to diagnose and treat cancer accurately. When large datasets require many parameters, SGD-SPCA is an excellent method, and it can easily update the parameters when a new observation arrives. Two cancer datasets are used: the first for leukemia and the second for small round blue cell tumors. Simulation datasets are also used to compare principal component analysis (PCA), SPCA, and SGD-SPCA. The results show that SGD-SPCA is more efficient than the other existing methods.
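The abstract does not specify the SGD-SPCA update, but its online ingredient, updating principal directions as each new observation arrives, can be illustrated with Oja's rule, an SGD iteration for the leading principal component. This is an unsupervised stand-in, not the authors' supervised algorithm:

```python
import numpy as np

def oja_update(w, x, lr=0.05):
    """One Oja's-rule SGD step toward the leading principal component:
    move `w` along the reconstruction-error gradient for sample `x`,
    then renormalize to keep the direction unit-length."""
    y = w @ x                      # projection of the new sample onto w
    w = w + lr * y * (x - y * w)   # stochastic gradient step
    return w / np.linalg.norm(w)
```

Each arriving observation costs O(d), which is what makes an SGD-style update attractive for the many-parameter, few-sample regime the paper targets.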

