Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis

2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Xiaoming Xu ◽  
Chenglin Wen

In traditional principal component analysis (PCA), the differing dimensions (units) of the system variables are neglected, so the selected principal components (PCs) often fail to be representative. Relative-transformation PCA can solve this problem, but calculating the weight of each characteristic variable is not easy. To address this, this paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. First, the algorithm calculates the information entropy of each characteristic variable in the original dataset using the information gain algorithm. Second, it standardizes the dimension of every variable in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it uses the resulting relative-principal-component model for fault diagnosis. Simulation experiments on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
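
For concreteness, a minimal sketch of the entropy-weighted relative-PCA idea follows. The Shannon-entropy weighting over histogram bins and the Hotelling T^2 fault statistic are illustrative assumptions; the paper's exact information-gain computation is not reproduced here.

    import numpy as np

    def entropy_weights(X, bins=10):
        # Shannon entropy of each discretised variable, normalised to sum to 1
        ent = []
        for j in range(X.shape[1]):
            counts, _ = np.histogram(X[:, j], bins=bins)
            p = counts[counts > 0] / counts.sum()
            ent.append(-np.sum(p * np.log(p)))
        w = np.asarray(ent)
        return w / w.sum()

    def relative_pca(X, n_pc=3, bins=10):
        # Steps 1-2: standardise each variable to remove its dimension (units)
        mu, sd = X.mean(0), X.std(0)
        Z = (X - mu) / sd
        # Step 3: weight each standardised variable by its information entropy
        w = entropy_weights(X, bins)
        # Step 4: ordinary PCA (via SVD) on the weighted data
        _, s, Vt = np.linalg.svd(Z * w, full_matrices=False)
        P = Vt[:n_pc].T                     # loadings of the relative PCs
        lam = s[:n_pc] ** 2 / (len(X) - 1)  # variances along those PCs
        return mu, sd, w, P, lam

    def t2(x_new, mu, sd, w, P, lam):
        # Hotelling's T^2 of a new sample in the relative-PC subspace;
        # a value above a control limit learned from normal data flags a fault
        t = ((x_new - mu) / sd * w) @ P
        return float(np.sum(t ** 2 / lam))

In use, the model would be fitted on normal-operation data and the T^2 control limit set from, for example, an empirical quantile of the training statistics.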

2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Zihan Wang ◽  
Chenglin Wen ◽  
Xiaoming Xu ◽  
Siyu Ji

Principal component analysis (PCA) is widely used in fault diagnosis. Because traditional data preprocessing ignores the correlations between different variables in the system, feature extraction is inaccurate. To address this, this paper proposes a data preprocessing method based on the gap metric to improve the performance of PCA in fault diagnosis. For different fault types, transforming the original dataset through the gap metric reflects the correlations among the system variables in a high-dimensional space, so that the model can be built more accurately. Finally, the feasibility and effectiveness of the proposed method are verified through simulation.
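
As an illustration, a minimal sketch of gap-metric preprocessing followed by PCA monitoring, assuming the scalar chordal form of the gap metric, d(a, b) = |a - b| / sqrt((1 + a^2)(1 + b^2)), applied elementwise against the normal-operation mean; both choices are assumptions rather than the paper's exact construction.

    import numpy as np

    def gap_distance(a, b):
        # elementwise chordal form of the gap metric (an assumed instance)
        return np.abs(a - b) / np.sqrt((1.0 + a ** 2) * (1.0 + b ** 2))

    def gap_pca_t2(X_train, X_test, n_pc=2):
        # re-express every entry as its gap distance to the training mean
        ref = X_train.mean(0)
        Z = gap_distance(X_train, ref)
        mu, sd = Z.mean(0), Z.std(0) + 1e-12
        _, s, Vt = np.linalg.svd((Z - mu) / sd, full_matrices=False)
        P = Vt[:n_pc].T
        lam = s[:n_pc] ** 2 / (len(Z) - 1)
        # project test data the same way; Hotelling's T^2 per test sample
        T = ((gap_distance(X_test, ref) - mu) / sd) @ P
        return np.sum(T ** 2 / lam, axis=1)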


2003 ◽  
Vol 1 (2-3) ◽  
pp. 151-156 ◽  
Author(s):  
R. L Sapra ◽  
S. K. Lal

We suggest a diversity-dependent strategy, based on principal component analysis, for selecting distinct accessions/parents for breeding from a soybean germplasm collection comprising 463 lines, characterized and evaluated for 10 qualitative and eight quantitative traits. A sample of six accessions included all three states, namely low, medium and high, of the individual quantitative traits, while a sample of 16–19 accessions included all 60–64 distinct states of the qualitative and quantitative traits. Under certain assumptions, the paper also develops an expression for estimating the size of a target population for capturing maximum variability in a sample of three accessions.
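
By way of illustration, a minimal sketch of PCA-based selection of diverse accessions: score the trait matrix on its leading principal components and keep the accessions farthest from the centroid in PC space. The paper's diversity-dependent sampling rule is more elaborate; this shows only the core idea, assuming numerically coded traits.

    import numpy as np

    def select_diverse(traits, n_pick=6, n_pc=2):
        # traits: (n_accessions, n_traits) matrix of numerically coded traits
        Z = (traits - traits.mean(0)) / (traits.std(0) + 1e-12)
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        scores = Z @ Vt[:n_pc].T
        # rank accessions by distance from the centroid in PC space and
        # keep the n_pick most extreme (most distinct) ones
        dist = np.linalg.norm(scores, axis=1)
        return np.argsort(dist)[::-1][:n_pick]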


2021 ◽  
Vol 23 (06) ◽  
pp. 1699-1715
Author(s):  
Mohamed, A. M. ◽
Abdel Latif, S. H. ◽
Alwan, A. S. ◽
...

Principal component analysis is frequently used as a variable-reduction technique, and recently a growing body of studies has made use of machine learning regression algorithms to improve the estimation of empirical models. One of the most frequently used machine learning regression models is support vector regression (SVR) with various kernel functions; an ensemble of support vector regression and principal component analysis is also possible. This paper therefore investigates the competence of support vector regression techniques after performing principal component analysis, to explore the possibility of reducing the data and obtaining more accurate estimates. Some new proposals are introduced, and the behavior of two different models, ε-SVR and ν-SVR, is compared through an extensive simulation study under four different kernel functions (linear, radial, polynomial, and sigmoid) with sample sizes ranging from small through moderate to large. The models are compared with their counterparts in terms of the coefficient of determination (R²) and root mean squared error (RMSE). The comparative results show that applying SVR after PCA improves the results in terms of the number of support vectors by between 30% and 60% on average, and that the approach can be applied to real data. In addition, the linear kernel function gave the best values and the sigmoid kernel the worst. Under ε-SVR the results improved, which did not happen with ν-SVR. RMSE values also decreased with increasing sample size.
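
A minimal sketch of the PCA-then-SVR comparison in scikit-learn, where SVR implements ε-SVR and NuSVR implements ν-SVR, and "radial" corresponds to the "rbf" kernel. The synthetic data, the number of retained components, and the default hyperparameters are stand-ins for the paper's simulation designs.

    from sklearn.datasets import make_regression
    from sklearn.decomposition import PCA
    from sklearn.metrics import mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR, NuSVR

    X, y = make_regression(n_samples=300, n_features=20, noise=5.0,
                           random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for kernel in ("linear", "rbf", "poly", "sigmoid"):
        for name, svr in (("eps-SVR", SVR(kernel=kernel)),
                          ("nu-SVR", NuSVR(kernel=kernel))):
            # standardise, reduce with PCA, then fit the SVR model
            model = make_pipeline(StandardScaler(), PCA(n_components=5), svr)
            model.fit(X_tr, y_tr)
            pred = model.predict(X_te)
            rmse = mean_squared_error(y_te, pred) ** 0.5
            print(f"{kernel:8s} {name}: "
                  f"R2={r2_score(y_te, pred):.3f}  RMSE={rmse:.2f}")

Fitting PCA inside the pipeline ensures the projection is learned from the training split only, so the reported R² and RMSE are not optimistically biased.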


2018 ◽  
Vol 17 (04) ◽  
pp. 1850029
Author(s):  
Mohammad Seidpisheh ◽  
Adel Mohammadpour

We consider principal component analysis (PCA) for heavy-tailed distributions. The traditional measure underlying classical PCA is the covariance. Because the variance of many heavy-tailed distributions does not exist, this measure cannot be used for them. We clarify how to perform PCA on heavy-tailed data by extending a covariance-based similarity measure, introducing similarity measures based on a new dependence coefficient for heavy-tailed distributions. Using real and artificial datasets, the performance of the proposed PCA is evaluated and compared with the classical one.
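
A minimal sketch of PCA driven by a pairwise dependence matrix in place of the covariance matrix. Kendall's tau serves here only as a stand-in similarity, since the paper's new heavy-tail dependence coefficient is not reproduced; the eigendecomposition step is the same either way.

    import numpy as np
    from scipy.stats import kendalltau

    def dependence_pca(X, n_pc=2):
        # build a symmetric pairwise-dependence matrix in place of covariance
        p = X.shape[1]
        S = np.eye(p)
        for i in range(p):
            for j in range(i + 1, p):
                S[i, j] = S[j, i] = kendalltau(X[:, i], X[:, j])[0]
        # principal directions are the leading eigenvectors of that matrix
        eigval, eigvec = np.linalg.eigh(S)
        order = np.argsort(eigval)[::-1]
        return eigvec[:, order[:n_pc]], eigval[order[:n_pc]]

    # example with Cauchy data, whose variance (hence covariance) is undefined
    rng = np.random.default_rng(0)
    loadings, eigvals = dependence_pca(rng.standard_cauchy((500, 5)))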

