Principal Component Regression for Mixture Resolution in Control Analysis by UV-Visible Spectrophotometry

1994 ◽  
Vol 48 (1) ◽  
pp. 37-43 ◽  
Author(s):  
M. Blanco ◽  
J. Coello ◽  
H. Iturriaga ◽  
S. Maspoch ◽  
M. Redon

The potential of principal component regression (PCR) for mixture resolution by UV-visible spectrophotometry was assessed. For this purpose, a set of binary mixtures with Gaussian bands was simulated, and the influence of spectral overlap on the precision of quantification was studied. Likewise, the results obtained in resolving a mixture of components with extensively overlapped spectra were examined in terms of spectral noise and the criterion used to select the optimal number of principal components. The model was validated by cross-validation, and the number of significant principal components was determined on the basis of four different criteria. Three types of noise were considered: intrinsic instrumental noise, which was modeled from experimental data provided by an HP 8452A diode array spectrophotometer; constant baseline shifts; and baseline drift. Introducing artificial baseline alterations into some samples of the calibration matrix was found to increase the reliability of the proposed method in routine analysis. The method was applied to the analysis of mixtures of Ti, Al, and Fe by resolving the spectra of their 8-hydroxyquinoline complexes previously extracted into chloroform.
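
A minimal, hedged sketch of the workflow described above, assuming simulated Gaussian bands and scikit-learn rather than the authors' original software or data: simulate heavily overlapped binary mixtures, then choose the number of principal components by cross-validated prediction error.

```python
# Illustrative PCR-with-cross-validation sketch (not the authors' code):
# simulate binary mixtures of two overlapping Gaussian bands and pick the
# number of principal components that minimizes cross-validated error.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wavelengths = np.linspace(300, 500, 201)                      # nm (illustrative grid)

def band(center, width):
    """Gaussian absorption band on the wavelength grid."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

pure = np.vstack([band(380, 20), band(400, 20)])              # heavily overlapped pure spectra
conc = rng.uniform(0.1, 1.0, size=(30, 2))                    # calibration concentrations
spectra = conc @ pure + rng.normal(0, 1e-3, size=(30, 201))   # Beer-Lambert mixing + noise

# Pick the number of PCs by cross-validated prediction error for component 1.
for n_pc in range(1, 6):
    pcr = make_pipeline(PCA(n_components=n_pc), LinearRegression())
    rmse = -cross_val_score(pcr, spectra, conc[:, 0], cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{n_pc} PCs: CV RMSE = {rmse:.4f}")
```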

1996 ◽  
Vol 50 (5) ◽  
pp. 576-582 ◽  
Author(s):  
Marcelo Blanco ◽  
Jordi Coello ◽  
Hortensia Iturriaga ◽  
Santiago Maspoch ◽  
Smail Alaoui-Ismaili

It is demonstrated that noise in UV spectral recordings obtained with a diode array UV-visible spectrophotometer on different days may conform to a defined pattern. Such structured noise leads to components containing noise alone being accepted as significant in principal component regression (PCR) calibrations, which impedes the detection of outliers at the prediction stage for unknown samples and considerably diminishes the potential of this methodology for control analyses. As shown in this paper, the effect of the noise structure can be substantially decreased by recording the spectra of the calibration samples on different days. A procedure for distinguishing between correct samples and outliers is also proposed: it fits the distribution of the squared absorbance residuals of the calibration samples to an exponential function and uses a 99.9% probability as the acceptance limit. The procedure was applied to the analysis of ketoprofen and methylparaben mixtures.
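
The outlier test can be sketched roughly as follows. This is an assumed reconstruction (the abstract does not give the exact fitting procedure): compute squared spectral residuals of the calibration samples under a truncated PCA model, fit them to an exponential distribution, and flag unknowns beyond the 99.9% quantile.

```python
# Hedged sketch of the residual-based outlier test: the truncated-PCA
# reconstruction and maximum-likelihood exponential fit are assumptions,
# not the authors' exact procedure.
import numpy as np
from sklearn.decomposition import PCA

def residual_limit(calib_spectra, n_pc, prob=0.999):
    """Fit squared reconstruction residuals to an exponential; return model and limit."""
    pca = PCA(n_components=n_pc).fit(calib_spectra)
    recon = pca.inverse_transform(pca.transform(calib_spectra))
    sq_res = ((calib_spectra - recon) ** 2).sum(axis=1)   # squared absorbance residuals
    lam = 1.0 / sq_res.mean()                             # ML estimate of the exponential rate
    limit = -np.log(1.0 - prob) / lam                     # 99.9% quantile of the fitted law
    return pca, limit

def is_outlier(pca, limit, spectrum):
    """Flag an unknown sample whose squared residual exceeds the limit."""
    recon = pca.inverse_transform(pca.transform(spectrum[None, :]))
    return ((spectrum - recon) ** 2).sum() > limit
```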


Author(s):  
Jihhyeon Yi ◽  
Sungryul Park ◽  
Juah Im ◽  
Seonyeong Jeon ◽  
Gyouhyung Kyung

The purpose of this study was to examine the effects of display curvature and hand length on smartphone usability, which was assessed in terms of grip comfort, immersive feeling, typing performance, and overall satisfaction. A total of 20 younger individuals with a mean (SD) age of 20.8 (2.4) years were divided into three hand-size groups (small: 8, medium: 6, large: 6). Two smartphones of the same size were used, one with a flat display and the other with a side-edge curved display. Three tasks (watching video, calling, and texting) were used to evaluate smartphone usability. The smartphones were used in landscape mode for the first task and in portrait mode for the other two. The flat display smartphone provided higher grip comfort during calling (p = 0.008) and texting (p = 0.006) and higher overall satisfaction (p = 0.0002) than the curved display smartphone. A principal component regression (adjusted R² = 0.49) of overall satisfaction on three principal components formed from the remaining measures showed that the first principal component, related to grip comfort, was more important than the other two, which captured watching experience and texting performance. It is thus necessary to carefully consider the effect of display curvature on grip comfort when applying curved displays to hand-held devices such as smartphones.
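
For illustration only, a hedged outline of the regression step reported above, with placeholder data rather than the study's measurements: regress overall satisfaction on principal components of the remaining usability measures and report the adjusted R².

```python
# Sketch of the reported analysis (placeholder data, not the study's dataset):
# principal component regression of overall satisfaction on PCs of the
# remaining usability measures, summarized by adjusted R^2.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def pcr_adjusted_r2(X, y, n_pc=3):
    """Regress y on the first n_pc principal components of standardized X."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    scores = PCA(n_components=n_pc).fit_transform(Xs)
    model = LinearRegression().fit(scores, y)
    n = X.shape[0]
    r2 = model.score(scores, y)
    return 1 - (1 - r2) * (n - 1) / (n - n_pc - 1)  # adjusted R^2
```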


Author(s):  
A. Kallepalli ◽  
A. Kumar ◽  
K. Khoshelham

Hyperspectral data are widely used in remote sensing. However, the greater information content comes with the "curse" of dimensionality and an additional computational load, and the question most often remains as to which subset of the data best represents the information in the imagery. The present work attempts to establish entropy, a statistical measure of uncertainty, as a suitable criterion for determining the optimal number of principal components (PCs) for improved identification of land cover classes. Features were extracted from the Airborne Prism EXperiment (APEX) data by Principal Component Analysis (PCA). Determining the optimal number of PCs is vital, as it avoids adding computational load to the classification algorithm with no significant improvement in accuracy. Because a soft classification approach is applied in this work, the entropy of the class membership values is analyzed. Comparing these entropy measures with a traditional accuracy assessment of the corresponding "hardened" outputs supported the stated objective. The present work concentrates on using entropy for optimal feature extraction as a pre-processing step before further analysis, rather than on the accuracy obtained from principal component analysis and possibilistic c-means classification. Results show that 7 PCs of the APEX dataset are the optimal choice, exhibiting lower entropy, higher accuracy, and better class identification than the other combinations tested.
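
A hedged sketch of the entropy criterion: run PCA on the hyperspectral pixels, soft-classify the retained PCs, and compare the mean per-pixel entropy of the membership values across different numbers of PCs. The data, class count, and the distance-based soft memberships below are assumptions standing in for the APEX processing chain and possibilistic c-means.

```python
# Illustrative entropy criterion for choosing the number of PCs
# (soft memberships here are a softmax-over-distances stand-in for
# possibilistic c-means memberships).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def mean_membership_entropy(pixels, n_pc, n_classes=5, temperature=1.0):
    """pixels: (n_pixels, n_bands). Lower mean entropy = more confident classification."""
    pcs = PCA(n_components=n_pc).fit_transform(pixels)
    km = KMeans(n_clusters=n_classes, n_init=10).fit(pcs)
    d = np.linalg.norm(pcs[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    u = np.exp(-d / temperature)                   # soft memberships from distances
    u /= u.sum(axis=1, keepdims=True)
    return -(u * np.log(u + 1e-12)).sum(axis=1).mean()

# Usage idea: evaluate mean_membership_entropy(pixels, n_pc) for n_pc = 1..15
# and keep the smallest n_pc beyond which the entropy stops decreasing.
```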


2013 ◽  
Vol 38 (1) ◽  
pp. 39-45
Author(s):  
Peng Song ◽  
Li Zhao ◽  
Yongqiang Bao

Abstract The Gaussian mixture model (GMM) method is popular and efficient for voice conversion (VC), but it is often subject to overfitting. In this paper, the principal component regression (PCR) method is adopted for the spectral mapping between source and target speech, and the number of principal components is adjusted to prevent overfitting. Then, to better model the nonlinear relationships between source and target speech, a kernel principal component regression (KPCR) method is proposed. Moreover, a KPCR method combined with GMM is further proposed to improve the conversion accuracy. In addition, the discontinuity and oversmoothing problems of the traditional GMM method are also addressed. On the one hand, to solve the discontinuity problem, an adaptive median filter is adopted to smooth the posterior probabilities. On the other hand, the two mixture components with the highest posterior probabilities for each frame are chosen for VC to reduce the oversmoothing problem. Finally, objective and subjective experiments are carried out, and the results demonstrate that the proposed approach performs considerably better than the GMM method: in the objective tests it yields lower cepstral distances and higher identification rates, and in the subjective tests it obtains higher preference and perceptual quality scores.
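
A rough sketch of the KPCR spectral mapping idea: project source spectral features with kernel PCA and regress the aligned target features on the kernel scores. Feature extraction, frame alignment, and the GMM combination described above are omitted; the shapes, kernel choice, and ridge regressor are assumptions, not the paper's exact configuration.

```python
# Hedged KPCR sketch for spectral mapping (assumed setup, not the authors' system).
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge

def train_kpcr(src_feats, tgt_feats, n_pc=32, gamma=0.01):
    """src_feats, tgt_feats: aligned (n_frames, n_dims) spectral feature matrices."""
    kpca = KernelPCA(n_components=n_pc, kernel="rbf", gamma=gamma).fit(src_feats)
    reg = Ridge(alpha=1.0).fit(kpca.transform(src_feats), tgt_feats)
    return kpca, reg

def convert_frame(kpca, reg, src_frame):
    """Map one source spectral frame to the target speaker's space."""
    return reg.predict(kpca.transform(src_frame[None, :]))[0]
```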


Author(s):  
Shuichi Kawano

Abstract Principal component regression (PCR) is a two-stage procedure: the first stage performs principal component analysis (PCA) and the second stage builds a regression model whose explanatory variables are the principal components obtained in the first stage. Since PCA is performed using only explanatory variables, the principal components have no information about the response variable. To address this problem, we present a one-stage procedure for PCR based on a singular value decomposition approach. Our approach is based upon two loss functions, which are a regression loss and a PCA loss from the singular value decomposition, with sparse regularization. The proposed method enables us to obtain principal component loadings that include information about both explanatory variables and a response variable. An estimation algorithm is developed by using the alternating direction method of multipliers. We conduct numerical studies to show the effectiveness of the proposed method.
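
One plausible form of such a combined criterion, written in our own notation as a hedged illustration (the paper's exact formulation and constraints may differ), is a weighted sum of a regression loss and an SVD-based PCA reconstruction loss with a sparsity penalty on the loadings:

\[
\min_{\beta_0,\,\beta,\,Z,\,V}\;
\|y - \beta_0\mathbf{1} - Z\beta\|_2^2
\;+\; w\,\|X - ZV^{\top}\|_F^2
\;+\; \lambda\|V\|_1,
\qquad \text{subject to } Z^{\top}Z = I_k,
\]

where X holds the explanatory variables, Z the principal component scores, V the (sparse) loadings, and w, λ > 0 are tuning parameters. Because the regression loss and the PCA loss share Z and V, the estimated loadings reflect both X and y, which is the stated goal of the one-stage procedure.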


Author(s):  
Margaretha Ohyver

Principal Component Regression (PCR) is one method for handling multicollinearity. PCR produces principal components with VIF values below ten. The purpose of this research is to obtain a PCR model using R software. The result is a PCR model with two principal components and a coefficient of determination of R² = 97.27%.
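
The study was carried out in R; the following Python sketch with statsmodels and scikit-learn is only an assumed equivalent, with placeholder data: check the VIFs of the original predictors, then regress the response on the first two standardized principal components.

```python
# Hedged PCR-for-multicollinearity sketch (Python stand-in for the R workflow).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from statsmodels.stats.outliers_influence import variance_inflation_factor

def pcr_with_vif(X, y, n_pc=2):
    """X: (n_samples, n_predictors) with suspected multicollinearity; y: response."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    vifs = [round(variance_inflation_factor(Xs, j), 2) for j in range(Xs.shape[1])]
    print("VIFs of original predictors:", vifs)
    scores = PCA(n_components=n_pc).fit_transform(Xs)   # PCs are uncorrelated, so VIF ~ 1
    model = LinearRegression().fit(scores, y)
    print("R^2 =", round(model.score(scores, y), 4))
    return model
```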

