Joint L1/2-Norm Constraint and Graph-Laplacian PCA Method for Feature Extraction

2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Chun-Mei Feng ◽  
Ying-Lian Gao ◽  
Jin-Xing Liu ◽  
Juan Wang ◽  
Dong-Qin Wang ◽  
...  

Principal Component Analysis (PCA) is widely used as a tool for dimensionality reduction in many areas. In bioinformatics, each involved variable corresponds to a specific gene. To improve the robustness of PCA-based methods, this paper proposes a novel graph-Laplacian PCA algorithm that adopts an L1/2-norm constraint (L1/2-gLPCA) on the error function for feature (gene) extraction. The L1/2-norm error function helps to reduce the influence of outliers and noise. The Augmented Lagrange Multipliers (ALM) method is applied to solve the subproblem. This method achieves better feature extraction results than other state-of-the-art PCA-based methods. Extensive experimental results on simulated data and gene expression data sets demonstrate that our method attains higher identification accuracies than the others.
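For context, classical PCA itself reduces to an eigendecomposition of the sample covariance matrix. A minimal NumPy sketch of that baseline (not the authors' L1/2-gLPCA, which additionally needs the graph Laplacian and an ALM solver) might look like:

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature
    cov = Xc.T @ Xc / (len(X) - 1)                # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)              # eigh returns ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]     # keep top-k eigenvectors
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # toy stand-in for a gene matrix
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

The robust variants discussed in the abstract replace the implicit squared-error objective here with an L1/2-norm penalty, which is what dampens the effect of outliers.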

Author(s):  
Jerry Lin ◽  
Rajeev Kumar Pandey ◽  
Paul C.-P. Chao

Abstract This study proposes a reduced AI model for accurate measurement of blood pressure (BP). Varied temporal periods of photoplethysmography (PPG) waveforms are used as features for artificial neural networks to estimate blood pressure. A nonlinear Principal Component Analysis (PCA) method is used to remove redundant features and determine a set of dominant features that are highly correlated with blood pressure. The reduced feature set not only helps to minimize the size of the neural network but also improves the measurement accuracy of the systolic blood pressure (SBP) and diastolic blood pressure (DBP). The designed neural network has a 5-node input layer, 2 hidden layers (32 nodes each), and 2 output nodes for SBP and DBP, respectively. The NN model is trained on PPG data sets acquired from 96 subjects. The testing regression for SBP and DBP estimation is 0.81. The resultant errors for SBP and DBP measurement are 2.00±6.08 mmHg and 1.87±4.09 mmHg, respectively. According to the Association for the Advancement of Medical Instrumentation (AAMI) and British Hypertension Society (BHS) standards, the measured error of ±6.08 mmHg is less than 8 mmHg, which places the device performance in grade "A".
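The stated topology (5 inputs, two 32-node hidden layers, 2 outputs for SBP/DBP) can be sketched as a plain NumPy forward pass. The weights below are random placeholders, not the trained model, and ReLU hidden activations are an assumption since the abstract does not specify them:

```python
import numpy as np

rng = np.random.default_rng(42)

# Layer sizes from the abstract: 5 -> 32 -> 32 -> 2 (SBP, DBP)
sizes = [5, 32, 32, 2]
weights = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    """Forward pass: ReLU on hidden layers, linear output for BP in mmHg."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)            # ReLU activation
    return x @ weights[-1] + biases[-1]           # linear output: [SBP, DBP]

ppg_features = rng.normal(size=(1, 5))            # one sample of 5 PPG-derived features
sbp_dbp = forward(ppg_features)
print(sbp_dbp.shape)  # (1, 2)
```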


2019 ◽  
Vol 9 (2) ◽  
pp. 133
Author(s):  
Oky Dwi Nurhayati ◽  
Dania Eridani ◽  
Ajik Ulinuha

Chicken eggs are one of the animal protein sources commonly consumed, especially in Indonesia. Eggs have high economic value, diverse benefits, and high nutritional content. Visually distinguishing domestic chicken eggs from Arabic chicken eggs is difficult because their shape and color are similar. This research develops an application that identifies domestic and Arabic chicken eggs using the Principal Component Analysis (PCA) method and first-order feature extraction. The application applies standard digital image processing stages: image resizing, RGB-to-HSV color space conversion, contrast enhancement, image segmentation using thresholding, opening and region-filling morphology operations, first-order feature extraction, and classification using the PCA method. Identification of domestic and Arabic chicken eggs using the Principal Component Analysis method achieved 95% overall accuracy, consisting of 90% accuracy for domestic chicken eggs and 100% accuracy for Arabic chicken eggs.
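The "first-order feature extraction" stage refers to statistics of the intensity histogram. A minimal sketch, assuming a typical choice of features (mean, variance, skewness, entropy — the abstract does not list the exact set):

```python
import numpy as np

def first_order_features(gray):
    """First-order statistical features of a grayscale image (values in 0..255)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                          # normalized intensity histogram
    levels = np.arange(256)
    mean = (levels * p).sum()
    var = ((levels - mean) ** 2 * p).sum()
    skew = ((levels - mean) ** 3 * p).sum() / (var ** 1.5 + 1e-12)
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([mean, var, skew, entropy])

rng = np.random.default_rng(1)
egg = rng.integers(0, 256, size=(64, 64))          # stand-in for a segmented egg image
feats = first_order_features(egg)
print(feats.shape)  # (4,)
```

In the described pipeline, such feature vectors would then be reduced and classified with PCA.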


2017 ◽  
Vol 14 (1) ◽  
pp. 829-834 ◽  
Author(s):  
Chunwei Tian ◽  
Qi Zhang ◽  
Jian Zhang ◽  
Guanglu Sun ◽  
Yuan Sun

The two-dimensional principal component analysis (2D-PCA) method has been widely applied in image classification, computer vision, signal processing, and pattern recognition. The 2D-PCA algorithm performs well in both theoretical research and real-world applications: it retains the main information of the original face images while reducing their dimension. In this paper, we integrate 2D-PCA and the sparse representation classification (SRC) method to distinguish face images, which performs well in face recognition. The novel representation of the original face image obtained using 2D-PCA is complementary to the original face image, so fusing the two clearly improves face recognition accuracy. This is also attributable to the fact that features obtained using 2D-PCA are usually more robust than the original face image matrices. Face recognition experiments demonstrate that combining the original face images with their new representations is more effective than using the original images alone. In particular, the simultaneous use of the 2D-PCA method and sparse representation can greatly improve image classification accuracy. The adaptive weighted fusion scheme automatically obtains the optimal weights and requires no tuning parameters. The proposed method is simple and easy to implement, yet achieves high face recognition accuracy.
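Unlike classical PCA, 2D-PCA works directly on image matrices rather than flattened vectors: it eigendecomposes the image covariance matrix and projects each image onto the leading eigenvectors. A minimal sketch of that core step (the SRC stage and the fusion scheme are not shown):

```python
import numpy as np

def two_d_pca(images, k):
    """2D-PCA: project each m x n image matrix onto the top-k eigenvectors of
    the n x n image covariance matrix G, yielding m x k feature matrices."""
    mean = images.mean(axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    vals, vecs = np.linalg.eigh(G)                 # ascending eigenvalues
    proj = vecs[:, np.argsort(vals)[::-1][:k]]     # n x k projection matrix
    return np.array([A @ proj for A in images])

rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 16, 12))              # 10 toy 16x12 "face" matrices
features = two_d_pca(faces, 3)
print(features.shape)  # (10, 16, 3)
```

Because the projection preserves the row structure of each image, the resulting features remain matrices, which is what makes them complementary to (and fusable with) the original images.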


2006 ◽  
Vol 06 (01) ◽  
pp. L17-L28 ◽  
Author(s):  
JOSÉ MANUEL LÓPEZ-ALONSO ◽  
JAVIER ALDA

Principal Component Analysis (PCA) has been applied to the characterization of 1/f noise. Applying PCA to 1/f noise requires the definition of a stochastic multidimensional variable whose components describe the temporal evolution of the phenomenon sampled at regular time intervals. In this paper we analyze the conditions on the number of observations and on the dimension of the multidimensional random variable that are necessary to use the PCA method in a sound manner. We have tested the obtained conditions on simulated and experimental data sets from imaging optical systems. The results can be extended to other fields where this kind of noise is relevant.
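The construction of the multidimensional variable can be sketched as slicing the sampled time series into fixed-length observations; the trade-off the paper analyzes is between the number of such observations and their dimension. A minimal illustration (the slicing convention is an assumption):

```python
import numpy as np

def observations_from_series(x, dim):
    """Split a regularly sampled time series into non-overlapping observations
    of length `dim`, forming the multidimensional variable for PCA."""
    n_obs = len(x) // dim
    return x[:n_obs * dim].reshape(n_obs, dim)

rng = np.random.default_rng(7)
signal = rng.normal(size=1000)          # placeholder; real data would be 1/f noise
X = observations_from_series(signal, dim=20)
cov = np.cov(X, rowvar=False)           # PCA then proceeds on this covariance
print(X.shape, cov.shape)  # (50, 20) (20, 20)
```

With a fixed record length, increasing the dimension of each observation reduces the number of observations available to estimate the covariance, which is exactly the soundness condition at stake.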


Author(s):  
A. Iodice D’Enza ◽  
A. Markos ◽  
F. Palumbo

Abstract Standard multivariate techniques like Principal Component Analysis (PCA) are based on the eigendecomposition of a matrix and therefore require complete data sets. Recent comparative reviews of PCA algorithms for missing data showed the regularised iterative PCA algorithm (RPCA) to be effective. This paper presents two chunk-wise implementations of RPCA suitable for the imputation of "tall" data sets, that is, data sets with many observations. A "chunk" is a subset of the whole set of available observations. One implementation is suitable for distributed computation, as it imputes each chunk independently. The other is suitable for incremental computation, where the imputation of each new chunk is based on all the chunks analysed so far. The proposed procedures were compared to batch RPCA on different data sets and missing data mechanisms. Experimental results showed that the distributed approach performs similarly to batch RPCA for data with entries missing completely at random. The incremental approach performs well when the data are missing not completely at random and the first analysed chunks contain sufficient information on the data structure.
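The iterative-PCA idea underlying RPCA alternates between fitting a low-rank PCA model on the completed data and refilling the missing cells from the reconstruction. A minimal unregularised sketch of that loop (batch, not chunk-wise, and without the shrinkage term that "regularised" refers to):

```python
import numpy as np

def iterative_pca_impute(X, rank, n_iter=50):
    """Iterative PCA imputation: fill missing entries (NaN) with the rank-`rank`
    PCA reconstruction, alternating until the fill stabilises."""
    miss = np.isnan(X)
    Xf = np.where(miss, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        recon = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mu
        Xf[miss] = recon[miss]                       # update only the missing cells
    return Xf

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))
X[rng.random(X.shape) < 0.1] = np.nan                # ~10% missing completely at random
Xhat = iterative_pca_impute(X.copy(), rank=2)
print(np.isnan(Xhat).any())  # False
```

The chunk-wise variants in the paper run this kind of update either per chunk independently (distributed) or with the model carried over from previously analysed chunks (incremental).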


Author(s):  
Inca Inca ◽  
Triyogatama Wahyu Widodo ◽  
Danang Lelono

This research aims to classify samples of green tea and black tea originating from two different planting sites, Tambi and Pagilaran. Samples of green tea and black tea of quality I (BOP), quality II (BP), and quality III (Bohea) were collected from both Tambi and Pagilaran to analyze the characteristics of the samples from each site. Measurements of the tea samples were performed using a dynamic e-nose device based on MOS gas sensors, with a maximum set-point temperature of 40ºC, flushing for 300 seconds, collecting for 120 seconds, and purging for 80 seconds, repeated for 10 cycles. The resulting sensor responses were processed with the difference method for baseline manipulation. Feature extraction from the sensor responses was carried out with three methods: relative, fractional change, and integral. The resulting feature matrix was reduced using the PCA method, mapping the aroma patterns of each sample onto two principal components. The PCA reduction of the integral features showed the largest cumulative variance, classifying the green tea sample data at 97% and the black tea at 100%. The large cumulative variance indicates that PCA can distinguish green tea and black tea samples from Tambi and Pagilaran well.
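The quoted 97% and 100% figures are cumulative explained-variance percentages of the first two principal components. How that quantity is computed can be sketched as follows (the toy sensor matrix is an invented stand-in, not the paper's data):

```python
import numpy as np

def cumulative_variance(X, k=2):
    """Fraction of total variance captured by the top-k principal components."""
    Xc = X - X.mean(axis=0)
    vals = np.sort(np.linalg.eigvalsh(np.cov(Xc, rowvar=False)))[::-1]
    return vals[:k].sum() / vals.sum()

rng = np.random.default_rng(5)
# Toy matrix: 30 tea measurements x 8 MOS sensor features, two dominant directions
X = rng.normal(size=(30, 8)) * np.array([5.0, 4.0, 1, 1, 1, 1, 1, 1])
print(round(cumulative_variance(X, 2), 2))
```

A high two-component cumulative variance means the 2-D aroma map retains most of the sensor-array information, which is what justifies reading class separation off the PCA plot.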


2014 ◽  
Vol 644-650 ◽  
pp. 1573-1576
Author(s):  
Hong Jian Zhang ◽  
Ping He ◽  
Chao Liu ◽  
Yuan Guo

This paper presents an improved Local Binary Pattern (LBP) operator for feature extraction which considers both the sign and the magnitude information of the local difference between neighborhood and center pixels. The image is first divided into small blocks, from which improved LBP histograms are extracted and concatenated into a single feature histogram. Then, the Principal Component Analysis (PCA) method is used to reduce the feature dimensions. Finally, recognition is performed by a nearest-neighbor classifier with the Chi-square statistic as the dissimilarity measure. Experiments on the AR face image database using the leave-one-out (LOO) procedure show that this method achieves a higher recognition rate and is more robust than the original LBP.
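The final classification step, nearest neighbor under the Chi-square histogram distance, can be sketched directly; the three-bin histograms below are toy stand-ins for the concatenated LBP histograms:

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two (concatenated LBP) histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nearest_neighbor(query, gallery, labels):
    """Classify by the gallery histogram with the smallest Chi-square distance."""
    dists = [chi_square(query, g) for g in gallery]
    return labels[int(np.argmin(dists))]

gallery = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])  # toy 3-bin histograms
labels = ["subject_A", "subject_B"]
print(nearest_neighbor(np.array([0.6, 0.3, 0.1]), gallery, labels))  # subject_A
```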


2020 ◽  
Vol 9 (1) ◽  
pp. 18 ◽  
Author(s):  
Siyang Chen ◽  
Yunsheng Zhang ◽  
Ke Nie ◽  
Xiaoming Li ◽  
Weixi Wang

This paper presents an automatic building extraction method which utilizes a photogrammetric digital surface model (DSM) and digital orthophoto map (DOM) with the help of historical digital line graphic (DLG) data. To reduce the need for manual labeling, the initial labels were obtained automatically from historical DLGs. Nonetheless, a proportion of these labels are incorrect due to changes (e.g., new constructions, demolished buildings). To select clean samples, an iterative method using a random forest (RF) classifier was proposed to remove possibly incorrect labels. To obtain effective features, deep features extracted from the normalized DSM (nDSM) and the DOM using pre-trained fully convolutional networks (FCN) were combined. To control the computation cost and alleviate redundancy, the principal component analysis (PCA) algorithm was applied to reduce the feature dimensions. Three data sets in two areas were employed and evaluated in two respects; the three DLGs carried label noise levels of 15%, 65%, and 25%. The results demonstrate that the proposed method can effectively select clean samples and maintain acceptable quality of the extracted results in both pixel-based and object-based evaluations.
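The PCA step here compresses the combined per-pixel FCN feature vectors before classification. An SVD-based sketch of that compression (the 128-dimensional features are an invented stand-in for the concatenated nDSM/DOM deep features):

```python
import numpy as np

def pca_compress(features, k):
    """Reduce per-pixel deep feature vectors to k dimensions via SVD-based PCA."""
    mu = features.mean(axis=0)
    _, _, Vt = np.linalg.svd(features - mu, full_matrices=False)
    return (features - mu) @ Vt[:k].T              # rows of Vt are principal axes

rng = np.random.default_rng(9)
deep = rng.normal(size=(500, 128))   # e.g. 500 pixels x 128 FCN feature channels
compact = pca_compress(deep, 16)
print(compact.shape)  # (500, 16)
```

The compacted features then feed the iterative RF training/label-cleaning loop at a fraction of the original cost.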

