Insights From Principal Component Analysis Applied to Py-GCMS Study of Indian Coals and Their Solvent Extracted Clean Coal Products


Author(s):  
Abyansh Roy ◽  
Heena Dhawan ◽  
Sreedevi Upadhyayula ◽  
Hariprasad Kodamana

Abstract The present work studies five Indian coals and their solvent-extracted clean coal products using Py-GCMS analysis and correlates the characterization data through principal component analysis. The pyrolysis products of the original coals and the super clean coals were classified as mono-, di- and tri-aromatics, while other prominent products included cycloalkanes, n-alkanes, and alkenes ranging from C10–C29. Principal component analysis, a dimensionality reduction technique, reduced the number of input variables in the characterization dataset; the inferences on the relative composition of constituent compounds and functional groups, and the structural insights drawn from the scores and loadings plots, were consistent with the experimental observations. ATR-FTIR studies confirmed the reduced ash concentration in the super clean coals and the presence of aromatics. Together, the Py-GCMS data and the ATR-FTIR spectra showed that the super clean coals behaved similarly for both coking and non-coking coals, with higher aromatic concentrations than the raw coals. The Neyveli lignite super clean coal showed some structural similarity with the original coals, whereas the other super clean coals were structurally similar to one another but not to their original coal samples, confirming the selective action of the e,N solvent in solubilizing the polycondensed aromatic structures in the coals.
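Below is a minimal sketch, for illustration only, of the kind of scores-and-loadings analysis described above: PCA applied to a matrix of Py-GCMS-derived compound-class fractions. The sample names, class labels, and all numbers are placeholders, not data from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

classes = ["mono-aromatics", "di-aromatics", "tri-aromatics",
           "cycloalkanes", "n-alkanes", "alkenes"]

# Hypothetical relative-abundance matrix: rows are coal samples, columns are
# the compound classes above. The numbers are placeholders, not study data.
X = np.array([
    [0.22, 0.10, 0.05, 0.18, 0.30, 0.15],   # raw coal A (hypothetical)
    [0.35, 0.18, 0.08, 0.12, 0.18, 0.09],   # super clean coal A (hypothetical)
    [0.20, 0.09, 0.04, 0.20, 0.32, 0.15],   # raw coal B (hypothetical)
    [0.37, 0.20, 0.09, 0.10, 0.16, 0.08],   # super clean coal B (hypothetical)
])

# Autoscale so every compound class contributes comparably, then project
# onto the first two principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
loadings = pca.components_.T                  # one row per compound class

print("explained variance ratio:", pca.explained_variance_ratio_)
for name, (l1, l2) in zip(classes, loadings):
    print(f"{name:15s}  PC1 {l1:+.2f}  PC2 {l2:+.2f}")
```

Plotting the scores (one point per coal sample) against the loadings (one arrow per compound class) is what allows raw and clean coals to be compared in terms of their dominant pyrolysis products.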


Author(s):  
Maryam Abedini ◽  
Horriyeh Haddad ◽  
Marzieh Faridi Masouleh ◽  
Asadollah Shahbahrami

This study proposes an image denoising algorithm based on sparse representation and Principal Component Analysis (PCA). The proposed algorithm includes the following steps. First, the noisy image is divided into overlapped [Formula: see text] blocks. Second, the discrete cosine transform is applied as a dictionary for the sparse representation of the vectors created by the overlapped blocks. To calculate the sparse vector, the orthogonal matching pursuit algorithm is used. Then, the dictionary is updated by means of the PCA algorithm to achieve the sparsest representation of vectors. Since the signal energy, unlike the noise energy, is concentrated on a small dataset by transforming into the PCA domain, the signal and noise can be well distinguished. The proposed algorithm was implemented in a MATLAB environment and its performance was evaluated on some standard grayscale images under different levels of standard deviations of white Gaussian noise by means of peak signal-to-noise ratio, structural similarity indexes, and visual effects. The experimental results demonstrate that the proposed denoising algorithm achieves significant improvement compared to dual-tree complex discrete wavelet transform and K-singular value decomposition image denoising methods. It also obtains competitive results with the block-matching and 3D filtering method, which is the current state-of-the-art for image denoising.
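As a rough, simplified sketch of the pipeline described above (overlapping blocks, a DCT dictionary, OMP sparse coding, and a PCA-based dictionary update), the following Python code is illustrative only: the block size, stride, sparsity level, and number of retained components are assumed values, and the final step of averaging the overlapping blocks back into an image is omitted.

```python
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import PCA
from sklearn.linear_model import OrthogonalMatchingPursuit

def extract_blocks(img, size=8, stride=4):
    """Collect overlapping size x size blocks as columns of a matrix."""
    cols = []
    for i in range(0, img.shape[0] - size + 1, stride):
        for j in range(0, img.shape[1] - size + 1, stride):
            cols.append(img[i:i + size, j:j + size].ravel())
    return np.array(cols).T                    # shape (size*size, n_blocks)

def dct_dictionary(size=8):
    """Separable 2-D DCT dictionary used for the initial sparse coding."""
    d1 = dct(np.eye(size), norm='ortho', axis=0)
    D = np.kron(d1, d1)
    return D / np.linalg.norm(D, axis=0)       # unit-norm atoms

def sparse_code(Y, D, n_nonzero=5):
    """Code every block (column of Y) over dictionary D with OMP."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D, Y)
    return np.atleast_2d(omp.coef_).T          # (n_atoms, n_blocks)

def denoise_blocks(noisy_img, size=8, n_components=32):
    Y = extract_blocks(noisy_img, size)
    D = dct_dictionary(size)
    X = sparse_code(Y, D)                      # initial sparse estimates
    # Simplified "dictionary update": learn the principal components of the
    # current estimates, where signal energy concentrates while noise
    # energy spreads out, and project the noisy blocks onto that subspace.
    pca = PCA(n_components=n_components).fit((D @ X).T)
    proj = pca.transform(Y.T)                  # blocks in the PCA domain
    return pca.inverse_transform(proj).T       # denoised block estimates
```

A full implementation would iterate the coding and update steps and re-assemble the denoised blocks by averaging their overlapping pixels; this sketch only shows where the DCT dictionary, OMP, and PCA fit in.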


2018 ◽  
Vol 15 (3) ◽  
pp. 172988141878311 ◽  
Author(s):  
Nannan Wang ◽  
Wenxuan Shi ◽  
Ci’en Fan ◽  
Lian Zou

Image deblurring is a challenging problem in image processing that aims to reconstruct an original high-quality image from a blurred measurement caused by various factors, for example, imperfect focusing by the imaging system or the differing scene depths that appear commonly in everyday photos. Recently, sparse representation, whose basic idea is to code an image patch as a linear combination of a few atoms chosen from an over-complete dictionary, has shown promising results in image deblurring. Building on this and on another useful property, nonlocal self-similarity, some researchers have developed nonlocal sparse regularization models that unify local sparsity and nonlocal self-similarity in a variational framework for image deblurring. In such models, the similarity evaluation used to search for similar image patches is indispensable and strongly influences deblurring performance. Although the traditional Euclidean distance is a common choice of similarity metric, it can lead to inferior performance because it fails to capture the intrinsic structure of image patches. Consequently, in this article, based on the structural similarity index and principal component analysis, we propose nonlocal sparse regularization-based image deblurring with two novel similarity criteria, the structural similarity distance and the principal component analysis-subspace Euclidean distance, to improve deblurring accuracy. The structural similarity index is commonly used for assessing perceptual image quality, and principal component analysis is widely used in pattern recognition and dimensionality reduction. In comprehensive experiments, nonlocal sparse regularization-based image deblurring with the proposed similarity criteria achieved higher peak signal-to-noise ratios and better consistency with subjective visual perception than state-of-the-art deblurring algorithms.
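For illustration, the two similarity criteria could be sketched roughly as follows; the patch-size requirement (at least 7 x 7 for the default SSIM window), the number of retained components, and the function names are assumptions rather than the authors' implementation.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.decomposition import PCA

def ssim_distance(patch_a, patch_b, data_range=1.0):
    """SSIM-based distance: 0 for identical patches, larger for dissimilar
    ones. Patches must be at least 7 x 7 for the default SSIM window."""
    return 1.0 - ssim(patch_a, patch_b, data_range=data_range)

def pca_subspace_distances(patches, ref_idx, n_components=8):
    """Euclidean distances from a reference patch to all other patches,
    measured after projecting every patch onto the leading principal
    components of the patch set."""
    flat = patches.reshape(len(patches), -1)     # (n_patches, patch_dim)
    proj = PCA(n_components=n_components).fit_transform(flat)
    return np.linalg.norm(proj - proj[ref_idx], axis=1)
```

Patches judged close under either criterion would then be grouped together for the nonlocal sparse regularization step.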


2021 ◽  
pp. 1-16
Author(s):  
G. Rajeswari ◽  
P. Ithaya Rani

Facial occlusions such as sunglasses, masks, and caps make it difficult to reconstruct the partially occluded regions of a facial image. This paper proposes a novel hybrid machine learning approach for occlusion removal based on the Structural Similarity Index Measure (SSIM) and Principal Component Analysis (PCA), called SSIM_PCA. The proposed system comprises two stages. In the first stage, a Face Similar Matrix (FSM) guided by the Structural Similarity Index Measure is generated to provide the information needed to recover the lost regions of the face image; the FSM yields Related Face (RF) images similar to the probe image. In the second stage, these RF images are used as input data to generate eigenspaces with PCA and reconstruct the occluded face region, exploiting the relationship between the occluded region and the related face images, which contain relevant data for recovering the occluded area. Experimental results on three standard datasets, viz. Caspeal-R1, IMFDB, and FEI, show that the proposed method works well under illumination changes and occlusion of facial images.
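A rough sketch of the two-stage idea follows, assuming grayscale faces scaled to [0, 1], a boolean occlusion mask, and a NumPy gallery array of shape (n_faces, H, W); the paper's Face Similar Matrix construction is more involved than this simplification.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.decomposition import PCA

def related_faces(probe, gallery, occ_mask, k=10):
    """Stage 1: rank gallery faces by SSIM against the probe, comparing only
    the visible region (occ_mask is True where the probe is occluded)."""
    fill = probe[~occ_mask].mean()               # neutral value for the hole
    masked_probe = np.where(occ_mask, fill, probe)
    scores = [ssim(masked_probe, np.where(occ_mask, fill, face),
                   data_range=1.0)
              for face in gallery]
    best = np.argsort(scores)[::-1][:k]
    return gallery[best]                         # the Related Face (RF) set

def reconstruct(probe, rf, occ_mask, n_components=5):
    """Stage 2: project the probe onto the eigenspace of the RF images and
    copy the reconstruction back into the occluded pixels only."""
    flat_rf = rf.reshape(len(rf), -1)
    pca = PCA(n_components=min(n_components, len(rf) - 1)).fit(flat_rf)
    recon = pca.inverse_transform(pca.transform(probe.reshape(1, -1)))[0]
    recon = recon.reshape(probe.shape)
    return np.where(occ_mask, recon, probe)
```

The key design point is that the eigenspace is built only from faces already judged similar to the probe, so the PCA reconstruction fills the occluded pixels with plausible, identity-consistent content rather than an average face.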


VASA ◽  
2012 ◽  
Vol 41 (5) ◽  
pp. 333-342 ◽  
Author(s):  
Kirchberger ◽  
Finger ◽  
Müller-Bühl

Background: The Intermittent Claudication Questionnaire (ICQ) is a short questionnaire for the assessment of health-related quality of life (HRQOL) in patients with intermittent claudication (IC). The objective of this study was to translate the ICQ into German and to investigate the psychometric properties of the German ICQ version in patients with IC. Patients and methods: The original English version was translated using a forward-backward method. The resulting German version was reviewed by the author of the original version and an experienced clinician. Finally, it was tested for clarity with 5 German patients with IC. A sample of 81 patients was administered the German ICQ. The sample consisted of 58.0 % male patients with a median age of 71 years and a median IC duration of 36 months. Tests of feasibility included completeness of questionnaires, completion time, and ratings of clarity, length and relevance. Reliability was assessed through a retest in 13 patients after 14 days and analysis of Cronbach’s alpha for internal consistency. Construct validity was investigated using principal component analysis. Concurrent validity was assessed by correlating the ICQ scores with the Short Form 36 Health Survey (SF-36) as well as clinical measures. Results: The ICQ was completely filled in by 73 subjects (90.1 %) with an average completion time of 6.3 minutes. Cronbach’s alpha coefficient reached 0.75. The intra-class correlation for test-retest reliability was r = 0.88. Principal component analysis resulted in a three-factor solution. The first factor explained 51.5 % of the total variation, and all items had loadings of at least 0.65 on it. The ICQ was significantly associated with the SF-36 and treadmill walking distances, whereas no association was found for resting ABPI. Conclusions: The German version of the ICQ demonstrated good feasibility, satisfactory reliability and good validity. Responsiveness should be investigated in further validation studies.
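For readers who want to reproduce the two reported statistics on their own item-level data, a minimal sketch of Cronbach's alpha and the first-factor loadings from a principal component analysis is given below; the item-matrix layout and the standardization choice are assumptions, not the study's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of questionnaire item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def first_component_summary(items):
    """Share of variance explained by the first principal component and the
    loading of each item on it (computed on standardized item scores)."""
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    pca = PCA().fit(z)
    loadings = pca.components_[0] * np.sqrt(pca.explained_variance_[0])
    return pca.explained_variance_ratio_[0], loadings
```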

