Improving PET Receptor Binding Estimates from Logan Plots Using Principal Component Analysis

2007 ◽  
Vol 28 (4) ◽  
pp. 852-865 ◽  
Author(s):  
Aniket Joshi ◽  
Jeffrey A Fessler ◽  
Robert A Koeppe

This work reports a principal component analysis (PCA)-based approach for reducing bias in distribution volume ratio (DVR) estimates from Logan plots in positron emission tomography (PET). The proposed PCA method is compared with existing bias-removal methods, for both single-estimate PET studies and intervention studies in which pre- and post-intervention estimates are made. Bias in Logan-based DVR estimates arises from noise in the PET time-activity curves (TACs), which propagates as correlated errors in the dependent and independent variables of the Logan equation. Intervention studies show this same bias but also higher variance in DVR estimates. In this work, noise in the TACs was reduced by fitting the curves to a low-dimensional PCA-based linear model, leading to reduced bias and variance in DVR. To validate the approach, TACs with realistic noise were simulated for an 11C-labeled tracer with carfentanil (CFN)-like kinetics for both single-measurement and intervention studies. PCA and the existing methods were applied to the simulated data, and their performance was compared by statistical analysis. The results indicated that existing methods either removed only part of the bias or reduced bias at the expense of precision. The proposed method removed ∼90% of the bias while also improving precision in both single- and dual-measurement simulations. After validation in simulations, PCA, along with the existing methods, was applied to human [11C]CFN data acquired for both single estimation of DVR and dual-estimation intervention studies. Results in the human scans were similar to those seen in the simulation studies.
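
For illustration, the sketch below shows the general idea of denoising TACs with a low-dimensional PCA basis before Logan analysis. It is not the authors' implementation: the array shapes, the number of components, and the reference-region form of the Logan plot (with the reference k2' term neglected) are assumptions made here for brevity.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def pca_denoise_tacs(tacs, n_components=3):
    """Fit noisy time-activity curves to a low-dimensional PCA basis.

    tacs: (n_curves, n_frames) array; returns the denoised reconstruction.
    """
    mean = tacs.mean(axis=0)
    centered = tacs - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # leading principal components
    coeffs = centered @ basis.T          # least-squares fit of each TAC to the basis
    return coeffs @ basis + mean

def logan_dvr(target, reference, times, t_star_idx):
    """Slope of the reference-region Logan plot after t* (approximate DVR;
    the reference-region k2' term is neglected in this sketch)."""
    int_target = cumulative_trapezoid(target, times, initial=0)
    int_ref = cumulative_trapezoid(reference, times, initial=0)
    y = int_target[t_star_idx:] / target[t_star_idx:]
    x = int_ref[t_star_idx:] / target[t_star_idx:]
    slope, _ = np.polyfit(x, y, 1)
    return slope

Denoising all TACs of a study with pca_denoise_tacs and then running logan_dvr on each curve gives DVR estimates whose bias and variance can be compared against those obtained from the raw curves.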

Cancers ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 2342
Author(s):  
Corentin Martens ◽  
Olivier Debeir ◽  
Christine Decaestecker ◽  
Thierry Metens ◽  
Laetitia Lebrun ◽  
...  

Recent works have demonstrated the added value of dynamic amino acid positron emission tomography (PET) for glioma grading and genotyping, biopsy targeting, and recurrence diagnosis. However, most of these studies are based on hand-crafted qualitative or semi-quantitative features extracted from the mean time-activity curve within predefined volumes. Voxelwise analysis of dynamic PET data could instead provide better insight into the intra-tumor heterogeneity of gliomas. In this work, we investigate the ability of principal component analysis (PCA) to extract relevant quantitative features from a large number of motion-corrected [S-methyl-11C]methionine ([11C]MET) PET frames. We first demonstrate the robustness of our methodology to noise by means of numerical simulations. We then build a PCA model from dynamic [11C]MET acquisitions of 20 glioma patients. In a distinct cohort of 13 glioma patients, we compare the parametric maps derived from our PCA model to those provided by the classical one-compartment pharmacokinetic model (1TCM). We show that our PCA model outperforms the 1TCM in distinguishing characteristic dynamic uptake behaviors within the tumor, while being less computationally expensive and not requiring arterial sampling. Such a methodology could be valuable for assessing tumor aggressiveness locally, with applications in treatment planning and response evaluation. This work further supports the added value of dynamic over static [11C]MET PET in gliomas.
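
As a rough sketch of how voxelwise parametric maps can be derived from such a PCA model (an assumed workflow, not the authors' code), one can pool TACs from the training cohort, fit a PCA basis, and project each voxel of a new dynamic acquisition onto that basis:

import numpy as np
from sklearn.decomposition import PCA

def fit_pca_model(train_tacs, n_components=3):
    """train_tacs: (n_voxels_total, n_frames) TACs pooled from the training cohort."""
    return PCA(n_components=n_components).fit(train_tacs)

def parametric_maps(pca, dynamic_volume):
    """dynamic_volume: (n_frames, x, y, z) motion-corrected dynamic acquisition.
    Returns one 3-D map of PCA coefficients per principal component."""
    n_frames = dynamic_volume.shape[0]
    voxel_tacs = dynamic_volume.reshape(n_frames, -1).T     # (n_voxels, n_frames)
    scores = pca.transform(voxel_tacs)                      # coefficients per voxel
    return scores.T.reshape((pca.n_components_,) + dynamic_volume.shape[1:])

Each resulting map summarizes how strongly a voxel's uptake curve expresses one of the characteristic dynamic behaviors captured by the model, with no arterial input function required.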


Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 548 ◽  
Author(s):  
Yuqing Sun ◽  
Jun Niu

Hydrological regionalization is a useful step in hydrological modeling and prediction. Regionalization is not always straightforward, however, owing to the lack of long-term hydrological data and the complex multi-scale variability embedded in the data. This study examines the multiscale soil moisture variability of data simulated on a grid-cell basis by a large-scale hydrological model, and clusters the grid-cell soil moisture data using wavelet-based multiscale entropy and principal component analysis, over the Xijiang River basin in South China for the period 2002–2010. The regionalization of 169 grid cells at a spatial resolution of 0.5° × 0.5° produced homogeneous groups based on the pattern of wavelet-based entropy information. Four distinct modes explain 80.14% of the total embedded variability of the transformed wavelet power across different timescales. Moreover, the possible implications of the regionalization results for local hydrological applications, such as parameter estimation for an ungauged catchment and designing a uniform prediction strategy for a sub-area in a large-scale basin, are discussed.
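
A minimal sketch of this type of workflow, assuming PyWavelets and scikit-learn; the wavelet family, decomposition depth, number of modes, and number of groups are placeholder choices, not the paper's settings:

import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def multiscale_wavelet_entropy(series, wavelet="db4", level=5):
    """Shannon entropy of the normalized coefficient energies at each
    decomposition level (one entropy value per timescale)."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    entropies = []
    for c in coeffs:
        p = c ** 2 / np.sum(c ** 2)
        entropies.append(-np.sum(p * np.log(p + 1e-12)))
    return np.array(entropies)

def regionalize(soil_moisture, n_modes=4, n_groups=5):
    """soil_moisture: (n_cells, n_timesteps) simulated series, one row per grid cell."""
    features = np.array([multiscale_wavelet_entropy(ts) for ts in soil_moisture])
    modes = PCA(n_components=n_modes).fit_transform(features)   # leading modes
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(modes)

The cluster labels returned for the grid cells form the homogeneous groups that the regionalization is after.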


2014 ◽  
Vol 556-562 ◽  
pp. 4317-4320
Author(s):  
Qiang Zhang ◽  
Li Ping Liu ◽  
Chao Liu

As a zero-emission mode of transportation, electric vehicles (EVs) are coming into increasingly widespread daily use. The EV charging station is an important component of the Smart Grid, which now faces the challenges of big data. This paper presents a data compression and reconstruction method based on Principal Component Analysis (PCA). The Normalized Absolute Percent Error (NAPE) of the reconstructed data is used to balance the compression ratio against reconstruction quality. Using simulated data, the effectiveness of data compression and reconstruction for EV charging stations is verified.
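
A minimal sketch of PCA-based compression and reconstruction with an error check of this kind is shown below. The NAPE formula used here is one common normalized absolute-percent-error definition and is an assumption, since the paper's exact definition is not given in the abstract.

import numpy as np

def pca_compress(data, k):
    """data: (n_records, n_features) charging-station measurements; keep top-k PCs."""
    mean = data.mean(axis=0)
    u, s, vt = np.linalg.svd(data - mean, full_matrices=False)
    scores = u[:, :k] * s[:k]                 # compressed representation
    return scores, vt[:k], mean               # store scores + basis + mean

def pca_reconstruct(scores, components, mean):
    return scores @ components + mean

def nape(original, reconstructed):
    """Normalized absolute percent error (one common definition, assumed here)."""
    return 100.0 * np.sum(np.abs(original - reconstructed)) / np.sum(np.abs(original))

Sweeping k and plotting NAPE against the achieved compression ratio is one way to pick the trade-off between compression and reconstruction quality that the abstract describes.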


2017 ◽  
Vol 78 (4) ◽  
pp. 708-712 ◽  
Author(s):  
Tenko Raykov ◽  
George A. Marcoulides ◽  
Tenglong Li

This note extends the results of the 2016 article by Raykov, Marcoulides, and Li to the case of correlated errors in a set of observed measures subjected to principal component analysis. It is shown that when at least two measures are fallible, the probability that any principal component—in particular the first—is error-free equals zero. In conjunction with the findings of Raykov et al., it is concluded that in practice no principal component can be perfectly reliable for a set of observed variables that are not all free of measurement error, whether or not their error terms correlate; hence, no principal component can in practice be error-free.
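
As a quick numerical illustration of the claim (not taken from the article), one can simulate two fallible measures with correlated errors plus one error-free measure and observe that the first principal component's squared correlation with the true score stays below 1:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_score = rng.normal(size=n)                              # common error-free score
errors = rng.multivariate_normal([0.0, 0.0],
                                 [[1.0, 0.5], [0.5, 1.0]],   # correlated error terms
                                 size=n)
observed = np.column_stack([true_score + errors[:, 0],       # two fallible measures
                            true_score + errors[:, 1],
                            0.8 * true_score])                # one error-free measure

centered = observed - observed.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]                                        # first principal component

r = np.corrcoef(pc1, true_score)[0, 1]
print(f"squared correlation of PC1 with the true score: {r**2:.3f}")  # below 1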


2019 ◽  
Author(s):  
Florian Wagner ◽  
Dalia Barkley ◽  
Itai Yanai

Single-cell RNA-Seq measurements are commonly affected by high levels of technical noise, posing challenges for data analysis and visualization. A diverse array of methods has been proposed to computationally remove noise by sharing information across similar cells or genes; however, their respective accuracies have been difficult to establish. Here, we propose a simple denoising strategy based on principal component analysis (PCA). We show that while PCA performed on raw data is biased towards highly expressed genes, this bias can be mitigated with a cell aggregation step, allowing the recovery of denoised expression values for both highly and lowly expressed genes. We benchmark our resulting ENHANCE algorithm and three previously described methods on simulated data that closely mimic real datasets, showing that ENHANCE provides the best overall denoising accuracy, recovering modules of co-expressed genes and cell subpopulations. Implementations of our algorithm are available at https://github.com/yanailab/enhance.
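
The sketch below illustrates the two-step idea of aggregation followed by PCA reconstruction. It is a simplified stand-in rather than the ENHANCE implementation, and the normalization, neighbor count, and number of components are placeholder choices.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def denoise_counts(counts, n_neighbors=10, n_components=20):
    """counts: (n_cells, n_genes) UMI count matrix (dense, for brevity)."""
    # Normalize to the median transcript count and log-transform for neighbor search.
    size = counts.sum(axis=1, keepdims=True)
    log_norm = np.log1p(counts / size * np.median(size))

    # Step 1: aggregate each cell with its nearest neighbors, which reduces
    # the bias of PCA toward highly expressed genes.
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(log_norm)
    _, idx = nn.kneighbors(log_norm)
    aggregated = counts[idx].sum(axis=1)          # (n_cells, n_genes)

    # Step 2: PCA on the aggregated profiles, then reconstruct from the
    # leading components to obtain denoised expression values.
    agg_norm = np.log1p(aggregated / aggregated.sum(axis=1, keepdims=True)
                        * np.median(size))
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(agg_norm)
    return np.expm1(pca.inverse_transform(scores))   # back to (scaled) count space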


Mathematics ◽  
2018 ◽  
Vol 6 (11) ◽  
pp. 269 ◽  
Author(s):  
Sergio Camiz ◽  
Valério Pillar

The identification of a reduced-dimensional representation of the data is among the main issues of exploratory multidimensional data analysis, and several solutions have been proposed in the literature, depending on the method. Principal Component Analysis (PCA) is the method that has received the most attention thus far, and several identification methods—the so-called stopping rules—have been proposed, giving very different results in practice; some comparative studies have also been carried out. Inconsistencies in these previous studies led us to clarify the distinction between signal and noise in PCA—and its limits—and to propose a new testing method. This consists of producing simulated data according to a predefined eigenvalue structure, including zero eigenvalues. From random populations built according to several such structures, reduced-size samples were extracted, and different levels of random normal noise were added to them. This controlled introduction of noise allows a clear distinction between expected signal and noise, the latter relegated to the non-zero sample eigenvalues that correspond to zero eigenvalues in the population. With this new method, we tested the performance of ten different stopping rules. For every method, structure, and noise level, both power (the ability to correctly identify the expected dimension) and type-I error (the detection of a dimension composed only of noise) were measured, by counting the relative frequencies with which the smallest non-zero eigenvalue in the population was recognized as signal in the samples and with which the largest zero eigenvalue was recognized as noise, respectively. In this way, the behaviour of the examined methods is clear and their comparison and evaluation are possible. The reported results show that both the generalization of Bartlett's test by Rencher and the bootstrap method by Pillar perform much better than all the others: both show reasonable power, decreasing with noise, and very good type-I error rates. Thus, more than the others, these methods deserve to be adopted.
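
A minimal sketch of this kind of experiment, with the broken-stick rule standing in for the ten stopping rules actually tested (it is not one of the paper's recommended tests), and with an arbitrary eigenvalue structure, sample size, and noise level:

import numpy as np

rng = np.random.default_rng(1)

def simulate_sample(eigenvalues, n_obs, noise_sd):
    """Sample from a population with a predefined eigenvalue structure
    (zero eigenvalues allowed), then add independent normal noise."""
    eigenvalues = np.asarray(eigenvalues, dtype=float)
    p = len(eigenvalues)
    q, _ = np.linalg.qr(rng.normal(size=(p, p)))          # random orthonormal basis
    scores = rng.normal(size=(n_obs, p)) * np.sqrt(eigenvalues)
    data = scores @ q.T                                   # covariance = q diag(eig) q'
    return data + rng.normal(scale=noise_sd, size=data.shape)

def broken_stick_dimension(data):
    """Count leading components whose eigenvalue share exceeds the broken-stick model."""
    eig = np.sort(np.linalg.eigvalsh(np.cov(data, rowvar=False)))[::-1]
    share = eig / eig.sum()
    p = len(eig)
    bs = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p for k in range(1, p + 1)])
    keep = share > bs
    return p if keep.all() else int(np.argmin(keep))

sample = simulate_sample([4, 2, 1, 0, 0, 0], n_obs=50, noise_sd=0.3)
print("estimated signal dimension:", broken_stick_dimension(sample))

Repeating such draws and tallying how often the third (smallest non-zero) dimension is kept and how often the fourth (zero in the population) is rejected gives empirical power and type-I error for the rule under test.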


2010 ◽  
Vol 08 (06) ◽  
pp. 995-1011 ◽  
Author(s):  
Hao Zheng ◽  
Hongwei Wu

Metagenomics is an emerging field in which the power of genomic analysis is applied to an entire microbial community, bypassing the need to isolate and culture individual microbial species. Assembly of metagenomic DNA fragments is much like the overlap-layout-consensus procedure for assembling isolated genomes, but is augmented by an additional binning step that assigns scaffolds, contigs, and unassembled reads to taxonomic groups. In this paper, we employed n-mer oligonucleotide frequencies as features and developed a hierarchical classifier (PCAHIER) for binning short (≤1,000 bp) metagenomic fragments. Principal component analysis was used to reduce the high dimensionality of the feature space. The hierarchical classifier consists of four layers of local classifiers implemented using linear discriminant analysis. These local classifiers bin prokaryotic DNA fragments into superkingdoms, fragments of the same superkingdom into phyla, fragments of the same phylum into genera, and fragments of the same genus into species, respectively. We evaluated the performance of PCAHIER using our own simulated data sets as well as the widely used simHC synthetic metagenome data set from the IMG/M system. The effectiveness of PCAHIER was demonstrated through comparisons against a non-hierarchical classifier and two existing binning algorithms (TETRA and Phylopythia).
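
The sketch below illustrates one local layer of such a pipeline (n-mer frequencies, PCA reduction, then a linear discriminant classifier). It is an assumed simplification for a single taxonomic level, not the PCAHIER implementation.

import numpy as np
from itertools import product
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def kmer_frequencies(fragment, k=4):
    """Relative frequencies of all 4**k oligonucleotides in a DNA fragment."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(fragment) - k + 1):
        j = index.get(fragment[i:i + k])
        if j is not None:               # skip k-mers containing ambiguous bases
            counts[j] += 1
    return counts / max(counts.sum(), 1)

def train_level_classifier(fragments, labels, k=4, n_components=50):
    """One local classifier: n-mer features -> PCA -> linear discriminant analysis."""
    X = np.array([kmer_frequencies(f, k) for f in fragments])
    pca = PCA(n_components=min(n_components, X.shape[1], len(X) - 1)).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    return pca, lda

Stacking four such classifiers, each trained on the fragments of one parent taxon, yields the hierarchical superkingdom-to-species scheme the abstract describes.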

