Practical aspects of compositional data analysis

Author(s):  
Vera Pawlowsky-Glahn ◽  
Ricardo A. Olea

In Chapter 6 we introduce additional aspects of the geostatistical approach presented in the preceding chapters that were not necessary for its theoretical development, but that are essential for the practical application of the method to compositional data. We discuss how to treat zeros in compositional data sets; how to model the required cross-covariances; how to compute expected values and estimation variances for the original, constrained variables; and how to build and interpret confidence intervals for estimated values.

As mentioned in Section 2.1, data sets with many zeros are as troublesome in compositional analysis as they are in standard multivariate analysis. In our approach, the additional restriction for compositional data is that zero values are not admissible for modeling. The justification for this restriction can be given using arithmetic arguments: a transformation that uses logarithms cannot be performed on zero values. This is the case for the logratio transformation that leads to the definition of an additive logistic normal distribution, as introduced by Aitchison (1986, p. 113). It is also the case for the additive logistic skew-normal distribution defined in Mateu-Figueras et al. (1998), following previous results by Azzalini and Dalla Valle (1996). The centered logratio transformation and the family of multivariate Box-Cox transformations discussed in Andrews et al. (1971), Rayens and Srinivasan (1991), and Barceló-Vidal (1996) also call for the exclusion of zero values. This restriction is certainly a source of discussion, although surprisingly so, since nobody would object to eliminating zeros, either by simple suppression of samples or by substitution with reasonable values, when dealing with a sample from a lognormal distribution in the univariate case. Recall that the logarithm of zero is undefined and that the sample space of the lognormal distribution is the positive real line, excluding the origin. In order to present our position on how to deal with zeros as clearly as possible, let us assume that only one of our components has zeros in some of the samples. Cases where more than one variable is affected can be analyzed by the methods described below.
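To make the arithmetic argument concrete, here is a minimal Python sketch (the function names and the value of delta are illustrative, not taken from the chapter). It shows that the additive logratio transform is undefined whenever a part is zero, and demonstrates one simple substitution strategy of the kind alluded to above: the zero is replaced by a small value and the remaining parts are rescaled so the composition still sums to 1.

```python
import numpy as np

def alr(x, ref=-1):
    """Additive logratio transform with respect to part `ref`;
    undefined whenever any part is zero."""
    x = np.asarray(x, dtype=float)
    if np.any(x <= 0):
        raise ValueError("alr requires strictly positive parts")
    return np.log(np.delete(x, ref) / x[ref])

def replace_zero(x, delta):
    """Replace zeros by a small value `delta` and rescale the nonzero
    parts so the composition still sums to 1 (a simple multiplicative
    replacement; delta is an illustrative choice)."""
    x = np.asarray(x, dtype=float)
    zero = x == 0
    out = x * (1 - delta * zero.sum())
    out[zero] = delta
    return out

comp = np.array([0.70, 0.25, 0.05, 0.0])   # one component is zero
print(alr(replace_zero(comp, delta=0.005)))  # well defined after replacement
```

The delicate point in practice is the choice of the replacement value, which is why the treatment of zeros deserves the discussion it receives here.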

2021 ◽  
Vol 8 (1) ◽  
pp. 271-299
Author(s):  
Michael Greenacre

Compositional data are nonnegative data carrying relative, rather than absolute, information; they often have a constant-sum constraint on the sample values, for example, proportions summing to 1 or percentages summing to 100%. Ratios between components of a composition are important since they are unaffected by the particular set of components chosen. Logarithms of ratios (logratios) are the fundamental transformation in the ratio approach to compositional data analysis; all data thus need to be strictly positive, so that zero values present a major problem. Components that group together based on domain knowledge can be amalgamated (i.e., summed) to create new components, and this can alleviate the problem of data zeros. Once compositional data are transformed to logratios, regular univariate and multivariate statistical analysis can be performed, such as dimension reduction, clustering, and modeling. Alternative methodologies that come close to the ideals of the logratio approach are also considered, especially those that avoid the problem of data zeros, which is particularly acute in large bioinformatic data sets.
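As a brief illustration of the amalgamation idea mentioned above, the following Python sketch (the counts are made up for illustration) sums three zero-prone components into a single group, closes the result to proportions, and then computes all pairwise logratios, which would have been undefined on the original parts.

```python
import numpy as np

# Hypothetical composition: counts of five components in three samples,
# with zeros scattered in the last three columns.
counts = np.array([[10, 20, 5, 0, 3],
                   [14, 18, 0, 2, 1],
                   [ 9, 25, 4, 1, 0]], dtype=float)

# Amalgamate (sum) the last three components into one group, e.g. on
# domain grounds; the summed part is positive even though members had zeros.
amalgamated = np.column_stack([counts[:, 0], counts[:, 1],
                               counts[:, 2:].sum(axis=1)])

# Close to proportions and take all pairwise logratios.
props = amalgamated / amalgamated.sum(axis=1, keepdims=True)
i, j = np.triu_indices(props.shape[1], k=1)
pairwise_logratios = np.log(props[:, i] / props[:, j])
print(pairwise_logratios)
```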


2021 ◽  
Vol 12 ◽  
Author(s):  
Michael Greenacre ◽  
Marina Martínez-Álvaro ◽  
Agustín Blasco

Microbiome and omics datasets are, by their intrinsic biological nature, of high dimensionality, characterized by counts of large numbers of components (microbial genes, operational taxonomic units, RNA transcripts, etc.). These data are generally regarded as compositional since the total number of counts identified within a sample is irrelevant. The central concept in compositional data analysis is the logratio transformation, the simplest being the additive logratios with respect to a fixed reference component. A full set of additive logratios is not isometric, that is, they do not reproduce the geometry of all pairwise logratios exactly, but their lack of isometry can be measured by the Procrustes correlation. The reference component can be chosen to maximize the Procrustes correlation between the additive logratio geometry and the exact logratio geometry, and for high-dimensional data there are many potential references. As a secondary criterion, minimizing the variance of the reference component's log-transformed relative abundance values makes the subsequent interpretation of the logratios even easier. On each of three high-dimensional omics datasets the additive logratio transformation was performed, using references identified according to the abovementioned criteria. For each dataset the compositional data structure was successfully reproduced, that is, the additive logratios were very close to being isometric. The Procrustes correlations achieved for these datasets were 0.9991, 0.9974, and 0.9902, respectively. We thus demonstrate, for high-dimensional compositional data, that additive logratios can provide a valid choice as transformed variables, which (a) are subcompositionally coherent, (b) explain 100% of the total logratio variance, and (c) come measurably very close to being isometric. The interpretation of additive logratios is much simpler than that of the complex isometric alternatives and, when the variance of the log-transformed reference is very low, it is even simpler since each additive logratio can be identified with a corresponding compositional component.
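The reference-selection procedure lends itself to a compact sketch. The Python below is a simplified reconstruction, not the authors' code: the centered logratio (clr) configuration serves as a stand-in for the exact geometry of all pairwise logratios (which it reproduces up to a constant factor, to which the Procrustes correlation is insensitive), and each candidate reference is scored by the Procrustes correlation between its additive logratio configuration and that exact configuration.

```python
import numpy as np

def procrustes_corr(X, Y):
    """Procrustes correlation between two centered configurations:
    sum of singular values of X'Y, normalized by the configurations'
    total sums of squares."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    return s.sum() / np.sqrt((X**2).sum() * (Y**2).sum())

def alr(P, ref):
    """Additive logratios with respect to component `ref`."""
    L = np.log(P)
    return np.delete(L - L[:, [ref]], ref, axis=1)

def clr(P):
    """Centered logratios; their geometry matches that of all pairwise
    logratios up to a constant factor."""
    L = np.log(P)
    return L - L.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(10), size=50)      # toy compositional data
exact = clr(P)
best = max(range(P.shape[1]),
           key=lambda r: procrustes_corr(alr(P, r), exact))
print(best, procrustes_corr(alr(P, best), exact))
```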


2020 ◽  
Author(s):  
Luis P.V. Braga ◽  
Dina Feigenbaum

Background: Covid-19 case data pose an enormous challenge to any analysis. The evaluation of such a global pandemic requires matching reports that follow different procedures, and even overcoming some countries' censorship that restricts publications.

Methods: This work proposes a methodology that could assist future studies. Compositional Data Analysis (CoDa) is proposed as the proper approach, as Covid-19 case data are compositional in nature. Under this methodology, three attributes were selected for each country: cumulative number of deaths (D); cumulative number of recovered patients (R); present number of patients (A).

Results: After the operation called closure, with c = 1, a ternary diagram and log-ratio plots, as well as compositional statistics, are presented. Cluster analysis is then applied, splitting the countries into discrete groups.

Conclusions: This methodology can also be applied to other data sets, such as countries, cities, provinces, or districts, in order to help authorities and governmental agencies improve their actions against a pandemic.
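For readers unfamiliar with the steps named above, a minimal Python sketch of the pipeline (the counts are hypothetical, not the study's data): apply the closure with c = 1 to the three attributes, move to logratio coordinates, and cluster the countries.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical counts per country: deaths (D), recovered (R), active (A).
counts = np.array([[1200,  9500, 2300],
                   [ 300, 12000,  800],
                   [2500,  6000, 7000],
                   [ 100,  4000,  500]], dtype=float)

# Closure with c = 1: rescale each row so D + R + A = 1.
composition = counts / counts.sum(axis=1, keepdims=True)

# Centered logratio coordinates, so that Euclidean distances between
# rows correspond to Aitchison distances between compositions.
clr = np.log(composition) - np.log(composition).mean(axis=1, keepdims=True)

# Hierarchical clustering of the countries in logratio coordinates.
groups = fcluster(linkage(clr, method="ward"), t=2, criterion="maxclust")
print(groups)
```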


2016 ◽  
Vol 62 (8) ◽  
pp. 692-703 ◽  
Author(s):  
Gregory B. Gloor ◽  
Gregor Reid

A workshop held at the 2015 annual meeting of the Canadian Society of Microbiologists highlighted compositional data analysis methods and the importance of exploratory data analysis for microbiome data sets generated by high-throughput DNA sequencing. A summary of the content of that workshop, a review of new methods of analysis, and information on the importance of careful analyses are presented herein. The workshop focussed on explaining the rationale behind the use of compositional data analysis and on demonstrating these methods for the examination of two microbiome data sets. A clear understanding of bioinformatics methodologies and the type of data being analyzed is essential, given the growing number of studies uncovering the critical role of the microbiome in health and disease and the need to understand alterations to its composition and function following intervention with fecal transplant, probiotics, diet, and pharmaceutical agents.


Author(s):  
Thomas P. Quinn ◽  
Ionas Erb

In the health sciences, many data sets produced by next-generation sequencing (NGS) contain only relative information because of biological and technical factors that limit the total number of nucleotides observed for a given sample. As mutually dependent elements, it is not possible to interpret any component in isolation, at least not without normalization. The field of compositional data analysis (CoDA) has emerged with alternative methods for relative data based on log-ratio transforms. However, NGS data often contain many more features than samples, and thus require creative new ways to reduce the dimensionality of the data without sacrificing interpretability. The summation of parts, called amalgamation, is a practical way of reducing dimensionality, but can introduce a non-linear distortion to the data. We exploit this non-linearity to propose a powerful yet interpretable dimension reduction method. In this report, we present data-driven amalgamation as a new method and conceptual framework for reducing the dimensionality of compositional data. Unlike expert-driven amalgamation, which requires prior domain knowledge, our data-driven amalgamation method uses a genetic algorithm to answer the question, "What is the best way to amalgamate the data to achieve the user-defined objective?" We present a user-friendly R package, called amalgam, that can quickly find the optimal amalgamation to (a) preserve the distance between samples, or (b) classify samples as diseased or not. Our benchmark on 13 real data sets confirms that these amalgamations compete with state-of-the-art unsupervised and supervised dimension reduction methods in terms of performance, but result in new variables that are much easier to understand: they are groups of features added together.
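The core idea can be sketched compactly. The Python below uses plain random search as a stand-in for the genetic algorithm in amalgam, and scores a candidate grouping by how well the amalgamated logratio distances correlate with the full logratio distances; the data, the number of groups, and the scoring function are illustrative assumptions rather than the package's actual objective.

```python
import numpy as np
from scipy.spatial.distance import pdist

def clr(P):
    """Centered logratio coordinates; Euclidean distances between rows
    are Aitchison distances between compositions."""
    L = np.log(P)
    return L - L.mean(axis=1, keepdims=True)

def amalgamate(P, assign, k):
    """Sum the parts of P into k groups given by `assign`, then close."""
    A = np.stack([P[:, assign == g].sum(axis=1) for g in range(k)], axis=1)
    return A / A.sum(axis=1, keepdims=True)

def score(P, assign, k, d_full):
    """Correlation between full and amalgamated logratio distances."""
    A = amalgamate(P, assign, k)
    if np.any(A == 0):              # empty group: logratios undefined
        return -np.inf
    return np.corrcoef(d_full, pdist(clr(A)))[0, 1]

rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(20), size=40)   # toy data: 40 samples, 20 parts
k = 4
d_full = pdist(clr(P))

# Plain random search as a stand-in for the genetic algorithm.
best_score, best_assign = -np.inf, None
for _ in range(2000):
    assign = rng.integers(0, k, size=P.shape[1])
    s = score(P, assign, k, d_full)
    if s > best_score:
        best_score, best_assign = s, assign

print(best_score, best_assign)
```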


2021 ◽  
Author(s):  
Michael Greenacre ◽  
Marina Martinez-Alvaro ◽  
Agustin Blasco

Background: Microbiome and omics datasets are, by their intrinsic biological nature, of high dimensionality, characterized by counts of large numbers of components (microbial genes, operational taxonomic units, RNA transcripts, etc.). These data are generally regarded as compositional since the total number of counts identified within a sample is irrelevant. The central concept in compositional data analysis is the logratio transformation, the simplest being the additive logratios with respect to a fixed reference component. A full set of additive logratios is not isometric in the sense of reproducing the geometry of all pairwise logratios exactly, but their lack of isometry can be measured by the Procrustes correlation. The reference component can be chosen to maximize the Procrustes correlation between the additive logratio geometry and the exact logratio geometry, and for high-dimensional data there are many potential references. As a secondary criterion, minimizing the variance of the reference component's log-transformed relative abundance values makes the subsequent interpretation of the logratios even easier. Finally, it is preferable that the reference component be well populated rather than rare, and substantive biological reasons might also guide the choice if several reference candidates are identified.

Results: On each of three high-dimensional datasets the additive logratio transformation was performed, using references identified according to the abovementioned criteria. For each dataset the compositional data structure was successfully reproduced, that is, the additive logratios were very close to being isometric. The Procrustes correlations achieved for these datasets were 0.9991, 0.9977, and 0.9997, respectively. In the third case, where the objective was to distinguish between three groups of samples, the approximation was made to the restricted logratio space of the between-group variance.

Conclusions: We show that for high-dimensional compositional data additive logratios can provide a valid choice as transformed variables that (1) are subcompositionally coherent, (2) explain 100% of the total logratio variance, and (3) come measurably very close to being isometric, that is, approximate the exact logratio geometry almost perfectly. The interpretation of additive logratios is simple and, when the variance of the log-transformed reference is very low, it is even simpler since each additive logratio can be identified with a corresponding compositional component.


2016 ◽  
Vol 27 (6) ◽  
pp. 1878-1891 ◽  
Author(s):  
Mehmet C Mert ◽  
Peter Filzmoser ◽  
Gottfried Endel ◽  
Ingrid Wilbacher

Compositional data analysis refers to analyzing relative information, based on ratios between the variables in a data set. Data from epidemiology are usually treated as absolute information in an analysis. We outline the differences between the two approaches for univariate and multivariate statistical analyses, using illustrative data sets from Austrian districts. Not only can the results of the analyses differ; the interpretation, in particular, differs. It is demonstrated that the compositional data analysis approach leads to new and interesting insights.
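A toy Python example of how the two views can diverge (the district counts are fabricated for illustration only): raw counts of two categories can be essentially uncorrelated, while their centered logratio coordinates are negatively correlated purely because of the closure constraint.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical district-level counts in three diagnostic categories;
# +1 avoids zeros so the logratios are defined.
counts = rng.poisson(lam=[50, 30, 20], size=(100, 3)).astype(float) + 1

# Absolute view: correlate raw counts of categories 1 and 2.
r_abs = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]

# Compositional view: correlate clr coordinates, i.e. each category
# relative to the geometric mean of all categories in the district.
P = counts / counts.sum(axis=1, keepdims=True)
clr = np.log(P) - np.log(P).mean(axis=1, keepdims=True)
r_rel = np.corrcoef(clr[:, 0], clr[:, 1])[0, 1]

print(r_abs, r_rel)   # the two views can disagree in sign and size
```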


1991 ◽  
Vol 10 (3) ◽  
pp. 276-283 ◽  
Author(s):  
Einar Ramsli

The accuracy with which an industrial robot brings the load to a position and holds it there is perhaps the most important characteristic of an industrial robot. Many researchers have consequently been interested in this field during recent years. A common method for characterizing an industrial robot's ability to return to a position is to use the terms "accuracy" and "repeatability," where accuracy characterizes the degree to which the actual measured value corresponds to a commanded value, and repeatability the closeness of agreement between repeated measured values, under the same conditions, to the same commanded value (ISO definitions). The normal approximation is regularly used when calculating the repeatability. A test of this assumption for six different industrial robots is reported in this article. Two approaches are used: the first examines the shape of the frequency function for the measured repeatability figures, and the second applies a chi-square test to the six data sets. The tests show that there is little chance that the deviation of an industrial robot follows a normal distribution; the deviation tends to have longer tails than the normal distribution. Simulation is used to examine the consequences of the invalid assumption of normality in the definition of repeatability. The conclusion is that it is reasonable to use the normal approximation when there is no strong evidence that the deviation distribution is negatively skewed.
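A sketch of the second approach in Python, run on simulated rather than measured deviations (the heavy-tailed t distribution is an assumption standing in for real robot data): bin the deviations into equal-count cells and apply a chi-square goodness-of-fit test against a normal distribution with matched mean and standard deviation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical positioning deviations (mm) from repeated moves to the
# same commanded pose; a heavy-tailed distribution stands in for the
# long tails reported for real robots.
dev = stats.t.rvs(df=3, scale=0.02, size=200, random_state=rng)

# Chi-square goodness-of-fit against a normal with matched mean/sd.
edges = np.quantile(dev, np.linspace(0, 1, 11))    # 10 equal-count bins
observed, _ = np.histogram(dev, bins=edges)
cdf = stats.norm(dev.mean(), dev.std(ddof=1)).cdf(edges)
expected = len(dev) * np.diff(cdf)
expected *= observed.sum() / expected.sum()        # renormalize the tails
chi2, p = stats.chisquare(observed, expected, ddof=2)  # 2 fitted params
print(chi2, p)
```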

