marginal data
Recently Published Documents


TOTAL DOCUMENTS: 16 (five years: 0)
H-INDEX: 5 (five years: 0)

2019 · Vol 27 (3) · pp. 388-396
Author(s): Devin Caughey, Mallory Wang

Social scientists are frequently interested in how populations evolve over time. Creating poststratification weights for surveys, for example, requires information on the weighting variables’ joint distribution in the target population. Typically, however, population data are sparsely available across time periods. Even when population data are observed, the content and structure of the data—which variables are observed and whether their marginal or joint distributions are known—differ across time, in ways that preclude straightforward interpolation. As a consequence, survey weights are often based only on the small subset of auxiliary variables whose joint population distribution is observed regularly over time, and thus fail to take full advantage of auxiliary information. To address this problem, we develop a dynamic Bayesian ecological inference model for estimating multivariate categorical distributions from sparse, irregular, and noisy data on their marginal (or partially joint) distributions. Our approach combines (1) a Dirichlet sampling model for the observed margins conditional on the unobserved cell proportions; (2) a set of equations encoding the logical relationships among different population quantities; and (3) a Dirichlet transition model for the period-specific proportions that pools information across time periods. We illustrate this method by estimating annual U.S. phone-ownership rates by race and region based on population data irregularly available between 1930 and 1960. This approach may be useful in a wide variety of contexts where scholars wish to make dynamic ecological inferences about interior cells from marginal data. A new R package estsubpop implements the method.
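The generative structure described in this abstract lends itself to a compact sketch. The following Python fragment simulates that structure for a hypothetical 2x2 table (a Dirichlet transition for period-specific cell proportions, plus a Dirichlet sampling model for margins observed only in some periods); all names and constants are illustrative, and this is not the estsubpop implementation.

```python
# Illustrative simulation of the model structure described above:
# cell proportions evolve via a Dirichlet transition, and margins
# are observed noisily and irregularly via a Dirichlet sampling model.
import numpy as np

rng = np.random.default_rng(0)

T = 30          # time periods (e.g., years)
alpha = 200.0   # transition concentration: larger = smoother dynamics
kappa = 500.0   # observation concentration: larger = less noisy margins

# 2x2 table of cell proportions, flattened:
# (race A, phone) (race A, no phone) (race B, phone) (race B, no phone)
theta = np.empty((T, 4))
theta[0] = rng.dirichlet(np.ones(4))
for t in range(1, T):
    # Dirichlet transition centered on the previous period's proportions
    theta[t] = rng.dirichlet(alpha * theta[t - 1])

# Margin operator: rows sum the cells into the phone/no-phone margin
M = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

# Margins are observed only in some periods (sparse, irregular data)
observed = [0, 3, 4, 9, 15, 22, 29]
margins = {t: rng.dirichlet(kappa * (M @ theta[t])) for t in observed}
print(margins[29])  # noisy phone-ownership margin in the last period
```

Inference runs this logic in reverse: given the sparse margins, the posterior over the unobserved cell proportions pools information across periods through the transition model.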



Author(s): Johan Andrés Vélez-Henao, Claudia María Garcia-Mazo

Electricity data are one of the key inputs in life cycle assessment (LCA). There are two approaches to modeling electricity in LCA studies: marginal data are used in consequential studies, whereas average data are used in attributional studies. The aim of this study is to identify the long-term marginal technology for electric power generation in Colombia until 2030. The marginal technology is the one capable of responding to small changes in market demand, and identifying it is an important issue when assessing the environmental impacts of providing electricity. Colombia is a developing country whose national power grid has historically been dominated by hydropower rather than fossil fuels. This particularity makes the Colombian national power grid vulnerable to climatic variations; the country therefore needs to introduce other renewable resources into the grid. This study uses consequential life cycle assessment and data from Colombian national plans for capacity changes in the power grid. The results show that, whereas the marginal electricity technology would most probably be hydropower, wind and solar power are projected to reach more than 1% of the national power grid by 2030.



2017
Author(s): Jeffrey Rouder, Richard D. Morey

Although teaching Bayes' theorem is popular, the standard approach, which targets posterior distributions of parameters, may be improved. We advocate teaching Bayes' theorem in a ratio form, where the ratio of posterior to prior beliefs equals the ratio of the conditional probability of the data to the marginal probability of the data. This form leads to the interpretation that the strength of evidence is relative predictive accuracy. With this approach, students are encouraged to view Bayes' theorem as an updating mechanism, to gain a deeper appreciation of the role of the prior and of marginal data, and to view estimation and model comparison from a unified perspective.
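In symbols, the ratio form described here can be written as follows, using a generic hypothesis H and data y (a standard rendering, not necessarily the authors' notation):

```latex
\[
\underbrace{\frac{P(H \mid y)}{P(H)}}_{\text{belief update}}
\;=\;
\underbrace{\frac{P(y \mid H)}{P(y)}}_{\text{relative predictive accuracy}}
\]
```

The right-hand side compares how well H predicts the data against how well the data are predicted on average (the marginal probability), which is the sense in which evidence is relative predictive accuracy.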



10.12737/7813 · 2015 · Vol 3 (1) · pp. 51-56
Author(s): Tatyana Averina

Logical and convenient methods from economic theory often go unused in management accounting and analysis. For example, an obstacle to applying the method of comparing marginal revenue with marginal cost may be the difficulty of deriving equations for these indicators. This paper presents an example of using regression analysis to separate costs into fixed and variable components and to derive marginal-data equations for determining the optimal volume of output.
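As a sketch of the workflow the paper describes, the following Python fragment regresses total cost on output to recover fixed and variable components, then equates marginal revenue with marginal cost to find the optimal output; the cost figures and the assumed linear demand curve are invented for illustration.

```python
# Sketch: split total cost into fixed and variable parts by regression,
# then solve MR(q) = MC(q) for the optimal output volume.
# All figures are illustrative.
import numpy as np

# Observed (output, total cost) pairs, e.g. from management accounts
q  = np.array([100, 150, 200, 250, 300, 350, 400], dtype=float)
tc = np.array([5200, 6900, 8900, 11200, 13800, 16700, 19900], dtype=float)

# Quadratic cost regression: TC(q) = c0 + c1*q + c2*q^2,
# so c0 estimates fixed cost and MC(q) = c1 + 2*c2*q is marginal cost
c2, c1, c0 = np.polyfit(q, tc, deg=2)
print(f"fixed cost ~ {c0:.0f}; MC(q) = {c1:.2f} + {2 * c2:.4f}*q")

# Assumed linear demand p(q) = a - b*q, hence MR(q) = a - 2*b*q
a, b = 120.0, 0.10

# Optimum where marginal revenue equals marginal cost:
# a - 2*b*q = c1 + 2*c2*q
q_opt = (a - c1) / (2 * b + 2 * c2)
print(f"optimal output ~ {q_opt:.0f} units")
```

With these invented figures the regression recovers a fixed cost of about 2700 and a rising marginal cost, and the MR = MC condition puts the optimum near the upper end of the observed output range.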



2013 · Vol 175 (2) · pp. 132-141
Author(s): Cristina Fuentes-Albero, Leonardo Melosi


Author(s): Fernando V. Bonassi, Lingchong You, Mike West

In studies of dynamic molecular networks in systems biology, experiments are increasingly exploiting technologies such as flow cytometry to generate data on marginal distributions of a few network nodes at snapshots in time. For example, levels of intracellular expression of a few genes, or cell surface protein markers, can be assayed at a series of interim time points and assumed steady-states under experimentally stimulated growth conditions in small cellular systems. Such marginal data on a small number of cellular markers will typically carry very limited information on the parameters and structure of dynamic network models, though experiments will typically be designed to expose variation in cellular phenotypes that are inherently related to some aspects of model parametrization and structure. Our work addresses statistical questions of how to integrate such data with dynamic stochastic models in order to properly quantify the information, or lack of information, that they carry relative to the assumed models. We present a Bayesian computational strategy coupled with a novel approach to summarizing and numerically characterizing biological phenotypes that are represented in terms of the resulting sample distributions of cellular markers. We build on Bayesian simulation methods and mixture modeling to define the approach to linking mechanistic mathematical models of network dynamics to snapshot data, using a toggle switch example integrating simulated and real data as context.
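One concrete computational pattern for this kind of integration is rejection-based approximate Bayesian computation (ABC) on summaries of the snapshot marginals. The fragment below is a minimal sketch under that reading; the one-node stochastic model, the summaries, and the tolerance are all invented for illustration and are not the authors' toggle-switch analysis.

```python
# Minimal rejection-ABC sketch: draw parameters from the prior,
# simulate snapshot marginals, summarize them, and keep draws whose
# summaries fall close to the observed ones.
import numpy as np

rng = np.random.default_rng(1)

def simulate_snapshot(k_on, k_off, n_cells=400, t_steps=100):
    """Crude stochastic dynamics of a marker x in many cells; returns
    the marginal sample of x at the final snapshot time."""
    x = np.zeros(n_cells)
    for _ in range(t_steps):
        x += k_on * rng.random(n_cells) - 0.05 * k_off * x
    return x

def summarize(sample):
    # Numerical characterization of the snapshot marginal (phenotype)
    return np.array([sample.mean(), sample.std()])

# "Observed" data generated at a known truth (k_on=2.0, k_off=1.0)
obs = summarize(simulate_snapshot(2.0, 1.0))

accepted = []
for _ in range(1500):
    k_on, k_off = rng.uniform(0.5, 4.0), rng.uniform(0.2, 2.0)
    sim = summarize(simulate_snapshot(k_on, k_off))
    if np.linalg.norm(sim - obs) < 0.5:       # tolerance on summaries
        accepted.append((k_on, k_off))

if accepted:
    print(len(accepted), "draws accepted; posterior mean ~",
          np.mean(accepted, axis=0).round(2))
else:
    print("no draws accepted; widen the tolerance")
```

In this toy setup the snapshot mean constrains the parameters mainly through their ratio, so the accepted draws trace a ridge in (k_on, k_off) rather than a point, illustrating how limited the information in marginal snapshot data can be.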



2011
Author(s): Cristina Fuentes-Albero, Leonardo Melosi


2010 · Vol 43 (3) · pp. 611-616
Author(s): Nicholas K. Sauter, Billy K. Poon

Constructing a model lattice to fit the observed Bragg diffraction pattern is straightforward for perfect samples, but indexing can be challenging when artifacts are present, such as poorly shaped spots, split crystals giving multiple closely aligned lattices, and outright superposition of patterns from aggregated microcrystals. To optimize the lattice model against marginal data, refinement can be performed using a subset of the observations from which the poorly fitting spots have been discarded. Outliers are identified by assuming a Gaussian error distribution for the best-fitting spots; points diverging from this distribution are culled. The set of remaining observations produces a superior lattice model, while the rejected observations can be used to identify a second crystal lattice, if one is present. The prevalence of outliers provides a potentially useful measure of sample quality. The described procedures are implemented for macromolecular crystallography within the autoindexing program labelit.index (http://cci.lbl.gov/labelit).
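The culling step described here lends itself to a short sketch. The following Python fragment iteratively fits a Gaussian to the currently retained residuals and rejects points that diverge from it; the synthetic residuals and the 3-sigma threshold are illustrative, not the labelit.index implementation.

```python
# Sketch of iterative outlier culling: assume the best-fitting spots
# have Gaussian residuals, estimate that Gaussian from the retained
# set, and cull points diverging from it.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic positional residuals: 180 well-fit spots plus 20 outliers
# (e.g., spots belonging to a second, closely aligned lattice)
residuals = np.concatenate([rng.normal(0.0, 0.05, 180),
                            rng.normal(0.6, 0.20, 20)])

kept = np.ones(residuals.size, dtype=bool)
for _ in range(5):  # estimate, cull, re-estimate until stable
    mu = residuals[kept].mean()
    sigma = residuals[kept].std()
    kept = np.abs(residuals - mu) < 3.0 * sigma

print(f"{kept.sum()} spots kept for lattice refinement; "
      f"{(~kept).sum()} rejected (candidates for a second lattice)")
```

The rejected set is what would feed a second round of indexing when a second lattice is suspected, and the rejection fraction itself serves as the sample-quality indicator mentioned above.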




