Parameter Inference: Recently Published Documents

Total documents: 226 (last five years: 91)
H-index: 20 (last five years: 4)

2021 · Vol 922 (2) · pp. 259
Author(s): M. Millea, C. M. Daley, T-L. Chou, E. Anderes, P. A. R. Ade, ...

Abstract: We perform the first simultaneous Bayesian parameter inference and optimal reconstruction of the gravitational lensing of the cosmic microwave background (CMB), using 100 deg² of polarization observations from the SPTpol receiver on the South Pole Telescope. These data reach noise levels as low as 5.8 μK arcmin in polarization, which are low enough that the typically used quadratic estimator (QE) technique for analyzing CMB lensing is significantly suboptimal. Conversely, the Bayesian procedure extracts all lensing information from the data and is optimal at any noise level. We infer the amplitude of the gravitational lensing potential to be A_ϕ = 0.949 ± 0.122 using the Bayesian pipeline, consistent with our QE pipeline result, but with 17% smaller error bars. The Bayesian analysis also provides a simple way to account for systematic uncertainties, performing a similar job as frequentist “bias hardening” or linear bias correction, and reducing the systematic uncertainty on A_ϕ due to polarization calibration from almost half of the statistical error to effectively zero. Finally, we jointly constrain A_ϕ along with A_L, the amplitude of lensing-like effects on the CMB power spectra, demonstrating that the Bayesian method can be used to easily infer parameters both from an optimal lensing reconstruction and from the delensed CMB, while exactly accounting for the correlation between the two. These results demonstrate the feasibility of the Bayesian approach on real data, and pave the way for future analysis of deep CMB polarization measurements with SPT-3G, Simons Observatory, and CMB-S4, where improvements relative to the QE can reach 1.5 times tighter constraints on A_ϕ and seven times lower effective lensing reconstruction noise.
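A constraint of the form A_ϕ = 0.949 ± 0.122 is what a Gaussian likelihood yields when a single amplitude rescales a fiducial template. As a toy illustration (this is not the paper's Bayesian pipeline; the band powers and noise levels below are invented), fitting one scale factor to a noisy template has a closed-form Gaussian posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one "lensing amplitude" A_phi rescales a fiducial
# band-power template; the data are the template at A_phi = 1 plus noise.
template = np.array([1.0, 0.8, 0.5, 0.3, 0.2])  # fiducial band powers (arbitrary units)
sigma = 0.15 * np.ones_like(template)           # assumed per-band noise
data = template + rng.normal(0.0, sigma)

# The Gaussian likelihood is quadratic in A_phi, so with a flat prior the
# posterior is Gaussian with analytic mean and variance:
w = template / sigma**2
var = 1.0 / np.sum(template * w)
mean = np.sum(data * w) * var

print(f"A_phi = {mean:.3f} +/- {np.sqrt(var):.3f}")
```

The inverse-variance weights `w` make this the minimum-variance estimate; in the real analysis the "template" and noise model come from the full lensing likelihood rather than a five-band toy.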


2021 · Vol 2021 (11) · pp. 027
Author(s): Benedict Bahr-Kalus, Daniele Bertacca, Licia Verde, Alan Heavens

Abstract: The peculiar motion of the observer, if not accurately accounted for, is bound to induce a well-defined clustering signal in the distribution of galaxies. This signal is related to the Kaiser rocket effect. Here we examine the amplitude and form of this effect, both analytically and numerically, and discuss possible implications for the analysis and interpretation of forthcoming cosmological surveys. For an idealized, cosmic-variance-dominated full-sky survey with a Gaussian selection function peaked at z ∼ 1.5, it is a >5σ effect that can in principle significantly bias the inference of cosmological parameters, especially for primordial non-Gaussianity. For forthcoming surveys with realistic masks and selection functions, the Kaiser rocket is not a significant concern for cosmological parameter inference, except perhaps for primordial non-Gaussianity studies. However, it is a systematic effect whose origin, nature, and imprint on galaxy maps are well known, and it should therefore be subtracted or mitigated. We present several approaches to do so.


Stats · 2021 · Vol 4 (3) · pp. 762-775
Author(s): Anthony Ebert, Kerrie Mengersen, Fabrizio Ruggeri, Paul Wu

Abstract: Approximate Bayesian computation is a likelihood-free inference method that relies on comparing model realisations to observed data with informative distance measures. We obtain functional data that are not only subject to noise along their y-axis but also to a random warping along their x-axis, which we refer to as the time axis. Conventional distances on functions, such as the L2 distance, are not informative under these conditions. The Fisher–Rao metric, previously generalised from the space of probability distributions to the space of functions, is an ideal objective function for aligning one function to another by warping the time axis. We assess the usefulness of alignment with the Fisher–Rao metric for approximate Bayesian computation with four examples: two simulation examples, an example about passenger flow at an international airport, and an example of hydrological flow modelling. We find that the Fisher–Rao metric works well as the objective function to minimise for alignment; however, once the functions are aligned, it is not necessarily the most informative distance for inference. This means that likelihood-free inference may require two distances: one for alignment and one for parameter inference.
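The two-distance idea can be sketched with a toy ABC rejection sampler. Here a brute-force time-shift search stands in for the Fisher–Rao alignment (it is not the actual metric), and a plain L2 distance on the aligned curves drives the accept/reject step; the model, prior, and tolerance are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)

def simulate(theta, shift):
    """Toy model: a bump of height theta, randomly shifted along the time axis."""
    return theta * np.exp(-((t - 0.5 - shift) ** 2) / 0.005)

def align(y, y_obs):
    """Stand-in for Fisher-Rao alignment: choose the circular time shift
    that minimises the L2 distance to the observed function."""
    shifts = [np.roll(y, k) for k in range(len(t))]
    return min(shifts, key=lambda s: np.linalg.norm(s - y_obs))

# Observed data: theta = 2 with an unknown warp (time shift).
y_obs = simulate(2.0, 0.1)

# ABC rejection: simulate, align (distance 1), then compare the aligned
# curves with a second distance (plain L2) for parameter inference.
accepted = []
for _ in range(500):
    theta = rng.uniform(0.0, 4.0)                # prior draw
    y = simulate(theta, rng.uniform(-0.2, 0.2))  # nuisance warp
    if np.linalg.norm(align(y, y_obs) - y_obs) < 1.0:
        accepted.append(theta)

print(f"posterior mean of theta ~ {np.mean(accepted):.2f}")
```

Without the alignment step, the random warp would dominate the L2 distance and almost nothing near the true theta would be accepted, which is exactly the failure mode the abstract describes.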


2021
Author(s): Anđela Davidović, Remy Chait, Gregory Batt, Jakob Ruess

Abstract: Understanding and characterising biochemical processes inside single cells requires experimental platforms that allow one to perturb and observe the dynamics of such processes, as well as computational methods to build and parameterise models from the collected data. Recent progress with experimental platforms and optogenetics has made it possible to expose each cell in an experiment to an individualised input and to automatically record cellular responses over days with fine time resolution. However, methods to infer parameters of stochastic kinetic models from single-cell longitudinal data have generally been developed under the assumption that experimental data are sparse and that responses of cells to at most a few different input perturbations can be observed. Here, we investigate and compare different approaches for calculating parameter likelihoods of single-cell longitudinal data based on approximations of the chemical master equation (CME), with a particular focus on coupling the linear noise approximation (LNA) or moment closure methods to a Kalman filter. We show that, as long as cells are measured sufficiently frequently, coupling the LNA to a Kalman filter allows one to accurately approximate likelihoods and to infer model parameters from data even in cases where the LNA provides poor approximations of the CME. Furthermore, the computational cost of filtering-based iterative likelihood evaluation scales advantageously in the number of measurement times and different input perturbations, and is thus ideally suited for data obtained from modern experimental platforms. To demonstrate the practical usefulness of these results, we perform an experiment in which single cells, equipped with an optogenetic gene expression system, are exposed to different light-input sequences and measured at several hundred time points, and we use parameter inference based on iterative likelihood evaluation to parameterise a stochastic model of the system.

Author summary: A common finding in the modelling of cellular processes is that the available data are not rich enough to uniquely determine the biological mechanism, or even to ensure identifiability of the parameters of a given model. Perturbing cellular processes with informative input stimuli and measuring dynamical responses may alleviate this problem. With the development of novel experimental platforms, we are now in a position to parallelise such perturbation experiments at the single-cell level. This raises a plethora of new questions. Is it more informative to diversify input perturbations but observe only a few cells for each input, or should we rather ensure that many cells are observed for only a few inputs? How can we calculate likelihoods and infer parameters of stochastic kinetic models from data sets in which each cell receives a different input perturbation? How does the computational efficiency of parameter inference methods scale with the number of inputs and the number of measurement times? Are there approaches that are particularly well suited for such data sets? In this paper, we investigate these questions using the CcaS/CcaR optogenetic system driving the expression of a fluorescent reporter protein as our primary case study.
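The filtering-based likelihood the abstract describes can be sketched for a scalar linear-Gaussian state-space model, a stand-in for the mean-and-variance dynamics the LNA produces (the model and all parameter values below are invented): each measurement contributes one predict/update step, so the cost scales linearly with the number of time points.

```python
import numpy as np

rng = np.random.default_rng(2)

def kalman_loglik(y, a, b, q, r, m0=0.0, p0=10.0):
    """Iterative (filtering-based) log-likelihood for the model
    x_{t+1} = a*x_t + b + N(0, q),  y_t = x_t + N(0, r):
    one prediction/update step per measurement."""
    m, p, ll = m0, p0, 0.0
    for yt in y:
        m, p = a * m + b, a * a * p + q       # predict
        s = p + r                             # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (yt - m) ** 2 / s)
        k = p / s                             # Kalman gain
        m, p = m + k * (yt - m), (1 - k) * p  # update
    return ll

# Simulate data from the model and compare likelihoods.
a_true, b_true, q, r = 0.9, 1.0, 0.5, 0.25
x, y = 0.0, []
for _ in range(200):
    x = a_true * x + b_true + rng.normal(0, np.sqrt(q))
    y.append(x + rng.normal(0, np.sqrt(r)))

print(kalman_loglik(y, a_true, b_true, q, r))
print(kalman_loglik(y, 0.5, b_true, q, r))  # mis-specified dynamics score lower
```

In the paper's setting the predict step would instead propagate the LNA (or moment-closure) mean and covariance under each cell's individual light input, but the per-measurement structure of the likelihood evaluation is the same.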

