When tension is just a fluctuation

2021 ◽  
Vol 647 ◽  
pp. L5
Author(s):  
B. Joachimi ◽  
F. Köhlinger ◽  
W. Handley ◽  
P. Lemos

Summary statistics of the likelihood, such as the Bayesian evidence, offer a principled way of comparing models and assessing tension between, or within, the results of physical experiments. Noisy realisations of the data induce scatter in these model comparison statistics. For a realistic case of cosmological inference from large-scale structure, we show that the logarithm of the Bayes factor attains a scatter of order unity, which increases significantly with stronger tension between the models under comparison. We develop an approximate procedure that quantifies the sampling distribution of the evidence at a small additional computational cost and apply it to real data to demonstrate the impact of the scatter, which acts to reduce the significance of any model discrepancies. Data compression is highlighted as a potential avenue for suppressing noise in the evidence to negligible levels, with a proof of concept demonstrated using Planck cosmic microwave background data.
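
As a rough, self-contained illustration of this effect (not the authors' pipeline), the Python toy below draws repeated noisy realisations of a simple Gaussian data set and computes the log Bayes factor between a fixed-mean model and a free-mean model with a Gaussian prior. All numbers (noise level, prior width, sample size) are invented; the point is simply that the run-to-run standard deviation of ln B is of order unity.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(0)

sigma, tau, n = 1.0, 1.0, 20      # noise level, prior width, data points (all invented)
mu_true = 0.3                     # mild "tension" with the mu = 0 model

def log_evidence_m0(y):
    """M0: mean fixed to 0."""
    return norm.logpdf(y, 0.0, sigma).sum()

def log_evidence_m1(y, grid=np.linspace(-5, 5, 2001)):
    """M1: free mean with Gaussian prior N(0, tau^2); marginalise on a grid."""
    loglike = norm.logpdf(y[:, None], grid[None, :], sigma).sum(axis=0)
    logprior = norm.logpdf(grid, 0.0, tau)
    dmu = grid[1] - grid[0]
    return logsumexp(loglike + logprior) + np.log(dmu)

# Repeat the "experiment" over many noisy data realisations
lnB = []
for _ in range(500):
    y = rng.normal(mu_true, sigma, size=n)
    lnB.append(log_evidence_m1(y) - log_evidence_m0(y))
lnB = np.array(lnB)

print(f"ln B10: mean = {lnB.mean():.2f}, scatter (std) = {lnB.std():.2f}")
```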

2020 ◽  
Vol 633 ◽  
pp. L10 ◽  
Author(s):  
Tilman Tröster ◽  
Ariel G. Sánchez ◽  
Marika Asgari ◽  
Chris Blake ◽  
Martín Crocce ◽  
...  

We reanalyse the anisotropic galaxy clustering measurement from the Baryon Oscillation Spectroscopic Survey (BOSS), demonstrating that using the full shape information provides cosmological constraints that are comparable to other low-redshift probes. We find Ωm = 0.317 +0.015/−0.019, σ8 = 0.710 ± 0.049, and h = 0.704 ± 0.024 for flat ΛCDM cosmologies using uninformative priors on Ωch², 100θMC, ln(10¹⁰ As), and ns, and a prior on Ωbh² that is much wider than current constraints. We quantify the agreement between the Planck 2018 constraints from the cosmic microwave background and BOSS, finding the two data sets to be consistent within a flat ΛCDM cosmology using the Bayes factor as well as the prior-insensitive suspiciousness statistic. Combining two low-redshift probes, we jointly analyse the clustering of BOSS galaxies with weak lensing measurements from the Kilo-Degree Survey (KV450). The combination of BOSS and KV450 improves the measurement by up to 45%, constraining σ8 = 0.702 ± 0.029 and S8 = σ8 √(Ωm/0.3) = 0.728 ± 0.026. Over the full 5D parameter space, the odds in favour of a single cosmology describing galaxy clustering, lensing, and the cosmic microwave background are 7 ± 2. The suspiciousness statistic signals a 2.1 ± 0.3σ tension between the combined low-redshift probes and measurements from the cosmic microwave background.
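
For readers unfamiliar with the suspiciousness statistic, the hedged sketch below shows the commonly used conversion from ln S and the Bayesian model dimensionality d to a Gaussian-equivalent tension, assuming d − 2 ln S is approximately χ²_d distributed (as in the suspiciousness literature, e.g. Handley & Lemos); the input values are placeholders, not numbers from this paper.

```python
import numpy as np
from scipy.stats import chi2, norm

def tension_sigma(log_S, d):
    """Convert suspiciousness ln S and Bayesian model dimensionality d into a
    Gaussian-equivalent tension, assuming d - 2 ln S ~ chi^2_d."""
    p = chi2.sf(d - 2.0 * log_S, df=d)   # probability of the data sets being this discordant
    return norm.isf(p / 2.0)             # two-tailed Gaussian-equivalent sigma

# Placeholder inputs for illustration only:
print(f"{tension_sigma(log_S=-2.0, d=4):.2f} sigma")
```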


2016 ◽  
Vol 9 (11) ◽  
pp. 4185-4208 ◽  
Author(s):  
Reindert J. Haarsma ◽  
Malcolm J. Roberts ◽  
Pier Luigi Vidale ◽  
Catherine A. Senior ◽  
Alessio Bellucci ◽  
...  

Abstract. Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950–2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, “what are the origins and consequences of systematic model biases?”, but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.


2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Yang Chen ◽  
Weimin Yu ◽  
Yinsheng Li ◽  
Zhou Yang ◽  
Limin Luo ◽  
...  

Edge-preserving Bayesian restorations using nonquadratic priors are often inefficient in restoring continuous variations and tend to produce block artifacts around edges in ill-posed inverse image restoration. To overcome this, we previously proposed a spatially adaptive (SA) prior with improved performance. However, restoration with this SA prior suffers from high computational cost and a lack of guaranteed convergence. To address these issues, this paper proposes a Large-scale Total Patch Variation (LS-TPV) prior model for Bayesian image restoration. In this model, the prior for each pixel is defined as a singleton conditional probability, which takes the form of a mixture of a patch-similarity prior and a weight-entropy prior. A joint MAP estimation is thus built to ensure the monotonicity of the iteration. The intensive calculation of patch distances is greatly alleviated by parallelization with the Compute Unified Device Architecture (CUDA). Experiments with both simulated and real data validate the good performance of the proposed restoration.
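
As an illustrative sketch only (not the LS-TPV implementation), the following numpy snippet computes patch-distance-based similarity weights for a single pixel; the patch size, search window, and bandwidth h are made up, and the nested loop is exactly the per-pixel work that the CUDA parallelization mentioned above would distribute.

```python
import numpy as np

def patch_weights(img, i, j, patch=3, search=7, h=0.1):
    """Similarity weights between the patch at (i, j) and patches in a search
    window, w ∝ exp(-||P_ij - P_kl||^2 / h^2) (nonlocal-means style sketch)."""
    r, s = patch // 2, search // 2
    pad = np.pad(img, r + s, mode="reflect")
    ci, cj = i + r + s, j + r + s
    ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
    weights = np.zeros((search, search))
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            cand = pad[ci + di - r:ci + di + r + 1, cj + dj - r:cj + dj + r + 1]
            weights[di + s, dj + s] = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
    return weights / weights.sum()

img = np.random.default_rng(0).random((32, 32))
w = patch_weights(img, 16, 16)
print(w.shape, w.sum())   # (7, 7), 1.0
```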


2019 ◽  
Vol 21 (5) ◽  
pp. 1756-1765
Author(s):  
Bo Sun ◽  
Liang Chen

Abstract Mapping of expression quantitative trait loci (eQTLs) facilitates interpretation of the regulatory path from genetic variants to their associated diseases or traits. High-throughput sequencing of RNA (RNA-seq) has expedited the exploration of these regulatory variants. However, eQTL mapping is usually confronted with analysis challenges caused by overdispersion and excessive dropouts in RNA-seq data. The heavy-tailed distribution of gene expression violates the assumption of Gaussian-distributed errors in linear regression for eQTL detection, which results in increased Type I or Type II errors. Applying a rank-based inverse normal transformation (INT) can make the expression values more normally distributed, but INT causes information loss and leads to uninterpretable effect-size estimates. After a comprehensive examination of the impact of overdispersion and excessive dropouts, we propose applying a robust model, quantile regression, to map eQTLs for genes with a high degree of overdispersion or a large number of dropouts. Simulation studies show that quantile regression has the desired robustness to outliers and dropouts, and that it significantly improves eQTL mapping. In a real data analysis, the most significant eQTL discoveries differ between quantile regression and the conventional linear model, and the discrepancy becomes more prominent when the dropout effect or the overdispersion effect is large. All the results suggest that quantile regression provides more reliable and accurate eQTL mapping than conventional linear models and deserves more attention in large-scale eQTL mapping.
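
A minimal sketch of the comparison described above, using statsmodels on an invented genotype/expression pair (the dropout rate and negative-binomial parameters are arbitrary), contrasts ordinary least squares with median (quantile) regression for a single gene-variant test:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
genotype = rng.integers(0, 3, size=n).astype(float)          # 0/1/2 minor-allele counts
# Overdispersed expression with excess zeros, loosely mimicking RNA-seq dropouts
expr = rng.negative_binomial(2, 1 / (1 + np.exp(0.4 * genotype)), size=n).astype(float)
expr[rng.random(n) < 0.2] = 0.0                               # excess dropouts

X = sm.add_constant(genotype)

ols = sm.OLS(expr, X).fit()                                   # conventional linear model
qr = sm.QuantReg(expr, X).fit(q=0.5)                          # median (quantile) regression

print("OLS      slope, p:", ols.params[1], ols.pvalues[1])
print("QuantReg slope, p:", qr.params[1], qr.pvalues[1])
```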


2011 ◽  
Vol 72 (3) ◽  
pp. 493-509 ◽  
Author(s):  
Hong Jiao ◽  
Junhui Liu ◽  
Kathleen Haynie ◽  
Ada Woo ◽  
Jerry Gorham

This study explored the impact of partial credit scoring of one type of innovative item (multiple-response items) in the pretest and operational settings of a large-scale computerized adaptive licensure test. The impact of partial credit scoring on the estimation of ability parameters and on classification decisions in operational test settings was explored in one real data analysis and two simulation studies, in which two different polytomous scoring algorithms, automated polytomous scoring and rater-generated polytomous scoring, were applied. For the real data analyses, the ability estimates from dichotomous and polytomous scoring were highly correlated, and the classification consistency between the different scoring algorithms was nearly perfect. Information distribution changed slightly in the operational item bank. In the two simulation studies comparing each polytomous scoring with dichotomous scoring, the ability estimates resulting from polytomous scoring had slightly higher measurement precision than those resulting from dichotomous scoring. The practical impact on classification decisions was minor because of the extremely small number of items that could be scored polytomously in the current study.
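
To make the dichotomous-versus-polytomous contrast concrete, the sketch below scores one hypothetical multiple-response item either with a generalized partial credit model or dichotomised (any credit counts as correct), alongside a few ordinary 2PL items, and compares grid-search maximum-likelihood ability estimates. All item parameters are invented; this is not the scoring engine used in the study.

```python
import numpy as np

theta_grid = np.linspace(-4, 4, 801)

def loglik_2pl(theta, a, b, x):
    """Log P(X = x | theta) for a dichotomous response under the 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.log(p if x == 1 else 1.0 - p)

def loglik_gpcm(theta, a, steps, k):
    """Log P(X = k | theta) under a generalized partial credit model with
    discrimination a and step difficulties `steps` (empty sum for k = 0)."""
    num = np.concatenate([[0.0], np.cumsum(a * (theta - np.asarray(steps)))])
    return num[k] - np.log(np.exp(num).sum())

# Invented mini test: four dichotomous items (a, b, response) plus one multiple-response item
dich_items = [(1.0, -1.0, 1), (1.2, -0.3, 1), (0.9, 0.4, 0), (1.1, 1.0, 0)]
mr_a, mr_steps, mr_score = 1.3, [-0.5, 0.8], 1     # earned 1 of 2 possible points

def total_loglik(theta, polytomous):
    ll = sum(loglik_2pl(theta, a, b, x) for a, b, x in dich_items)
    if polytomous:
        ll += loglik_gpcm(theta, mr_a, mr_steps, mr_score)
    else:
        ll += loglik_2pl(theta, mr_a, 0.0, int(mr_score > 0))   # dichotomised: any credit = 1
    return ll

for mode, label in ((True, "polytomous "), (False, "dichotomous")):
    lls = np.array([total_loglik(t, mode) for t in theta_grid])
    print(label, "theta_hat =", round(float(theta_grid[lls.argmax()]), 2))
```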


Author(s):  
Jing Li ◽  
Xiaorun Li ◽  
Liaoying Zhao

The minimization of reconstruction error over large hyperspectral image data is one of the most important problems in unsupervised hyperspectral unmixing. A variety of algorithms based on nonnegative matrix factorization (NMF) have been proposed in the literature to solve this minimization problem. One popular optimization method for NMF is projected gradient descent (PGD). However, because the algorithm must compute the full gradient over the entire dataset at every iteration, PGD suffers from high computational cost on large-scale real hyperspectral images. In this paper, we alleviate this problem by introducing a mini-batch gradient descent-based algorithm, which has been widely used in large-scale machine learning. In our method, the endmembers can be updated pixel set by pixel set, while the abundances can be updated band set by band set, so the computational cost is lowered to a certain extent. The performance of the proposed algorithm is quantified in experiments on synthetic and real data.
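
A minimal numpy sketch of the idea (not the paper's exact algorithm) is given below: the endmember matrix is updated from a random pixel subset and the abundance matrix from a random band subset, with a projection onto the nonnegative orthant after each step; the learning rate and batch sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, pixels, k = 50, 1000, 4                           # spectral bands, pixels, endmembers

# Synthetic scene: Y ≈ W_true @ H_true plus a little noise
W_true = rng.random((bands, k))                          # endmember spectra
H_true = rng.dirichlet(np.ones(k), size=pixels).T        # abundances (columns sum to 1)
Y = W_true @ H_true + 0.01 * rng.standard_normal((bands, pixels))

W = rng.random((bands, k))
H = rng.random((k, pixels))
lr, pix_batch, band_batch = 1e-3, 100, 10

for it in range(2000):
    # Endmember update from a random pixel subset (stochastic gradient of ||Y - WH||^2 wrt W)
    B = rng.choice(pixels, pix_batch, replace=False)
    grad_W = (W @ H[:, B] - Y[:, B]) @ H[:, B].T
    W = np.maximum(W - lr * grad_W, 0.0)                 # project onto W >= 0

    # Abundance update from a random band subset (stochastic gradient wrt H)
    C = rng.choice(bands, band_batch, replace=False)
    grad_H = W[C].T @ (W[C] @ H - Y[C])
    H = np.maximum(H - lr * grad_H, 0.0)                 # project onto H >= 0

print("relative reconstruction error:", np.linalg.norm(Y - W @ H) / np.linalg.norm(Y))
```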


2014 ◽  
Vol 10 (S306) ◽  
pp. 51-53
Author(s):  
Sebastian Dorn ◽  
Erandy Ramirez ◽  
Kerstin E. Kunze ◽  
Stefan Hofmann ◽  
Torsten A. Enßlin

Abstract The presence of multiple fields during inflation might seed a detectable amount of non-Gaussianity in the curvature perturbations, which in turn becomes observable in present data sets such as the cosmic microwave background (CMB) or the large-scale structure (LSS). In this proceeding we present a fully analytic method to infer inflationary parameters from observations by exploiting higher-order statistics of the curvature perturbations. To retain this analyticity, and thereby to dispense with numerically expensive sampling techniques, a saddle-point approximation is introduced whose precision has been validated for a numerical toy example. Applied to real data, this approach might make it possible to discriminate among the still-viable models of inflation.
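
The saddle-point (Laplace) idea can be illustrated generically: replace an intractable marginalisation ∫ exp(−f(x)) dx by a Gaussian integral centred on the minimum of f. The toy below, with an invented integrand, compares the approximation against direct quadrature; it is only meant to convey the kind of analytic shortcut used, not the authors' actual posterior.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

# Toy "posterior" integrand exp(-f(x)); the Laplace/saddle-point approximation
# replaces the integral by a Gaussian centred on the minimum of f.
def f(x):
    return 0.5 * (x - 1.0) ** 2 + 0.1 * x ** 4

x0 = minimize_scalar(f).x
eps = 1e-4
f2 = (f(x0 + eps) - 2 * f(x0) + f(x0 - eps)) / eps ** 2   # numerical second derivative

laplace = np.exp(-f(x0)) * np.sqrt(2 * np.pi / f2)
exact, _ = quad(lambda x: np.exp(-f(x)), -np.inf, np.inf)

print(f"Laplace: {laplace:.4f}   exact: {exact:.4f}")
```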


2018 ◽  
Author(s):  
Robert Kofler

Abstract In mammals and in invertebrates the proliferation of a newly invading transposable element (TE) is thought to be stopped by a random insertion of one member of the invading TE family into a piRNA cluster. This view is known as the trap model. Here we explore the dynamics of TE invasions under the trap model using large-scale computer simulations. We found that piRNA clusters confer a substantial benefit, effectively preventing extinction of host populations from an uncontrollable proliferation of deleterious TEs. We show that TE invasions under the trap model consist of three distinct phases: first the TE rapidly amplifies within the population, next TE proliferation is stopped by segregating cluster insertions, and finally the TE is permanently inactivated by fixation of a cluster insertion. Suppression by segregating cluster insertions is unstable, and bursts of TE activity may yet occur. The transposition rate and the population size mostly influence the length of the phases but not the number of TEs accumulating during an invasion. Only the size of piRNA clusters was identified as a major factor influencing TE abundance. Investigating the impact of different cluster architectures, we found that a single non-recombining cluster (e.g. the somatic cluster flamenco in Drosophila) is more efficient at stopping invasions than clusters distributed over several chromosomes (e.g. germline clusters in Drosophila). With the somatic architecture, fewer TEs accumulate during an invasion and fewer cluster insertions are required to stop the TE. The inefficiency of the germline architecture stems from recombination among cluster sites, which makes it necessary that each diploid carries, on average, four cluster insertions so that most individuals end up with at least one cluster insertion. Surprisingly, we found that negative selection in a model with piRNA clusters can lead to a novel equilibrium state in which TE copy numbers remain stable even though only some individuals in a population carry a cluster insertion. Finally, when applying our approach to real data from Drosophila melanogaster, we found that the trap model accounts reasonably well for the abundance of germline TEs but not of somatic TEs. The abundance of somatic TEs, such as gypsy, is much lower than expected.
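
A deliberately minimal, single-lineage toy of the trap model (ignoring selection, recombination, and population structure, with invented parameters) can convey the basic mechanism sketched above: copies transpose until one lands in a piRNA cluster, after which the family is silenced.

```python
import numpy as np

rng = np.random.default_rng(42)

def invade(u=0.1, cluster_fraction=0.03, generations=200):
    """Single-lineage toy of the trap model: each copy transposes at rate u per
    generation, each new insertion lands in a piRNA cluster with probability
    cluster_fraction, and one cluster insertion silences all further activity."""
    copies, trapped = 1, False
    for _ in range(generations):
        if trapped:
            break
        new = rng.poisson(u * copies)
        copies += new
        if new and (rng.random(new) < cluster_fraction).any():
            trapped = True
    return copies

final = np.array([invade() for _ in range(1000)])
print("median copy number when silenced:", int(np.median(final)))
```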


2020 ◽  
Vol 8 (12) ◽  
pp. 985
Author(s):  
Vincent Gruwez ◽  
Corrado Altomare ◽  
Tomohiro Suzuki ◽  
Maximilian Streicher ◽  
Lorenzo Cappietti ◽  
...  

Three open-source wave models are applied in 2DV to reproduce a large-scale wave flume experiment of bichromatic wave transformations over a steep-sloped dike with a mildly sloped and very shallow foreshore: (i) the Reynolds-averaged Navier–Stokes equations solver interFoam of OpenFOAM® (OF), (ii) the weakly compressible smoothed particle hydrodynamics model DualSPHysics (DSPH), and (iii) the non-hydrostatic nonlinear shallow water equations model SWASH. An inter-model comparison is performed to determine the (standalone) applicability of the three models for this specific case, which requires the simulation of many processes simultaneously, including wave transformations over the foreshore and wave-structure interactions with the dike, promenade, and vertical wall. A qualitative comparison is made based on the time series of the measured quantities along the wave flume and snapshots of bore interactions on the promenade and impacts on the vertical wall. In addition, model performance and pattern statistics are employed to quantify the model differences. The results show that, overall, OF provides the highest model skill but also has the highest computational cost. DSPH shows reduced model performance, still comparable to OF, at a lower computational cost. Even though SWASH is a much more simplified model than both OF and DSPH, it provides very similar results: SWASH exhibits an equal capability to estimate the maximum quasi-static horizontal impact force with the highest computational efficiency, but shows a notable decrease in model performance compared to OF and DSPH for the force impulse.
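
As a hedged illustration of the kind of pattern statistics such inter-model comparisons typically rely on (bias, RMSE, correlation, normalised standard deviation), rather than the exact skill metrics of this paper, consider the following sketch on made-up time series:

```python
import numpy as np

def pattern_stats(model, obs):
    """Basic pattern statistics between a modelled and a measured time series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = model.mean() - obs.mean()
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    corr = np.corrcoef(model, obs)[0, 1]
    sigma_ratio = model.std() / obs.std()      # normalised standard deviation
    return {"bias": bias, "rmse": rmse, "corr": corr, "sigma_ratio": sigma_ratio}

# Toy series standing in for, e.g., surface elevation at one wave gauge
t = np.linspace(0, 60, 2000)
obs = np.sin(0.8 * t) + 0.3 * np.sin(2.1 * t)
model = (0.95 * np.sin(0.8 * t + 0.05) + 0.25 * np.sin(2.1 * t)
         + 0.02 * np.random.default_rng(3).standard_normal(t.size))

print(pattern_stats(model, obs))
```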

