Cosmological forecast for non-Gaussian statistics in large-scale weak lensing surveys

2021 ◽  
Vol 2021 (01) ◽  
pp. 028-028 ◽  
Author(s):  
Dominik Zürcher ◽  
Janis Fluri ◽  
Raphael Sgier ◽  
Tomasz Kacprzak ◽  
Alexandre Refregier


2003 ◽  
Vol 17 (22n24) ◽  
pp. 4316-4320
Author(s):  
J. Qian

It is commonly believed that the non-Gaussian statistics of intermittency lead to anomalous scaling in turbulence, while self-similar, normal scaling corresponds to Gaussian statistics. Using a reasonable model for the probability density function (PDF) of the intermittent velocity increment, we demonstrate that non-Gaussian statistics can also produce self-similarity and normal scaling. The experimental fact that scaling-range exponents are anomalous does not rule out a non-Gaussian self-similarity in the inertial range. In the scaling ranges accessible at experimental Reynolds numbers, viscous and large-scale effects are not negligible; they break the non-Gaussian self-similarity, and anomalous scaling is observed.
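The central point of this abstract can be illustrated numerically. The toy model below (an illustration, not the paper's model) draws a scale-independent non-Gaussian variable x and builds velocity increments dv(r) = r^(1/3) * x. Because the PDF shape is the same at every scale (self-similarity), the structure functions S_p(r) = <|dv|^p> scale with the normal exponents zeta_p = p/3 even though the statistics are strongly non-Gaussian. The Laplace distribution and the r^(1/3) amplitude are assumptions made purely for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
scales = np.array([0.01, 0.02, 0.05, 0.1, 0.2])

# Scale-independent non-Gaussian "shape" variable (Laplace: heavy tails,
# flatness factor 6 rather than the Gaussian value 3).
x = rng.laplace(size=200_000)

def structure_function(p, r):
    # Toy increment dv(r) = r^(1/3) * x: Kolmogorov amplitude, non-Gaussian shape.
    dv = r ** (1.0 / 3.0) * x
    return np.mean(np.abs(dv) ** p)

zetas = {}
for p in (2, 4, 6):
    sp = [structure_function(p, r) for r in scales]
    # Log-log slope of S_p(r) gives the scaling exponent zeta_p.
    zetas[p] = np.polyfit(np.log(scales), np.log(sp), 1)[0]
    print(f"p={p}: zeta_p = {zetas[p]:.3f} vs normal scaling p/3 = {p / 3:.3f}")
```

Since dv(r)^p factorizes as r^(p/3) |x|^p, the fitted slope equals p/3 to machine precision, showing that non-Gaussianity alone does not force anomalous scaling.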


2017 ◽  
Vol 599 ◽  
pp. A79 ◽  
Author(s):  
Austin Peel ◽  
Chieh-An Lin ◽  
François Lanusse ◽  
Adrienne Leonard ◽  
Jean-Luc Starck ◽  
...  

Peak statistics in weak-lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complementary probe to two-point and higher-order statistics to constrain our cosmological models. Next-generation galaxy surveys, with their advanced optics and large areas, will measure the cosmic weak-lensing signal with unprecedented precision. To prepare for these anticipated data sets, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters Ωm, σ8, and w0^de. In particular, we study how Camelus, a fast stochastic model for predicting peaks, can be applied to such large surveys. The algorithm avoids the need for time-costly N-body simulations, and its stochastic approach provides full PDF information of observables. Considering peaks with a signal-to-noise ratio ≥ 1, we measure the abundance histogram in a mock shear catalogue of approximately 5000 deg² using a multiscale mass-map filtering technique. We constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation, a robust likelihood-free inference algorithm. Peak statistics yield a tight but significantly biased constraint in the σ8–Ωm plane, as measured by the width ΔΣ8 of the 1σ contour. We find Σ8 = σ8(Ωm/0.27)^α = 0.77+0.06−0.05 with α = 0.75 for a flat ΛCDM model. The strong bias indicates the need to better understand and control the model systematics before applying it to a real survey of this size or larger. We perform a calibration of the model and compare results to those from the two-point correlation functions ξ± measured on the same field. We calibrate the ξ± result as well, since its contours are also biased, although not as severely as for peaks. In this case, we find for peaks Σ8 = 0.76+0.02−0.03 with α = 0.65, while for the combined ξ+ and ξ− statistics the values are Σ8 = 0.76+0.02−0.01 and α = 0.70. We conclude that the constraining power can therefore be comparable between the two weak-lensing observables in large-field surveys. Furthermore, the tilt in the σ8–Ωm degeneracy direction for peaks with respect to that of ξ± suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w0^de cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and ξ±.
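As a minimal illustration of the peak statistic itself (this is not the Camelus algorithm nor the multiscale filtering used in the paper), the sketch below counts local maxima above a signal-to-noise threshold in a mock pixelized convergence map. The map here is pure Gaussian pixel noise and the pixel grid size is an arbitrary assumption; a real analysis would filter a shear catalogue into a mass map first.

```python
import numpy as np

rng = np.random.default_rng(42)

# Mock noisy convergence map: pure Gaussian pixel noise, for illustration only.
sigma_noise = 0.02
kappa = rng.normal(0.0, sigma_noise, size=(256, 256))
snr = kappa / sigma_noise  # per-pixel signal-to-noise ratio

def count_peaks(snr_map, threshold=1.0):
    """Count interior pixels that exceed `threshold` and all 8 neighbours."""
    c = snr_map[1:-1, 1:-1]
    is_max = np.ones_like(c, dtype=bool)
    n0, n1 = snr_map.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            is_max &= c > snr_map[1 + di:n0 - 1 + di, 1 + dj:n1 - 1 + dj]
    return int(np.sum(is_max & (c >= threshold)))

n_peaks = count_peaks(snr, threshold=1.0)
print("peaks with S/N >= 1:", n_peaks)
```

The abundance histogram used in such analyses is simply this count repeated over several threshold bins.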


2020 ◽  
Vol 497 (3) ◽  
pp. 2699-2714
Author(s):  
Xiao Fang (方啸) ◽  
Tim Eifler ◽  
Elisabeth Krause

ABSTRACT Accurate covariance matrices for two-point functions are critical for inferring cosmological parameters in likelihood analyses of large-scale structure surveys. Among various approaches to obtaining the covariance, analytic computation is much faster and less noisy than estimation from data or simulations. However, the transform of covariances from Fourier space to real space involves integrals over products of two Bessel functions, which are numerically slow and easily affected by numerical uncertainties. Inaccurate covariances may lead to significant errors in the inference of the cosmological parameters. In this paper, we introduce a 2D-FFTLog algorithm for efficient, accurate, and numerically stable computation of non-Gaussian real-space covariances for both 3D and projected statistics. The 2D-FFTLog algorithm is easily extended to perform real-space bin-averaging. We apply the algorithm to the covariances for galaxy clustering and weak lensing for a Dark Energy Survey Year 3-like and a Rubin Observatory’s Legacy Survey of Space and Time Year 1-like survey, and demonstrate that for both surveys, our algorithm can produce numerically stable angular bin-averaged covariances with the flat-sky approximation, which are sufficiently accurate for inferring cosmological parameters. The code CosmoCov for computing the real-space covariances with or without the flat-sky approximation is released along with this paper.
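To see concretely what kind of integral is involved, the sketch below evaluates a double-Bessel covariance element by brute-force quadrature for an arbitrary toy spectrum. This is the slow, oscillatory computation that 2D-FFTLog is designed to replace, not the CosmoCov implementation; the form of toy_power and all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.special import j0  # regular Bessel function of order zero

def toy_power(l, amp=1.0, l0=100.0):
    # Arbitrary smooth toy spectrum; not a realistic lensing power spectrum.
    return amp / (1.0 + (l / l0) ** 2) ** 2

def cov_element(theta1, theta2, n=50_000, lmax=5.0e4):
    """Brute-force trapezoid quadrature of a double-Bessel integral,
    Cov(theta1, theta2) ~ (1/2pi) * int l dl J0(l*theta1) J0(l*theta2) P(l)^2."""
    l = np.linspace(1.0e-3, lmax, n)
    f = l * j0(l * theta1) * j0(l * theta2) * toy_power(l) ** 2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(l)) / (2.0 * np.pi)

theta1, theta2 = np.radians(0.5), np.radians(1.0)  # angular bins, in radians
coarse = cov_element(theta1, theta2, n=50_000)
fine = cov_element(theta1, theta2, n=400_000)
print("coarse:", coarse, " fine:", fine)
```

Even this smooth toy case needs many sample points per Bessel oscillation to converge; for realistic spectra with features, and for every pair of angular bins, direct quadrature quickly becomes the bottleneck that FFTLog-type methods remove.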


2019 ◽  
Vol 490 (4) ◽  
pp. 4688-4714 ◽  
Author(s):  
Matteo Rizzato ◽  
Karim Benabed ◽  
Francis Bernardeau ◽  
Fabien Lacasa

ABSTRACT We address key points for an efficient implementation of likelihood codes for modern weak lensing large-scale structure surveys. Specifically, we focus on the joint weak lensing convergence power spectrum–bispectrum probe and we tackle the numerical challenges required by a realistic analysis. Under the assumption of (multivariate) Gaussian likelihoods, we have developed a high-performance code that allows highly parallelized prediction of the binned tomographic observables and of their joint non-Gaussian covariance matrix, accounting for terms up to the six-point correlation function and supersample effects. This performance allows us to qualitatively address several interesting scientific questions. We find that the bispectrum provides an improvement in terms of signal-to-noise ratio (S/N) of about 10 per cent on top of the power spectrum, making it a non-negligible source of information for future surveys. Furthermore, we are able to test the impact of theoretical uncertainties in the halo model used to build our observables; with presently allowed variations we conclude that the impact is negligible on the S/N. Finally, we consider data compression possibilities to optimize future analyses of the weak lensing bispectrum. We find that, ignoring systematics, five equipopulated redshift bins are enough to recover the information content of a Euclid-like survey, with negligible improvement when increasing to 10 bins. We also explore principal component analysis and dependence on the triangle shapes as ways to reduce the numerical complexity of the problem.
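The PCA-based compression idea can be sketched on a mock ensemble of data vectors (purely synthetic numbers, not the paper's bispectrum measurements): the leading singular vectors of the mean-subtracted ensemble define a much shorter compressed data vector that retains nearly all of the variance. The dimensions and noise level below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock ensemble of binned data vectors: a low-rank signal plus small noise.
n_real, n_bins, n_modes = 500, 60, 3
basis = rng.normal(size=(n_modes, n_bins))        # hidden signal modes
amps = rng.normal(size=(n_real, n_modes))         # per-realization amplitudes
data = amps @ basis + 0.05 * rng.normal(size=(n_real, n_bins))

# PCA via SVD of the mean-subtracted ensemble.
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)             # variance fraction per component

k = 3  # keep only the leading components
compressed = centered @ vt[:k].T                  # (n_real, k) compressed vectors
print("variance captured by first 3 PCs:", explained[:3].sum())
```

When the underlying data vector is effectively low-rank, as here, the compressed vector can replace the full one in a likelihood analysis at a fraction of the covariance-estimation cost.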


Author(s):  
Robin E Upham ◽  
Michael L Brown ◽  
Lee Whittaker

Abstract We investigate whether a Gaussian likelihood is sufficient to obtain accurate parameter constraints from a Euclid-like combined tomographic power spectrum analysis of weak lensing, galaxy clustering and their cross-correlation. Testing its performance on the full sky against the Wishart distribution, which is the exact likelihood under the assumption of Gaussian fields, we find that the Gaussian likelihood returns accurate parameter constraints. This accuracy is robust to the choices made in the likelihood analysis, including the choice of fiducial cosmology, the range of scales included, and the random noise level. We extend our results to the cut sky by evaluating the additional non-Gaussianity of the joint cut-sky likelihood in both its marginal distributions and dependence structure. We find that the cut-sky likelihood is more non-Gaussian than the full-sky likelihood, but at a level insufficient to introduce significant inaccuracy into parameter constraints obtained using the Gaussian likelihood. Our results should not be affected by the assumption of Gaussian fields, as this approximation only becomes inaccurate on small scales, which in turn corresponds to the limit in which any non-Gaussianity of the likelihood becomes negligible. We nevertheless compare against N-body weak lensing simulations and find no evidence of significant additional non-Gaussianity in the likelihood. Our results indicate that a Gaussian likelihood will be sufficient for robust parameter constraints with power spectra from Stage IV weak lensing surveys.
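The comparison at the heart of this test can be sketched for a single full-sky Gaussian field, where the exact likelihood of an estimated power spectrum is the one-dimensional case of the Wishart distribution: (2l+1)·Ĉ_l/C_l follows a chi-square distribution with 2l+1 degrees of freedom. The sketch below (an illustration, not the authors' pipeline) measures how quickly the Gaussian approximation with matching mean and cosmic variance converges to the exact likelihood as l grows; the fiducial C_l = 1 and the 2-sigma comparison window are assumptions for the demonstration.

```python
import numpy as np
from scipy import stats

def exact_logpdf(cl_hat, cl, ell):
    """Full-sky single-field likelihood: (2l+1)*Cl_hat/Cl ~ chi2 with 2l+1 dof
    (the 1D case of the Wishart distribution); change of variables to Cl_hat."""
    nu = 2 * ell + 1
    return stats.chi2.logpdf(nu * cl_hat / cl, df=nu) + np.log(nu / cl)

def gaussian_logpdf(cl_hat, cl, ell):
    # Gaussian approximation with the matching mean and cosmic variance.
    var = 2.0 * cl ** 2 / (2 * ell + 1)
    return stats.norm.logpdf(cl_hat, loc=cl, scale=np.sqrt(var))

cl = 1.0
diffs = []
for ell in (5, 50, 500):
    sigma = np.sqrt(2.0 / (2 * ell + 1)) * cl
    grid = np.linspace(cl - 2 * sigma, cl + 2 * sigma, 201)
    diffs.append(np.max(np.abs(exact_logpdf(grid, cl, ell)
                               - gaussian_logpdf(grid, cl, ell))))
    print(f"l={ell}: max |logL_exact - logL_gauss| within 2 sigma = {diffs[-1]:.3f}")
```

The discrepancy shrinks roughly as the skewness sqrt(8/(2l+1)), which is why the residual non-Gaussianity of the likelihood matters least exactly where most of the constraining power lies.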


2015 ◽  
Vol 2015 (10) ◽  
pp. 036-036 ◽  
Author(s):  
Nicolas Tessore ◽  
Hans A. Winther ◽  
R. Benton Metcalf ◽  
Pedro G. Ferreira ◽  
Carlo Giocoli

2018 ◽  
Vol 2018 (10) ◽  
pp. 051-051 ◽  
Author(s):  
J. Fluri ◽  
T. Kacprzak ◽  
R. Sgier ◽  
A. Refregier ◽  
A. Amara
