Likelihood-free inference with neural compression of DES SV weak lensing map statistics

2020 · Vol 501 (1) · pp. 954-969
Author(s): Niall Jeffrey, Justin Alsing, François Lanusse

ABSTRACT In many cosmological inference problems, the likelihood (the probability of the observed data as a function of the unknown parameters) is unknown or intractable. This necessitates approximations and assumptions, which can lead to incorrect inference of cosmological parameters, including the nature of dark matter and dark energy, or create artificial model tensions. Likelihood-free inference covers a novel family of methods that rigorously estimate posterior distributions of parameters using forward modelling of mock data. We present likelihood-free cosmological parameter inference using weak lensing maps from the Dark Energy Survey (DES) Science Verification data, using neural data compression of weak lensing map summary statistics. We explore combinations of the power spectra, peak counts, and neural compressed summaries of the lensing mass map using deep convolutional neural networks. We demonstrate methods to validate the inference process, for both the data modelling and the probability density estimation steps. Likelihood-free inference provides a robust and scalable alternative for rigorous large-scale cosmological inference with galaxy survey data (for DES, Euclid, and LSST). We have made our simulated lensing maps publicly available.
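The forward-modelling idea behind likelihood-free inference can be illustrated with a minimal rejection sampler (a toy sketch only: the paper uses neural compression and density estimation, while the simulator, summary statistic, and tolerance below are hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=100):
    # Hypothetical forward model: the compressed "summary" of a mock data
    # set is the sample mean of n Gaussian draws centred on theta.
    return rng.normal(theta, 1.0, size=n).mean()

observed_summary = 0.5                       # stand-in compressed observation
prior_draws = rng.uniform(-2.0, 2.0, size=20000)
summaries = np.array([simulator(t) for t in prior_draws])

# Keep parameters whose simulated summary falls close to the observation; the
# retained draws approximate the posterior without ever writing a likelihood.
eps = 0.05
posterior_samples = prior_draws[np.abs(summaries - observed_summary) < eps]
print(posterior_samples.mean(), posterior_samples.std())
```

Neural density estimation replaces this hard accept/reject cut with a learned mapping from summaries to posterior density, which is what makes the approach scale to survey data.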

2020 · Vol 500 (1) · pp. 859-870
Author(s): Ben Moews, Morgan A Schmitz, Andrew J Lawler, Joe Zuntz, Alex I Malz, ...

ABSTRACT Cosmic voids and their corresponding redshift-projected mass densities, known as troughs, play an important role in our attempt to model the large-scale structure of the Universe. Understanding these structures enables us to compare the standard model with alternative cosmologies, constrain the dark energy equation of state, and distinguish between different gravitational theories. In this paper, we extend the subspace-constrained mean shift algorithm, a recently introduced method to estimate density ridges, and apply it to 2D weak lensing mass density maps from the Dark Energy Survey Y1 data release to identify curvilinear filamentary structures. We compare the obtained ridges with previous approaches to extract trough structure in the same data, and apply curvelets as an alternative wavelet-based method to constrain densities. We then invoke the Wasserstein distance between noisy and noiseless simulations to validate the denoising capabilities of our method. Our results demonstrate the viability of ridge estimation as a precursor for denoising weak lensing observables to recover the large-scale structure, paving the way for a more versatile and effective search for troughs.


Author(s): Dipak Munshi, Patrick Valageas

Weak gravitational lensing is responsible for the shearing and magnification of the images of high-redshift sources due to the presence of intervening mass. Since the lensing effects arise from deflections of the light rays due to fluctuations of the gravitational potential, they can be directly related to the underlying density field of the large-scale structures. Weak gravitational lensing surveys are complementary to both galaxy surveys and cosmic microwave background observations, as they probe unbiased nonlinear matter power spectra at medium redshift. Ongoing CMB experiments such as WMAP and the future Planck satellite mission will measure the standard cosmological parameters with unprecedented accuracy. The focus of attention will then shift to understanding the nature of dark matter and vacuum energy: several recent studies suggest that lensing is the best method for constraining the dark energy equation of state. Over the next five-year period, ongoing and future weak lensing surveys such as the Joint Dark Energy Mission (JDEM; e.g. SNAP) or the Large Synoptic Survey Telescope (LSST) will play a major role in advancing our understanding of the universe in this direction. In this review article, we describe various aspects of probing the matter power spectrum, the bispectrum, and other related statistics with weak lensing surveys. These can be used to probe the background dynamics of the universe as well as the nature of dark matter and dark energy.
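The power-spectrum probe described above rests on the standard Limber approximation (a textbook result, not specific to this review), under which the convergence power spectrum is a weighted line-of-sight projection of the matter power spectrum; for a single source plane at comoving distance $\chi_s$ in a flat universe:

```latex
C_\ell^{\kappa} = \int_0^{\chi_s} \mathrm{d}\chi \, \frac{W^2(\chi)}{\chi^2}\,
    P_\delta\!\left(k = \frac{\ell}{\chi};\, \chi\right),
\qquad
W(\chi) = \frac{3}{2}\,\Omega_m\,\frac{H_0^2}{c^2}\,\frac{\chi}{a(\chi)}\,
    \frac{\chi_s - \chi}{\chi_s}.
```

The lensing efficiency $W(\chi)$ peaks roughly halfway between observer and source, which is why lensing surveys probe the matter distribution at medium redshift.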


2018 · Vol 33 (34) · pp. 1845015
Author(s): Dragan Huterer

First, I summarize the current status of dark energy, including methods that use data to distinguish between general-relativity and modified-gravity scenarios for the accelerating universe. Then, I discuss recent results from the Dark Energy Survey, currently the world’s leading experiment mapping large-scale structure in the universe. The Year-1 DES analysis performed in 2017 combined galaxy clustering, cosmic shear, and their cross-correlation to impose constraints on key cosmological parameters, while the upcoming Year-3 and Year-5 analyses will dramatically improve those constraints. I discuss some of the challenges in this complex analysis, its results, and the more general path forward toward a better understanding of dark matter and dark energy in the universe. I also comment on the foremost tension in the field of cosmology today: between local measurements of the Hubble constant from Type Ia supernovae and global measurements from the cosmic microwave background anisotropies.


2020 · Vol 498 (3) · pp. 4060-4087
Author(s): M Gatti, C Chang, O Friedrich, B Jain, D Bacon, ...

ABSTRACT We present a simulated cosmology analysis using the second and third moments of the weak lensing mass (convergence) maps. The second moment, or variance, of the convergence as a function of smoothing scale contains information similar to standard shear two-point statistics. The third moment, or skewness, contains additional non-Gaussian information. The analysis is geared towards the third year (Y3) data from the Dark Energy Survey (DES), but the methodology can be applied to other weak lensing data sets. We present the formalism for obtaining the convergence maps from the measured shear and for obtaining the second and third moments of these maps given partial sky coverage. We estimate the covariance matrix from a large suite of numerical simulations. We test our pipeline through a simulated likelihood analysis varying 5 cosmological parameters and 10 nuisance parameters, and identify the scales where systematic or modelling uncertainties are not expected to affect the cosmological analysis. Our simulated likelihood analysis shows that the combination of second and third moments provides a 1.5 per cent constraint on S8 ≡ σ8(Ωm/0.3)^0.5 for DES Year 3 data. This is 20 per cent better than an analysis using simulated DES Y3 shear two-point statistics, owing to the non-Gaussian information captured by the inclusion of higher order statistics. This paper validates our methodology for constraining cosmology with DES Year 3 data, which will be presented in a subsequent paper.
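As a schematic of the moments statistics described above, assuming a toy Gaussian random field in place of a real convergence map and a crude block average in place of the paper's smoothing kernels:

```python
import numpy as np

rng = np.random.default_rng(2)
kappa = rng.normal(0.0, 0.02, size=(256, 256))  # mock Gaussian convergence map

def block_smooth(kmap, s):
    # Crude top-hat smoothing: average the map over s x s pixel blocks.
    n = kmap.shape[0]
    return kmap.reshape(n // s, s, n // s, s).mean(axis=(1, 3))

def moments(kmap, s):
    sm = block_smooth(kmap, s)
    m2 = np.mean(sm**2)   # second moment (variance): Gaussian information
    m3 = np.mean(sm**3)   # third moment (skewness): non-Gaussian information
    return m2, m3

# The statistics are measured as a function of smoothing scale.
for scale in (4, 8, 16):
    print(scale, moments(kappa, scale))
```

For a truly Gaussian field the third moment vanishes on average, so a significant measured skewness is precisely the extra non-Gaussian signal the paper exploits.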


2019 · Vol 485 (1) · pp. 69-87
Author(s): C Stern, J P Dietrich, S Bocquet, D Applegate, J J Mohr, ...

2020 · Vol 102 (2)
Author(s): T. M. C. Abbott, M. Aguena, A. Alarcon, S. Allam, S. Allen, ...

2018 · Vol 478 (1) · pp. 592-610
Author(s): B Hoyle, D Gruen, G M Bernstein, M M Rau, J De Vicente, ...

2020 · Vol 495 (4) · pp. 4860-4892
Author(s): T de Jaeger, L Galbany, S González-Gaitán, R Kessler, A V Filippenko, ...

ABSTRACT Despite vast improvements in the measurement of the cosmological parameters, the nature of dark energy and an accurate value of the Hubble constant (H0) in the Hubble–Lemaître law remain unknown. To break the current impasse, it is necessary to develop as many independent techniques as possible, such as the use of Type II supernovae (SNe II). The goal of this paper is to demonstrate the utility of SNe II for deriving accurate extragalactic distances, which will be an asset for the next generation of telescopes where more-distant SNe II will be discovered. More specifically, we present a sample from the Dark Energy Survey Supernova Program (DES-SN) consisting of 15 SNe II with photometric and spectroscopic information spanning a redshift range up to 0.35. Combining our DES SNe with publicly available samples, and using the standard candle method (SCM), we construct the largest available Hubble diagram with SNe II in the Hubble flow (70 SNe II) and find an observed dispersion of 0.27 mag. We demonstrate that adding a colour term to the SN II standardization does not reduce the scatter in the Hubble diagram. Although SNe II are viable as distance indicators, this work points out important issues for improving their utility as independent extragalactic beacons: find new correlations, define a more standard subclass of SNe II, construct new SN II templates, and dedicate more observing time to high-redshift SNe II. Finally, for the first time, we perform simulations to estimate the redshift-dependent distance-modulus bias due to selection effects.
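At low redshift, the standard candle method amounts to fitting standardized supernovae to the Hubble law. A toy sketch (the H0 value and redshift range below are assumptions for illustration, with only the 0.27 mag dispersion taken from the abstract):

```python
import numpy as np

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc

def distance_modulus(z):
    # Hubble-law luminosity distance, valid for z << 1.
    d_mpc = C_KM_S * z / H0
    return 5.0 * np.log10(d_mpc) + 25.0

rng = np.random.default_rng(3)
z = rng.uniform(0.01, 0.05, size=70)               # 70 Hubble-flow SNe II
mu_obs = distance_modulus(z) + rng.normal(0.0, 0.27, size=70)  # observed scatter

residuals = mu_obs - distance_modulus(z)
print(residuals.std())   # dispersion about the Hubble line, in mag
```

The paper's goal of reducing this dispersion (via new correlations or a more standard SN II subclass) translates directly into tighter H0 and dark-energy constraints from such a diagram.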


2020 · Vol 496 (2) · pp. 1307-1324
Author(s): Carlo Giocoli, Pierluigi Monaco, Lauro Moscardini, Tiago Castro, Massimo Meneghetti, ...

ABSTRACT The generation of simulated convergence maps is of key importance in fully exploiting weak lensing by large-scale structure (LSS), from which cosmological parameters can be derived. In this paper, we present an extension of the pinocchio code, which produces catalogues of dark matter haloes, so that it is capable of simulating weak lensing by LSS. Like wl-moka, the method starts with a random realization of cosmological initial conditions, creates a halo catalogue and projects it on to the past light-cone, and paints in haloes assuming parametric models for the mass density distribution within them. Large-scale modes that are not accounted for by the haloes are constructed using linear theory. We discuss the systematic errors affecting the convergence power spectra when Lagrangian perturbation theory at increasing order is used to displace the haloes within pinocchio, and how they depend on the grid resolution. Our approximate method is shown to be very fast when compared to full ray-tracing simulations from an N-body run, and able to recover the weak lensing signal, at different redshifts, with a few per cent accuracy. It also allows for quickly constructing weak lensing covariance matrices, complementing pinocchio’s ability to generate the cluster mass function and galaxy clustering covariances, and thus paving the way for calculating cross-covariances between the different probes. This work advances these approximate methods as tools for simulating and analysing survey data for cosmological purposes.
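The halo-painting step can be caricatured by projecting a mock halo catalogue on to a pixel grid and converting surface mass density to convergence (haloes are treated as point masses and the critical surface density is an arbitrary stand-in; the actual code paints parametric density profiles and adds linear large-scale modes):

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_halo = 128, 1000
x = rng.uniform(0.0, n_pix, size=n_halo)   # mock halo sky positions (pixels)
y = rng.uniform(0.0, n_pix, size=n_halo)
mass = rng.lognormal(mean=0.0, sigma=1.0, size=n_halo)  # arbitrary mass units

# Project the catalogue on to the grid: surface mass density per pixel.
sigma, _, _ = np.histogram2d(x, y, bins=n_pix,
                             range=[[0, n_pix], [0, n_pix]], weights=mass)

# Hypothetical critical surface density sets the convergence normalization.
sigma_crit = sigma.mean() * 10.0
kappa = sigma / sigma_crit
print(kappa.mean(), kappa.max())
```

Repeating this over many fast halo-catalogue realizations is what makes cheap convergence covariance matrices possible, as the abstract notes.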

