Cosmological parameter estimation from large-scale structure deep learning

2020 ◽  
Vol 63 (11) ◽  
Author(s):  
ShuYang Pan ◽  
MiaoXin Liu ◽  
Jaime Forero-Romero ◽  
Cristiano G. Sabiu ◽  
ZhiGang Li ◽  
...  
2014 ◽  
Vol 2014 (01) ◽  
pp. 042-042 ◽  
Author(s):  
Enea Di Dio ◽  
Francesco Montanari ◽  
Ruth Durrer ◽  
Julien Lesgourgues

2019 ◽  
Vol 489 (3) ◽  
pp. 3385-3402 ◽  
Author(s):  
Konstantinos Tanidis ◽  
Stefano Camera

ABSTRACT We develop a cosmological parameter estimation code for (tomographic) angular power spectrum analyses of galaxy number counts, in which we include, for the first time, redshift-space distortions (RSDs) in the Limber approximation. This allows for a significant speed-up in computation time, and we emphasize that only angular scales where the Limber approximation is valid are included in our analysis. Our main result shows that correct modelling of RSDs is crucial to avoid biasing cosmological parameter estimation. This holds not only for spectroscopically detected galaxies, but even for galaxy surveys with photometric redshift estimates. Moreover, a correct implementation of RSDs is especially valuable in alleviating the degeneracy between the amplitude of the underlying matter power spectrum and the galaxy bias. We argue that our findings are particularly relevant for present and planned observational campaigns, such as the Euclid satellite or the Square Kilometre Array, which aim to study the cosmic large-scale structure and trace its growth over a wide range of redshifts and scales.
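The Limber approximation referred to above collapses the exact double line-of-sight integral for the angular power spectrum into a single integral, evaluating the matter power spectrum at k = (ℓ + 1/2)/χ(z). The following is a minimal numerical sketch of that integral, not the authors' pipeline; the comoving distance, redshift window, and power spectrum are all toy stand-ins.

```python
import numpy as np
from scipy.integrate import simpson

def limber_cl(ells, z, chi, dchi_dz, window, pk):
    """Limber-approximated angular power spectrum for one tomographic bin:
    C_ell ~ integral dz [W(z)^2 / (chi(z)^2 dchi/dz)] P(k=(ell+1/2)/chi(z), z).
    All ingredients here are illustrative stand-ins, not a survey model."""
    cls = np.empty(len(ells))
    for i, ell in enumerate(ells):
        k = (ell + 0.5) / chi                       # Limber maps ell to one k per z
        integrand = window**2 / (chi**2 * dchi_dz) * pk(k, z)
        cls[i] = simpson(integrand, x=z)
    return cls

# Toy background: chi(z) linear in z, a Gaussian redshift bin, power-law P(k).
z = np.linspace(0.1, 1.5, 400)
chi = 3000.0 * z                                    # Mpc/h (crude)
dchi_dz = np.full_like(z, 3000.0)
window = np.exp(-0.5 * ((z - 0.8) / 0.2) ** 2)      # photometric-like bin
pk = lambda k, zz: 1.0e4 * k ** -1.5                # red, featureless spectrum

cl = limber_cl(np.array([10, 100, 500]), z, chi, dchi_dz, window, pk)
```

Because a single k is evaluated per redshift instead of integrating spherical Bessel functions, each C_ℓ costs one quadrature, which is the speed-up the abstract refers to.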


2019 ◽  
Vol 491 (4) ◽  
pp. 4869-4883 ◽  
Author(s):  
Konstantinos Tanidis ◽  
Stefano Camera ◽  
David Parkinson

ABSTRACT Continuing our development of a unified pipeline for large-scale structure data analysis with angular power spectra, we now include the weak lensing effect of magnification bias on galaxy clustering in a publicly available, modular parameter estimation code. We forecast constraints on the parameters of the concordance cosmological model, dark energy, and modified gravity theories from galaxy clustering tomographic angular power spectra. We find that correct modelling of magnification is crucial to avoid biasing the parameter estimation, especially for deep galaxy surveys. Our case study adopts the specifications of the Evolutionary Map of the Universe, a full-sky, deep radio-continuum survey expected to probe the Universe up to redshift z ∼ 6. We assume the Limber approximation and include magnification bias on top of density fluctuations and redshift-space distortions. By restricting our analysis to the regime where the Limber approximation holds, we significantly reduce the computational time compared to the exact calculation. We also show that neglecting magnification biases parameter estimates more strongly when the redshift bins are very wide, implying a strong dependence on the lensing contribution, which is an integrated effect and becomes dominant when wide redshift bins are considered. Finally, we note that, rather than being a contaminant, magnification bias encodes important cosmological information, and its inclusion alleviates the degeneracy between the galaxy bias and the amplitude normalization of the matter fluctuations.
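The magnification contribution described above enters the number-count window as a lensing-efficiency integral over all sources behind each distance, weighted by the magnification-bias slope s (often written as the prefactor 5s − 2). A minimal sketch with a toy source distribution; the function name and normalization are illustrative, not taken from the paper's code.

```python
import numpy as np
from scipy.integrate import trapezoid

def magnification_kernel(chi, nz, s_mag):
    """Toy lensing-magnification kernel added on top of the density window:
    W_mag(chi) proportional to (5 s - 2) chi * integral_chi^inf dchi'
    n(chi') (chi' - chi) / chi'.  's_mag' is the magnification-bias slope."""
    w = np.zeros_like(chi)
    for i, c in enumerate(chi):
        behind = chi > c                       # only sources behind chi lens
        if behind.sum() > 1:
            w[i] = trapezoid(nz[behind] * (chi[behind] - c) / chi[behind],
                             chi[behind])
    return (5.0 * s_mag - 2.0) * chi * w

chi = np.linspace(100.0, 3000.0, 300)                  # Mpc/h, toy grid
nz = np.exp(-0.5 * ((chi - 1500.0) / 400.0) ** 2)      # toy source distribution
nz /= trapezoid(nz, chi)
w_mag = magnification_kernel(chi, nz, s_mag=1.0)
```

A useful sanity check encoded in the prefactor: for s = 2/5 the magnification effect on number counts vanishes identically, so the kernel is zero regardless of the source distribution.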


2004 ◽  
Vol 13 (08) ◽  
pp. 1661-1668 ◽  
Author(s):  
CAROLINA J. ODMAN ◽  
MIKE HOBSON ◽  
ANTHONY LASENBY ◽  
ALESSANDRO MELCHIORRI

Most cosmological parameter estimations are based on the same set of observations and are therefore not independent. Here, we test the consistency of parameter estimations using a combination of large-scale structure and supernovae data, without cosmic microwave background (CMB) data. We combine observations from the IRAS 1.2 Jy and Las Campanas redshift surveys, galaxy peculiar velocities, and measurements of type Ia supernovae to obtain [Formula: see text], Ωm = 0.28 ± 0.05 and [Formula: see text], in agreement with the constraints from observations of the CMB anisotropies by the WMAP satellite. We also compare results from different subsets of data in order to investigate the effect of priors and residual errors in the data. We find that some parameters are consistently well constrained whereas others are consistently ill-determined, or even yield poorly consistent results, thereby illustrating the importance of priors and data contributions.
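In the simplest Gaussian limit, combining independent probes as in this analysis amounts to multiplying their likelihoods, i.e. inverse-variance weighting of the individual constraints. A toy sketch with purely illustrative numbers (not the paper's actual measurements):

```python
import numpy as np

def combine_gaussian(means, sigmas):
    """Inverse-variance combination of independent Gaussian constraints;
    a toy stand-in for a joint likelihood over independent data sets."""
    means = np.asarray(means, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # weights = 1/sigma^2
    combined_mean = float(np.sum(w * means) / np.sum(w))
    combined_sigma = float(np.sum(w) ** -0.5)
    return combined_mean, combined_sigma

# Illustrative only: an LSS-like and an SNe-like constraint on Omega_m.
m, s = combine_gaussian([0.30, 0.26], [0.08, 0.07])
```

The combined uncertainty is always smaller than the tightest individual one, which is why consistency between the inputs (the point of this paper) must be checked before combining them.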


2014 ◽  
Vol 59 (1) ◽  
pp. 79-92
Author(s):  
Alexander Becker

How do listeners experience jazz? This is, among other things, a question about how jazz organizes the time of listening. A classically trained ear expects a piece of music to structure, unify, and thereby re-constitute from within the time frame set by its beginning and end. Such an expectation does not do justice to jazz, which does not relate the moment to a goal provided by an overarching structure. Rather, each moment carries the impulse of movement on to the next, preserving it potentially ad infinitum. How does this principle of organizing time affect large-scale musical form? The paper addresses this question by analyzing two examples that make it possible to trace the transformation of a classical form into one germane to jazz.


Author(s):  
Marta B. Silva ◽  
Ely D. Kovetz ◽  
Garrett K. Keating ◽  
Azadeh Moradinezhad Dizgah ◽  
Matthieu Bethermin ◽  
...  

ABSTRACT This paper outlines the science case for line-intensity mapping with a space-borne instrument targeting the sub-millimeter (microwave) to far-infrared (FIR) wavelength range. Our goal is to observe and characterize the large-scale structure in the Universe from present times to the high-redshift Epoch of Reionization. This is essential to constrain the cosmology of our Universe and form a better understanding of various mechanisms that drive galaxy formation and evolution. The proposed frequency range would make it possible to probe important metal cooling lines such as [CII] up to very high redshift, as well as a large number of rotational lines of the CO molecule. These can be used to trace molecular gas and dust evolution and constrain the buildup in both the cosmic star formation rate density and the cosmic infrared background (CIB). Moreover, surveys at the highest frequencies will detect FIR lines which are used as diagnostics of galaxies and AGN. Tomography of these lines over a wide redshift range will enable invaluable measurements of the cosmic expansion history at epochs inaccessible to other methods, competitive constraints on the parameters of the standard model of cosmology, and numerous tests of dark matter, dark energy, modified gravity and inflation. To reach these goals, large-scale structure must be mapped over a wide range in frequency to trace its time evolution, and the surveyed area needs to be very large to beat cosmic variance. Only a space-borne mission can properly meet these requirements.


2021 ◽  
Vol 502 (3) ◽  
pp. 3976-3992
Author(s):  
Mónica Hernández-Sánchez ◽  
Francisco-Shu Kitaura ◽  
Metin Ata ◽  
Claudio Dalla Vecchia

ABSTRACT We investigate higher order symplectic integration strategies within Bayesian cosmic density field reconstruction methods. In particular, we study the fourth-order discretization of Hamiltonian equations of motion (EoM). This is achieved by recursively applying the basic second-order leap-frog scheme (considering the single evaluation of the EoM) in a combination of even numbers of forward time integration steps with a single intermediate backward step. This largely reduces the number of evaluations and random gradient computations, as required in the usual second-order case for high-dimensional cases. We restrict this study to the lognormal-Poisson model, applied to a full-volume halo catalogue in real space on a cubical mesh of 1250 h⁻¹ Mpc side and 256³ cells. Hence, we neglect selection effects, redshift space distortions, and displacements. We note that those observational and cosmic evolution effects can be accounted for in subsequent Gibbs-sampling steps within the COSMIC BIRTH algorithm. We find that going from the usual second to fourth order in the leap-frog scheme shortens the burn-in phase by a factor of at least ∼30. This implies that 75–90 independent samples are obtained while the fastest second-order method converges. After convergence, the correlation lengths indicate an improvement factor of about 3.0 fewer gradient computations for meshes of 256³ cells. In the considered cosmological scenario, the traditional leap-frog scheme turns out to outperform higher order integration schemes only when considering lower dimensional problems, e.g. meshes with 64³ cells. This gain in computational efficiency can help to go towards a full Bayesian analysis of the cosmological large-scale structure for upcoming galaxy surveys.
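The composition the abstract describes, forward leap-frog substeps bracketing a single backward (negative time-step) substep, is the standard Yoshida-type construction of a fourth-order symplectic integrator from a second-order one. A minimal sketch for a toy one-dimensional Hamiltonian, not the COSMIC BIRTH implementation (which works on high-dimensional density fields):

```python
import math

def leapfrog(q, p, grad, dt):
    """One second-order kick-drift-kick step for H = p^2/2 + U(q)."""
    p -= 0.5 * dt * grad(q)
    q += dt * p
    p -= 0.5 * dt * grad(q)
    return q, p

def leapfrog4(q, p, grad, dt):
    """Fourth-order step via Yoshida composition: two forward leap-frog
    substeps around one backward substep (w0 < 0), which cancels the
    second-order error terms."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))   # forward coefficient
    w0 = 1.0 - 2.0 * w1                      # negative: the backward step
    for w in (w1, w0, w1):
        q, p = leapfrog(q, p, grad, w * dt)
    return q, p

# Toy test problem: harmonic oscillator U(q) = q^2/2, exact q(t) = cos(t).
q, p = 1.0, 0.0
dt, n = 0.1, 10
for _ in range(n):
    q, p = leapfrog4(q, p, lambda x: x, dt)   # grad U = q
```

Each fourth-order step costs three gradient evaluations instead of one, but the larger stable step size and faster decorrelation are what produce the net savings reported in the abstract.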

