Combining cosmological constraints from cluster counts and galaxy clustering

2014 ◽  
Vol 10 (S306) ◽  
pp. 216-218 ◽  
Author(s):  
F. Lacasa

Abstract Present and future large-scale surveys offer promising probes of cosmology. For example, the Dark Energy Survey (DES) is forecast to detect ~300 million galaxies and thousands of clusters up to redshift ~1.3. Here I show ongoing work to combine two probes of large-scale structure: cluster number counts and the galaxy 2-point function (in real or harmonic space). The halo model (coupled to a Halo Occupation Distribution) can be used to model the cross-covariance between these probes, and I introduce a diagrammatic method to compute the different terms involved easily. Furthermore, I compute the joint non-Gaussian likelihood using the Gram-Charlier series. I then show how to extend the methods of Bayesian hyperparameters to Poissonian distributions, as a first step towards including them in this joint likelihood.
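
As an illustration of the Gram-Charlier construction mentioned above, here is a minimal sketch (our own, not the paper's code) of the Gram-Charlier A series: a Gaussian corrected by Hermite-polynomial terms carrying the third and fourth cumulants. Function and argument names are ours.

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE  # probabilists' Hermite polynomials

def gram_charlier_pdf(x, mu, sigma, skew=0.0, ex_kurt=0.0):
    """Gram-Charlier A series: a Gaussian corrected by terms
    proportional to the skewness and excess kurtosis."""
    z = (x - mu) / sigma
    gauss = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))
    he3 = HermiteE([0, 0, 0, 1])(z)      # He3(z) = z^3 - 3z
    he4 = HermiteE([0, 0, 0, 0, 1])(z)   # He4(z) = z^4 - 6z^2 + 3
    return gauss * (1.0 + skew / 6.0 * he3 + ex_kurt / 24.0 * he4)
```

Setting skew = ex_kurt = 0 recovers the Gaussian case; the correction terms are where the measured higher-order cumulants of the joint probes would enter.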

2021 ◽  
Vol 507 (4) ◽  
pp. 4852-4863
Author(s):  
Íñigo Zubeldia ◽  
Aditya Rotti ◽  
Jens Chluba ◽  
Richard Battye

Abstract Matched filters are routinely used in cosmology in order to detect galaxy clusters from mm observations through their thermal Sunyaev–Zeldovich (tSZ) signature. In addition, they naturally provide an observable, the detection signal-to-noise or significance, which can be used as a mass proxy in number counts analyses of tSZ-selected cluster samples. In this work, we show that this observable is, in general, non-Gaussian, and that it suffers from a positive bias, which we refer to as optimization bias. Both aspects arise from the fact that the signal-to-noise is constructed through an optimization operation on noisy data, and hold even if the cluster signal is modelled perfectly well, no foregrounds are present, and the noise is Gaussian. After reviewing the general mathematical formalism underlying matched filters, we study the statistics of the signal-to-noise with a set of Monte Carlo mock observations, finding it to be well described by a unit-variance Gaussian for signal-to-noise values of 6 and above, and we quantify the magnitude of the optimization bias, for which we give an approximate expression that may be used in practice. We also consider the impact of the bias on the cluster number counts of Planck and the Simons Observatory (SO), finding it to be negligible for the former and potentially significant for the latter.
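
For readers unfamiliar with the formalism, a minimal single-map matched filter can be sketched as follows (our illustration, with assumed variable names; the paper treats the general multifrequency case). The S/N is the filter amplitude divided by its analytic standard deviation; the optimization bias discussed above arises when this quantity is additionally maximized over candidate positions and filter scales in noisy data.

```python
import numpy as np

def matched_filter_snr(data_ft, template_ft, noise_power):
    """Single-frequency matched filter in Fourier space.
    data_ft, template_ft: 2D FFTs of the map and of the (beam-convolved)
    cluster template; noise_power: noise power spectrum on the same grid,
    assumed strictly positive."""
    inv_var = np.sum(np.abs(template_ft) ** 2 / noise_power).real  # 1/sigma_A^2
    amplitude = np.sum(np.conj(template_ft) * data_ft / noise_power).real / inv_var
    sigma_a = inv_var ** -0.5
    return amplitude / sigma_a  # detection significance
```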


2020 ◽  
Vol 497 (3) ◽  
pp. 2699-2714
Author(s):  
Xiao Fang (方啸) ◽  
Tim Eifler ◽  
Elisabeth Krause

ABSTRACT Accurate covariance matrices for two-point functions are critical for inferring cosmological parameters in likelihood analyses of large-scale structure surveys. Among the various approaches to obtaining the covariance, analytic computation is much faster and less noisy than estimation from data or simulations. However, the transform of covariances from Fourier space to real space involves integrals over two Bessel functions, which are numerically slow and easily affected by numerical uncertainties. Inaccurate covariances may lead to significant errors in the inference of the cosmological parameters. In this paper, we introduce a 2D-FFTLog algorithm for efficient, accurate, and numerically stable computation of non-Gaussian real-space covariances for both 3D and projected statistics. The 2D-FFTLog algorithm is easily extended to perform real-space bin averaging. We apply the algorithm to the covariances for galaxy clustering and weak lensing for a Dark Energy Survey Year 3-like survey and a Rubin Observatory Legacy Survey of Space and Time Year 1-like survey, and demonstrate that for both surveys our algorithm can produce numerically stable, angular-bin-averaged covariances in the flat-sky approximation which are sufficiently accurate for inferring cosmological parameters. The code CosmoCov for computing the real-space covariances with or without the flat-sky approximation is released along with this paper.
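
To make the numerical challenge concrete, here is the brute-force double-Bessel quadrature that 2D-FFTLog replaces, sketched for the monopole of a 3D correlation-function covariance (our schematic normalization and names; the released CosmoCov code implements the actual algorithm):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import spherical_jn

def cov_real_space_naive(r1, r2, k, cov_k, ell=0):
    """Brute-force transform of a Fourier-space covariance Cov(k1, k2) to
    real space: C(r1, r2) = int dk1 dk2 [k1^2 k2^2 / (2 pi^2)^2]
    * j_ell(k1 r1) Cov(k1, k2) j_ell(k2 r2).
    O(N^2) per (r1, r2) pair, and the oscillatory Bessel factors make the
    quadrature numerically fragile -- exactly what 2D-FFTLog avoids."""
    j1 = spherical_jn(ell, k * r1)
    j2 = spherical_jn(ell, k * r2)
    integrand = (k**2 * j1)[:, None] * cov_k * (k**2 * j2)[None, :]
    inner = trapezoid(integrand, k, axis=1)
    return trapezoid(inner, k) / (2.0 * np.pi**2) ** 2
```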


2018 ◽  
Vol 614 ◽  
pp. A13 ◽  
Author(s):  
Laura Salvati ◽  
Marian Douspis ◽  
Nabila Aghanim

The thermal Sunyaev-Zel'dovich (tSZ) effect is a recent probe of cosmology and large-scale structure. We update constraints on cosmological parameters from galaxy clusters observed by the Planck satellite in a first attempt to combine cluster number counts and the power spectrum of hot gas, using a new value of the optical depth and simultaneously sampling the cosmological and scaling-relation parameters. We find that in the ΛCDM model, the addition of the tSZ power spectrum provides small improvements with respect to number counts alone, leading to the 68% c.l. constraints Ωm = 0.32 ± 0.02, σ8 = 0.76 ± 0.03, and σ8(Ωm/0.3)^(1/3) = 0.78 ± 0.03, and lowering the discrepancy with results from cosmic microwave background (CMB) primary anisotropies (updated with the new value of τ) to ≃1.8σ on σ8. We analysed extensions to the standard model, considering the effect of massive neutrinos and a varying equation-of-state parameter for dark energy. In the first case, we find that the addition of the tSZ power spectrum helps improve the cosmological constraints relative to number counts alone, leading to the 95% upper limit ∑mν < 1.88 eV. For the varying dark-energy equation of state, we find no important improvement when adding the tSZ power spectrum, but the combination of tSZ probes is still able to provide constraints, yielding w = −1.0 ± 0.2. In all cosmological scenarios, the mass bias required to reconcile CMB and tSZ probes remains low, at (1 − b) ≲ 0.67, compared to estimates from weak-lensing and X-ray mass comparisons or from numerical simulations.
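
The quoted ≃1.8σ discrepancy follows from the standard Gaussian tension metric between two independent measurements; a quick sketch (the CMB value below is a placeholder of the right order for illustration, not a number quoted in the abstract):

```python
import numpy as np

def tension(x1, s1, x2, s2):
    """Gaussian tension, in sigma, between two independent measurements."""
    return abs(x1 - x2) / np.hypot(s1, s2)

# tSZ clusters (this work): sigma_8 = 0.76 +/- 0.03.
# CMB primary anisotropies: 0.82 +/- 0.01 assumed here for illustration.
print(f"{tension(0.76, 0.03, 0.82, 0.01):.1f} sigma")  # ~1.9 sigma
```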


2015 ◽  
Vol 30 (22) ◽  
pp. 1540031 ◽  
Author(s):  
Spyros Basilakos

We investigate the dynamics of flat Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models in which the vacuum energy varies with redshift. A particularly well-motivated model of this type is the so-called quantum field vacuum, in which both kinds of terms, [Formula: see text] and constant, appear in the effective dark energy (DE) density, affecting the evolution of the main cosmological functions at the background and perturbation levels. Specifically, it turns out that the functional form of the quantum vacuum endows the vacuum energy with a mild dynamical evolution, which could be observed nowadays and appears as dynamical DE. Interestingly, the low-energy behavior is very close to the usual Lambda cold dark matter (ΛCDM) model, but it is by no means identical. Finally, within the framework of the quantum field vacuum, we generalize the large-scale structure properties, namely the growth of matter perturbations, cluster number counts, and the spherical collapse model.
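
The abstract's formula is elided above, but in running-vacuum models of this family the background expansion admits a closed form; the sketch below assumes the common parametrization Λ(H) = c0 + 3νH² (our assumption, not necessarily the paper's exact form), for which ν = 0 recovers ΛCDM:

```python
import numpy as np

def E2_running_vacuum(z, Om=0.3, nu=1e-3):
    """E^2(z) = H^2/H0^2 for a flat FLRW model with vacuum density
    Lambda(H) = c0 + 3*nu*H^2 (assumed parametrization); the matter
    density then dilutes as (1+z)^(3(1-nu)), and nu = 0 gives LambdaCDM."""
    return 1.0 + Om / (1.0 - nu) * ((1.0 + z) ** (3.0 * (1.0 - nu)) - 1.0)

z = np.linspace(0.0, 2.0, 5)
print(E2_running_vacuum(z) / (1.0 - 0.3 + 0.3 * (1.0 + z) ** 3))  # ratio to LCDM, ~1
```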


2021 ◽  
Vol 502 (3) ◽  
pp. 4093-4111
Author(s):  
Chun-Hao To ◽  
Elisabeth Krause ◽  
Eduardo Rozo ◽  
Hao-Yi Wu ◽  
Daniel Gruen ◽  
...  

ABSTRACT We present a method of combining cluster abundances and large-scale two-point correlations, namely galaxy clustering, galaxy–cluster cross-correlations, cluster autocorrelations, and cluster lensing. This data vector yields comparable cosmological constraints to traditional analyses that rely on small-scale cluster lensing for mass calibration. We use cosmological survey simulations designed to resemble the Dark Energy Survey Year 1 (DES-Y1) data to validate the analytical covariance matrix and the parameter inferences. The posterior distribution from the analysis of simulations is statistically consistent with the absence of systematic biases detectable at the precision of the DES-Y1 experiment. We compare the χ2 values in simulations to their expectation and find no significant difference. The robustness of our results against a variety of systematic effects is verified using a simulated likelihood analysis of DES-Y1-like data vectors. This work presents the first-ever end-to-end validation of a cluster abundance cosmological analysis on galaxy catalogue level simulations.
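
The χ²-versus-expectation test mentioned above is the standard goodness-of-fit check; a minimal sketch (names are ours):

```python
import numpy as np

def chi2_and_expectation(data, model, cov):
    """chi^2 = (d - m)^T C^-1 (d - m). For Gaussian data with a correct
    model and covariance, its expectation is the number of data points
    (minus the number of fitted parameters, for a best-fitting model)."""
    r = data - model
    return r @ np.linalg.solve(cov, r), len(data)
```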


2019 ◽  
Vol 214 ◽  
pp. 04033
Author(s):  
Hervé Rousseau ◽  
Belinda Chan Kwok Cheong ◽  
Cristian Contescu ◽  
Xavier Espinal Curull ◽  
Jan Iven ◽  
...  

The CERN IT Storage group operates multiple distributed storage systems and is responsible for supporting the infrastructure that accommodates all CERN storage requirements, from the physics data generated by LHC and non-LHC experiments to personnel users' files. EOS is now the key component of the CERN storage strategy. It operates at high incoming throughput for experiment data-taking while running concurrent complex production workloads. This high-performance distributed storage now provides more than 250 PB of raw disk space and is the key component behind the success of CERNBox, the CERN cloud synchronisation service, which allows syncing and sharing files on all major mobile and desktop platforms and provides offline availability for any data stored in the EOS infrastructure. CERNBox has recorded exponential growth over the last couple of years in terms of files and data stored, thanks to its increasing popularity within the CERN user community and to its integration with a multitude of other CERN services (Batch, SWAN, Microsoft Office). In parallel, CASTOR is being simplified, transitioning from an HSM into an archival system focused mainly on the long-term recording of primary detector data, and preparing the road to the next-generation tape archival system, CTA. The storage services at CERN also cover the needs of the rest of our community: Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services, and S3 functionality; AFS for legacy home-directory filesystem services and its ongoing phase-out; and CVMFS for software distribution. In this paper we summarise our experience in supporting all our distributed storage systems and the ongoing work in evolving our infrastructure, testing very dense storage building blocks (nodes with more than 1 PB of raw space) for the challenges ahead.
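
As a small illustration of the S3 functionality mentioned above: Ceph exposes an S3-compatible API (through the RADOS Gateway), so any standard S3 client works against a custom endpoint. The endpoint, bucket, and credentials below are placeholders, not real CERN values.

```python
import boto3  # standard AWS S3 client, usable against any S3-compatible endpoint

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.cern.ch",  # hypothetical Ceph RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.upload_file("results.root", "my-bucket", "analysis/results.root")
```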


2020 ◽  
Vol 493 (4) ◽  
pp. 5662-5679 ◽  
Author(s):  
B Mawdsley ◽  
D Bacon ◽  
C Chang ◽  
P Melchior ◽  
E Rozo ◽  
...  

ABSTRACT We present new wide-field weak lensing mass maps for the Year 1 Dark Energy Survey (DES) data, generated via a forward-fitting approach. This method of producing maps does not impose any prior constraints on the mass distribution to be reconstructed. The technique is found to improve the map reconstruction on the edges of the field compared to the conventional Kaiser–Squires method, which applies a direct inversion to the data; our approach is in good agreement with the direct approach in the central regions of the footprint. The mapping technique is assessed and verified with tests on simulations; together with the Kaiser–Squires method, it is then applied to the DES Year 1 data and the differences between the two methods are compared. We also produce the first DES measurements of the convergence Minkowski functionals and compare them to those measured in simulations.
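
For reference, the conventional Kaiser–Squires inversion against which the forward-fitting maps are compared is a direct Fourier-space operation; a flat-sky sketch (our implementation, with periodic boundaries assumed, which is precisely where edge effects arise):

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Flat-sky Kaiser-Squires: kappa_hat = D* gamma_hat in Fourier space,
    with D = (l1^2 - l2^2 + 2i*l1*l2) / |l|^2."""
    gamma = np.fft.fft2(gamma1 + 1j * gamma2)
    l1, l2 = np.meshgrid(np.fft.fftfreq(gamma1.shape[0]),
                         np.fft.fftfreq(gamma1.shape[1]), indexing="ij")
    lsq = l1**2 + l2**2
    lsq[0, 0] = 1.0                      # the mean (l = 0) mode is unconstrained
    D = (l1**2 - l2**2 + 2j * l1 * l2) / lsq
    kappa = np.fft.ifft2(np.conj(D) * gamma)
    return kappa.real                    # E modes; kappa.imag carries B modes
```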


1995 ◽  
Vol 408 ◽  
Author(s):  
D. J. Sullivan ◽  
E. L. Briggs ◽  
C. J. Brabec ◽  
J. Bernholc

Abstract We have developed a set of techniques for performing large scale ab initio calculations using multigrid accelerations and a real-space grid as a basis. The multigrid methods permit efficient calculations on ill-conditioned systems with long length scales or high energy cutoffs. We discuss the design of pseudopotentials for real-space grids, and the computation of ionic forces. The technique has been applied to several systems, including an isolated C60 molecule, the wurtzite phase of GaN, a 64-atom cell of GaN with the Ga d-states in valence, and a 443-atom protein. The method has been implemented on both vector and parallel architectures. We also discuss ongoing work on O(N) implementations and solvated biomolecules.
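
As a toy illustration of the multigrid idea (not the authors' electronic-structure code), here is one V-cycle for the 1D Poisson problem, showing the smooth-restrict-correct-prolong pattern that makes ill-conditioned long-wavelength problems tractable:

```python
import numpy as np

def v_cycle(u, f, h, omega=2.0 / 3.0, n_smooth=3):
    """One geometric-multigrid V-cycle for -u'' = f on [0, 1] with
    Dirichlet boundaries; len(u) must be 2^k + 1."""
    def smooth(v):
        for _ in range(n_smooth):  # weighted-Jacobi relaxation
            v[1:-1] += omega * (0.5 * (v[:-2] + v[2:] + h * h * f[1:-1]) - v[1:-1])

    if len(u) == 3:                      # coarsest grid: solve the single unknown
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    smooth(u)
    r = np.zeros_like(u)                 # fine-grid residual
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    rc = np.zeros((len(u) + 1) // 2)     # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h, omega, n_smooth)
    e = np.zeros_like(u)                 # linear prolongation of the correction
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    smooth(u)
    return u
```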


2018 ◽  
Vol 611 ◽  
pp. A83 ◽  
Author(s):  
Fabien Lacasa ◽  
Marcos Lima ◽  
Michel Aguena

Super-sample covariance (SSC) is the dominant source of statistical error on large-scale structure (LSS) observables for both current and future galaxy surveys. In this work, we concentrate on the SSC of cluster counts, also known as sample variance, which is particularly useful for the self-calibration of the cluster observable-mass relation; our approach can similarly be applied to other observables, such as galaxy clustering and lensing shear. We first examined the accuracy of two analytical approximations proposed in the literature for the flat-sky limit, finding that they are accurate at the 15% and 30–35% levels, respectively, for covariances of counts in the same redshift bin. We then developed a harmonic expansion formalism that allows for the prediction of SSC for an arbitrary survey mask geometry, such as the large sky areas of current and future surveys. We show analytically and numerically that this formalism recovers the full-sky and flat-sky limits present in the literature. We then present an efficient numerical implementation of the formalism, which allows fast and easy runs of covariance predictions when the survey mask is modified. We applied our method to a mask that is broadly similar to the Dark Energy Survey footprint, finding a non-negligible negative cross-redshift covariance, i.e., redshift bins are anti-correlated. We also examined the case of data removal from holes due to, for example, bright stars, quality cuts, or systematic removals, and find that this does not have a noticeable effect on the structure of the SSC matrix, only rescaling its amplitude by the effective survey area. These advances enable analytical covariances of LSS observables to be computed for current and future galaxy surveys, which cover large areas of the sky where the flat-sky approximation fails.
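
In the harmonic expansion described above, the key ingredient for an arbitrary mask is the variance of the background density mode within the survey window, which couples the matter power spectrum to the mask power spectrum. A schematic sketch using healpy (our normalization is simplified to a single effective σ_b²; the paper derives the full per-redshift-bin covariance):

```python
import numpy as np
import healpy as hp

def sigma_b2(mask, cl_matter):
    """Schematic variance of the background mode within a survey mask:
    sigma_b^2 ~ (1/Omega_W^2) * sum_l (2l+1) * C_l^mask * C_l^matter,
    with C_l^mask the angular power spectrum of the (0/1) mask."""
    cl_mask = hp.anafast(mask)                 # mask power spectrum
    lmax = min(len(cl_mask), len(cl_matter))
    ell = np.arange(lmax)
    omega_w = np.mean(mask) * 4.0 * np.pi      # effective survey area [sr]
    return np.sum((2 * ell + 1) * cl_mask[:lmax] * cl_matter[:lmax]) / omega_w**2
```

Punching holes in the mask (bright stars, quality cuts) alters cl_mask mainly at high multipoles, where the projected matter power is small, which is consistent with the finding above that holes essentially rescale the SSC by the effective area.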


Author(s):  
Ming Cao ◽  
Qinke Peng ◽  
Ze-Gang Wei ◽  
Fei Liu ◽  
Yi-Fan Hou

The development of high-throughput technologies has produced increasing amounts of sequence data and an increasing need for efficient clustering algorithms that can process massive volumes of sequencing data for downstream analysis. Heuristic clustering methods are widely applied for sequence clustering because of their low computational complexity. Although numerous heuristic clustering methods have been developed, they suffer from two limitations: overestimation of inferred clusters and low clustering sensitivity. To address these issues, we present a new sequence clustering method (edClust) based on Edlib, a C/C++ library for fast, exact semi-global sequence alignment, to group similar sequences. edClust was tested on three large-scale sequence databases, and we compared it to several classic heuristic clustering methods, such as UCLUST, CD-HIT, and VSEARCH. Evaluations based on the metrics of cluster number and seed sensitivity (SS) demonstrate that edClust produces fewer clusters than the other methods and that its SS is higher than theirs. The source code of edClust is available from https://github.com/zhang134/EdClust.git under the GNU GPL license.
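
To make the greedy heuristic concrete, here is a minimal UCLUST/CD-HIT-style clustering loop using the Edlib Python bindings (an illustration of the general seed-based strategy, not edClust's actual algorithm; the threshold and ordering choices are ours):

```python
import edlib  # Python bindings of the Edlib C/C++ alignment library

def greedy_cluster(seqs, max_dist_frac=0.05):
    """Greedy seed-based clustering: process sequences longest-first;
    join the first seed within the edit-distance threshold, otherwise
    found a new cluster."""
    seeds, clusters = [], []
    for s in sorted(seqs, key=len, reverse=True):
        for i, seed in enumerate(seeds):
            k = int(max_dist_frac * len(s))
            # semi-global ("HW") alignment: s may match anywhere inside seed;
            # k caps the edit distance so Edlib can bail out early (-1 = exceeded)
            if edlib.align(s, seed, mode="HW", task="distance", k=k)["editDistance"] != -1:
                clusters[i].append(s)
                break
        else:
            seeds.append(s)
            clusters.append([s])
    return clusters
```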

