Towards testing the theory of gravity with DESI: summary statistics, model predictions and future simulation requirements

2021 ◽  
Vol 2021 (11) ◽  
pp. 050
Author(s):  
Shadab Alam ◽  
Christian Arnold ◽  
Alejandro Aviles ◽  
Rachel Bean ◽  
Yan-Chuan Cai ◽  
...  

Abstract Shortly after its discovery, General Relativity (GR) was applied to predict the behavior of our Universe on the largest scales, and later became the foundation of modern cosmology. Its validity has been verified on a range of scales and environments from the Solar system to merging black holes. However, experimental confirmations of GR on cosmological scales have so far lacked the accuracy one would hope for — its applications on those scales being largely based on extrapolation and its validity there sometimes questioned in the shadow of the discovery of the unexpected cosmic acceleration. Future astronomical instruments surveying the distribution and evolution of galaxies over substantial portions of the observable Universe, such as the Dark Energy Spectroscopic Instrument (DESI), will be able to measure the fingerprints of gravity, and their statistical power will allow strong constraints on alternatives to GR. In this paper, based on a set of N-body simulations and mock galaxy catalogs, we study the predictions of a number of traditional and novel summary statistics beyond linear redshift distortions in two well-studied modified gravity models — chameleon f(R) gravity and a braneworld model — and the potential of testing these deviations from GR using DESI. These summary statistics employ a wide array of statistical properties of the galaxy and the underlying dark matter field, including two-point and higher-order statistics, environmental dependence, redshift space distortions and weak lensing. We find that they hold promising power for testing GR to unprecedented precision. The major future challenge is to make realistic, simulation-based mock galaxy catalogs for both GR and alternative models to fully exploit the statistical power of the DESI survey (by matching the volumes and galaxy number densities of the mocks to those in the real survey) and to better understand the impact of key systematic effects. Using these, we identify future simulation and analysis needs for gravity tests using DESI.

2020 ◽  
Vol 643 ◽  
pp. A70 ◽  
Author(s):  
I. Tutusaus ◽  
M. Martinelli ◽  
V. F. Cardone ◽  
S. Camera ◽  
S. Yahia-Cherif ◽  
...  

Context. The data from the Euclid mission will enable the measurement of the angular positions and weak lensing shapes of over a billion galaxies, with their photometric redshifts obtained together with ground-based observations. This large dataset, with well-controlled systematic effects, will allow for cosmological analyses using the angular clustering of galaxies (GCph) and cosmic shear (WL). For Euclid, these two cosmological probes will not be independent because they will probe the same volume of the Universe. The cross-correlation (XC) between these probes can tighten constraints, so it is important to quantify its impact for Euclid. Aims. In this study, we therefore extend the recently published Euclid forecasts by carefully quantifying the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim to decipher the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias, intrinsic alignments (IAs), and knowledge of the redshift distributions. Methods. We follow the Fisher matrix formalism and make use of previously validated codes. We also investigate a different galaxy bias model, which was obtained from the Flagship simulation, and additional photometric-redshift uncertainties; we also elucidate the impact of including the XC terms on constraining the latter. Results. Starting with a baseline model, we show that the XC terms reduce the uncertainties on galaxy bias by ∼17% and the uncertainties on IA by a factor of about four. The XC terms also help in constraining the γ parameter for minimal modified gravity models. Concerning galaxy bias, we observe that the role of the XC terms on the final parameter constraints is qualitatively the same irrespective of the specific galaxy-bias model used.
For IA, we show that the XC terms can help in distinguishing between different models, and that neglecting the IA terms can lead to significant biases on the cosmological parameters. Finally, we show that the XC terms can lead to a better determination of the mean of the photometric galaxy distributions. Conclusions. We find that the XC between GCph and WL within the Euclid survey is necessary to extract the full information content from the data in future analyses. These terms help in better constraining the cosmological model, and also lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC significantly helps in constraining the mean of the photometric-redshift distributions, but, at the same time, it requires more precise knowledge of this mean with respect to single probes in order not to degrade the final “figure of merit”.
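The Fisher-matrix forecasting referred to above can be illustrated with a minimal sketch. The two-parameter linear model, data points, and error value below are hypothetical stand-ins, not the Euclid GCph/WL likelihood; the point is only how marginalised forecasts follow from the Fisher matrix:

```python
import numpy as np

# Toy Fisher forecast for a 2-parameter linear model mu(x) = p0 + p1*x
# (hypothetical observable; NOT the Euclid probes).
x = np.linspace(0.0, 1.0, 50)
sigma = 0.1                              # assumed constant Gaussian error
derivs = np.stack([np.ones_like(x), x])  # dmu/dp0 and dmu/dp1 at each point
fisher = derivs @ derivs.T / sigma**2    # F_ij = sum_k (dmu_i * dmu_j) / sigma^2

# Marginalised 1-sigma forecasts: sqrt of the diagonal of the inverse Fisher.
cov = np.linalg.inv(fisher)
marginalised = np.sqrt(np.diag(cov))
# Conditional errors (all other parameters held fixed) are always tighter:
conditional = 1.0 / np.sqrt(np.diag(fisher))
print(marginalised, conditional)
```

For independent probes the joint forecast is just the sum of the individual Fisher matrices; cross-correlation terms like XC instead modify the joint data covariance, which is why they can shrink the marginalised errors and the errors on nuisance parameters.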


2020 ◽  
Vol 497 (1) ◽  
pp. 210-228
Author(s):  
J Sánchez ◽  
C W Walter ◽  
H Awan ◽  
J Chiang ◽  
S F Daniel ◽  
...  

ABSTRACT Data Challenge 1 (DC1) is the first synthetic data set produced by the Rubin Observatory Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration (DESC). DC1 is designed to develop and validate data reduction and analysis pipelines and to study the impact of systematic effects that will affect the LSST data set. DC1 comprises r-band observations of 40 deg² to 10 yr LSST depth. We present each stage of the simulation and analysis process: (a) generation, by synthesizing sources from cosmological N-body simulations in individual sensor-visit images with different observing conditions; (b) reduction using a development version of the LSST Science Pipelines; and (c) matching to the input cosmological catalogue for validation and testing. We verify that testable LSST requirements pass within the fidelity of DC1. We establish a selection procedure that produces a sufficiently clean extragalactic sample for clustering analyses, and we discuss residual sample contamination, including contributions from inefficiency in star–galaxy separation and imperfect deblending. We compute the galaxy power spectrum on the simulated field and conclude that: (i) survey properties have an impact at the level of 50 per cent of the statistical uncertainty for the scales and models used in DC1; (ii) a selection to eliminate artefacts in the catalogues is necessary to avoid biases in the measured clustering; and (iii) the presence of bright objects has a significant impact (2σ–6σ) on the estimated power spectra at small scales (ℓ > 1200), highlighting the impact of blending in studies at small angular scales in LSST.


2019 ◽  
Author(s):  
Curtis David Von Gunten ◽  
Bruce D Bartholow

A primary psychometric concern with laboratory-based inhibition tasks has been their reliability. However, a reliable measure may not be necessary or sufficient for reliably detecting effects (statistical power). The current study used a bootstrap sampling approach to systematically examine how the number of participants, the number of trials, the magnitude of an effect, and study design (between- vs. within-subject) jointly contribute to power in five commonly used inhibition tasks. The results demonstrate the shortcomings of relying solely on measurement reliability when determining the number of trials to use in an inhibition task: high internal reliability can be accompanied by low power, and low reliability can be accompanied by high power. For instance, adding trials once sufficient reliability has been reached can result in large gains in power. The dissociation between reliability and power was particularly apparent in between-subject designs, where the number of participants contributed greatly to power but little to reliability, and where the number of trials contributed greatly to reliability but only modestly (depending on the task) to power. For between-subject designs, the probability of detecting small-to-medium-sized effects with 150 participants (total) was generally less than 55%. However, effect size was positively associated with the number of trials. Thus, researchers have some control over effect size, and this needs to be considered when conducting power analyses using analytic methods that take such effect sizes as an argument. Results are discussed in the context of recent claims regarding the role of inhibition tasks in experimental and individual difference designs.
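The interplay of participants, trials, and effect size described above can be sketched with a Monte-Carlo power simulation in the spirit of the study's bootstrap approach. All numbers below are illustrative (a toy within-subject difference score with a normal-approximation test), not the study's actual tasks or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def power_estimate(n_subjects, n_trials, effect=0.3, n_sim=1000, alpha=0.05):
    """Monte-Carlo power: fraction of simulated experiments in which a
    one-sample test on subject-mean difference scores is significant.
    Uses a normal approximation (z-test) to avoid extra dependencies."""
    z_crit = 1.96  # two-sided critical value for alpha = 0.05
    hits = 0
    for _ in range(n_sim):
        # each subject: true effect + between-subject noise + trial-averaged noise;
        # more trials shrink the trial-noise term, inflating the effective effect size
        subj = effect + rng.normal(0.0, 1.0, n_subjects) \
                      + rng.normal(0.0, 1.0, (n_subjects, n_trials)).mean(axis=1)
        z = subj.mean() / (subj.std(ddof=1) / np.sqrt(n_subjects))
        hits += abs(z) > z_crit
    return hits / n_sim

# power grows with participants even when trial count (and hence the
# reliability-driven part of the effect size) is held fixed:
print(power_estimate(30, 10), power_estimate(120, 10))
```

Varying `n_trials` while holding `n_subjects` fixed shows the complementary pattern: trials mainly shrink measurement noise, which helps reliability a lot but power only modestly once subject-level variance dominates.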


2018 ◽  
Vol 613 ◽  
pp. A15 ◽  
Author(s):  
Patrick Simon ◽  
Stefan Hilbert

Galaxies are biased tracers of the matter density on cosmological scales. For future tests of galaxy models, we refine and assess a method to measure galaxy biasing as a function of physical scale k with weak gravitational lensing. This method enables us to reconstruct the galaxy bias factor b(k) as well as the galaxy-matter correlation r(k) on spatial scales 0.01 h Mpc−1 ≲ k ≲ 10 h Mpc−1 for redshift-binned lens galaxies below redshift z ≲ 0.6. In the refinement, we account for an intrinsic alignment of source ellipticities, and we correct for the magnification bias of the lens galaxies, relevant for the galaxy-galaxy lensing signal, to improve the accuracy of the reconstructed r(k). For simulated data, the reconstructions achieve an accuracy of 3–7% (68% confidence level) over the above k-range for a survey area and a typical depth of contemporary ground-based surveys. Realistically the accuracy is, however, probably reduced to about 10–15%, mainly by systematic uncertainties in the assumed intrinsic source alignment, the fiducial cosmology, and the redshift distributions of lens and source galaxies (in that order). Furthermore, our reconstruction technique employs physical templates for b(k) and r(k) that elucidate the impact of central galaxies and the halo-occupation statistics of satellite galaxies on the scale-dependence of galaxy bias, which we discuss in the paper. In a first demonstration, we apply this method to previous measurements in the Garching-Bonn Deep Survey and give a physical interpretation of the lens population.
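The bias functions discussed above are conventionally defined from the galaxy, matter, and cross power spectra as b(k) = √(Pgg/Pmm) and r(k) = Pgm/√(Pgg·Pmm). A minimal sketch with toy input spectra (illustrative arrays, not the paper's lensing-based estimator):

```python
import numpy as np

def bias_and_correlation(p_gg, p_mm, p_gm):
    """Scale-dependent galaxy bias b(k) and galaxy-matter correlation r(k)
    from power spectra sampled on a common grid of k values."""
    b = np.sqrt(p_gg / p_mm)
    r = p_gm / np.sqrt(p_gg * p_mm)
    return b, r

# toy example: a constant bias of 2 with perfect galaxy-matter correlation
p_mm = np.array([10.0, 5.0, 1.0])
b, r = bias_and_correlation(4.0 * p_mm, p_mm, 2.0 * p_mm)
print(b, r)  # -> [2. 2. 2.] [1. 1. 1.]
```

A scale-dependent b(k) or r(k) < 1 would then signal, for example, the halo-occupation effects of satellite galaxies that the paper's templates are designed to capture.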


2020 ◽  
Vol 15 (S359) ◽  
pp. 188-189
Author(s):  
Daniela Hiromi Okido ◽  
Cristina Furlanetto ◽  
Marina Trevisan ◽  
Mônica Tergolina

Abstract Galaxy groups offer an important perspective on how the large-scale structure of the Universe has formed and evolved, being great laboratories to study the impact of the environment on the evolution of galaxies. We aim to investigate the properties of a galaxy group that is gravitationally lensing HELMS18, a submillimeter galaxy at z = 2.39. We obtained multi-object spectroscopy data using Gemini-GMOS to investigate the stellar kinematics of the central galaxies, determine its members and obtain the mass, radius and the numerical density profile of this group. Our final goal is to build a complete description of this galaxy group. In this work we present an analysis of its two central galaxies: one is an active galaxy with z = 0.59852 ± 0.00007, while the other is a passive galaxy with z = 0.6027 ± 0.0002. Furthermore, the difference between the redshifts obtained using emission and absorption lines indicates an outflow of gas with velocity v = 278.0 ± 34.3 km/s relative to the galaxy.
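The outflow velocity quoted above follows from the redshift offset between emission and absorption lines via v ≈ c·Δz/(1+z). A minimal sketch; the two line redshifts below are hypothetical, since the abstract does not list the individual emission- and absorption-line values:

```python
C_KM_S = 299792.458  # speed of light in km/s

def line_of_sight_velocity(z_emission, z_absorption):
    """Velocity offset implied by two redshifts of the same object:
    v = c * (z_em - z_abs) / (1 + z_abs)."""
    return C_KM_S * (z_emission - z_absorption) / (1.0 + z_absorption)

# hypothetical line redshifts, for illustration only:
print(line_of_sight_velocity(0.60000, 0.59852))
```

A positive value (emission lines redshifted less than the absorption-line systemic redshift would give a negative one) corresponds to gas moving along the line of sight relative to the stars that produce the absorption features.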


Author(s):  
E Gaztanaga ◽  
S J Schmidt ◽  
M D Schneider ◽  
J A Tyson

Abstract We test the impact of some systematic errors in weak lensing magnification measurements with the COSMOS 30-band photo-z survey, flux limited to I_auto < 25.0, using correlations of both source galaxy counts and magnitudes. Systematic obscuration effects are measured by comparing counts and magnification correlations. We use the ACS-HST catalogs to identify potential blending objects (close pairs) and perform the magnification analyses with and without blended objects. We find that blending effects start to be important (∼0.04 mag obscuration) at angular scales smaller than 0.1 arcmin. Extinction and other systematic obscuration effects can be as large as 0.10 mag (U band) but are typically smaller than 0.02 mag, depending on the band. After applying these corrections, we measure a 3.9σ magnification signal that is consistent for both counts and magnitudes. The corresponding projected mass profile of galaxies at redshift z ≃ 0.6 (M_I ≃ −21) is Σ = 25 ± 6 M⊙ h/pc² at 0.1 Mpc/h, consistent with an NFW-type profile with M200 ≃ 2 × 10¹² M⊙/h. Tangential shear and flux–size magnification over the same lenses show similar mass profiles. We conclude that magnification from counts and fluxes using photometric redshifts has the potential to provide complementary weak lensing information in future wide field surveys once we carefully take into account systematic effects, such as obscuration and blending.


2002 ◽  
Vol 181 (1) ◽  
pp. 17-21 ◽  
Author(s):  
S. J. Ziguras ◽  
G. W. Stuart ◽  
A. C. Jackson

Background. Evidence on the impact of case management is contradictory. Aims. To discuss two different systematic reviews (one conducted by the authors and one conducted through the Cochrane Collaboration) that came to contradictory conclusions about the impact of case management in mental health services. Method. We summarised the findings of the two reviews with respect to case management effectiveness, examined key methodological differences between the two approaches and discussed the impact of these on the validity of the results. Results. The differences in conclusions between the two reviews result from differences in inclusion criteria, namely non-randomised trials, data from unpublished scales and data from variables with skewed distributions. The theoretical and empirical effects of these are discussed. Conclusions. Systematic reviewers may face a trade-off between the application of strict criteria for the inclusion of studies and the amount of data available for analysis, and hence statistical power. The available research suggests that case management is generally effective.


2016 ◽  
Vol 458 (4) ◽  
pp. 3478-3478 ◽  
Author(s):  
Alice Mortlock ◽  
Christopher. J. Conselice ◽  
William G. Hartley ◽  
Ken Duncan ◽  
Caterina Lani ◽  
...  

2021 ◽  
Vol 504 (2) ◽  
pp. 2224-2234
Author(s):  
Nan Li ◽  
Christoph Becker ◽  
Simon Dye

ABSTRACT Measurements of the Hubble–Lemaître constant from early- and local-Universe observations show a significant discrepancy. In an attempt to understand the origin of this mismatch, independent techniques to measure H0 are required. One such technique, strong lensing time delays, is set to become a leading contender amongst the myriad methods due to forthcoming large strong lens samples. It is therefore critical to understand the systematic effects inherent in this method. In this paper, we quantify the influence of additional structures along the line of sight by adopting realistic light-cones derived from the cosmoDC2 semi-analytical extragalactic catalogue. Using multiple-lens plane ray tracing to create a set of simulated strong lensing systems, we have investigated the impact of line-of-sight structures on time-delay measurements and, in turn, on the inferred value of H0. We have also tested the reliability of existing procedures for correcting for line-of-sight effects. We find that if the integrated contribution of the line-of-sight structures is close to a uniform mass sheet, the bias in H0 can be adequately corrected by including a constant external convergence κext in the lens model. However, for realistic line-of-sight structures comprising many galaxies at different redshifts, this simple correction overestimates the bias by an amount that depends linearly on the median external convergence. We therefore conclude that lens modelling must incorporate multiple-lens planes to account for line-of-sight structures for accurate and precise inference of H0.
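The uniform-mass-sheet correction mentioned above amounts to a simple rescaling: a constant external convergence κext inflates the inferred time-delay distance by 1/(1 − κext), and since H0 scales inversely with that distance, the standard correction is H0 = (1 − κext)·H0,model. A minimal sketch with illustrative numbers:

```python
def correct_h0(h0_model, kappa_ext):
    """Standard mass-sheet correction for time-delay cosmography:
    D_dt,true = D_dt,model / (1 - kappa_ext) and H0 ∝ 1/D_dt,
    so H0_true = (1 - kappa_ext) * H0_model."""
    return (1.0 - kappa_ext) * h0_model

# e.g. a 2% external convergence lowers a naive 74 km/s/Mpc inference:
print(correct_h0(74.0, 0.02))  # -> 72.52
```

The paper's point is that when the line-of-sight structure is not well approximated by a single uniform sheet, this one-number correction misestimates the bias, and multiple-lens-plane modelling is needed instead.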

