Cosmological impact of redshift drift measurements

2021 ◽ Vol 508 (1) ◽ pp. L53-L57
Author(s): J Esteves, C J A P Martins, B G Pereira, C S Alves

ABSTRACT The redshift drift is a model-independent probe of fundamental cosmology, but by choosing a fiducial model one can also use it to constrain that model's parameters. We compare the constraining power of redshift drift measurements by the Extremely Large Telescope (ELT), as studied by Liske et al., with that of two recently proposed alternatives: the cosmic accelerometer of Eikenberry et al. and the differential redshift drift of Cooke. We find that the cosmic accelerometer with a 6-yr baseline leads to constraints 60 per cent weaker than those of the ELT; however, with identical time baselines it outperforms the ELT by up to a factor of 6. The differential redshift drift always performs worse than the standard approach if the goal is to constrain the matter density, but it can perform significantly better if the goal is to constrain the dark energy equation of state. Our results show that accurately measuring the redshift drift and using these measurements to constrain cosmological parameters are different merit functions: an experiment optimized for one of them will not be optimal for the other. These non-trivial trade-offs must be kept in mind as next-generation instruments enter their final design and construction phases.
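
As a rough illustration of the signal these experiments target, the sketch below evaluates the standard Sandage-Loeb expression, dz/dt = (1+z)H0 - H(z), for flat ΛCDM. The fiducial values (H0 = 70 km/s/Mpc, Ωm = 0.3), the 20-yr baseline, and the helper names are assumptions for illustration, not the paper's choices.

```python
# Minimal sketch: spectroscopic redshift drift (Sandage-Loeb signal) in flat LambdaCDM.
# Fiducial parameter values and helper names are illustrative, not taken from the paper.
import numpy as np

C_KM_S = 299792.458            # speed of light [km/s]
KM_PER_MPC = 3.0857e19
SEC_PER_YR = 3.1557e7

def hubble(z, h0=70.0, om=0.3):
    """H(z) for flat LambdaCDM, in km/s/Mpc."""
    return h0 * np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

def drift_velocity_cm_s(z, baseline_yr, h0=70.0, om=0.3):
    """Apparent velocity drift Delta v = c Delta z / (1+z), with
    Delta z = [(1+z) H0 - H(z)] Delta t (the Sandage-Loeb expression)."""
    dz_dt = ((1.0 + z) * h0 - hubble(z, h0, om)) / KM_PER_MPC   # [1/s]
    delta_z = dz_dt * baseline_yr * SEC_PER_YR
    return C_KM_S * 1.0e5 * delta_z / (1.0 + z)                 # [cm/s]

for z in (1.0, 2.5, 4.0):
    print(f"z = {z}: Delta v ~ {drift_velocity_cm_s(z, 20):+.1f} cm/s over a 20-yr baseline")
```

The signal amounts to a few to ten cm/s per decade-scale baseline, which is why instrument stability drives these forecasts.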

2019 ◽ Vol 488 (3) ◽ pp. 3607-3624
Author(s): C S Alves, A C O Leite, C J A P Martins, J G B Matos, T A Silva

ABSTRACT Cosmological observations usually map our present-day past light cone. However, it is also possible to compare different past light cones. This is the concept behind the redshift drift, a model-independent probe of fundamental cosmology. In simple physical terms, this effectively allows us to watch the Universe expand in real time. While current facilities only allow sensitivities several orders of magnitude worse than the expected signal, it should be possible to detect it with forthcoming ones. Here, we discuss the potential impact of measurements by three such facilities: the Extremely Large Telescope (the subject of most existing redshift drift forecasts), but also the Square Kilometre Array and intensity mapping experiments. For each of these we assume the measurement sensitivities estimated, respectively, in Liske et al. (2008), Klöckner et al. (2015), and Yu, Zhang & Pen (2014). We focus on the role of these measurements in constraining dark energy scenarios, highlighting the fact that although on their own they yield comparatively weak constraints, they probe regions of parameter space that are typically different from those probed by other experiments, and do so in a redshift-dependent way. Specifically, we quantify how combinations of several redshift drift measurements at different redshifts, or combinations of redshift drift measurements with those from other canonical cosmological probes, can constrain some representative dark energy models. Our conclusion is that a model-independent mapping of the expansion of the Universe from redshift z = 0 to z = 4 – a challenging but feasible goal for the next generation of astrophysical facilities – can have a significant impact on fundamental cosmology.
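
To make concrete how the drift depends on the dark energy scenario, the following sketch extends the expression above to the common CPL parametrization w(z) = w0 + wa z/(1+z). The parameter values and the `drift_cm_s` helper are illustrative assumptions, not the fiducial choices of these forecasts.

```python
# Sketch: response of the redshift drift to a CPL dark energy equation of state,
# w(z) = w0 + wa * z / (1 + z). Parameter values are illustrative only.
import numpy as np

C_CM_S = 2.99792458e10          # speed of light [cm/s]
H0_PER_YR = 7.16e-11            # ~70 km/s/Mpc expressed in 1/yr (assumed fiducial)

def e_cpl(z, om=0.3, w0=-1.0, wa=0.0):
    """E(z) = H(z)/H0 for a flat universe with CPL dark energy."""
    rho_de = (1.0 - om) * (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    return np.sqrt(om * (1.0 + z) ** 3 + rho_de)

def drift_cm_s(z, years, **kw):
    """Velocity drift Delta v = c H0 Delta t [1 - E(z)/(1+z)] accumulated over `years`."""
    return C_CM_S * H0_PER_YR * years * (1.0 - e_cpl(z, **kw) / (1.0 + z))

z = np.array([0.5, 1.0, 2.0, 4.0])
print("LCDM             :", np.round(drift_cm_s(z, 20), 2))
print("w0 = -0.9        :", np.round(drift_cm_s(z, 20, w0=-0.9), 2))
print("w0 = -1, wa = 0.3:", np.round(drift_cm_s(z, 20, wa=0.3), 2))
```

Comparing the rows shows where in redshift the drift is most sensitive to w0 and wa, which is the kind of redshift dependence the abstract refers to.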


2021 ◽ Vol 502 (4) ◽ pp. 6140-6156
Author(s): Narayan Khadka, Bharat Ratra

ABSTRACT We use six different cosmological models to study the recently released compilation of X-ray and UV flux measurements of 2038 quasars (QSOs), which span the redshift range 0.009 ≤ z ≤ 7.5413. We find, for the full QSO data set, that the parameters of the X-ray to UV luminosity (LX−LUV) relation used to standardize these QSOs depend on the cosmological model used to determine them; i.e. it appears that the full QSO data set includes QSOs that are not standardized and so cannot be used for the purpose of constraining cosmological parameters. Subsets of the QSO data, restricted to redshifts z ≲ 1.5–1.7, obey the LX−LUV relation in a cosmological-model-independent manner and so can be used to constrain cosmological parameters. The cosmological constraints from these lower-z, smaller QSO data subsets are mostly consistent with, but significantly weaker than, those that follow from baryon acoustic oscillation and Hubble parameter measurements.
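
The standardization test described here amounts to fitting log LX = γ log LUV + β, with luminosities built from fluxes through a cosmology-dependent luminosity distance, and checking whether (γ, β) stay stable across redshift. The sketch below shows the mechanics only: the QSO "data" are synthetic placeholders generated to obey the relation (so both bins recover the input by construction), unlike the real compilation analysed in the paper.

```python
# Sketch of the standardization test: fit log L_X = gamma * log L_UV + beta for an
# assumed cosmology and compare (gamma, beta) between redshift bins.
# The QSO sample below is synthetic and purely illustrative.
import numpy as np

C_KM_S = 299792.458
CM_PER_MPC = 3.0857e24

def lum_distance_cm(z, h0=70.0, om=0.3, n=2000):
    """Luminosity distance [cm] in flat LambdaCDM via a simple trapezoidal integral."""
    zg = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(om * (1.0 + zg) ** 3 + (1.0 - om))
    dc = (C_KM_S / h0) * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zg))  # [Mpc]
    return (1.0 + z) * dc * CM_PER_MPC

def fit_gamma_beta(z, log_fx, log_fuv, **cosmo):
    """Convert fluxes to luminosities with the assumed cosmology, then least-squares fit."""
    log_4pidl2 = np.array([np.log10(4.0 * np.pi * lum_distance_cm(zi, **cosmo) ** 2) for zi in z])
    gamma, beta = np.polyfit(log_fuv + log_4pidl2, log_fx + log_4pidl2, 1)
    return gamma, beta

# Synthetic QSOs obeying the relation with gamma = 0.6 (all numbers are placeholders)
rng = np.random.default_rng(0)
z = rng.uniform(0.1, 5.0, 500)
log_luv = rng.normal(30.5, 0.5, 500)
log_lx = 0.6 * log_luv + 8.0 + rng.normal(0.0, 0.23, 500)
log_4pidl2 = np.array([np.log10(4.0 * np.pi * lum_distance_cm(zi) ** 2) for zi in z])
low = z < 1.6
for label, mask in (("z < 1.6", low), ("z > 1.6", ~low)):
    g, b = fit_gamma_beta(z[mask], log_lx[mask] - log_4pidl2[mask], log_luv[mask] - log_4pidl2[mask])
    print(f"{label}: gamma ~ {g:.2f}, beta ~ {b:.1f}")
```

In the paper's analysis, a drift of the fitted (γ, β) with redshift or with the assumed cosmology is the signature that the high-z QSOs are not standardized.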


2020 ◽ Vol 499 (4) ◽ pp. 4905-4917
Author(s): S Contreras, R E Angulo, M Zennaro, G Aricò, M Pellejero-Ibañez

ABSTRACT Predicting the spatial distribution of objects as a function of cosmology is an essential ingredient for the exploitation of future galaxy surveys. In this paper, we show that a specially designed suite of gravity-only simulations together with cosmology-rescaling algorithms can provide the clustering of dark matter, haloes, and subhaloes with high precision. Specifically, with only three N-body simulations, we obtain the power spectrum of dark matter at z = 0 and 1 to better than 3 per cent precision for essentially all currently viable values of eight cosmological parameters, including massive neutrinos and dynamical dark energy, and over the whole range of scales explored, 0.03 < $k/(h\,{\rm Mpc}^{-1})$ < 5. This precision holds at the same level for mass-selected haloes and for subhaloes selected according to their peak maximum circular velocity. As an initial application of these predictions, we successfully constrain Ωm, σ8, and the scatter in subhalo abundance matching employing the projected correlation function of mock SDSS galaxies.
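
The observable used in the final application, the projected correlation function, is obtained by integrating the 3D correlation function along the line of sight, wp(rp) = 2 ∫₀^πmax ξ(√(rp² + π²)) dπ. Here is a minimal sketch of that projection step; the power-law ξ(r), the πmax value, and the rp grid are illustrative stand-ins, not the paper's mock measurements.

```python
# Sketch: projected correlation function w_p(r_p) from a tabulated 3D xi(r),
# w_p(r_p) = 2 * integral_0^{pi_max} xi( sqrt(r_p^2 + pi^2) ) d(pi).
# The power-law xi(r) below is a placeholder for clustering measured from mocks.
import numpy as np

def wp_from_xi(rp, xi_func, pi_max=60.0, n=400):
    """Project xi(r) along the line of sight out to pi_max (Mpc/h)."""
    los = np.linspace(0.0, pi_max, n)                  # line-of-sight separations
    r = np.sqrt(rp[:, None] ** 2 + los[None, :] ** 2)  # 3D separation grid
    integrand = xi_func(r)
    # trapezoidal rule along the line of sight, times 2 for both directions
    return 2.0 * np.sum(0.5 * (integrand[:, 1:] + integrand[:, :-1]) * np.diff(los), axis=1)

# Placeholder xi(r): power law with r0 = 5 Mpc/h and slope -1.8, purely illustrative
xi_pl = lambda r: (r / 5.0) ** (-1.8)

rp = np.logspace(-0.5, 1.3, 10)                        # ~0.3 - 20 Mpc/h
for r_, w_ in zip(rp, wp_from_xi(rp, xi_pl)):
    print(f"r_p = {r_:6.2f} Mpc/h   w_p = {w_:8.2f} Mpc/h")
```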


2021 ◽ Vol 21 (8) ◽ pp. 2447-2460
Author(s): Stuart R. Mead, Jonathan Procter, Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
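
To give a concrete sense of the fuzzy-set idea, the sketch below builds a distance-based membership around a binary flow footprint and scores simulated against observed maps with a simple fuzzy intersection-over-union. The membership form, the length scale L, the cell size and the metric name are illustrative assumptions, not the exact formulation used in the paper.

```python
# Sketch of a fuzzy map comparison: credit a simulated inundation cell as agreeing with the
# observation if an observed cell lies within a length scale L, via a distance-based
# membership function. Shapes, membership form and metric are illustrative assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt

def fuzzy_membership(binary_map, length_scale, cell_size=10.0):
    """Membership in [0, 1]: 1 inside the footprint, decaying linearly to 0 at distance L."""
    dist = distance_transform_edt(binary_map == 0) * cell_size  # distance to nearest footprint cell
    return np.clip(1.0 - dist / length_scale, 0.0, 1.0)

def fuzzy_fit(simulated, observed, length_scale, cell_size=10.0):
    """A simple fuzzy agreement score: fuzzy intersection over fuzzy union."""
    mu_sim = fuzzy_membership(simulated, length_scale, cell_size)
    mu_obs = fuzzy_membership(observed, length_scale, cell_size)
    return np.minimum(mu_sim, mu_obs).sum() / np.maximum(mu_sim, mu_obs).sum()

# Toy example: two offset square footprints on a 100 x 100 grid of 10 m cells
sim = np.zeros((100, 100), dtype=int); sim[40:60, 40:60] = 1
obs = np.zeros((100, 100), dtype=int); obs[43:63, 43:63] = 1
for L in (0.0, 30.0, 100.0):
    print(f"L = {L:5.1f} m   fuzzy fit = {fuzzy_fit(sim, obs, max(L, 1e-9)):.3f}")
```

Increasing L raises the agreement score while coarsening the effective resolution, which is the trade-off the abstract describes.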


Author(s): Tomáš Gedeon, Lisa Davis, Katelyn Weber, Jennifer Thorenson

In this paper, we study the limitations imposed on the transcription process by the presence of short ubiquitous pauses and crowding. These effects are especially pronounced in highly transcribed genes such as ribosomal genes (rrn) in fast growing bacteria. Our model indicates that the quantity and duration of pauses reported for protein-coding genes are incompatible with the average elongation rate observed in rrn genes. When maximal elongation rate is high, pause-induced traffic jams occur, increasing promoter occlusion, thereby lowering the initiation rate. This lowers the average transcription rate and increases the average transcription time. Increasing maximal elongation rate in the model is insufficient to match the experimentally observed average elongation rate in rrn genes. This suggests that there may be rrn-specific modifications to RNAP, which then experiences fewer pauses, or pauses of shorter duration, than in protein-coding genes. We identify model parameter triples (maximal elongation rate, mean pause duration time, number of pauses) which are compatible with experimentally observed elongation rates. Average transcription time and average transcription rate are the model outputs investigated as proxies for cell fitness. These fitness functions are optimized for different parameter choices, opening up the possibility of differential control of these aspects of the elongation process, with potential evolutionary consequences. As an example, a gene’s average transcription time may be crucial to fitness when the surrounding medium is prone to abrupt changes. This paper demonstrates that a functional relationship among the model parameters can be estimated using a standard statistical analysis, and this functional relationship describes the various trade-offs that must be made in order for the gene to control the elongation process and achieve a desired average transcription time. It also demonstrates the robustness of the system when a range of maximal elongation rates can be balanced with transcriptional pause data in order to maintain a desired fitness.
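
A minimal, hedged sketch of the kind of exclusion-process model described here follows: RNAPs hop along a lattice, cannot overtake one another, occasionally enter short pauses, and a new RNAP can only load when the promoter-proximal site is free (promoter occlusion). All rates, the pause statistics, and the gene length are illustrative choices, not the paper's parameters or its analysis.

```python
# Toy stochastic simulation of elongation with pauses and promoter occlusion.
# Illustrative parameters only; not the model calibration used in the paper.
import random

def simulate(gene_sites=200, k_el=50.0, k_init=1.0, k_pause=0.2, pause_mean=1.0,
             t_end=200.0, dt=0.001, seed=1):
    rng = random.Random(seed)
    pos, pause_until, started = [], [], []   # per-RNAP position, pause release time, load time
    transit_times, t = [], 0.0
    while t < t_end:
        t += dt
        occupied = set(pos)
        # elongation: front-most RNAP first, so trailing ones see freshly vacated sites
        for i in sorted(range(len(pos)), key=lambda j: -pos[j]):
            if t < pause_until[i]:
                continue                                  # still paused
            if rng.random() < k_pause * dt:               # enter a new pause
                pause_until[i] = t + rng.expovariate(1.0 / pause_mean)
                continue
            nxt = pos[i] + 1
            if rng.random() < k_el * dt and nxt not in occupied:
                occupied.discard(pos[i]); pos[i] = nxt; occupied.add(nxt)
        # initiation, blocked while the promoter-proximal site is occupied
        if 0 not in occupied and rng.random() < k_init * dt:
            pos.append(0); pause_until.append(0.0); started.append(t)
        # retire RNAPs that have cleared the gene
        for i in reversed([j for j, p in enumerate(pos) if p >= gene_sites]):
            transit_times.append(t - started[i])
            pos.pop(i); pause_until.pop(i); started.pop(i)
    n = max(len(transit_times), 1)
    return len(transit_times) / t_end, sum(transit_times) / n

rate, mean_time = simulate()
print(f"average transcription rate ~ {rate:.2f} transcripts/s, mean transit time ~ {mean_time:.1f} s")
```

Raising k_pause or pause_mean in this toy lengthens the mean transit time and, through occlusion, lowers the realized initiation and transcription rates, mirroring the qualitative behaviour the abstract describes.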


2020 ◽ Vol 497 (1) ◽ pp. 263-278
Author(s): Narayan Khadka, Bharat Ratra

ABSTRACT Risaliti and Lusso have compiled X-ray and UV flux measurements of 1598 quasars (QSOs) in the redshift range 0.036 ≤ z ≤ 5.1003, part of which, z ∼ 2.4 − 5.1, is largely cosmologically unprobed. In this paper we use these QSO measurements, alone and in conjunction with baryon acoustic oscillation (BAO) and Hubble parameter [H(z)] measurements, to constrain cosmological parameters in six different cosmological models, each with two different Hubble constant priors. In most of these models, given the larger uncertainties, the QSO cosmological parameter constraints are mostly consistent with those from the BAO + H(z) data. A somewhat significant exception is the non-relativistic matter density parameter Ωm0 where QSO data favour Ωm0 ∼ 0.5 − 0.6 in most models. As a result, in joint analyses of QSO data with H(z) + BAO data the 1D Ωm0 distributions shift slightly towards larger values. A joint analysis of the QSO + BAO + H(z) data is consistent with the current standard model, spatially-flat ΛCDM, but mildly favours closed spatial hypersurfaces and dynamical dark energy. Since the higher Ωm0 values favoured by QSO data appear to be associated with the z ∼ 2 − 5 part of these data, and conflict somewhat with strong indications for Ωm0 ∼ 0.3 from most z < 2.5 data as well as from the cosmic microwave background anisotropy data at z ∼ 1100, in most models, the larger QSO data Ωm0 is possibly more indicative of an issue with the z ∼ 2 − 5 QSO data than of an inadequacy of the standard flat ΛCDM model.
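
To illustrate the mechanics of the H(z) part of such a fit, the sketch below constrains Ωm0 and H0 in flat ΛCDM by a grid chi-square. The "measurements" are synthetic points generated from a fiducial model inside the script, purely to show the procedure; they are not the H(z) or BAO compilations used in the paper.

```python
# Hedged sketch: grid chi-square constraint on (H0, Omega_m0) in flat LambdaCDM
# against synthetic H(z)-style data generated from a fiducial model.
import numpy as np

def hz_flat_lcdm(z, h0, om):
    return h0 * np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

rng = np.random.default_rng(42)
z_obs = np.linspace(0.1, 2.0, 15)
sigma = np.full_like(z_obs, 8.0)                                  # km/s/Mpc, assumed errors
h_obs = hz_flat_lcdm(z_obs, 70.0, 0.3) + rng.normal(0.0, sigma)   # synthetic "measurements"

om_grid = np.linspace(0.1, 0.6, 501)
h0_grid = np.linspace(60.0, 80.0, 201)
chi2 = np.array([[np.sum(((h_obs - hz_flat_lcdm(z_obs, h0, om)) / sigma) ** 2)
                  for om in om_grid] for h0 in h0_grid])
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"best fit: H0 ~ {h0_grid[i]:.1f} km/s/Mpc, Omega_m0 ~ {om_grid[j]:.3f}")
```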


2018 ◽ Vol 7 (5) ◽ pp. 120
Author(s): T. H. M. Abouelmagd

A new version of the Lomax model is introduced and studied. The major justification for the practicality of the new model is based on the wider use of the Lomax model. We are also motivated to introduce the new model since the density of the new distribution exhibits various important shapes such as the unimodal, the right skewed and the left skewed. The new model can be viewed as a mixture of the exponentiated Lomax distribution. It can also be considered as a suitable model for fitting the symmetric, left skewed, right skewed, and unimodal data sets. The maximum likelihood estimation method is used to estimate the model parameters. We demonstrate empirically the importance and flexibility of the new model in modeling two types of aircraft windshield lifetime data sets. The proposed lifetime model is much better than the gamma Lomax, exponentiated Lomax, Lomax and beta Lomax models, so the new distribution is a good alternative to these models in modeling aircraft windshield data.
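
The estimation step described here is ordinary maximum likelihood, and the model comparison is typically done with information criteria. A minimal sketch follows using the baseline Lomax (Pareto II) distribution available in SciPy; the extended distribution proposed in the paper is not implemented here, the competing models are stand-ins, and the lifetime sample is synthetic.

```python
# Sketch: maximum-likelihood fits of candidate lifetime distributions compared via AIC.
# Baseline Lomax only; synthetic data; not the paper's proposed extension or its data sets.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
lifetimes = stats.lomax.rvs(3.0, scale=2.5, size=150, random_state=rng)  # synthetic sample

def aic(dist, data, **fit_kwargs):
    params = dist.fit(data, **fit_kwargs)
    k = len(params) - len(fit_kwargs)                 # do not count parameters held fixed
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * k - 2 * loglik, params

for name, dist, kw in [("Lomax", stats.lomax, {"floc": 0}),
                       ("gamma", stats.gamma, {"floc": 0}),
                       ("Weibull", stats.weibull_min, {"floc": 0})]:
    score, params = aic(dist, lifetimes, **kw)
    print(f"{name:8s} AIC = {score:8.2f}  params = {np.round(params, 3)}")
```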


Author(s): Robert Reischke, Vincent Desjacques, Saleem Zaroubi

Abstract We use analytic computations to predict the power spectrum as well as the bispectrum of Cosmic Infrared Background (CIB) anisotropies. Our approach is based on the halo model and takes into account the mean luminosity-mass relation. The model is used to forecast the possibility of simultaneously constraining cosmological, CIB and halo occupation distribution (HOD) parameters in the presence of foregrounds. For the analysis we use eight frequency channels between 200 and 900 GHz, with survey specifications given by Planck and LiteBIRD. We explore the sensitivity to the model parameters up to multipoles of ℓ = 1000 using auto- and cross-correlations between the different frequency bands. With this setting, cosmological, HOD and CIB parameters can be constrained to a few percent. Galactic dust is modelled by a power law and the shot noise contribution by a frequency-dependent amplitude, both of which are marginalized over. We find that dust residuals in the CIB maps only marginally influence constraints on standard cosmological parameters. Furthermore, the bispectrum yields tighter constraints (by a factor of four in 1σ errors) on almost all model parameters, while the degeneracy directions are very similar to those of the power spectrum. The increase in sensitivity is most pronounced for the sum of the neutrino masses. Due to the similarity of the degeneracies, a combination of both analyses is not needed for most parameters. This, however, might be due to the simplified bias description generally adopted in such halo model approaches.
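
The forecasting machinery behind such constraints is a Fisher matrix, F_ij = Σ_ℓ (∂C_ℓ/∂θ_i) Cov_ℓ⁻¹ (∂C_ℓ/∂θ_j), built from derivatives of the predicted spectra. A hedged sketch of that step is given below; the toy C_ℓ model, the sky fraction, and the two parameters are placeholders, not the paper's halo-model CIB spectra or parameter set.

```python
# Sketch: Gaussian Fisher forecast from numerical derivatives of an angular power spectrum.
# Toy C_ell model and parameter values are placeholders for illustration only.
import numpy as np

ells = np.arange(2, 1001)

def cl_model(amplitude, tilt):
    """Toy angular power spectrum, standing in for a halo-model C_ell prediction."""
    return amplitude * (ells / 100.0) ** tilt

def fisher(theta, cl_func, fsky=0.7):
    """F_ij = sum_ell dC/dtheta_i dC/dtheta_j / Cov_ell with Cov_ell = 2 C_ell^2 / ((2l+1) fsky)."""
    cl0 = cl_func(*theta)
    cov = 2.0 * cl0 ** 2 / ((2.0 * ells + 1.0) * fsky)
    derivs = []
    for i, p in enumerate(theta):
        h = 0.01 * abs(p)                              # simple relative finite-difference step
        up, dn = list(theta), list(theta)
        up[i] += h; dn[i] -= h
        derivs.append((cl_func(*up) - cl_func(*dn)) / (2.0 * h))
    d = np.array(derivs)
    return (d / cov) @ d.T

F = fisher([1.0e-3, -1.2], cl_model)
sigmas = np.sqrt(np.diag(np.linalg.inv(F)))
print("1-sigma forecasts (amplitude, tilt):", sigmas)
```

The bispectrum analysis in the paper follows the same logic with a three-point covariance; the gain it reports comes from the extra, partly independent information in that statistic.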


Author(s): Özgür Şimşek

The lexicographic decision rule is one of the simplest methods of choosing among decision alternatives. It is based on a simple priority ranking of the attributes available. According to the lexicographic decision rule, a decision alternative is better than another alternative if and only if it is better than the other alternative in the most important attribute on which the two alternatives differ. In other words, the lexicographic decision rule does not allow trade-offs among the various attributes. For example, if quality is considered to be more important than cost, no difference in price can compensate for a difference in quality: The lexicographic decision rule chooses the item with the best quality regardless of the cost. Over the years, the lexicographic decision rule has been compared to various statistical learning methods, including multiple linear regression, support vector machines, decision trees, and random forests. The results show that the lexicographic decision rule can sometimes compete remarkably well with more complex statistical methods, and even outperform them, despite its naively simple structure. These results have stimulated a rich scientific literature on why, and under what conditions, lexicographic decision rules yield accurate decisions. Due to the simplicity of its decision process, its fast execution time, and the robustness of its performance in various decision environments, the lexicographic decision rule is considered to be a plausible model of human decision making. In particular, the lexicographic decision rule is put forward as a model of how the human mind implements bounded rationality to make accurate decisions when information is scarce, time is short, and computational capacity is limited.
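
Because the rule itself is so simple, it fits in a few lines. The sketch below implements the comparison exactly as described above, deciding on the first attribute in priority order on which two alternatives differ; the attribute names and values are invented for illustration.

```python
# Sketch: the lexicographic decision rule for two alternatives.
# Attribute names and values are illustrative only.
from typing import Mapping, Sequence, Optional

def lexicographic_choice(a: Mapping[str, float], b: Mapping[str, float],
                         priority: Sequence[str],
                         higher_is_better: Mapping[str, bool]) -> Optional[Mapping[str, float]]:
    """Return the preferred alternative, or None if they tie on every attribute."""
    for attr in priority:
        if a[attr] == b[attr]:
            continue                              # move on to the next most important attribute
        better_a = a[attr] > b[attr] if higher_is_better[attr] else a[attr] < b[attr]
        return a if better_a else b
    return None

# Example from the text: quality outranks cost, so no price difference offsets lower quality.
item_x = {"quality": 9.0, "cost": 120.0}
item_y = {"quality": 7.0, "cost": 40.0}
winner = lexicographic_choice(item_x, item_y,
                              priority=["quality", "cost"],
                              higher_is_better={"quality": True, "cost": False})
print("chosen:", winner)
```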

