Approximation methods in modified gravity models

2018, Vol. 27 (15), pp. 1848004
Author(s): Baojiu Li

We review some of the commonly used approximation methods to predict large-scale structure formation in modified gravity (MG) models for the cosmic acceleration. These methods are developed to speed up the often slow N-body simulations in these models, or directly make approximate predictions of relevant physical quantities. In both cases, they are orders of magnitude more efficient than full simulations, making it possible to explore and delineate the large cosmological parameter space. On the other hand, there is a wide variation of their accuracies and ranges of validity, and these are usually not known a priori and must be validated against simulations. Therefore, a combination of full simulations and approximation methods will offer both efficiency and reliability. The approximation methods are also important from a theoretical point of view, since they can often offer useful insight into the nonlinear physics in MG models and inspire new algorithms for simulations.
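One simple example of an approximate prediction of this kind (not necessarily one of the specific methods reviewed in the paper) is linear perturbation theory in the quasi-static limit, where the fifth force enters only through a scale-dependent effective Newton constant G_eff(k, a). The following is a minimal Python sketch for a Hu-Sawicki-like f(R) model; the form of the scalaron mass, the parameter values and the loose treatment of units are illustrative assumptions, not taken from the review.

```python
# Minimal sketch (not the paper's own code): scale-dependent linear growth in an
# f(R)-like model under the quasi-static approximation.  The Hu-Sawicki-style
# scalaron mass and all parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

Om, OL, fR0, h = 0.3, 0.7, 1e-5, 0.7    # assumed cosmology and |f_R0|
H0_over_c = h / 2998.0                  # H0/c in 1/Mpc

def E(a):                               # H(a)/H0 for a flat LCDM background
    return np.sqrt(Om * a**-3 + OL)

def m_scalaron(a):                      # scalaron mass in 1/Mpc (assumed Hu-Sawicki-like form, n = 1)
    return H0_over_c * np.sqrt((Om * a**-3 + 4*OL)**3 / (2*fR0 * (Om + 4*OL)**2))

def Geff_over_G(k, a):                  # quasi-static linear modification of gravity
    return 1.0 + (k**2 / 3.0) / (k**2 + a**2 * m_scalaron(a)**2)

def growth(k, modified=True):
    """Solve D''(a) + (3/a + E'/E) D' = (3 Om)/(2 a^5 E^2) * (Geff/G) * D."""
    def rhs(a, y):
        D, dD = y
        Ea = E(a)
        dE = -1.5 * Om * a**-4 / Ea
        source = 1.5 * Om / (a**5 * Ea**2)
        if modified:
            source *= Geff_over_G(k, a)
        return [dD, -(3.0/a + dE/Ea) * dD + source * D]
    a_i = 0.01                          # start deep in matter domination, D ~ a
    sol = solve_ivp(rhs, [a_i, 1.0], [a_i, 1.0], rtol=1e-8)
    return sol.y[0, -1]

for k in [0.01, 0.1, 1.0]:              # wavenumbers, treated loosely as 1/Mpc
    enhancement = growth(k) / growth(k, modified=False)
    print(f"k = {k:5.2f}: D_fR / D_LCDM at a = 1 ~ {enhancement:.3f}")
```

Even this crude sketch reproduces the qualitative behaviour described above: the growth enhancement is scale-dependent, appearing only below the Compton wavelength of the scalar field.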

2013, Vol. 22 (12), pp. 1342006
Author(s): Salvatore Capozziello, Tiberiu Harko, Francisco S. N. Lobo, Gonzalo J. Olmo

The nonequivalence between the metric and Palatini formalisms of f(R) gravity is an intriguing feature of these theories. However, in the recently proposed hybrid metric-Palatini gravity, consisting of the superposition of the metric Einstein–Hilbert Lagrangian with an f(ℛ) term constructed à la Palatini, the "true" gravitational field is described by the interpolation of these two nonequivalent approaches. The theory predicts the existence of a light long-range scalar field, which passes the local constraints and affects the galactic and cosmological dynamics. Thus, the theory opens new possibilities for a unified approach, in the same theoretical framework, to the problems of dark energy and dark matter, without distinguishing a priori matter and geometric sources, but taking their dynamics into account under the same standard.
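For reference, the construction described above has the schematic action commonly used in the hybrid metric-Palatini literature (conventions and factor placements may differ slightly from the paper):

```latex
S \;=\; \frac{1}{2\kappa^2}\int d^4x\,\sqrt{-g}\,\bigl[\,R + f(\mathcal{R})\,\bigr] \;+\; S_m\!\left(g_{\mu\nu},\psi\right),
\qquad
\mathcal{R} \;\equiv\; g^{\mu\nu}\,\mathcal{R}_{\mu\nu}(\hat\Gamma),
```

where R is the metric Ricci scalar, ℛ is built from an independent (Palatini) connection Γ̂, and the theory is dynamically equivalent to a scalar-tensor theory with scalar φ ≡ df/dℛ, which plays the role of the light long-range field mentioned in the abstract.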


Author(s): T. D. Kitching, F. Simpson, A. F. Heavens, A. N. Taylor

In this article, we review model selection predictions for modified gravity scenarios as an explanation for the observed acceleration of the expansion history of the Universe. We present analytical procedures for calculating expected Bayesian evidence values in two cases: (i) that modified gravity is a simple parametrized extension of general relativity (GR; two nested models), such that a Bayes factor can be calculated, and (ii) that we have a class of non-nested models where a rank-ordering of evidence values is required. We show that, in the case of a minimal modified gravity parametrization, we can expect large-area photometric and spectroscopic surveys, using three-dimensional cosmic shear and baryonic acoustic oscillations, to ‘decisively’ distinguish modified gravity models from GR (or vice versa), with odds of ≫1:100. It is apparent that the potential discovery space for modified gravity models is large, even in a simple extension in which Newton's constant G is allowed to vary as a function of time and length scale. On the time and length scales where dark energy dominates, it is only through large-scale cosmological experiments that we can hope to understand the nature of gravity.
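For the nested case (i), the Bayes factor can be illustrated with the Savage-Dickey density ratio; the sketch below uses toy Gaussian numbers and is not the paper's own analytical forecast machinery, which is survey-specific.

```python
# Minimal sketch of the nested-model case (i): for a one-parameter extension of GR
# (extra parameter mu, with GR recovered at mu = 0), the Bayes factor can be written
# as the Savage-Dickey density ratio
#     B_01 = p(mu = 0 | data, extended) / p(mu = 0 | prior, extended).
# The Gaussian prior/posterior widths below are illustrative assumptions, not the
# forecast values of the paper.
import numpy as np
from scipy.stats import norm

prior_width     = 1.0    # assumed prior width on the MG parameter mu
posterior_mean  = 0.3    # hypothetical best-fit shift away from GR
posterior_width = 0.05   # hypothetical forecast error from 3D cosmic shear + BAO

B01 = norm.pdf(0.0, posterior_mean, posterior_width) / norm.pdf(0.0, 0.0, prior_width)
print(f"B_01 = {B01:.3e}   (Bayes factor: GR vs extension)")
print(f"ln B_01 = {np.log(B01):.1f}   -> |ln B| > ln(100) ~ 4.6 is 'decisive' on the Jeffreys scale")
```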


2017, Vol. 16 (4-5), pp. 431-456
Author(s): Q. Leclère, A. Pereira, C. Bailly, J. Antoni, C. Picard

The problem of localizing and quantifying acoustic sources from a set of acoustic measurements has been addressed, over the last decades, by a huge number of scientists from different communities (signal processing, mechanics, physics) and in various application fields (underwater, aero- or vibro-acoustics). This has produced a substantial body of literature on the subject, together with many methods specifically adapted and optimized for each configuration and application field, the variety and sophistication of the proposed algorithms being sustained by the constant increase in computational and measurement capabilities. The counterpart of this prolific research is that it is difficult to obtain a clear global picture of the state of the art. The aim of the present work is to make an attempt in this direction, by proposing a unified formalism for different well-known imaging techniques, from identification methods (acoustic holography, equivalent sources, Bayesian focusing, Generalized inverse beamforming…) to beamforming deconvolution approaches (DAMAS, CLEAN). The hypotheses, advantages and pitfalls of each approach will be established from a theoretical point of view, with a particular effort to separate differences in the problem definition (a priori information, main assumptions) from differences in the algorithms used to find the solution. Numerical simulations will be proposed for different source configurations (coherent/incoherent/extended/sparse distributions), and an experimental illustration on a supersonic jet will finally be discussed.
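As a common reference point for the methods named above, here is a minimal Python sketch of conventional frequency-domain beamforming from a cross-spectral matrix; DAMAS and CLEAN deconvolve maps of exactly this kind. The array geometry, analysis frequency and source positions are invented for illustration.

```python
# Minimal sketch of conventional frequency-domain beamforming (delay-and-sum in the
# frequency domain) from a cross-spectral matrix.  Geometry and sources are made up.
import numpy as np

c, f = 343.0, 2000.0                     # speed of sound [m/s], analysis frequency [Hz]
k = 2 * np.pi * f / c                    # acoustic wavenumber

# Line array of 32 microphones and a 1-D grid of candidate source positions at z = 2 m
mics = np.stack([np.linspace(-0.5, 0.5, 32), np.zeros(32), np.zeros(32)], axis=1)
grid = np.stack([np.linspace(-1.0, 1.0, 201), np.zeros(201), np.full(201, 2.0)], axis=1)

def steering(src):
    """Free-field monopole Green's functions from one grid point to all microphones."""
    r = np.linalg.norm(mics - src, axis=1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# Synthetic cross-spectral matrix from two incoherent unit sources (+ tiny diagonal noise)
true_sources = [np.array([-0.3, 0.0, 2.0]), np.array([0.4, 0.0, 2.0])]
C = sum(np.outer(steering(s), steering(s).conj()) for s in true_sources)
C = C + 1e-10 * np.eye(len(mics))

# Beamforming map with normalized steering vectors (output in source-strength units)
bf_map = np.empty(len(grid))
for i, x in enumerate(grid):
    g = steering(x)
    w = g / (g.conj() @ g)               # normalized steering vector
    bf_map[i] = np.real(w.conj() @ C @ w)

print("strongest peak near x =", grid[np.argmax(bf_map), 0], "m (true sources at -0.3 and 0.4 m)")
```

The deconvolution approaches cited in the abstract take bf_map as input and try to remove the array point-spread function from it, while the identification methods instead solve an inverse problem for the source distribution directly.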


2019, Vol. 8 (2), pp. 4309-4312

This study aims to reveal which factors, and which of their indicators, are significant (primary) for sustainable tourism from the point of view of the travellers themselves. The study was conducted in two stages and involved 415 (first phase) and 577 (second phase) respondents. The first stage was conducted online and through personal communication, which made it possible to question respondents in detail. The second stage was conducted only online, using polls on Facebook, Twitter, Google Docs and email. Primary data were the respondents' submissions, obtained through the distribution of questionnaires. Results were processed using a priori ranking (an expert method), with MS Excel used to automate the process. The study revealed which of the generally accepted factors, and which of their indicators, are significant (primary) from the point of view of the travellers themselves. The division of the empirical part of the study into two components allowed for a more detailed review of the respondents' opinions and the identification of 9 leading indicators. This article draws attention to the fact that it is necessary to study phenomena not only from a theoretical point of view but also to test them empirically, involving the participants in the process. The study will be useful for countries and regions that are committed to sustainable tourism. The technology as a whole can be used in any industry where people's opinion matters. The study is based on the theoretical work of modern researchers and is supported by an experiment involving direct participants in the process (travellers), i.e. it examines sustainable tourism and the factors affecting it from different angles.
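Under the assumption that "a priori ranking" here refers to the standard expert-ranking procedure (rank sums plus Kendall's coefficient of concordance), a minimal Python sketch of that processing step might look as follows; the ranks are invented for illustration, and the study itself used MS Excel.

```python
# Minimal sketch of an a priori (expert) ranking step: respondents rank candidate
# factors, rank sums identify the leading indicators, and Kendall's coefficient of
# concordance W measures how well the respondents agree.  Ranks below are invented.
import numpy as np

# rows = respondents, columns = factors/indicators; rank 1 = most important
ranks = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
    [2, 1, 4, 3, 5],
])
m, n = ranks.shape                       # m respondents, n factors

rank_sums = ranks.sum(axis=0)
priority = np.argsort(rank_sums)         # smallest rank sum = most significant factor
print("factor priority (best first):", priority)

# Kendall's W = 12 S / (m^2 (n^3 - n)), with S the squared deviation of the rank sums
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
W = 12 * S / (m**2 * (n**3 - n))
print(f"Kendall's W = {W:.2f}   (1 = full agreement among respondents)")
```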


1. The Facts.―In a solution of NaOH (or KOH), with nickel (and perhaps other) electrodes, and at high current densities of the order of 1 ampere per cm² of electrode, the observed rates of evolution E₁ and E₂ of H¹ and H² at the cathode obey an equation of the type

E₁/E₂ = q D₁/D₂,   (1)

where D₁ and D₂ are the relative concentrations of the H¹ and H² atoms in the water as a whole. The coefficient q has been shown to be independent of D₁/D₂ over a very wide range of relative concentrations. Under the conditions most favourable to separation, which appear to be those just specified, q can have a value as great as 6, or perhaps 7. With metals other than nickel for the electrodes, and with lower current densities, the factor q may fall to a value well below 2, or even to a value so nearly unity that no effective separation occurs. The dependence of q (if any) on the temperature of the solution is not known.

1.1. The Arguments.―The facts have already been commented on by Polanyi from the theoretical point of view. He has been led to conclude from them that the separation is to be attributed to a difference of "over-potential" for the deposition of H¹ and H² ions on the cathode, and consequently that Gurney's theory of the over-potential of the hydrogen electrode at low current densities (as measured, for example, by Bowden) must be discarded. These conclusions, if correct and unavoidable, are of the greatest importance. It is therefore worth examining the question in the most general way possible, in order to see that no types of mechanism have been overlooked which could lead to the observed results. When this is done, it is found that the mechanism discussed by Polanyi, which refers the separation to differences of over-potential, is not the only mechanism which must be held possible a priori. The observed separation could perfectly well occur by a mechanism consistent with Gurney's theory. It does not yet appear to be possible to decide confidently between the two possibilities on experimental grounds.
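To make the meaning of equation (1) concrete, here is a quick worked example using the value q = 6 quoted above and an assumed heavy-to-light abundance ratio D₂/D₁ = 1/5000 in the water (an illustrative figure, not a number from this paper):

```latex
\frac{E_1}{E_2} \;=\; q\,\frac{D_1}{D_2} \;=\; 6 \times 5000 \;=\; 3\times 10^{4},
```

so the heavy isotope H² is evolved at the cathode six times more slowly than its abundance in the liquid alone would imply, and the residual electrolyte is progressively enriched in H² as electrolysis proceeds.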


2019, Vol. 629, pp. A46
Author(s): Steffen Hagstotz, Max Gronke, David F. Mota, Marco Baldi

Searches for modified gravity in the large-scale structure try to detect the enhanced amplitude of density fluctuations caused by the fifth force present in many of these theories. Neutrinos, on the other hand, suppress structure growth below their free-streaming length. Both effects take place on comparable scales, and uncertainty in the neutrino mass leads to a degeneracy with modified gravity parameters for probes that measure the amplitude of the matter power spectrum. We explore the possibility of breaking the degeneracy between modified gravity and neutrino effects in the growth of structures by considering kinematic information, related either to the growth rate on large scales or to the virial velocities inside collapsed structures. In order to study the degeneracy up to fully non-linear scales, we employ a suite of N-body simulations including both f(R) modified gravity and massive neutrinos. Our results indicate that velocity information provides an excellent tool to distinguish massive neutrinos from modified gravity. Models with different values of the neutrino masses and modified gravity parameters that possess a comparable matter power spectrum at a given time have different growth rates. This leaves imprints in the velocity divergence, which is therefore better suited than the amplitude of density fluctuations to tell the models apart. In such models with a power spectrum comparable to ΛCDM today, the growth rate is strictly enhanced. We also find the velocity dispersion of virialised clusters to be well suited to constrain deviations from general relativity without being affected by the uncertainty in the sum of the neutrino masses.
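The reason velocity information helps can already be seen in linear theory, where the velocity divergence θ = ∇·v traces the growth rate directly (standard linear perturbation theory, not a result specific to this paper; sign and normalization conventions vary):

```latex
\theta(\mathbf{k},a) \;=\; -\,a\,H(a)\,f(k,a)\,\delta(\mathbf{k},a),
\qquad
f(k,a) \equiv \frac{d\ln D(k,a)}{d\ln a}
\;\;\Longrightarrow\;\;
P_{\theta\theta}(k,a) \;=\; a^2 H^2(a)\,f^2(k,a)\,P_{\delta\delta}(k,a).
```

Two models tuned to have the same density power spectrum today but different growth histories therefore have different f, and hence different velocity divergence power, which is the degeneracy-breaking effect described above.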


2019, Vol. 488 (2), pp. 1987-2000
Author(s): Jorge Enrique García-Farieta, Federico Marulli, Alfonso Veropalumbo, Lauro Moscardini, Rigoberto A. Casas-Miranda, ...

Modified gravity and massive neutrino cosmologies are two of the most interesting scenarios that have been recently explored to account for possible observational deviations from the concordance Λ cold dark matter (ΛCDM) model. In this context, we investigated the large-scale structure of the Universe by exploiting the DUSTGRAIN-pathfinder simulations, which implement, simultaneously, the effects of f(R) gravity and massive neutrinos. To study the possibility of breaking the degeneracy between these two effects, we analysed the redshift-space distortions in the clustering of dark matter haloes at different redshifts. Specifically, we focused on the monopole and quadrupole of the two-point correlation function, both in real and redshift space. The deviations with respect to the ΛCDM model have been quantified in terms of the linear growth rate parameter. We found that redshift-space distortions provide a powerful probe to discriminate between ΛCDM and modified gravity models, especially at high redshifts (z ≳ 1), even in the presence of massive neutrinos.
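For context, the linear (Kaiser) redshift-space multipoles depend on the growth rate only through β = f/b, which is why the monopole and quadrupole probe deviations of f from its ΛCDM value. The Python sketch below shows the power-spectrum multipole prefactors (the correlation-function multipoles used in the paper follow from these by Fourier transform); the growth-rate and bias values are illustrative, not results of the DUSTGRAIN-pathfinder analysis.

```python
# Minimal sketch of the linear (Kaiser) redshift-space multipoles: the monopole and
# quadrupole of the power spectrum are the real-space spectrum rescaled by factors
# depending only on beta = f/b.  Numbers below are invented for illustration.
def kaiser_multipole_factors(f, b):
    """Return (monopole, quadrupole) prefactors relative to b^2 * P_real(k)."""
    beta = f / b
    mono = 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0
    quad = 4.0 * beta / 3.0 + 4.0 * beta**2 / 7.0
    return mono, quad

b = 2.0                                   # assumed halo bias at z ~ 1
for label, f in [("GR-like growth", 0.87), ("enhanced f(R)-like growth", 0.95)]:
    mono, quad = kaiser_multipole_factors(f, b)
    print(f"{label:28s}: P0/(b^2 P) = {mono:.3f},  P2/(b^2 P) = {quad:.3f},  "
          f"quadrupole-to-monopole ratio = {quad/mono:.3f}")
```

The quadrupole-to-monopole ratio isolates β, so an enhanced growth rate shifts it away from the ΛCDM expectation even when the amplitude of the monopole is degenerate with neutrino effects.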

