On Safety of Unary and Non-unary IFP-operators

2018 ◽  
Vol 25 (5) ◽  
pp. 525-533
Author(s):  
Sergey Dudakov

In this paper, we investigate the safety of unary inflationary fixed point operators (IFP-operators). Safety means computability in finitely many steps. IFP-operators correspond exactly to recursive SQL queries, so this problem is relevant to database theory. The problem arises because, if recursive queries contain universe functions and relations, their execution can fall into an infinite loop. Moreover, universal computational devices (Turing machines, etc.) can be modelled by such queries, hence the problem of finite computability for such queries is undecidable. In our previous works we established some properties of a universe which imply the finite computability of all IFP-operators in that universe. Here, we investigate the connection between the arity of IFP-operators and their safety. We prove that some results for general IFP-operators do not hold for unary ones. We construct a universe in which all unary unnested IFP-operators are safe, but which contains unsafe nested unary IFP-operators and unsafe unnested binary IFP-operators. This differs from the general case, where the safety of all unnested IFP-operators implies the safety of all IFP-operators. Also, there exist elementarily equivalent universes in which some unary unnested IFP-operators become unsafe; for general IFP-operators this, too, is impossible.
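The inflationary fixed-point construction behind these operators can be illustrated concretely. The sketch below is illustrative only (the relation and formula are not from the paper): each stage adds Γ(X) to the current set X, so the stages can only grow. Over a finite universe the iteration therefore always terminates (the operator is safe there); over an infinite universe it may loop forever, which is exactly the safety problem described above.

```python
def ifp(gamma):
    """Inflationary fixed point: X_{n+1} = X_n ∪ Γ(X_n), starting from ∅.

    Over a finite universe this terminates, because each stage can only
    grow X ("inflationary"); over an infinite universe it may not.
    """
    x = set()
    while True:
        nxt = x | gamma(x)
        if nxt == x:  # fixed point reached -> this computation is safe
            return x
        x = nxt

# Illustrative example: transitive closure of a successor relation on the
# finite universe {0, ..., 5} (a classic recursive-query-style induction).
edges = {(i, i + 1) for i in range(5)}
gamma = lambda x: edges | {(a, d) for (a, b) in x for (c, d) in x if b == c}
closure = ifp(gamma)  # all pairs (i, j) with i < j
```

This is the semantics of a recursive SQL query over a finite table; the unsafe cases in the paper arise when the universe (and hence the iteration) need not be finite.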

2020 ◽  
Vol 2020 (9) ◽  
Author(s):  
Paul Frederik Depta ◽  
Andreas Halsch ◽  
Janine Hütig ◽  
Sebastian Mendizabal ◽  
Owe Philipsen

Abstract Thermal leptogenesis, in the framework of the standard model with three additional heavy Majorana neutrinos, provides an attractive scenario to explain the observed baryon asymmetry in the universe. It is based on the out-of-equilibrium decay of Majorana neutrinos in a thermal bath of standard model particles, which in a fully quantum field theoretical formalism is obtained by solving Kadanoff-Baym equations. So far, the leading two-loop contributions from leptons and Higgs particles are included, but not yet gauge corrections. These enter at three-loop level but, in certain kinematical regimes, require a resummation to infinite loop order for a result to leading order in the gauge coupling. In this work, we apply such a resummation to the calculation of the lepton number density. The full result for the simplest “vanilla leptogenesis” scenario is increased by a factor of $\mathcal{O}(1)$ compared to that of quantum Boltzmann equations, and for the first time permits an estimate of all theoretical uncertainties. This step completes the quantum theory of leptogenesis and forms the basis for quantitative evaluations, as well as extensions to other scenarios.


1993 ◽  
Vol 58 (1) ◽  
pp. 291-313 ◽  
Author(s):  
Robert S. Lubarsky

Inductive definability has been studied for some time already. Nonetheless, there are some simple questions that seem to have been overlooked. In particular, there is the problem of the expressibility of the μ-calculus.

The μ-calculus originated with Scott and DeBakker [SD] and was developed by Hitchcock and Park [HP], Park [Pa], Kozen [K], and others. It is a language for combining inductive definitions with first-order logic. One can think of a formula in first-order logic (with one free variable) as defining a subset of the universe: the set of elements that make it true. Then “and” corresponds to intersection, “or” to union, and “not” to complementation. Viewing the standard connectives as operations on sets, there is no reason not to include one more: least fixed point.

There are certain features of the μ-calculus, coming from its being a language, that make it interesting. A natural class of inductive definitions are the monotone ones: if X ⊃ Y then Γ(X) ⊃ Γ(Y) (where Γ(X) is the result of one application of the operator Γ to the set X). When studying monotone operations in the context of a language, one needs a syntactic guarantor of monotonicity. This is provided by the notion of positivity. An occurrence of a set variable S is positive if that occurrence is within the scope of an even number of negations (the antecedent of a conditional counting as a negation). S is positive in a formula ϕ if each occurrence of S is positive. Intuitively, the formula can ask whether x ∊ S, but not whether x ∉ S. Such a ϕ can be considered an inductive definition: Γ(X) = {x ∣ ϕ(x), where the variable S is interpreted as X}. Moreover, this induction is monotone: as X gets bigger, ϕ can only become more true, by the positivity of S in ϕ. So in the μ-calculus, a formula is well formed by definition only if all of its inductive definitions are positive, in order to guarantee that all inductive definitions are monotone.
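The mechanism described above (positivity guarantees monotonicity, and a monotone Γ has a least fixed point reachable by iteration from the empty set) can be made concrete. The operator below is an illustrative choice, not one from the paper:

```python
def lfp(gamma):
    """Least fixed point of a monotone operator Γ by iteration from ∅.

    Because Γ is monotone, the stages ∅ ⊆ Γ(∅) ⊆ Γ(Γ(∅)) ⊆ ... form an
    increasing chain; on a finite universe it stabilizes at μS.Γ(S).
    """
    s = frozenset()
    while True:
        nxt = gamma(s)
        if nxt == s:
            return s
        s = nxt

# μS.({0} ∪ {n+2 | n ∈ S, n+2 < 10}): the even numbers below 10.
# S occurs only positively in the defining formula, so Γ is monotone.
even = lfp(lambda s: frozenset({0} | {n + 2 for n in s if n + 2 < 10}))
```

A formula with S under a negation (e.g. "n ∉ S") would break monotonicity, and the iteration could oscillate instead of converging; this is exactly why the μ-calculus requires positivity for well-formedness.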


Both Big-Bang and stellar nucleosynthesis have outcomes related to the density of baryonic matter, but whereas in the first case there is a standard model that makes very precise predictions of light element abundances as a function of the mean density of baryons in the Universe, in the second case various uncertainties permit only very limited conclusions to be drawn. As far as Big-Bang synthesis and the light elements are concerned, existing results on D, ³He and ⁷Li indicate a value of Ω_N h_0² greater than 0.01 and less than 0.025, where Ω_N is the ratio of baryonic density to the closure density and h_0 is the Hubble constant in units of 100 km s⁻¹ Mpc⁻¹; probably 0.5 < h_0 < 1. New results on the primordial helium abundance give a still tighter upper limit on Ω_N, namely Ω_N h_0² < 0.013, which when compared with redshift surveys giving Ω > 0.05 implies that the observed matter can all be baryonic only if the various uncertainties are stretched to their limits.
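Because the abundance constraint is on the combination Ω_N h_0², the implied range for Ω_N itself depends on the adopted Hubble constant; a quick check of the arithmetic, using only the numbers quoted above:

```python
# Constraint from the text: 0.01 < Omega_N * h0**2 < 0.025, with 0.5 < h0 < 1.
LO, HI = 0.01, 0.025

def omega_n_range(h0):
    """Range of the baryonic density parameter Omega_N implied for a given h0."""
    return LO / h0**2, HI / h0**2

low_h = omega_n_range(0.5)   # smallest h0 -> largest Omega_N: (0.04, 0.1)
high_h = omega_n_range(1.0)  # largest h0 -> smallest Omega_N: (0.01, 0.025)
```

So even at the loosest end (h_0 = 0.5), baryons alone give Ω_N ≤ 0.1, which is why the Ω > 0.05 redshift-survey result sits so close to the edge of the allowed window.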


2019 ◽  
Vol 79 (11) ◽  
Author(s):  
Jaime Román-Garza ◽  
Tomás Verdugo ◽  
Juan Magaña ◽  
Verónica Motta

Abstract In this paper, we propose a new phenomenological two-parameter parameterization of q(z) to constrain barotropic dark energy models by considering a spatially flat Universe, neglecting the radiation component, and reconstructing the effective equation of state (EoS). This two-free-parameter EoS reconstruction shows a non-monotonic behavior, pointing to a more general fitting for the scalar field models, like thawing and freezing models. We constrain the q(z) free parameters using observational data on the Hubble parameter obtained from cosmic chronometers, the joint-light-analysis (JLA) Type Ia Supernovae (SNIa) sample, the Pantheon SNIa sample, and a joint analysis of these data. For the joint analysis with the Pantheon sample we obtain a present-day value $q_0 = -0.51^{+0.09}_{-0.10}$ and a transition redshift $z_t = 0.65^{+0.19}_{-0.17}$ (at which the Universe changes from a decelerated to an accelerated phase). The effective EoS reconstruction and the $\omega'$–$\omega$ plane analysis point towards a transition over the phantom divide, i.e. $\omega = -1$, which is consistent with a non-parametric EoS reconstruction reported by other authors.
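The relation between the fitted q_0 and the transition redshift can be illustrated with a common two-parameter deceleration parameterization, q(z) = q_0 + q_1 z/(1+z). This functional form is an assumption for illustration; the authors' exact parameterization is not given in the abstract. The transition redshift is where q(z_t) = 0:

```python
# Illustrative parameterization (NOT necessarily the authors' exact form):
#   q(z) = q0 + q1 * z / (1 + z)
# Setting q(z_t) = 0 gives z_t / (1 + z_t) = -q0 / q1, hence:
#   z_t = -q0 / (q0 + q1)

def transition_redshift(q0, q1):
    """Redshift at which q(z) crosses zero for the two-parameter form above."""
    return -q0 / (q0 + q1)

# With the fitted q0 = -0.51, a slope q1 of about 1.29 (hypothetical value)
# reproduces the quoted z_t ≈ 0.65.
zt = transition_redshift(-0.51, 1.29)
```

A negative q_0 with a positive high-redshift limit q_0 + q_1 is what forces the crossing, i.e. a past decelerated phase turning into the present accelerated one.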


Apeiron ◽  
2013 ◽  
Vol 46 (3) ◽  
pp. 244-269
Author(s):  
Ernesto Paparazzo

Abstract The present article investigates a passage of the Timaeus in which Plato describes the construction of the pyramid. Scholars have traditionally interpreted it as implying that the solid angle at the vertex of the pyramid is equal, or nearly equal, to 180°, a value which they took to be that of the most obtuse of plane angles. I argue that this interpretation is not warranted, because it conflicts with both the geometrical principles which Plato in all probability knew and the context of the Timaeus. As well as recalling the definitions and properties of plane angles and solid angles in Euclid’s Elements, I offer an alternative interpretation, which in my opinion improves the comprehension of the passage, and makes it consistent with both the immediate and wider context of the Timaeus. I suggest that the passage marks a transition from plane geometry to solid geometry within Plato’s account of the universe.


2012 ◽  
Vol 8 (S289) ◽  
pp. 3-9 ◽  
Author(s):  
Wendy L. Freedman

Abstract Twenty years ago, there was disagreement at the level of a factor of two over the value of the expansion rate of the Universe. Ten years ago, a value that was good to 10% was established using the Hubble Space Telescope (HST), completing one of the primary missions that NASA designed and built the HST to undertake. Today, after confronting most of the systematic uncertainties listed at the end of the Key Project, we are looking at a value of the Hubble constant that is plausibly known to within 3%. In the near future, an independently determined value of H0 good to 1% is desirable to constrain the extraction of other cosmological parameters from the power spectrum of the cosmic microwave background in defining a concordance model of cosmology. We review recent progress and assess the future prospects for those tighter constraints on the Hubble constant, which were unimaginable just a decade ago.


2014 ◽  
Vol 5 (9) ◽  
pp. 17
Author(s):  
Everton Puhl Maciel

ABSTRACT: This study aims to analyze the political constructivism of the Third Lecture of John Rawls's Political Liberalism. Specifically, we try to understand how, by limiting the universe of construction to the parameters established by political discourse, we can extend the reach of the principles agreed upon in the original position to a much broader community in the face of comprehensive moral doctrines. We present coherentist political constructivism not in opposition to utilitarian moral intuitionism or to Kantian moral constructivism, but as capable of absorbing models with that degree of reasonableness. This is set out through a public justification of both the content and the form of the adopted model. Thus, the overlapping consensus presented by Rawls is directly responsible for the democratic outcome we expect from a society in which publicity occupies space as a fact and has a value accepted as legitimate. Our method involves an analytical reading of the text and of commentators relevant to the proposed subject. KEYWORDS: Constructivism; justification; liberalism.


2014 ◽  
Vol 11 (S308) ◽  
pp. 295-298 ◽  
Author(s):  
Ryan C. Keenan ◽  
Amy J. Barger ◽  
Lennox L. Cowie

Abstract Our recent estimates of galaxy counts and the luminosity density in the near-infrared (Keenan et al. 2010, 2012) indicated that the local universe may be under-dense on radial scales of several hundred megaparsecs. Such a large-scale local under-density could introduce significant biases into the measurement and interpretation of cosmological observables, such as the inferred effects of dark energy on the rate of expansion. In Keenan et al. (2013), we measured the K-band luminosity density as a function of distance from us to test for such a local under-density. We made this measurement over the redshift range 0.01 < z < 0.2 (radial distances D ~ 50 - 800 h70−1 Mpc). We found that the shape of the K-band luminosity function is relatively constant as a function of distance and environment. We derive a local (z < 0.07, D < 300 h70−1 Mpc) K-band luminosity density that agrees well with previously published studies. At z > 0.07, we measure an increasing luminosity density that by z ~ 0.1 rises to a value ~ 1.5 times higher than that measured locally. This implies that the stellar mass density follows a similar trend. Assuming that the underlying dark matter distribution is traced by this luminous matter, this suggests that the local mass density may be lower than the global mass density of the universe at an amplitude and on a scale sufficient to introduce significant biases into the measurement of basic cosmological observables. At least one study has shown that an under-density of roughly this amplitude and scale could resolve the apparent tension between direct local measurements of the Hubble constant and those inferred by the Planck team. Other theoretical studies have concluded that such an under-density could account for what looks like an accelerating expansion, even when no dark energy is present.


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
David J. E. Marsh ◽  
Wen Yin

Abstract An axion-like particle (ALP) with mass mϕ ∼ 10−15 eV oscillates with frequency ∼1 Hz. This mass scale lies in an open window of astrophysical constraints, and appears naturally as a consequence of grand unification (GUT) in string/M-theory. However, with a GUT-scale decay constant such an ALP overcloses the Universe, and cannot solve the strong CP problem. In this paper, we present a two-axion model in which the 1 Hz ALP constitutes the entirety of the dark matter (DM) while the QCD axion solves the strong CP problem but contributes negligibly to the DM relic density. The mechanism to achieve the correct relic densities relies on low-scale inflation (mϕ ≲ Hinf ≲ 1 MeV), and we present explicit realisations of such a model. The scale in the axion potential leading to the 1 Hz axion generates a value for the strong CP phase which oscillates around $\overline{\theta}_{\mathrm{QCD}} \sim 10^{-12}$, within reach of the proton storage ring electric dipole moment experiment. The 1 Hz axion is also within reach of near-future laboratory and astrophysical searches.
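The quoted ∼1 Hz oscillation frequency follows directly from the mass via ν = mϕc²/(2πħ); a quick order-of-magnitude check of the numbers in the abstract:

```python
import math

HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s (CODATA value)

def alp_frequency_hz(mass_ev):
    """Oscillation frequency nu = m c^2 / (2 pi hbar) for an ALP of given mass (eV)."""
    return mass_ev / (2 * math.pi * HBAR_EV_S)

# m_phi ~ 1e-15 eV gives a frequency of order 1 Hz, as stated above.
nu = alp_frequency_hz(1e-15)  # ~0.24 Hz
```

The exact value (≈0.24 Hz) is of order 1 Hz, consistent with the abstract's order-of-magnitude statement.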


1996 ◽  
Vol 11 (31) ◽  
pp. 5541-5567 ◽  
Author(s):  
MARINA GIBILISCO

The history of the universe after recombination probably involves a reionization epoch, as the Gunn-Peterson test seems to suggest: if this is the case, the consequences of such a phenomenon should be relevant, both for the induced enhancement of the cosmic microwave background (CMB) polarization and for the possible damping of the CMB fluctuations on small angular scales (θ ~ 1°). In this paper, I will study a model of reionization at redshifts z ≤ 60 caused by the evaporation of primordial black holes; photon emission only from nonrotating black holes is considered. A system of coupled differential equations, giving the time evolution of the ionization degree x, of the plasma temperature Te and of the photon number density nγ, is solved analytically: the results show that this kind of reionization is possible, as it can increase the ionization degree of the universe from a value x = 0.002 (just after recombination) to values near 1 (when the black hole evaporation ends). In particular, taking the evaporation redshift equal to the reionization redshift zR, one obtains total reionization (i.e. x = 1) for 15 ≤ zR ≤ 30, while only a partial effect (x ~ 0.75–0.90) is present for higher values of zR (40 ≤ zR ≤ 60). The fast increase of x agrees with the predictions of an exponential reionization model discussed in a previous study of the CMB polarization induced by gravitational waves. The evolution of the plasma temperature Te is also estimated: it is affected less strongly by the primordial black hole evaporation process, as we expect from the experimental FIRAS upper limit on the comptonization parameter yc (yc < 2.5×10−5).
The photoionization process studied here seems generally able to maintain the plasma in an ionized state without heating it to very high temperatures; however, an improvement in the numerical calculation of Te is necessary in order to take into account more satisfactorily the collisional and excitation cooling, which can limit the increase of the plasma temperature. In this model, the density of primordial black holes (PBHs) necessary to give non-negligible reionization is an important parameter: here I will consider various birth times tin and various initial densities for the PBHs, showing that the most effective reionization is obtained for zR ≤ 30 and for PBHs formed at tin ~ 10−28 s after the big bang. An estimate of their present density for this formation time gives a value ρ0 = 2.44×10−38 g cm−3, corresponding to a present density parameter ΩPBH equal to 5.20×10−9. This result agrees with the experimental upper limit ΩPBH ≤ (7.6±2.6)×10−9 h^(−1.95±0.15). A future improvement of this work will also consider massive particle emission from both rotating and nonrotating black holes and a spectrum taking into account quark and gluon jet emission.
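The quoted density parameter can be checked against the quoted mass density via Ω_PBH = ρ0/ρ_crit, with ρ_crit = 1.878×10⁻²⁹ h² g cm⁻³ (standard value); the stated Ω_PBH = 5.20×10⁻⁹ corresponds to assuming h ≈ 0.5:

```python
RHO_CRIT_COEFF = 1.878e-29  # critical density / h^2, in g/cm^3 (standard value)

def density_parameter(rho0, h):
    """Omega = rho0 / rho_crit for dimensionless Hubble parameter h."""
    return rho0 / (RHO_CRIT_COEFF * h**2)

# rho0 = 2.44e-38 g/cm^3 (value from the text); h = 0.5 (assumed here)
# recovers the quoted Omega_PBH ~ 5.2e-9.
omega = density_parameter(2.44e-38, 0.5)
```

This consistency check shows the two quoted numbers hang together for a Hubble constant at the low end of the range considered in this era's literature.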

