Microlensing predictions: Impact of Galactic disc dynamical models

Author(s):  
Hongjing Yang ◽  
Shude Mao ◽  
Weicheng Zang ◽  
Xiangyu Zhang

Abstract The Galactic model plays an important role in the microlensing field, not only for analyses of individual events but also for statistics of the ensemble of events. However, the Galactic models used in the field vary, and some are unrealistically simplified. Here we test three Galactic disc dynamical models: the first is the simple standard model widely used in the field, whereas the other two account for the radial dependence of the velocity dispersion and, in the last model, the asymmetric drift. We find that for a typical lens mass ML = 0.5 M⊙, the two new dynamical models predict ~16 per cent and ~5 per cent fewer long-timescale events (e.g. microlensing timescale tE > 300 days) and ~5 per cent and ~3.5 per cent more short-timescale events (tE < 3 days) than the standard model. Moreover, the microlensing event rate as a function of Einstein radius θE or microlensing parallax πE also shows some model dependence (a few per cent). The two new models also affect the total microlensing event rate. These results will to some degree affect the Bayesian analysis of individual events, but overall the impact is small. Nevertheless, we recommend that modellers choose the Galactic model carefully, especially in statistical works involving Bayesian analyses of a large number of events. Additionally, we find asymptotic power-law behaviour in both the θE and πE distributions, and we provide a simple model to explain it.
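The timescale cuts quoted in this abstract are set by the Einstein crossing time t_E = θ_E D_L / v_⊥, with θ_E = sqrt(4GM/c² · (D_S − D_L)/(D_L D_S)). A minimal sketch, not taken from the paper: the distances and transverse velocity below are hypothetical but typical values for a disc lens and a bulge source.

```python
import math

# Illustrative sketch (hypothetical inputs, not the paper's models): the
# Einstein crossing timescale t_E for a point lens,
#   theta_E = sqrt(4 G M / c^2 * (D_S - D_L) / (D_L * D_S)),
#   t_E     = theta_E * D_L / v_perp.

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
KPC = 3.086e19       # m
DAY = 86400.0        # s

def einstein_timescale_days(m_lens_msun, d_l_kpc, d_s_kpc, v_perp_kms):
    """Einstein crossing time in days for a point lens."""
    m = m_lens_msun * M_SUN
    d_l, d_s = d_l_kpc * KPC, d_s_kpc * KPC
    theta_e = math.sqrt(4 * G * m / C**2 * (d_s - d_l) / (d_l * d_s))  # rad
    r_e = theta_e * d_l                   # physical Einstein radius, m
    return r_e / (v_perp_kms * 1e3) / DAY

# Typical configuration: 0.5 M_sun lens at 4 kpc, source at 8 kpc,
# 200 km/s relative transverse velocity -> t_E of a few tens of days.
t_e = einstein_timescale_days(0.5, 4.0, 8.0, 200.0)
print(f"t_E ~ {t_e:.1f} days")
```

Because t_E folds together the lens mass, the distances, and the lens-source relative velocity, the disc velocity model directly shifts the predicted t_E distribution, which is why the tails (t_E > 300 d, t_E < 3 d) are the most model-sensitive.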

2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
S. C. İnan ◽  
M. Köksal

We examine the effect of excited neutrinos on the annihilation of relic neutrinos with ultrahigh-energy cosmic neutrinos via the νν̄ → γγ process. The contributions of the excited neutrinos to the neutrino-photon decoupling temperature are calculated. We find that the photon-neutrino decoupling temperature can be significantly reduced below the Standard Model value by the impact of excited neutrinos.


2017 ◽  
Vol 14 (3) ◽  
Author(s):  
Amy Farmer ◽  
Fabio Méndez ◽  
Andrew Samuel

Abstract We study the effectiveness of licenses in environments with corruption. We expand the standard model so that bribery is feasible not only when licenses are granted but also when they are enforced or verified. This modification significantly alters many prior results on bribery and licensing. Specifically, we show that in some cases penalties for bribery at the license-granting stage complement penalties for bribery at the permit-enforcement stage, while in other cases they act as substitutes. These results are especially important for commonly used regulatory policies in which licenses are paired with some form of subsequent license verification. Thus, our model suggests that the impact of bribery at the license-granting stage should not be studied without simultaneously studying bribery at the permit-verification stage.


2020 ◽  
Vol 35 (01) ◽  
pp. 1930018
Author(s):  
Diego Guadagnoli

This paper describes the work pursued in the years 2008–2013 on improving the Standard Model prediction of selected flavor-physics observables, namely: (1) ε_K, which quantifies indirect CP violation in the K system, and (2) the very rare decay B_s → μ⁺μ⁻, recently measured at the LHC. Concerning point (1), the paper describes our reappraisal of the long-distance contributions to ε_K, which made it possible to unveil a potential tension between CP violation in the K- and B_d-systems. Concerning point (2), the paper gives a detailed account of various systematic effects pointed out in Ref. 4 that affect the Standard Model B_s → μ⁺μ⁻ decay rate at the level of 10%, hence large enough to be potentially misinterpreted as nonstandard physics if not properly included. The paper further describes the multifaceted importance of the B_s → μ⁺μ⁻ decays as new-physics probes, for instance how they compare with Z-peak observables at LEP, following the effective-theory approach of Ref. 5. Both cases (1) and (2) offer clear examples in which the pursuit of precision in Standard Model predictions offered potential avenues to discovery. Finally, this paper describes the impact of the above results on the literature, and what further progress is to be expected on these and related observables.


Author(s):  
Robert Fleischer ◽  
Ruben Jaarsma ◽  
Gabriël Koole

Abstract Data in B-meson decays indicate violations of lepton flavour universality, thereby raising the question about such phenomena in the charm sector. We perform a model-independent analysis of NP contributions in (semi)leptonic decays of D_(s) mesons which originate from c → d ℓ̄ ν_ℓ and c → s ℓ̄ ν_ℓ charged-current interactions. Starting from the most general low-energy effective Hamiltonian containing four-fermion operators and the corresponding short-distance coefficients, we explore the impact of new (pseudo)scalar, vector and tensor operators and constrain their effects through the interplay with current data. We pay special attention to the elements |V_cd| and |V_cs| of the Cabibbo–Kobayashi–Maskawa matrix and extract them from the D_(s) decays in the presence of possible NP contributions, comparing them with determinations utilizing unitarity. We find a picture in agreement with the Standard Model within the current uncertainties. Using the results from our analysis, we also make predictions for leptonic D_(s)^+ → e^+ ν_e modes, which could be hugely enhanced with respect to their tiny Standard Model branching ratios. It will be interesting to apply our strategy at the future high-precision frontier.
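The "tiny Standard Model branching ratios" for the electron modes come from helicity suppression of the tree-level leptonic rate, Γ(D_s⁺ → ℓ⁺ν) = (G_F²/8π) |V_cs|² f_Ds² m_ℓ² m_Ds (1 − m_ℓ²/m_Ds²)². This is the standard textbook SM formula, not the authors' NP analysis; the inputs below are approximate PDG-style values.

```python
import math

# Hedged illustration (textbook SM rate, approximate inputs; not the
# authors' fit): Gamma(D_s+ -> l+ nu) is proportional to m_l^2, so the
# electron mode is suppressed by (m_e/m_tau)^2 relative to the tau mode.
# NP (pseudo)scalar operators lift this suppression, which is why the
# e+ nu_e modes could be hugely enhanced.

G_F = 1.166e-5       # GeV^-2, Fermi constant
HBAR = 6.582e-25     # GeV * s
F_DS = 0.250         # GeV, D_s decay constant (approx. lattice average)
M_DS = 1.968         # GeV, D_s mass
TAU_DS = 0.504e-12   # s, D_s+ lifetime
V_CS = 0.973

def br_ds_to_lnu(m_lepton_gev):
    """Tree-level SM branching ratio for D_s+ -> l+ nu."""
    x = (m_lepton_gev / M_DS) ** 2
    gamma = (G_F**2 / (8 * math.pi)) * V_CS**2 * F_DS**2 \
            * m_lepton_gev**2 * M_DS * (1 - x) ** 2      # width in GeV
    return gamma * TAU_DS / HBAR

print(f"BR(Ds -> tau nu) ~ {br_ds_to_lnu(1.777):.3f}")    # a few per cent
print(f"BR(Ds -> e nu)   ~ {br_ds_to_lnu(0.000511):.2e}") # ~1e-7, tiny
```

The tau mode comes out at the few-per-cent level while the electron mode sits near 10⁻⁷, illustrating the gap that a scalar NP contribution could fill.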


2015 ◽  
Vol 72 (11) ◽  
pp. 4297-4318 ◽  
Author(s):  
Todd P. Lane ◽  
Mitchell W. Moncrieff

Abstract Dynamical models of organized mesoscale convective systems have identified the important features that help maintain their overarching structure and longevity. The standard model is the trailing stratiform archetype, featuring a front-to-rear ascending circulation, a mesoscale downdraft circulation, and a cold pool/density current that affects the propagation speed and the maintenance of the system. However, this model does not represent all types of mesoscale convective systems, especially in moist environments where the evaporation-driven cold pools are weak and the convective inhibition is small. Moreover, questions remain about the role of gravity waves in creating and maintaining organized systems and affecting their propagation speed. This study presents simulations and dynamical models of self-organizing convection in a moist, low–convective inhibition environment and examines the long-lived convective regimes that emerge spontaneously. This paper, which is Part I of this study, specifically examines the structure, kinematics, and maintenance of long-lived, upshear-propagating convective systems that differ in important respects from the standard model of long-lived convective systems. Linear theory demonstrates the role of ducted gravity waves in maintaining the long-lived, upshear-propagating systems. A steady nonlinear model approximates the dynamics of upshear-propagating density currents that are key to the maintenance of the mesoscale convective system.


2010 ◽  
Vol 25 (02n03) ◽  
pp. 564-572
Author(s):  
MAXIM POSPELOV

I consider models of light super-weakly interacting cold dark matter with keV-scale mass, focusing on bosonic candidates such as pseudoscalars and vectors. I analyze the cosmological abundance, the γ-background created by particle decays, the impact on stellar processes due to cooling, and the direct-detection capabilities in order to identify classes of models that pass all the constraints. In certain models, variants of photoelectric (or axioelectric) absorption of dark matter in direct-detection experiments can provide a sensitivity to the super-weak couplings to the Standard Model which is superior to all existing indirect constraints. In all models studied, the annual modulation of the direct-detection signal is at the currently unobservable level of O(10⁻⁵).


1997 ◽  
Vol 12 (23) ◽  
pp. 4109-4154 ◽  
Author(s):  
Peter B. Renton

The present status of precision electroweak data is reviewed. These data include LEP measurements of the mass and width of the Z, together with various measurements on the Z-fermion couplings. These data are compared to, and combined with, data from the SLC on the left–right polarized asymmetry, A_LR, and the left–right forward–backward asymmetries for b and c quarks. These measurements are combined with hadron collider measurements from the Tevatron and CERN on the mass of the W boson, mW, as well as other electroweak data, in global electroweak fits in which various Standard Model parameters are determined. A comparison is made between the results of direct measurements of mW and the top-quark mass, mt, as determined from the Tevatron, with the indirect results coming from electroweak radiative corrections. Using all precision electroweak data, fits are also made to determine limits on the mass of the Higgs boson, mH. The influence on these limits of specific measurements, particularly those which are somewhat inconsistent with the Standard Model, is explored. The data are also analyzed in terms of the quasi-model-independent ε variables. Improvements in the determination of all of these quantities are expected when the Z data at LEP are fully analyzed, and further measurements on A_LR and related asymmetries performed at the SLC. In addition, substantial improvements in the determination of mW are expected from measurements at the Tevatron and in the second phase of LEP. An estimate is made of the likely precision of these data, and the implications of the impact of these data on precision electroweak tests are discussed. This discussion is made both in terms of the Standard Model and also in the context of the quasi-model-independent ε variables.


2011 ◽  
Vol 11 (2) ◽  
pp. 3857-3884 ◽  
Author(s):  
W. Feng ◽  
M. P. Chipperfield ◽  
S. Davies ◽  
G. W. Mann ◽  
K. S. Carslaw ◽  
...  

Abstract. A three-dimensional (3-D) chemical transport model (CTM), SLIMCAT, has been used to quantify the effect of denitrification on ozone loss for the Arctic winter/spring 2004/05. The simulated HNO3 is found to be highly sensitive to the polar stratospheric cloud (PSC) scheme used in the model. Here the standard SLIMCAT full chemistry model, which uses a thermodynamic equilibrium PSC scheme, overpredicts the Arctic ozone loss for Arctic winter/spring 2004/05 due to the overestimation of denitrification and stronger chlorine activation than observed. A model run with a detailed microphysical denitrification scheme, DLAPSE (Denitrification by Lagrangian Particle Sedimentation), is less denitrified than the standard model run and better reproduces the observed HNO3 as measured by Airborne SUbmillimeter Radiometer (ASUR) and Aura Microwave Limb Sounder (MLS) instruments. The overestimated denitrification causes a small overestimation of Arctic polar ozone loss (~5–10% at ~17 km) by the standard model. Use of the DLAPSE scheme improves the simulation of Arctic ozone depletion compared with the inferred partial column ozone loss from ozonesondes and satellite data. Overall, denitrification is responsible for a ~30% enhancement in O3 depletion for Arctic winter/spring 2004/05, suggesting that the successful simulation of the impact of denitrification on Arctic ozone depletion also requires the use of a detailed microphysical PSC scheme in the model.


2000 ◽  
Vol 18 (2) ◽  
pp. 119-130
Author(s):  
Riccardo Fiorito

Abstract Using a small discrete-time model, we evaluate the impact of distortionary taxation on the government debt-to-GDP ratio. Once the standard model is modified accordingly, it appears that an increase in taxation has a growth cost which itself increases as the debt-to-GDP ratio rises. The empirical implementation uses data drawn from Italy's recent record and is based on realistic shocks to the relevant parameters. A major finding is the importance of the debt level, not only of its dynamics, in stabilizing the debt-to-GDP ratio. A second finding is that sustainable tax rates are remarkably lower than those prevailing in Italy since the 1980s.
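The mechanism can be sketched with the textbook discrete-time law of motion for the debt-to-GDP ratio, b_{t+1} = ((1+r)/(1+g)) b_t − s, where r is the interest rate, g the growth rate and s the primary surplus ratio. A minimal sketch, not the paper's calibration: all parameter values below are hypothetical, and the distortionary channel is represented crudely by letting growth fall linearly with the tax rate.

```python
# Illustrative sketch (hypothetical parameters, not the paper's model):
# debt-to-GDP dynamics b_{t+1} = (1+r)/(1+g) * b_t - s, where the growth
# rate g falls with the tax rate tau (the distortionary-tax channel).
# With distortion, a tax rate that would stabilise the debt under
# lump-sum-like taxation can leave the ratio on an explosive path.

def simulate_debt(b0, r, tau, periods, g0=0.03, distortion=0.10, spend=0.40):
    """Terminal debt-to-GDP ratio after `periods` under a flat tax rate tau."""
    g = g0 - distortion * tau      # growth cost of taxation (assumed linear)
    s = tau - spend                # primary surplus: revenue minus spending/GDP
    b = b0
    for _ in range(periods):
        b = (1 + r) / (1 + g) * b - s
    return b

no_distortion = simulate_debt(1.2, 0.05, 0.45, 20, distortion=0.0)
with_distortion = simulate_debt(1.2, 0.05, 0.45, 20, distortion=0.10)
print(f"b after 20 periods: lump-sum-like {no_distortion:.2f}, "
      f"distortionary {with_distortion:.2f}")
```

With these illustrative numbers the same 45 per cent tax rate brings the ratio down when taxes are non-distortionary but lets it diverge once the growth cost is switched on, which is the qualitative point about sustainable tax rates.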

