On the difference between FOPT and CIPT for hadronic tau decays

2021 ◽  
Vol 230 (12-13) ◽  
pp. 2625-2639 ◽  
Author(s):  
André H. Hoang ◽  
Christoph Regner

In this article we review the results of our recent work on the difference between the Borel representations of τ hadronic spectral function moments obtained with the CIPT and FOPT methods. For the presentation of the theoretical results we focus on the large-β0 approximation, where all expressions can be written down in closed form, and we comment on the generalization to full QCD. The results may explain the discrepancy in the behavior of the FOPT and CIPT series that has been the topic of intense discussions in previous literature and which represents a major part of the theoretical uncertainties in current strong coupling determinations from hadronic τ decays. The findings also imply that the OPE corrections for FOPT and CIPT differ and that the OPE corrections for CIPT do not have the standard form.
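As a rough numerical illustration of this FOPT–CIPT contrast (it is not the authors' calculation), the Python sketch below evaluates the perturbative moment δ(0) both ways, keeping one-loop running of the coupling only; the contour weight, the truncation order, the coefficients c_{n,1} and the value αs(mτ²) = 0.32 are assumed inputs chosen for illustration.

# Minimal Python sketch (not the authors' calculation): contrast FOPT and CIPT
# evaluations of the perturbative moment delta^(0), keeping one-loop running only.
# alpha_s(m_tau^2) = 0.32, the weight, and the coefficients c_{n,1} are assumed inputs.
import numpy as np
from math import comb

A0 = 0.32 / np.pi            # a = alpha_s(m_tau^2)/pi, assumed value
BETA1 = 9.0 / 4.0            # one-loop coefficient in da/dln(mu^2) = -BETA1*a^2 (n_f = 3)
C = [1.0, 1.640, 6.371, 49.076]   # Adler-function coefficients c_{n,1} (commonly quoted values)

phi = np.linspace(0.0, 2.0 * np.pi, 20001)
x = np.exp(1j * phi)
weight = (1.0 - x) ** 3 * (1.0 + x)      # kinematic weight of the tau hadronic width

def contour_avg(f):
    # (1/(2*pi*i)) * integral over |x| = 1 of (dx/x) * weight * f, written as a phi integral
    return np.trapz(weight * f, phi).real / (2.0 * np.pi)

def delta0_cipt(nmax=4):
    # CIPT: powers of the running coupling are integrated along the contour
    a_run = A0 / (1.0 + 1j * BETA1 * A0 * (phi - np.pi))   # one-loop a(-m_tau^2 x)
    return sum(C[n - 1] * contour_avg(a_run ** n) for n in range(1, nmax + 1))

def delta0_fopt(nmax=4):
    # FOPT: the same integrand re-expanded in a(m_tau^2) and truncated at total order nmax
    total = 0.0
    for n in range(1, nmax + 1):
        for m in range(nmax - n + 1):
            term = comb(n + m - 1, m) * (-1j * BETA1 * A0 * (phi - np.pi)) ** m
            total += C[n - 1] * A0 ** n * contour_avg(term)
    return total

print("delta0 CIPT:", round(delta0_cipt(), 4), " delta0 FOPT:", round(delta0_fopt(), 4))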

2016 ◽  
Vol 31 (26) ◽  
pp. 1630024 ◽  
Author(s):  
Diogo Boito ◽  
Maarten Golterman ◽  
Kim Maltman ◽  
Santiago Peris

Recently, we extracted the strong coupling, αs, from the revised ALEPH data for non-strange hadronic tau decays. Our analysis is based on a method previously used for the determination of the strong coupling from OPAL data. In our strategy, we employ different moments of the spectral functions, both with and without pinching, including duality violations (DVs), in order to obtain fully self-consistent analyses that do not rely on untested assumptions (such as the smallness of higher-dimension contributions in the operator product expansion (OPE)). Here we discuss the αs values obtained from the ALEPH and the OPAL data, the robustness of the analysis, as well as the non-perturbative contributions from DVs and the OPE. We show that, although the αs determination is sound, non-perturbative effects limit the accuracy with which one can extract the strong coupling from tau decay data. Finally, we discuss the compatibility of the data sets and the possibility of a combined analysis.
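As a toy illustration of why pinched weights matter in such analyses (the actual fits are far more involved), the Python sketch below integrates an exponentially damped, oscillating DV model against an unpinched and a doubly pinched weight; the ansatz and all parameter values are assumptions chosen only to make the effect visible.

# Toy Python sketch (assumptions throughout, not the authors' fit): how a pinched
# weight suppresses the duality-violation (DV) part of a spectral-function moment.
import numpy as np

DELTA, GAMMA, ALPHA, BETA = 3.0, 0.7, -2.0, 4.0   # assumed DV-model parameters

def rho_dv(s):
    # exponentially damped, oscillating DV model of the spectral function (s in GeV^2)
    return np.exp(-DELTA - GAMMA * s) * np.sin(ALPHA + BETA * s)

def dv_moment(weight, s0, s_max=60.0):
    # DV correction to a moment with end point s0: integral of w(s/s0)*rho_DV(s) above s0
    # (overall normalization conventions are ignored; the damping kills the tail)
    s = np.linspace(s0, s_max, 20001)
    return np.trapz(weight(s / s0) * rho_dv(s), s)

unpinched = lambda x: np.ones_like(x)      # w(x) = 1: no zero at x = 1
pinched = lambda x: (1.0 - x) ** 2         # doubly pinched: w(1) = w'(1) = 0

for s0 in (2.0, 2.5, 3.0):
    print(s0, dv_moment(unpinched, s0), dv_moment(pinched, s0))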


2013 ◽  
Vol 28 (24) ◽  
pp. 1360004 ◽  
Author(s):  
Gauhar Abbas ◽  
B. Ananthanarayan ◽  
Irinel Caprini

We determine the strong coupling constant αs from the τ hadronic width using a renormalization-group summed (RGS) expansion of the QCD Adler function. The main theoretical uncertainty in the extraction of αs is due to the manner in which renormalization-group invariance is implemented, and to the as yet uncalculated higher-order terms in the QCD perturbative series. We show that the new expansion exhibits good renormalization-group improvement and that the behavior of the series is similar to that of the standard CIPT expansion. The value of the strong coupling in the [Formula: see text] scheme obtained with the RGS expansion is [Formula: see text]. The convergence properties of the new expansion can be improved by Borel transformation and analytic continuation in the Borel plane. This is discussed elsewhere in these issues.
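Schematically, the RGS reorganization groups the fixed-order double sum in the coupling and the renormalization-group logarithm so that each new term sums one full tower of logarithms; with one choice of conventions, assumed here purely for illustration (a = αs/π, L = ln(Q²/μ²), da/d ln μ² = −β0 a²), the leading tower closes into a simple function:

% Schematic rearrangement with assumed conventions; each RGS term sums one tower of logs.
\begin{aligned}
  D(a,L) &= \sum_{n\ge 1} a^{n}\sum_{k=0}^{n-1} d_{n,k}\,L^{k}
          \;=\; \sum_{n\ge 1} a^{n}\,S_{n}(aL),
  \qquad S_{n}(u) \equiv \sum_{k\ge 0} d_{n+k,\,k}\,u^{k},\\
  S_{1}(u) &= \frac{d_{1,0}}{1+\beta_{0}\,u}
  \quad\text{(leading logarithms summed in closed form at one loop).}
\end{aligned}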


2017 ◽  
Vol 912 ◽  
pp. 012003
Author(s):  
Diogo Boito ◽  
Maarten Golterman ◽  
Kim Maltman ◽  
Santiago Peris

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Jianming Zhu ◽  
Smita Ghosh ◽  
Weili Wu ◽  
Chuangen Gao

In social networks there exist many kinds of groups in which people may share the same interests, hobbies, or political orientation. Sometimes group decisions are made by simple majority, meaning that most of the users in the group reach an agreement, as in US presidential elections. A group is called activated if β percent of the users in the group are influenced. The enterprise gains income from all influenced groups; at the same time, to propagate influence, the enterprise must pay an advertisement diffusion cost. The group profit maximization (GPM) problem aims to pick k seeds so as to maximize the expected profit, which weighs the benefit of the influenced groups against the diffusion cost. GPM is proved to be NP-hard, and the objective function is shown to be neither submodular nor supermodular. An upper bound and a lower bound, each expressible as the difference of two submodular functions, are designed. We propose a submodular–modular algorithm (SMA) to maximize the difference of two submodular functions, and SMA is shown to converge to a local optimum. We further present a randomized algorithm based on weighted group coverage maximization for GPM and apply the sandwich framework to obtain theoretical guarantees. Our experiments verify the efficiency of our methods.
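To make the submodular–modular idea concrete, the following generic Python sketch maximizes f(S) = g(S) − h(S) for submodular g and h under a cardinality constraint; the particular tight modular upper bound, the greedy surrogate step, and all names are illustrative assumptions rather than the paper's SMA implementation. Accepted iterates never decrease the true objective, which is the sense in which such a procedure settles at a local optimum.

# Generic Python sketch of a submodular-modular procedure for maximizing
# f(S) = g(S) - h(S) with g, h submodular, under |S| <= k.  The modular bound,
# the greedy surrogate step, and all names are illustrative assumptions.

def modular_upper_bound(h, ground, X):
    # Tight modular upper bound of a submodular h at the set X:
    # m(S) = h(X) - sum_{j in X\S} h(j | X\{j}) + sum_{j in S\X} h(j | {}).
    h_empty = h(set())
    removal_gain = {j: h(X) - h(X - {j}) for j in X}
    singleton_gain = {j: h({j}) - h_empty for j in ground - X}

    def m(S):
        return (h(X)
                - sum(removal_gain[j] for j in X - S)
                + sum(singleton_gain[j] for j in S - X))
    return m

def greedy_max(obj, ground, k):
    # plain greedy maximization of a set function under a cardinality constraint
    S = set()
    for _ in range(k):
        gains = {e: obj(S | {e}) - obj(S) for e in ground - S}
        best = max(gains, key=gains.get, default=None)
        if best is None or gains[best] <= 0.0:
            break
        S.add(best)
    return S

def submodular_modular(g, h, ground, k, iters=20):
    # Alternate: bound h by a tight modular function, greedily maximize the
    # (submodular) surrogate g - m, and accept only improving iterates, so the
    # true objective g - h never decreases until no improvement is found.
    S = set()
    for _ in range(iters):
        m = modular_upper_bound(h, ground, S)
        T = greedy_max(lambda A: g(A) - m(A), ground, k)
        if g(T) - h(T) <= g(S) - h(S):
            break
        S = T
    return S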


2011 ◽  
Vol 133 (4) ◽  
Author(s):  
Raed I. Bourisli ◽  
Adnan A. AlAnzi

This work aims at developing a closed-form correlation between key building design variables and the building's energy use. The results can be utilized during the initial design stages to assess different building shapes and designs according to their expected energy use. Prototypical, 20-floor office buildings were used. The relative compactness, footprint area, projection factor, and window-to-wall ratio were varied, and the performances of the resulting buildings were simulated. In total, 729 different office buildings were developed and simulated in order to provide the training cases for optimizing the correlation's coefficients. Simulations were done using the VisualDOE software with a Typical Meteorological Year data file for Kuwait City, Kuwait. A real-coded genetic algorithm (GA) was used to optimize the coefficients of a proposed function that relates the energy use of a building to its four key parameters. The figure of merit was the annual energy use of a building normalized by that of a reference building, and the objective was to minimize the difference between the simulated results and the predictions of the four-variable function. Results show that the real-coded GA was able to produce a function that estimates the thermal performance of a proposed design with an accuracy of around 96%, based on the buildings tested. The goodness of fit, roughly represented by R², ranged from 0.950 to 0.994. In terms of the effects of the various parameters, the footprint area was found to play the smallest role among the design parameters. It was also found that the accuracy of the function suffers the most when high window-to-wall ratios are combined with low projection factors; in such cases, the energy use exhibits a potential optimum in compactness. The proposed function (and methodology) will be a useful tool for designers to inexpensively explore a wide range of alternatives and assess them in terms of their energy-use efficiency. It will also be of use to municipality officials and building code authors.
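A minimal Python sketch of a real-coded GA of the kind described above is given below; the linear correlation form, the GA operators (blend crossover, Gaussian mutation), and every numerical value are placeholders, not the paper's actual model or data.

# Minimal Python sketch of a real-coded GA fitting correlation coefficients.
# The correlation form, GA operators and all values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# columns: relative compactness, footprint area (in 1000 m^2), projection factor,
# window-to-wall ratio, normalized annual energy use -- synthetic stand-in data
data = rng.uniform([0.6, 0.5, 0.0, 0.1, 0.8],
                   [1.0, 2.5, 1.0, 0.6, 1.3], size=(729, 5))

def predict(coeff, X):
    # hypothetical correlation E/E_ref = c0 + c1*RC + c2*A + c3*PF + c4*WWR
    return coeff[0] + X[:, :4] @ coeff[1:]

def fitness(coeff):
    return -np.mean((predict(coeff, data) - data[:, 4]) ** 2)   # negative mean squared error

def real_coded_ga(pop_size=60, genes=5, generations=300, pc=0.9, sigma=0.05):
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, genes))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)][pop_size // 2:]        # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(0.0, 1.0, genes) if rng.uniform() < pc else 1.0
            child = w * a + (1.0 - w) * b                        # blend (arithmetic) crossover
            children.append(child + rng.normal(0.0, sigma, genes))  # Gaussian mutation
        pop = np.vstack([parents, *children])
    return max(pop, key=fitness)

print("fitted coefficients:", np.round(real_coded_ga(), 3))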


Author(s):  
Makoto Yamamoto ◽  
Masaya Suzuki

Multi-physics CFD simulation will be one of the key technologies in various engineering fields. There are two strategies for simulating a multi-physics phenomenon: "Strong Coupling" and "Weak Coupling". Which one should be employed depends on the time scales of the physics embedded in the problem. That is, when the time scale of one physics is nearly the same as that of the other, we have to use Strong Coupling to take into account the interaction between the two. On the other hand, when one time scale is quite different from the other, Weak Coupling can be applied. Considering present computer performance, Strong Coupling is still difficult to use in engineering design processes. Therefore, we focus on Weak Coupling, which has been applied to a number of multi-physics CFD simulations in engineering. We have successfully simulated sand erosion, ice accretion, particle deposition, electro-chemical machining and so on using the Weak Coupling method. In the present study, the difference between strong and weak coupling is briefly described, and two examples of our multi-physics CFD simulations are presented. The numerical results indicate that the Weak Coupling strategy is promising for a wide range of multi-physics CFD simulations.
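The distinction can be made concrete with a toy partitioned system (no real CFD is involved; the Python model equations and constants below are invented for illustration): a weak coupling loop exchanges data once per time step, while a strong coupling loop sub-iterates the exchange within each step until the interface values stop changing.

# Conceptual Python sketch only (no real CFD; the toy model equations and constants
# are invented): weak coupling exchanges data once per time step, strong coupling
# sub-iterates the exchange within each step until the coupled values settle.

def flow_step(u, w, dt):
    # stand-in for the flow solver, forced by the other field
    return u + dt * (-0.5 * u + 0.2 * w)

def surface_step(w, u, dt):
    # stand-in for, e.g., an erosion/accretion model driven by the flow
    return w + dt * (-0.1 * w + 0.3 * u)

def weak_coupling(u, w, dt, nsteps):
    # one exchange per step: both fields advance from the previous step's values;
    # cheap, adequate when the two time scales differ widely
    for _ in range(nsteps):
        u, w = flow_step(u, w, dt), surface_step(w, u, dt)
    return u, w

def strong_coupling(u, w, dt, nsteps, tol=1e-12, max_sub=50):
    # sub-iterate within each step until the coupled values stop changing: needed
    # when both physics evolve on comparable time scales, but far more costly
    for _ in range(nsteps):
        u_old, w_old = u, w
        for _ in range(max_sub):
            u_new = flow_step(u_old, w, dt)        # use the latest estimate of w
            w_new = surface_step(w_old, u_new, dt)
            converged = abs(u_new - u) < tol and abs(w_new - w) < tol
            u, w = u_new, w_new
            if converged:
                break
    return u, w

print("weak  :", weak_coupling(1.0, 0.0, 0.01, 1000))
print("strong:", strong_coupling(1.0, 0.0, 0.01, 1000))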


1999 ◽  
Vol 22 (2) ◽  
pp. 111-119
Author(s):  
P. T. Trakadas ◽  
C. N. Capsalis

There are several cases in which, in order to evaluate the crosstalk effect among transmission lines carrying useful signals, a probabilistic approach is needed. This paper considers the problem of crosstalk estimation between transmission lines consisting of three conductors in a homogeneous surrounding medium, where the distance between the conductors is a random variable described by a uniform distribution. The transmission lines are considered electrically short. A closed-form equation is developed for the statistical distribution of the per-unit-length mutual inductance (l_m), and an analytical one is described for the evaluation of the per-unit-length capacitance (c_m). Theoretical results are compared with simulated ones for validation purposes.
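As a quick numerical counterpart to this idea (not the paper's three-conductor geometry or its closed-form result), the Python sketch below propagates a uniformly distributed wire separation through a textbook wide-separation inductance formula; the two-wires-above-a-ground-plane expression l_m = (μ0/4π) ln(1 + 4h1h2/s²) and all dimensions are stand-in assumptions.

# Monte Carlo Python sketch (not the paper's closed-form result or geometry): spread
# of the per-unit-length mutual inductance when the wire separation is uniformly
# distributed.  The formula and all dimensions below are stand-in assumptions.
import numpy as np

MU0 = 4e-7 * np.pi            # permeability of free space (H/m)
H1, H2 = 0.02, 0.02           # assumed wire heights above the reference plane (m)
S_MIN, S_MAX = 0.005, 0.02    # assumed uniform range of the wire separation (m)

rng = np.random.default_rng(1)
s = rng.uniform(S_MIN, S_MAX, size=200_000)                 # random separation samples
l_m = MU0 / (4.0 * np.pi) * np.log1p(4.0 * H1 * H2 / s**2)  # wide-separation formula

print("mean l_m [nH/m]:", round(l_m.mean() * 1e9, 2))
print("std  l_m [nH/m]:", round(l_m.std() * 1e9, 2))
# a histogram of l_m approximates the distribution obtained in closed form in the paper
hist, edges = np.histogram(l_m, bins=40, density=True)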


1987 ◽  
Vol 101 (2) ◽  
pp. 323-342
Author(s):  
W. B. Jurkat ◽  
H. J. Zwiesler

In this article we investigate the meromorphic differential equation X′(z) = A(z) X(z), often abbreviated by [A], where A(z) is a matrix (all matrices we consider have dimensions 2 × 2) meromorphic at infinity, i.e. holomorphic in a punctured neighbourhood of infinity with at most a pole there. Moreover, X(z) denotes a fundamental solution matrix. Given a matrix T(z) which, together with its inverse, is meromorphic at infinity (a meromorphic transformation), the function Y(z) = T⁻¹(z) X(z) solves the differential equation [B] with B = T⁻¹AT − T⁻¹T′ [1, 5]. This introduces an equivalence relation among meromorphic differential equations and leads to the question of finding a simple representative for each equivalence class, which, for example, is of importance for further function-theoretic examinations of the solutions. The first major achievement in this direction is marked by Birkhoff's reduction, which shows that it is always possible to obtain an equivalent equation [B] where B(z) is holomorphic in ℂ \ {0} (throughout this article A \ B denotes the difference of these sets) with at most a singularity of the first kind at 0 [1, 2, 5, 6]. We call this the standard form. The question of how many further simplifications can be made will be answered in the framework of our reduction theory. For this purpose we introduce the notion of a normalized standard equation [A] (NSE), which is defined by the following conditions:
(i) , where r ∈ ℕ and the A_k are constant matrices (notation: );
(ii) A(z) has trace tr for some c ∈ ℂ;
(iii) A_{r−1} has distinct eigenvalues;
(iv) the eigenvalues of A_{−1} are either incongruent modulo 1 or equal;
(v) if A_{−1} = μI, then A_{r−1} is diagonal;
(vi) A_{r−1} and A_{−1} are triangular in opposite ways;
(vii) a_{12}(z) is monic (leading coefficient equal to 1) unless a_{12} ≡ 0; furthermore, a_{21}(z) is monic in case a_{12} ≡ 0 but a_{21} ≢ 0.
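For completeness, the transformed coefficient matrix quoted above follows from a one-line computation (with the prime denoting d/dz):

% One-line verification of the transformed equation [B], using Y = T^{-1}X and X' = AX:
\begin{aligned}
  Y' &= (T^{-1})'X + T^{-1}X'
      = -T^{-1}T'\,T^{-1}X + T^{-1}AX \\
     &= \bigl(T^{-1}AT - T^{-1}T'\bigr)\,T^{-1}X
      = B\,Y ,
  \qquad B = T^{-1}AT - T^{-1}T' .
\end{aligned}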

