crucial ingredient
Recently Published Documents


TOTAL DOCUMENTS: 95 (five years: 42)
H-INDEX: 10 (five years: 3)

2021 ◽  
Vol 2021 (12) ◽  
Author(s):  
Ahmadullah Zahed

Abstract: This paper presents the fascinating correspondence between geometric function theory and scattering amplitudes with O(N) global symmetry. A crucial ingredient in establishing this correspondence is a fully crossing symmetric dispersion relation in the z-variable, rather than the fixed-channel dispersion relation. We write down the fully crossing symmetric dispersion relation for the O(N) model in the z-variable for three independent combinations of isospin amplitudes. From these dispersion relations we derive three independent sum rules (locality constraints) for the O(N) model, as well as three sets of positivity conditions. Applying the positivity conditions together with the Bieberbach–Rogosinski inequalities from geometric function theory, we obtain two-sided bounds on the Taylor coefficients of physical pion amplitudes around the crossing symmetric point (for example, π⁺π⁻ → π⁰π⁰).
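For readers unfamiliar with the geometric-function-theory side, the classical Bieberbach coefficient bounds (standard background, not a result of this paper) for a normalized univalent function on the unit disk read:

```latex
f(z) = z + \sum_{n=2}^{\infty} a_n z^n \quad (|z| < 1,\ f \text{ univalent})
\qquad \Longrightarrow \qquad |a_n| \le n .
```

Rogosinski proved bounds of the same type for typically real functions; it is two-sided inequalities of this kind that translate into bounds on the amplitude's Taylor coefficients.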


2021 ◽  
Vol 2021 (11) ◽  
pp. 113406
Author(s):  
Maria Chiara Angelini ◽  
Paolo Fachin ◽  
Simone de Feo

Abstract: Over-parametrization has been a crucial ingredient in recent developments in inference and machine learning. However, a good theory explaining this success is still lacking. In this paper we study a very simple case of a mismatched over-parametrized algorithm applied to one of the most studied inference problems: the planted clique problem. We analyze a Monte Carlo (MC) algorithm in the same class as the famous Jerrum algorithm and show that it is in general suboptimal for recovering the planted clique. We then show how to enhance its performance by adding a (mismatched) parameter, the temperature: we numerically find that this over-parametrized version of the algorithm can reach the conjectured algorithmic threshold for the planted clique problem.
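As an illustrative sketch (not the authors' exact algorithm), the following minimal Python implements a Jerrum-style Metropolis walk over cliques with an explicit temperature parameter; the graph size, β value, and step count are arbitrary choices for the example:

```python
import math
import random

def planted_clique_graph(n, k, seed=0):
    # Erdos-Renyi G(n, 1/2) with a clique planted on vertices 0..k-1.
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = (i < k and j < k) or (rng.random() < 0.5)
    return adj

def metropolis_clique(adj, beta, steps, seed=1):
    # Metropolis walk over cliques; energy = -|S|, and beta is the extra
    # (mismatched) inverse-temperature parameter of the over-parametrized version.
    rng = random.Random(seed)
    n = len(adj)
    S, best = set(), set()
    for _ in range(steps):
        v = rng.randrange(n)
        if v in S:
            # Removing a vertex raises the energy by 1: accept w.p. exp(-beta).
            if rng.random() < math.exp(-beta):
                S.discard(v)
        elif all(adj[v][u] for u in S):
            S.add(v)  # adding a compatible vertex lowers the energy: always accept
        if len(S) > len(best):
            best = set(S)
    return best

adj = planted_clique_graph(n=100, k=20)
found = metropolis_clique(adj, beta=2.0, steps=50000)
print(len(found))  # size of the largest clique encountered
```

At β → ∞ no removals are accepted and the walk reduces to a greedy Jerrum-like search; finite β lets it escape maximal non-planted cliques.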


2021 ◽  
Vol 2 (Original research articles) ◽  
Author(s):  
Felix Harder

It is known from the literature that local minimizers of mathematical programs with complementarity constraints (MPCCs) are so-called M-stationary points if a weak MPCC-tailored Guignard constraint qualification (MPCC-GCQ) holds. In this paper we present a new elementary proof of this result. Our proof is significantly simpler than existing proofs and does not rely on deeper technical machinery such as calculus rules for limiting normal cones. A crucial ingredient is a proof of a conjecture (to the best of our knowledge previously open) formulated in a Diploma thesis by Schinabeck.
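For context, an MPCC is usually written in the standard form (background material, not taken from the paper):

```latex
\min_{x \in \mathbb{R}^n} \; f(x)
\quad \text{s.t.} \quad g(x) \le 0, \quad h(x) = 0, \quad
0 \le G(x) \perp H(x) \ge 0,
```

where ⊥ requires componentwise complementarity, G_i(x) H_i(x) = 0; it is these complementarity constraints that make standard constraint qualifications fail at most feasible points, motivating tailored conditions such as MPCC-GCQ.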


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 535
Author(s):  
Justin Yirka ◽  
Yiğit Subaşı

One strategy to fit larger problems on NISQ devices is to exploit a tradeoff between circuit width and circuit depth. Unfortunately, this tradeoff still limits the size of tractable problems, since the increased depth is often not realizable before noise dominates. Here, we develop qubit-efficient quantum algorithms for entanglement spectroscopy which avoid this tradeoff. In particular, we develop algorithms for computing the trace of the n-th power of the density operator of a quantum system, Tr(ρ^n) (related to the Rényi entropy of order n), that use fewer qubits than any previous efficient algorithm while achieving similar performance in the presence of noise, thus enabling spectroscopy of larger quantum systems on NISQ devices. Our algorithms, which require a number of qubits independent of n, are variants of previous algorithms with width proportional to n, an asymptotic difference. The crucial ingredient in these new algorithms is the ability to measure and reinitialize subsets of qubits in the course of the computation, allowing us to reuse qubits and increase the circuit depth without suffering the usual noisy consequences. We also introduce the notion of effective circuit depth as a generalization of standard circuit depth suitable for circuits with qubit resets. This tool helps explain the noise resilience of our qubit-efficient algorithms and should aid in designing future algorithms. We perform numerical simulations to compare our algorithms to the original variants and show they perform similarly when subjected to noise. Additionally, we experimentally implement one of our qubit-efficient algorithms on the Honeywell System Model H0, estimating Tr(ρ^n) for larger n than was possible with previous algorithms.
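As a purely classical numerical cross-check of the target quantity (not the quantum algorithm itself), Tr(ρ^n) and the order-n Rényi entropy can be computed directly from the eigenvalues of a density matrix:

```python
import numpy as np

def renyi_trace(rho, n):
    # Tr(rho^n) = sum_i lambda_i^n over the eigenvalues of the density operator.
    return float(np.sum(np.linalg.eigvalsh(rho) ** n))

# Example: maximally mixed single-qubit state, rho = I/2.
rho = np.eye(2) / 2
print(renyi_trace(rho, 2))  # Tr(rho^2) = 1/2 (purity of the maximally mixed qubit)

# Renyi entropy of order n: S_n = log(Tr(rho^n)) / (1 - n); here S_2 = log 2.
print(np.log(renyi_trace(rho, 2)) / (1 - 2))
```

For a pure state all eigenvalues but one vanish, so Tr(ρ^n) = 1 for every n; deviations from 1 probe the entanglement spectrum.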


Author(s):  
Luca Incurvati ◽  
Julian J. Schlöder

Abstract: Many classically valid meta-inferences fail in a standard supervaluationist framework. This allegedly prevents supervaluationism from offering an account of good deductive reasoning. We provide a proof system for supervaluationist logic which includes supervaluationistically acceptable versions of the classical meta-inferences. The proof system emerges naturally by thinking of truth as licensing assertion, falsity as licensing negative assertion and lack of truth-value as licensing rejection and weak assertion. Moreover, the proof system respects well-known criteria for the admissibility of inference rules. Thus, supervaluationists can provide an account of good deductive reasoning. Our proof system moreover brings to light how one can revise the standard supervaluationist framework to make room for higher-order vagueness. We prove that the resulting logic is sound and complete with respect to the consequence relation that preserves truth in a model of the non-normal modal logic NT. Finally, we extend our approach to a first-order setting and show that supervaluationism can treat vagueness in the same way at every order. The failure of conditional proof and other meta-inferences is a crucial ingredient in this treatment and hence should be embraced, not lamented.
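The supervaluationist notion of truth at issue can be illustrated with a toy model: a sentence is supertrue iff true on every admissible precisification, superfalse iff false on all of them, and truth-valueless otherwise. A minimal sketch (all names illustrative, treating every classical valuation of the borderline atoms as admissible):

```python
from itertools import product

def supervalue(formula, atoms):
    # Evaluate the formula on every precisification (classical valuation)
    # of the borderline atoms; classify by the supervaluationist scheme.
    values = [formula(dict(zip(atoms, vals)))
              for vals in product([True, False], repeat=len(atoms))]
    if all(values):
        return "supertrue"
    if not any(values):
        return "superfalse"
    return "no truth value"

# Borderline atom p ("the patch is red", say): each precisification settles it.
print(supervalue(lambda v: v["p"] or not v["p"], ["p"]))  # supertrue: LEM survives
print(supervalue(lambda v: v["p"], ["p"]))                # no truth value
```

This already exhibits the signature supervaluationist behavior: the disjunction p ∨ ¬p is supertrue although neither disjunct is.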


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 998
Author(s):  
Benedetto Militello ◽  
Anna Napoli

A system consisting of two qubits and a resonator is considered in the presence of different sources of noise, bringing to light the possibility of making the two qubits evolve in a synchronized way. A direct qubit–qubit interaction turns out to be a crucial ingredient, as do the dissipation processes involving the resonator. The detrimental role of local dephasing of the qubits is also taken into account.


Author(s):  
Nathan Andrews

While there is a voluminous scholarship on the nexus between resource extraction and development, the question of how the harms and benefits of extraction are differentiated among stakeholders, based on factors such as access to power, authority over decision-making, social status, and gender, requires further examination. This paper combines theoretical insights from assemblage thinking and political ecology to unpack the intertwined range of actors, networks, and structures of power that inform the differentiated benefits and harms of hydrocarbon extraction in Ghana. The study shows that power serves as a crucial ingredient in understanding relations among social groups, including the purported beneficiaries of extractive activities and other actors that constitute the networked hydrocarbon industry. Scale also reveals the relational nature of the different levels (i.e. global, national, sub-national, local) at which the socio-ecological ‘goods’ and ‘bads’ of hydrocarbon extraction become manifest. Based on these findings, the paper contributes to ongoing scholarly and policy discussions around extractivism by showing how a multi-scalar analysis reveals a more complex picture of the distributional politics, power asymmetries, and injustices that underpin resource extraction.


2021 ◽  
Vol 2021 (6) ◽  
Author(s):  
Christian W. Bauer ◽  
Nicholas L. Rodd ◽  
Bryan R. Webber

Abstract: We compute the decay spectrum for dark matter (DM) with masses above the scale of electroweak symmetry breaking, all the way up to the Planck scale. For an arbitrary hard process involving a decay to the unbroken Standard Model, we determine the prompt distribution of stable states, including photons, neutrinos, positrons, and antiprotons. These spectra are a crucial ingredient in the search for DM via indirect detection at the highest energies, as probed in current and upcoming experiments including IceCube, HAWC, CTA, and LHAASO. Our approach improves considerably on existing methods; for instance, we include all relevant electroweak interactions.


Author(s):  
Younes Zouani ◽  
Abdelmounaim Abdali ◽  
Charafeddine Ait Zaouiat

The dynamic composition of components is an emerging concept that aims to allow a new application to be constructed based on a user's request. Three main ingredients must be used to achieve the dynamic composition of components: goal, scenario, and context-awareness. These three ingredients must be complemented by artificial intelligence (AI) techniques that support process discovery and storage. This paper presents a framework architecture for the dynamic composition of components that can extract expressed goals and deduce implicit ones using AI. The goal is combined with pertinent contextual data to compose the relevant components that meet the real requirements of the user. The core element of our proposed architecture is the composer component, which (i) negotiates the user goal, (ii) loads the associated scenarios and chooses the most suitable one based on the user's goal and profile, (iii) gets binding information for the scenario's actions, (iv) composes the loaded actions, and (v) stores the new component as a tree of actions enabled by contextual or process constraints. In our e-learning proof of concept, we consider five components: composer, reader, formatter, matcher, and executor. These five components stipulate that a course is the combination of existing/scraped chapters that have been adapted to a user profile in terms of language, level of difficulty, and prerequisites. Our findings show that AI is not only an element that enhances system performance in terms of response time but also a crucial ingredient that guides the dynamic composition of components.
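The five composer steps above can be sketched as follows; this is a hypothetical illustration, and every class, key, and component name is an assumption, not the authors' implementation:

```python
# Hypothetical sketch of the five composer steps; all names are illustrative.
class Composer:
    def __init__(self, scenarios, registry):
        self.scenarios = scenarios   # goal -> list of candidate scenarios
        self.registry = registry     # action name -> callable component

    def compose(self, goal, profile):
        # (i) negotiate the user goal (here: simply normalize it)
        goal = goal.strip().lower()
        # (ii) load the associated scenarios; pick the best match for the profile
        candidates = self.scenarios.get(goal, [])
        scenario = max(candidates, key=lambda s: s["score"](profile), default=None)
        if scenario is None:
            return None
        # (iii) get binding information for the scenario's actions
        actions = [self.registry[name] for name in scenario["actions"]]
        # (iv) compose the loaded actions into a pipeline
        def pipeline(data):
            for act in actions:
                data = act(data, profile)
            return data
        # (v) store the new component for later reuse
        self.registry[goal] = lambda data, _profile: pipeline(data)
        return pipeline

registry = {
    "read":   lambda d, p: d + ["chapter"],
    "format": lambda d, p: [x.upper() for x in d] if p.get("lang") == "en" else d,
}
scenarios = {"course": [{"actions": ["read", "format"],
                         "score": lambda p: 1}]}
course = Composer(scenarios, registry).compose("Course", {"lang": "en"})
print(course([]))  # ['CHAPTER']
```

The stored pipeline itself becomes a registry entry, mirroring the paper's idea that a composed component is reusable as a tree of actions.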


2021 ◽  
pp. 1-16
Author(s):  
Samuel Trachtman

Scholars have long understood the American states as “laboratories of democracy,” exploring how mechanisms of learning and competition lead to the diffusion of successful state policy experiments across the federal system. Drawing on the policy feedback literature, I develop a new framework for studying policy interdependence in American federalism. I argue that state policies can, in addition to promoting learning and competition, also feed into interest group politics in other states. Broadly speaking, the organized interests that benefit from, and are strengthened by, particular policy reforms may apply their newfound strength to propagating them. Empirically, I study rooftop solar policy, an area in which state-level decisions have been fundamental to industry growth and the emergence of installers as political actors. Bringing together a variety of administrative, lobbying, and policy data, I demonstrate that solar installers used resources accumulated in states that were early adopters of favorable rooftop solar policies to influence policy decisions elsewhere. For reformers, I suggest that subnational policy can be a crucial ingredient in building coalitions for (geographically) broader policy reform.

