standard formalism
Recently Published Documents

Total documents: 29 (last five years: 12)
H-index: 5 (last five years: 1)

2022, Vol 18 (1), pp. 1-26
Author(s): Mario Simoni, Giovanni Amedeo Cirillo, Giovanna Turvani, Mariagrazia Graziano, Maurizio Zamboni

Classical simulation of Noisy Intermediate-Scale Quantum computers is a crucial task for testing the expected performance of real hardware. The standard approach, based on solving the Schrödinger and Lindblad equations, becomes demanding in both execution time and memory as the number of qubits grows. In this article, compact models for the simulation of quantum hardware are proposed that ensure results close to those obtained with the standard formalism. Molecular Nuclear Magnetic Resonance quantum hardware is the target technology, and three non-ideality phenomena, common to other quantum technologies, are taken into account: decoherence, off-resonance qubit evolution, and undesired residual qubit-qubit interaction. A model for each non-ideality phenomenon is embedded into a MATLAB simulation infrastructure for noisy quantum computers. The accuracy of the models is tested on a benchmark of quantum circuits within the expected operating ranges of quantum hardware. The outcomes are compared with those obtained via numerical integration of the Schrödinger equation and with Qiskit's QASMSimulator. The achieved results give evidence that this work is a step towards compact models that provide fast results close to those of traditional physical simulation strategies, paving the way for their integration into a classical simulator of quantum computers.
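
Since the article's compact models target decoherence in a density-matrix setting, a minimal sketch of the same idea may help fix intuition: applying amplitude- and phase-damping Kraus channels to a single qubit in numpy. This is not the authors' MATLAB infrastructure; the T1, T2, and gate-time values are illustrative assumptions only.

```python
# Minimal single-qubit decoherence sketch (illustrative values, not the paper's models).
import numpy as np

def amplitude_damping(gamma):
    """Kraus operators for energy relaxation (T1 decay)."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    return [K0, K1]

def phase_damping(lam):
    """Kraus operators for pure dephasing."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - lam)]], dtype=complex)
    K1 = np.array([[0, 0], [0, np.sqrt(lam)]], dtype=complex)
    return [K0, K1]

def apply_channel(rho, kraus_ops):
    """rho -> sum_k K_k rho K_k^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+| state
t, T1, T2 = 10e-6, 100e-6, 50e-6                        # assumed gate time and coherence times (s)
gamma = 1 - np.exp(-t / T1)                             # relaxation probability
lam = 1 - np.exp(-2 * t * (1 / T2 - 1 / (2 * T1)))      # pure-dephasing probability

for channel in (amplitude_damping(gamma), phase_damping(lam)):
    rho = apply_channel(rho, channel)

print("remaining coherence |rho_01|:", abs(rho[0, 1]))  # < 0.5 because of decoherence
```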


2022, Vol 6 (POPL), pp. 1-28
Author(s): Ugo Dal Lago, Francesco Gavazzo

Graded modal type systems and coeffects are becoming a standard formalism for context-dependent, usage-sensitive computations, especially when combined with computational effects. From a semantic perspective, effectful and coeffectful languages have been studied mostly by means of denotational semantics, and almost nothing has been done from the point of view of relational reasoning. This gap in the literature is surprising, since many cornerstone results on concrete coeffects, such as non-interference, metric preservation, and proof irrelevance, are inherently relational. In this paper, we fill this gap by developing a general theory and calculus of program relations for higher-order languages with combined effects and coeffects. The relational calculus builds upon the novel notion of a corelator (or comonadic lax extension) to handle coeffects relationally. Inside such a calculus, we define three notions of effectful and coeffectful program refinement: contextual approximation, logical preorder, and applicative similarity. These are the first operationally-based notions of program refinement (and, consequently, equivalence) for languages with combined effects and coeffects to appear in the literature. We show that the axiomatics of a corelator (together with that of a relator) is precisely what is needed to prove that all the aforementioned program refinements are precongruences, thereby obtaining compositional relational techniques for reasoning about combined effects and coeffects.
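
As a loose, dynamic illustration of the grading that such type systems track statically: grades live in a semiring (here the natural numbers counting input uses), composition multiplies grades, and pairing adds them. The Python names below are ours, and nothing here models relators or corelators; it is only meant to make the semiring structure concrete.

```python
# Toy usage-grading sketch (our names; not the paper's calculus).
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Graded:
    grade: int                  # element of the usage semiring (N, +, *)
    fn: Callable[[Any], Any]

def compose(g: Graded, f: Graded) -> Graded:
    """g after f: each of g's uses needs f.grade copies of f's input, so grades multiply."""
    return Graded(g.grade * f.grade, lambda x: g.fn(f.fn(x)))

def pair(f: Graded, g: Graded) -> Graded:
    """Run both on the same input: usages add (contraction)."""
    return Graded(f.grade + g.grade, lambda x: (f.fn(x), g.fn(x)))

dup = Graded(2, lambda x: (x, x))           # uses its argument twice
inc = Graded(1, lambda x: x + 1)            # uses it once

print(compose(dup, inc).grade)              # 2 * 1 = 2
print(pair(inc, compose(dup, inc)).grade)   # 1 + 2 = 3
```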


2021, Vol 22 (S13)
Author(s): Sara Pidò, Pietro Crovari, Franca Garzotto

Abstract. Background: With the advancement of Next Generation Sequencing techniques, a tremendous amount of genomic information has been made available for analysis by computational methods. Bioinformatics Tertiary Analysis is a complex, multidisciplinary process that represents the final step of the whole bioinformatics analysis pipeline. Despite the popularity of the subject, the Bioinformatics Tertiary Analysis process has not yet been specified in a systematic way. The lack of a reference model results in a plethora of technological tools that are designed mostly around the data rather than around the human process involved in Tertiary Analysis, making such systems difficult to use and to integrate. Methods: To address this problem, we propose a conceptual model that captures the salient characteristics of the research methods and human tasks involved in Bioinformatics Tertiary Analysis. The model is grounded in a user study that involved bioinformatics specialists in the elicitation of a hierarchical task tree representing the Tertiary Analysis process. The outcome was refined and validated using the results of a vast survey of the literature reporting examples of Bioinformatics Tertiary Analysis activities. Results: The final hierarchical task tree was then converted into an ontological representation using a standard ontology formalism. The results of our research provide a reference process model for Tertiary Analysis that can be used both to analyze and compare existing tools and to design new ones. Conclusions: To highlight the potential of our approach and to exemplify its concrete applications, we describe a new bioinformatics tool and how the proposed process model informed its design.
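
To make the "ontological representation" step concrete, the fragment below sketches how a small piece of a hierarchical task tree could be encoded with rdflib using RDFS subclassing; the library choice, namespace, and task names are our illustrative assumptions, not the paper's released artifact.

```python
# Illustrative encoding of a task-tree fragment as an RDFS ontology (assumed names).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

TA = Namespace("http://example.org/tertiary-analysis#")
g = Graph()
g.bind("ta", TA)

# Root task and two sub-tasks; rdfs:subClassOf expresses the hierarchy.
for task in ("TertiaryAnalysis", "DataRetrieval", "DataExploration"):
    g.add((TA[task], RDF.type, RDFS.Class))
g.add((TA.DataRetrieval, RDFS.subClassOf, TA.TertiaryAnalysis))
g.add((TA.DataExploration, RDFS.subClassOf, TA.TertiaryAnalysis))
g.add((TA.DataRetrieval, RDFS.label, Literal("Retrieve genomic datasets")))

print(g.serialize(format="turtle"))
```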


2021, Vol 2021 (6)
Author(s): Andrzej Hryczuk, Maxim Laletin

Abstract: We study a novel dark matter production mechanism based on freeze-in through semi-production, i.e. the inverse semi-annihilation processes. A peculiar feature of this scenario is that the production rate is suppressed by the small initial abundance of dark matter; consequently, creating the observed abundance requires much larger coupling values than for the usual freeze-in. We provide a concrete example model exhibiting such a production mechanism and study it in detail, extending the standard formalism to include the evolution of the dark matter temperature alongside its number density, and we discuss the importance of this improved treatment. Finally, we confront the relic density constraint with the limits and prospects of indirect dark matter detection searches. We show that, even if it was never in full thermal equilibrium in the early Universe, dark matter could nevertheless have a present-day annihilation cross section strong enough to lead to observable signals.
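
The coupling-sensitivity argument can be seen in a schematic toy model (not the paper's Boltzmann system): in standard freeze-in the yield grows at a rate independent of Y, whereas in freeze-in via semi-production the rate is proportional to the tiny existing yield itself, so a far larger effective coupling is needed to reach a comparable abundance. All numbers below are made up for illustration.

```python
# Schematic comparison of freeze-in vs. freeze-in through semi-production (toy numbers).
import numpy as np
from scipy.integrate import solve_ivp

Y0 = 1e-15                                         # assumed tiny initial yield

def standard(x, Y, A):
    return [A / x**2]                              # dY/dx independent of Y

def semi(x, Y, A):
    return [A * Y[0] / x**2]                       # dY/dx proportional to Y itself

span = (1.0, 100.0)
sol_std = solve_ivp(standard, span, [Y0], args=(1e-12,), rtol=1e-8, atol=1e-30)
sol_semi = solve_ivp(semi, span, [Y0], args=(10.0,), rtol=1e-8, atol=1e-30)

print("standard freeze-in, final Y:", sol_std.y[0, -1])   # set by the coupling-like factor 1e-12
print("semi-production,   final Y:", sol_semi.y[0, -1])   # needs a factor ~10 to grow the 1e-15 seed
```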


2020, Vol 17 (09), pp. 2050145
Author(s): Mir Faizal, Davood Momeni

Quantum optical phenomena are based on Maxwell's equations, and it is becoming important to understand such phenomena at short distances, so quantum optics should be analyzed using a short-distance-corrected Maxwell's equation. Maxwell's action can be obtained from quantum electrodynamics in the framework of effective field theory, and the leading-order short-distance corrections to Maxwell's action can likewise be obtained from the derivative expansion of the same effective field theory. Such short-distance corrections are universal for all quantum optical systems and affect all short-distance quantum optical phenomena. In this paper, we analyze the form of such corrections and demonstrate that the standard formalism of quantum optics can still be used, with suitable modifications, to analyze quantum optical phenomena arising from this short-distance-corrected Maxwell's action.
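
For orientation only: in a derivative expansion the leading short-distance corrections to Maxwell's action appear as higher-derivative operators suppressed by the electron mass, schematically

$$\mathcal{L}_{\rm eff} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + \tfrac{c_1}{m_e^{2}}\, \partial_{\alpha} F_{\mu\nu}\, \partial^{\alpha} F^{\mu\nu} + \mathcal{O}(m_e^{-4}),$$

where the operator shown and its coefficient $c_1$ are indicative of the general form rather than the specific result derived in the paper.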


2020
Author(s): Sebastian Dick, Marivi Fernandez-Serra

Density Functional Theory (DFT) is the standard formalism to study the electronic structure of matter at the atomic scale. In Kohn-Sham DFT simulations, the balance between accuracy and computational cost depends on the choice of exchange and correlation functional, which only exists in approximate form. Here we propose a framework to create density functionals using supervised machine learning, termed NeuralXC. These machine-learned functionals are designed to lift the accuracy of baseline functionals towards that provided by more accurate methods while maintaining their efficiency. We show that the functionals learn a meaningful representation of the physical information contained in the training data, making them transferable across systems. A NeuralXC functional optimized for water outperforms other methods in characterizing bond breaking and excels when compared against experimental results. This work demonstrates that NeuralXC is a first step towards the design of a universal, highly accurate functional valid for both molecules and solids.
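
Conceptually, the approach amounts to learning a correction on top of a baseline functional from density-derived descriptors. The snippet below is a generic sketch under that reading, with an assumed descriptor size, random placeholder data, and an architecture of our own choosing; it is not the released NeuralXC code.

```python
# Generic "learned correction to a baseline functional" sketch (placeholder data).
import torch
from torch import nn

n_desc = 16                                    # assumed number of density descriptors per atom
model = nn.Sequential(
    nn.Linear(n_desc, 32), nn.SiLU(),
    nn.Linear(32, 32), nn.SiLU(),
    nn.Linear(32, 1),                          # per-atom energy correction
)

X = torch.randn(256, n_desc)                   # placeholder descriptors
dE = torch.randn(256, 1)                       # placeholder (reference - baseline) energies

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), dE)
    loss.backward()
    opt.step()

# At inference time: E_total = E_baseline + sum of predicted per-atom corrections.
print("final training loss:", float(loss))
```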


2020, Vol 8 (5)
Author(s): Gabriel Cuomo, Luca Vecchi, Andrea Wulzer

The transition between the broken and unbroken phases of massive gauge theories, namely the rearrangement of longitudinal and Goldstone degrees of freedom that occurs at high energy, is not manifestly smooth in the standard formalism. The lack of smoothness concretely shows up as an anomalous growth with energy of the longitudinal polarization vectors, as they emerge in Feynman rules both for real on-shell external particles and for virtual particles from the decomposition of the gauge field propagator. This makes the characterization of Feynman amplitudes in the high-energy limit quite cumbersome, which in turn poses peculiar challenges in the study of Electroweak processes at energies much above the Electroweak scale. We develop a Lorentz-covariant formalism where polarization vectors are well-behaved and, consequently, energy power-counting is manifest at the level of individual Feynman diagrams. This allows us to prove the validity of the Effective $W$ Approximation and, more generally, the factorization of collinear emissions, and to compute the corresponding splitting functions at tree level. Our formalism applies at all orders in perturbation theory, for arbitrary gauge groups and generic linear gauge-fixing functionals. It can be used to simplify Standard Model loop calculations by performing the high-energy expansion directly on the Feynman diagrams. This is illustrated by computing the radiative corrections to the decay of the top quark.
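
The anomalous growth mentioned above is the familiar behavior of the longitudinal polarization vector of a vector boson of mass $m$, energy $E$ and three-momentum $\vec k$ along $\hat k$,

$$\epsilon_L^{\mu}(k) = \frac{1}{m}\left(|\vec k|,\, E\,\hat k\right) = \frac{k^{\mu}}{m} + \mathcal{O}\!\left(\frac{m}{E}\right),$$

which grows linearly with energy while the transverse polarizations remain of order one.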


2019, Vol 64 (11), pp. 991
Author(s): S. Mignemi

We review the main features of the relativistic Snyder model and its generalizations. We discuss quantum field theory on this background using the standard formalism of noncommutative QFT and consider the possibility of obtaining a finite theory.
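
For reference, the relativistic Snyder model is usually defined (in units with $\hbar = 1$, and with conventions for the sign and placement of the scale $\beta \sim 1/M^2$ varying in the literature) by the commutation relations

$$[\hat x^{\mu}, \hat x^{\nu}] = i\beta\, J^{\mu\nu}, \qquad [\hat x^{\mu}, \hat p^{\nu}] = i\left(\eta^{\mu\nu} + \beta\, \hat p^{\mu} \hat p^{\nu}\right), \qquad [\hat p^{\mu}, \hat p^{\nu}] = 0,$$

where $J^{\mu\nu}$ are the Lorentz generators, so that Lorentz invariance is preserved despite the noncommutativity of positions.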


2019
Author(s): Sebastian Dick, Marivi Fernandez-Serra

Density Functional Theory (DFT) is the standard formalism to study the electronic structure of matter at the atomic scale. The balance between accuracy and computational cost that DFT-based simulations provide allows researchers to understand the structural and dynamical properties of increasingly large and complex systems at the quantum mechanical level. In Kohn-Sham DFT, this balance depends on the choice of exchange and correlation functional, which only exists in approximate form. Increasing the non-locality of this functional and climbing the figurative Jacob's ladder of DFT, one can systematically reduce the amount of approximation involved and thus approach the exact functional. Doing this, however, comes at the price of increased computational cost, and so, for extensive systems, the predominant methods of choice can still be found within the lower-rung approximations. Here we propose a framework to create highly accurate density functionals by using supervised machine learning, termed NeuralXC. These machine-learned functionals are designed to lift the accuracy of local and semilocal functionals to that provided by more accurate methods while maintaining their efficiency. We show that the functionals learn a meaningful representation of the physical information contained in the training data, making them transferable across systems. We further demonstrate how a functional optimized on water can reproduce experimental results when used in molecular dynamics simulations. Finally, we discuss the effects that our method has on self-consistent electron densities by comparing these densities to benchmark coupled-cluster results.
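
On the last point, a common way to quantify such density effects is an integrated absolute difference between a self-consistent density and a benchmark density on a real-space grid. The helper below is a generic numpy sketch with placeholder arrays, not the metric or data used in the paper.

```python
# Generic integrated density-difference metric against a benchmark density (placeholder data).
import numpy as np

def density_error(rho_test, rho_ref, voxel_volume):
    """Integral of |rho_test - rho_ref| over the grid, normalized per electron."""
    n_elec = np.sum(rho_ref) * voxel_volume
    return np.sum(np.abs(rho_test - rho_ref)) * voxel_volume / n_elec

rng = np.random.default_rng(0)
rho_ref = rng.random((40, 40, 40))                        # placeholder benchmark (e.g. coupled-cluster) density
rho_test = rho_ref + 0.01 * rng.standard_normal(rho_ref.shape)
print("fractional density error:", density_error(rho_test, rho_ref, voxel_volume=0.1**3))
```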

