Stochastic metastability and Hamiltonian dynamics

Stochastic metastability is stability weakened by chance. In a definition of stability, certain conditions must be satisfied for all sufficiently small displacements. Corresponding to a definition of stability we introduce a definition of (stochastic) metastability, in which the same conditions must be satisfied with probability approaching unity as the maximum displacement approaches zero. Measurable sets are as essential to metastability as open sets are to stability. Because of Arnol’d diffusion, the invariant tori of conservative Hamiltonian systems of n ≥ 3 degrees of freedom are not usually dynamically stable with respect to changes in initial conditions. However, we prove using the Lebesgue density theorem that if the proper invariant tori of phase space fill a bounded domain of positive measure, then almost all of them are dynamically metastable. Metastability is difficult to distinguish from stability in physical systems or numerical experiments, so the result is consistent with the difficulty of observing Arnol’d diffusion in the neighbourhood of an invariant torus in practice. Probability and measure are essential to the general theory of stability of nonlinear systems.
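The probabilistic definition above can be illustrated with a hypothetical numerical sketch (not part of the paper's proof): for the Chirikov standard map, estimate the fraction of orbits started within a displacement delta of the elliptic fixed point (pi, 0) that keep their momentum bounded, and observe that it approaches unity as delta shrinks. All parameter values here are illustrative.

```python
import math
import random

def standard_map(x, p, K):
    """One step of the Chirikov standard map."""
    p = p + K * math.sin(x)
    x = (x + p) % (2.0 * math.pi)
    return x, p

def stay_fraction(delta, K=0.9, trials=200, steps=500, bound=0.5):
    """Estimated probability that an orbit started within `delta` of the
    elliptic fixed point (pi, 0) keeps |p| below `bound` for `steps` steps."""
    random.seed(0)
    stayed = 0
    for _ in range(trials):
        x = math.pi + random.uniform(-delta, delta)
        p = random.uniform(-delta, delta)
        bounded = True
        for _ in range(steps):
            x, p = standard_map(x, p, K)
            if abs(p) > bound:
                bounded = False
                break
        stayed += bounded
    return stayed / trials
```

In the spirit of the definition, `stay_fraction(delta)` tends to 1 as delta tends to 0 for a metastable point, while it may remain well below 1 for large displacements.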

2011 ◽  
Vol 20 (supp01) ◽  
pp. 94-101
Author(s):  
ROBERTO A SUSSMAN

If our cosmic location lies within a large-scale under-dense region or "void", then current cosmological observations can be explained without resorting to a cosmological constant or to an exotic and elusive source like "dark energy". If we further assume this void region to be spherical (as almost all current void models do), then fitting observational data severely constrains our position to be very near the void center, which is a very special and unlikely observation point. We argue in this article that existing spherical void models must be regarded as gross approximations that arise by smoothing out more realistic non-spherical configurations that may fit observations without the limitations imposed by spherical symmetry. In particular, the class of quasi-spherical Szekeres models provides sufficient degrees of freedom to describe the evolution of non-spherical inhomogeneities, including a configuration consisting of several elongated supercluster-like overdense filaments with large underdense regions between them. We summarize a recently published example of such a configuration, showing that it yields a reasonable coarse-grained description of realistic observed structures. While the density distribution is not spherically symmetric, its proper volume average yields a spherical density void profile of 250 Mpc that roughly agrees with observations. Also, once we consider our location to lie within a non-spherical void, the definition of a "center" location becomes more nuanced, and thus the constraints placed by the fitting of observations on our position with respect to this location become less restrictive.


2020 ◽  
Vol 23 (2) ◽  
pp. 133-148
Author(s):  
Anastasios Bountis

In this paper, I review a number of results that my co-workers and I have obtained in the field of one-dimensional (1D) Hamiltonian lattices. This field has grown in recent years, owing to its importance in revealing many phenomena that concern the occurrence of chaotic behavior in conservative physical systems with a high number of degrees of freedom. After the establishment of Kolmogorov-Arnol'd-Moser (KAM) theory in the 1960s, a wealth of results was obtained about such systems as small perturbations of completely integrable N-degree-of-freedom Hamiltonians, where ordered motion is dominant in the form of invariant tori. Since the 1980s, however, and particularly in the last two decades, there has been great progress in understanding the properties of 1D Hamiltonian lattices far from the KAM regime, where "weak" and "strong" forms of chaos begin to play an increasingly significant role. It is the purpose of this review to address and highlight some of these advances, to which the author has made several contributions concerning the dynamics and statistics of these lattices.
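As a concrete, purely illustrative example of the 1D Hamiltonian lattices discussed (not code from the review), a minimal FPUT-beta chain with fixed ends can be integrated with the symplectic leapfrog scheme; the bounded energy drift is the standard sanity check for such simulations. All parameter values are made up.

```python
import math

def accelerations(q, beta):
    """Forces on an FPUT-beta chain with fixed ends (q_0 = q_{N+1} = 0)."""
    ext = [0.0] + list(q) + [0.0]
    out = []
    for i in range(1, len(ext) - 1):
        dl = ext[i] - ext[i - 1]      # left bond stretch
        dr = ext[i + 1] - ext[i]      # right bond stretch
        out.append(dr - dl + beta * (dr ** 3 - dl ** 3))
    return out

def energy(q, p, beta):
    """Total energy H = sum p_i^2/2 + sum over bonds (d^2/2 + beta*d^4/4)."""
    ext = [0.0] + list(q) + [0.0]
    kin = sum(v * v for v in p) / 2.0
    pot = sum((ext[i + 1] - ext[i]) ** 2 / 2.0
              + beta * (ext[i + 1] - ext[i]) ** 4 / 4.0
              for i in range(len(ext) - 1))
    return kin + pot

def leapfrog(q, p, beta, dt, steps):
    """Symplectic velocity-Verlet (leapfrog) integration."""
    a = accelerations(q, beta)
    for _ in range(steps):
        p = [pi + 0.5 * dt * ai for pi, ai in zip(p, a)]
        q = [qi + dt * pi for qi, pi in zip(q, p)]
        a = accelerations(q, beta)
        p = [pi + 0.5 * dt * ai for pi, ai in zip(p, a)]
    return q, p
```

A typical use is to excite the lowest linear mode, integrate, and verify that the total energy stays nearly constant.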


Author(s):  
Vadim Kaloshin ◽  
Ke Zhang

This introductory chapter provides an overview of Arnold diffusion. The famous question called the ergodic hypothesis, formulated by Maxwell and Boltzmann, suggests that for a typical Hamiltonian on a typical energy surface, all but a set of initial conditions of zero measure have trajectories dense in this energy surface. However, Kolmogorov-Arnold-Moser (KAM) theory showed that for an open set of (nearly integrable) Hamiltonian systems, there is a set of initial conditions of positive measure with almost periodic trajectories. This disproved the ergodic hypothesis and forced a reconsideration of the problem. For autonomous nearly integrable systems of two degrees of freedom, or time-periodic systems of one and a half degrees of freedom, the KAM invariant tori divide the phase space. These invariant tori forbid large-scale instability. When the number of degrees of freedom is larger than two, large-scale instability is indeed possible, as evidenced by the examples given by Vladimir Arnold. The chapter explains that the book answers the question of the typicality of these instabilities in the two-and-a-half-degrees-of-freedom case.
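The confining role of KAM tori in low dimensions can be sketched with the Chirikov standard map (an illustrative example, not taken from the chapter): below the critical coupling K_c of roughly 0.97, surviving rotational invariant circles trap the momentum of an orbit, while for K well above K_c the orbit diffuses freely. The specific values below are assumptions chosen for the demonstration.

```python
import math

def max_momentum(x, p, K, steps):
    """Largest |p| reached along a standard-map orbit of the given length."""
    pmax = abs(p)
    for _ in range(steps):
        p = p + K * math.sin(x)
        x = (x + p) % (2.0 * math.pi)
        pmax = max(pmax, abs(p))
    return pmax
```

At K = 0.5 the orbit stays sandwiched between surviving invariant circles; at K = 5 no such barriers remain and the momentum wanders far from its initial value.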


Author(s):  
Flavio Mercati

This chapter explains in detail the current Hamiltonian formulation of Shape Dynamics (SD) and the concept of the Linking Theory, of which general relativity (GR) and SD are two complementary gauge fixings. The physical degrees of freedom of SD are identified, the simple way in which it solves the problem of time and the problem of observables in quantum gravity is explained, and the solution to the problem of constructing a spacetime slab from a solution of SD (together with the related definition of physical rods and clocks) is described. Furthermore, the canonical way of coupling matter to SD is introduced, together with the operational definition of the four-dimensional line element as an effective background for matter fields. The chapter concludes with two ‘structural’ results obtained in the attempt to find a construction principle for SD: the concept of ‘symmetry doubling’, related to the BRST formulation of the theory, and the idea of ‘conformogeometrodynamics regained’, that is, deriving the theory as the unique one in the extended phase space of GR that realizes the symmetry-doubling idea.


Nanophotonics ◽  
2020 ◽  
Vol 9 (13) ◽  
pp. 4117-4126 ◽  
Author(s):  
Igor Gershenzon ◽  
Geva Arwas ◽  
Sagie Gadasi ◽  
Chene Tradonsky ◽  
Asher Friesem ◽  
...  

Abstract Recently, there has been growing interest in the utilization of physical systems as heuristic optimizers for classical spin Hamiltonians. A prominent approach employs gain-dissipative optical oscillator networks for this purpose. Unfortunately, these systems inherently suffer from an inexact mapping between the oscillator network loss rate and the spin Hamiltonian, due to additional degrees of freedom present in the system, such as the oscillation amplitude. In this work, we theoretically analyze and experimentally demonstrate a scheme for the alleviation of this difficulty. The scheme involves control over the laser oscillator amplitude through modification of the individual laser oscillator loss. We demonstrate this approach in a laser network classical XY model simulator based on a digital degenerate cavity laser. We prove that to each XY model energy minimum there corresponds a unique set of laser loss values that leads to a network state with identical oscillation amplitudes and with phase values that coincide with the XY model minimum. We experimentally demonstrate an eightfold improvement in the deviation from the minimal XY energy by employing our proposed solution scheme.
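For readers unfamiliar with the target Hamiltonian, here is a toy sketch of the classical XY minimization problem that such laser networks solve physically. Plain gradient descent stands in for the optical dynamics, and the coupling matrices and parameters are made up for illustration; this is not the authors' method.

```python
import math
import random

def xy_energy(theta, J):
    """Classical XY Hamiltonian H = -sum_{i<j} J[i][j]*cos(theta_i - theta_j)."""
    n = len(theta)
    return -sum(J[i][j] * math.cos(theta[i] - theta[j])
                for i in range(n) for j in range(i + 1, n))

def xy_minimize(J, lr=0.1, steps=2000, seed=1):
    """Gradient descent on the XY energy; J must be a symmetric matrix."""
    random.seed(seed)
    n = len(J)
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        grad = [sum(J[i][j] * math.sin(theta[i] - theta[j])
                    for j in range(n) if j != i) for i in range(n)]
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta
```

For a ferromagnetic triangle (all couplings +1) the minimum is the aligned state with energy -3; for an antiferromagnetic triangle (all couplings -1) the phases settle 120 degrees apart with energy -1.5.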


Author(s):  
Anna Mahtani

Abstract The ex ante Pareto principle has an intuitive pull, and it has been a principle of central importance since Harsanyi’s defence of utilitarianism (to be found in e.g. Harsanyi, Rational behaviour and bargaining equilibrium in games and social situations. CUP, Cambridge, 1977). The principle has been used to criticize and refine a range of positions in welfare economics, including egalitarianism and prioritarianism. But this principle faces a serious problem. I have argued elsewhere (Mahtani, J Philos 114(6):303-323, 2017) that the concept of ex ante Pareto superiority is not well defined, because its application in a choice situation concerning a fixed population can depend on how the members of that population are designated. I show in this paper that in almost all cases of policy choice, there will be numerous sets of rival designators for the same fixed population. I explore two ways that we might complete the definition of ex ante Pareto superiority. I call these the ‘supervaluationist’ reading and the ‘subvaluationist’ reading. I reject the subvaluationist reading as uncharitable, and argue that the supervaluationist reading is the most promising interpretation of the ex ante Pareto principle. I end by exploring some of the implications of this principle for prioritarianism and egalitarianism.


1980 ◽  
Vol 3 (1) ◽  
pp. 111-132 ◽  
Author(s):  
Zenon W. Pylyshyn

Abstract The computational view of mind rests on certain intuitions regarding the fundamental similarity between computation and cognition. We examine some of these intuitions and suggest that they derive from the fact that computers and human organisms are both physical systems whose behavior is correctly described as being governed by rules acting on symbolic representations. Some of the implications of this view are discussed. It is suggested that a fundamental hypothesis of this approach (the “proprietary vocabulary hypothesis”) is that there is a natural domain of human functioning (roughly what we intuitively associate with perceiving, reasoning, and acting) that can be addressed exclusively in terms of a formal symbolic or algorithmic vocabulary or level of analysis.

Much of the paper elaborates various conditions that need to be met if a literal view of mental activity as computation is to serve as the basis for explanatory theories. The coherence of such a view depends on there being a principled distinction between functions whose explanation requires that we posit internal representations and those that we can appropriately describe as merely instantiating causal physical or biological laws. In this paper the distinction is empirically grounded in a methodological criterion called the “cognitive impenetrability condition.” Functions are said to be cognitively impenetrable if they cannot be influenced by such purely cognitive factors as goals, beliefs, inferences, tacit knowledge, and so on. Such a criterion makes it possible to empirically separate the fixed capacities of mind (called its “functional architecture”) from the particular representations and algorithms used on specific occasions. In order for computational theories to avoid being ad hoc, they must deal effectively with the “degrees of freedom” problem by constraining the extent to which they can be arbitrarily adjusted post hoc to fit some particular set of observations.
This in turn requires that the fixed architectural function and the algorithms be independently validated. It is argued that the architectural assumptions implicit in many contemporary models run afoul of the cognitive impenetrability condition, since the required fixed functions are demonstrably sensitive to tacit knowledge and goals. The paper concludes with some tactical suggestions for the development of computational cognitive theories.


2013 ◽  
Vol 135 (6) ◽  
Author(s):  
R. Fargère ◽  
P. Velex

A global model of mechanical transmissions is introduced which deals with most of the possible interactions between gears, shafts, and hydrodynamic journal bearings. A specific element for wide-faced gears with nonlinear time-varying mesh stiffness and tooth shape deviations is combined with shaft finite elements, whereas the bearing contributions are introduced based on the direct solution of Reynolds' equation. Because of the large bearing clearances, particular attention has been paid to the definition of the degrees of freedom and their datum. Solutions are derived by combining a time-step integration scheme, a Newton-Raphson method, and a normal contact algorithm in such a way that the contact conditions in the bearings and on the gear teeth are dealt with simultaneously. A series of comparisons with experimental results obtained on a test rig is given which proves that the proposed model is sound. Finally, a number of results are presented which show that parameters often discarded in global models, such as the location of the oil inlet area, the oil temperature in the bearings, and the clearance/elastic coupling interactions, can significantly influence static and dynamic tooth loading.
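As an aside for readers unfamiliar with the solver ingredients named above, the Newton-Raphson step at the heart of such schemes can be illustrated on a hypothetical scalar hardening-stiffness equilibrium k0*x + k3*x**3 = F. The values and the equation itself are made up for illustration and are unrelated to the paper's model.

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson iteration: x <- x - f(x) / f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# Hypothetical hardening-stiffness equilibrium: k0*x + k3*x**3 = F
k0, k3, F = 1.0, 50.0, 2.0  # made-up nondimensional values
deflection = newton_raphson(lambda x: k0 * x + k3 * x ** 3 - F,
                            lambda x: k0 + 3.0 * k3 * x ** 2,
                            x0=F / k0)  # start from the linear estimate
```

Starting from the linear estimate F/k0, the iteration converges quadratically once it nears the root, which is why the method is a standard workhorse in nonlinear structural solvers.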


1995 ◽  
Vol 117 (3) ◽  
pp. 582-588 ◽  
Author(s):  
L. N. Virgin ◽  
T. F. Walsh ◽  
J. D. Knight

This paper describes the results of a study into the dynamic behavior of a magnetic bearing system. The research focuses attention on the influence of nonlinearities on the forced response of a two-degree-of-freedom rotating mass suspended by magnetic bearings and subject to rotating unbalance and feedback control. Geometric coupling between the degrees of freedom leads to a pair of nonlinear ordinary differential equations, which are then solved using both numerical simulation and approximate analytical techniques. The system exhibits a variety of interesting and somewhat unexpected phenomena, including various amplitude-driven bifurcational events, sensitivity to initial conditions, and the complete loss of stability associated with escape from the potential well in which the system can be thought of as oscillating. An approximate criterion to avoid this last possibility is developed, based on the concept of limiting the response of the system. The present paper may be considered an extension of an earlier study by the same authors, which described the practical context of the work, free vibration, control aspects, and the derivation of the mathematical model.
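The escape-from-a-potential-well phenomenon mentioned above can be sketched with a generic single-degree-of-freedom Helmholtz-type oscillator, x'' + c*x' + x - x^2 = F*sin(omega*t), whose well has a barrier at x = 1. This is a hypothetical stand-in, not the paper's two-degree-of-freedom magnetic bearing model, and all parameter values are assumptions.

```python
import math

def escapes(F, omega=0.85, c=0.1, dt=0.01, t_max=300.0):
    """RK4 integration of x'' + c*x' + x - x**2 = F*sin(omega*t),
    started at rest at the bottom of the well (x = 0).
    Returns True if the trajectory leaves the well (x > 3)."""
    def deriv(t, x, v):
        return v, F * math.sin(omega * t) - c * v - x + x * x
    x = v = t = 0.0
    while t < t_max:
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = deriv(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = deriv(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if x > 3.0:
            return True
    return False
```

Sweeping the forcing amplitude F locates the escape boundary: small forcing keeps the response confined to the well, while sufficiently large forcing drives the mass over the barrier, which is the kind of limit the paper's response-limiting criterion is designed to avoid.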

