A quantitative model for simply typed λ-calculus

Author(s):  
Martin Hofmann ◽  
Jérémy Ledent

Abstract We use a simplified version of the framework of resource monoids, introduced by Dal Lago and Hofmann, to interpret the simply typed λ-calculus with constants zero and successor. We then use this model to prove a simple quantitative result: a bound on the size of the normal form of λ-terms. While the bound itself is already known, this is, to our knowledge, the first semantic proof of this fact. Our use of resource monoids differs from other instances in the literature in that it measures the size of λ-terms rather than time complexity.
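To make the quantity being bounded concrete, here is a small illustrative sketch (ours, not the paper's resource-monoid model): a naive normalizer for λ-terms with constants zero and successor, together with a size measure, so the size of a term can be compared with the size of its normal form. All names and the representation are our own.

```python
# Illustrative only: a naive normal-order normalizer for lambda-terms with
# constants zero and succ, plus a size measure. This is NOT the paper's
# resource-monoid model; it merely makes the bounded quantity (the size of
# the normal form) concrete.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

@dataclass(frozen=True)
class Zero:
    pass

@dataclass(frozen=True)
class Succ:
    pass  # the successor constant; applied to arguments via App

_counter = 0

def fresh(name):
    global _counter
    _counter += 1
    return f"{name}_{_counter}"

def free_vars(t):
    if isinstance(t, Var): return {t.name}
    if isinstance(t, Lam): return free_vars(t.body) - {t.param}
    if isinstance(t, App): return free_vars(t.fn) | free_vars(t.arg)
    return set()

def subst(t, x, s):
    """Capture-avoiding substitution t[x := s]."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.fn, x, s), subst(t.arg, x, s))
    if isinstance(t, Lam):
        if t.param == x:
            return t
        if t.param in free_vars(s):  # rename the binder to avoid capture
            p = fresh(t.param)
            return Lam(p, subst(subst(t.body, t.param, Var(p)), x, s))
        return Lam(t.param, subst(t.body, x, s))
    return t  # Zero, Succ

def normalize(t):
    """Normal-order reduction; terminates on simply typed terms."""
    if isinstance(t, App):
        fn = normalize(t.fn)
        if isinstance(fn, Lam):
            return normalize(subst(fn.body, fn.param, t.arg))
        return App(fn, normalize(t.arg))
    if isinstance(t, Lam):
        return Lam(t.param, normalize(t.body))
    return t

def size(t):
    if isinstance(t, Lam): return 1 + size(t.body)
    if isinstance(t, App): return 1 + size(t.fn) + size(t.arg)
    return 1

# (\f. \x. f (f x)) succ zero  normalizes to  succ (succ zero):
two = Lam("f", Lam("x", App(Var("f"), App(Var("f"), Var("x")))))
term = App(App(two, Succ()), Zero())
print(size(term), size(normalize(term)))  # 11 5
```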

Author(s):  
A. V. Crewe

We have become accustomed to differentiating between the scanning microscope and the conventional transmission microscope according to the resolving power that the two instruments offer. The conventional microscope is capable of a point resolution of a few angstroms and a line resolution, for periodic objects, of about 1 Å. On the other hand, the scanning microscope, in its normal form, is not ordinarily capable of a point resolution better than 100 Å. Upon examining the reasons for the 100 Å limitation, it becomes clear that it rests more on tradition than on reason; in particular, it is a condition imposed on the microscope by adherence to thermal sources of electrons.


Author(s):  
Shahriar Aslani ◽  
Patrick Bernard

Abstract In the study of Hamiltonian systems on cotangent bundles, it is natural to perturb Hamiltonians by adding potentials (functions depending only on the base point). This led to the definition of Mañé genericity [8]: a property is generic if, given a Hamiltonian $H$, the set of potentials $g$ such that $H+g$ satisfies the property is generic. This notion is mostly used in the context of Hamiltonians that are convex in $p$, in the sense that $\partial ^2_{pp} H$ is positive definite at each point. We will also restrict our study to this situation. There is a close relation between perturbations of Hamiltonians by a small additive potential and perturbations by a positive factor close to one. Indeed, the Hamiltonians $H+g$ and $H/(1-g)$ have the same level-one energy surface, hence their dynamics on this energy surface are reparametrisations of each other; this is the Maupertuis principle. This remark is particularly relevant when $H$ is homogeneous in the fibers (which corresponds to Finsler metrics) or even fiberwise quadratic (which corresponds to Riemannian metrics). In these cases, perturbations of the Hamiltonian by potentials correspond, up to parametrisation, to conformal perturbations of the metric.

One widely studied question is to understand to what extent the return map associated to a periodic orbit can be modified by a small perturbation. Such questions depend strongly on the context in which they are posed. Some of the most studied contexts are, in increasing order of difficulty: perturbations of general vector fields, perturbations of Hamiltonian systems inside the class of Hamiltonian systems, perturbations of Riemannian metrics inside the class of Riemannian metrics, and Mañé perturbations of convex Hamiltonians. It is, for example, well known that each vector field can be perturbed to a vector field with only hyperbolic periodic orbits; this is part of the Kupka–Smale Theorem, see [5, 13] (the other part of the Kupka–Smale Theorem states that the stable and unstable manifolds intersect transversally; it has also been studied in the various settings mentioned above but will not be discussed here). In the context of Hamiltonian vector fields, the statement has to be weakened, but it remains true that each Hamiltonian can be perturbed to a Hamiltonian with only non-degenerate periodic orbits (including the iterated ones), see [11, 12]. The same result holds in the context of Riemannian metrics: every Riemannian metric can be perturbed to a Riemannian metric with only non-degenerate closed geodesics; this is the bumpy metric theorem, see [1, 2, 4]. The question was investigated only much more recently in the context of Mañé perturbations of convex Hamiltonians, see [9, 10]. It is proved in [10] that the same result holds: if $H$ is a convex Hamiltonian and $a$ is a regular value of $H$, then there exist arbitrarily small potentials $g$ such that all periodic orbits (including iterated ones) of $H+g$ at energy $a$ are non-degenerate. The proof given in [10] is actually rather similar to the ones given in papers on perturbations of Riemannian metrics. In all these proofs, it is very useful to work in appropriate coordinates around an orbit segment. In the Riemannian case, one can use the so-called Fermi coordinates. In the Hamiltonian case, appropriate coordinates are considered in [10, Lemma 3.1], itself taken from [3, Lemma C.1].
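The Maupertuis remark invoked above reduces to a one-line computation; spelled out (our addition, assuming $g < 1$):

```latex
% The perturbations H+g and H/(1-g) share the level-one energy surface:
\[
  H + g = 1 \iff H = 1 - g \iff \frac{H}{1-g} = 1 \qquad (g < 1).
\]
% On this common level set, d(H/(1-g)) = d(H+g)/(1-g), so the two
% Hamiltonian flows there are positive time-reparametrisations of each other.
```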
However, as we shall detail below, the proof of this lemma in [3, Appendix C] is incomplete, and the statement itself is actually wrong. Our goal in the present paper is to state and prove a corrected version of this normal form lemma. Our proof differs from the one outlined in [3, Appendix C]; in particular, it is purely Hamiltonian and does not rest on the results of [7] on Finsler metrics, as [3] did. Although our normal form is weaker than the one claimed in [10], it is sufficient to prove the main results of [6, 10], as we shall explain after the statement of Theorem 1, and probably also those of the other works using [3, Lemma C.1].


2020 ◽  
Vol 37 (06) ◽  
pp. 2050034
Author(s):  
Ali Reza Sepasian ◽  
Javad Tayyebi

This paper studies two types of reverse 1-center problems under a uniform linear cost function, where edge lengths are allowed to be reduced. In the first type, the aim is to guarantee, at minimum cost, that the objective value is bounded by a prescribed fixed value [Formula: see text]. In the second, the aim is to improve the objective value as much as possible within a given budget. An algorithm based on dynamic programming is proposed to solve the first problem in linear time. This algorithm is then applied as a subroutine to solve the second type of problem in [Formula: see text] time, where [Formula: see text] is a fixed number depending on the problem parameters. Under the similarity assumption, this algorithm has better complexity than the quadratic-time algorithm of Nguyen (2013). Some numerical experiments are conducted to validate this fact in practice.
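As a point of reference for the quantity being optimized (and not the authors' linear-time dynamic program, which we do not reproduce here), the following naive sketch evaluates the vertex 1-center objective of a tree after a candidate vector of edge-length reductions; all names and the example are ours.

```python
# Naive reference computation, for illustration only: given a tree whose
# edge lengths have been reduced, evaluate the (vertex) 1-center objective,
# i.e. the minimum over vertices v of the maximum distance from v. The
# paper's algorithms solve the *reverse* problem efficiently; this brute
# force merely defines the quantity they bound.
from collections import defaultdict
import heapq

def one_center_value(n, edges, reductions):
    """edges: list of (u, v, length); reductions: dict (u, v) -> amount cut."""
    graph = defaultdict(list)
    for u, v, w in edges:
        w = max(0.0, w - reductions.get((u, v), 0.0))
        graph[u].append((v, w))
        graph[v].append((u, w))

    def eccentricity(src):
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, x = heapq.heappop(heap)
            if d > dist.get(x, float("inf")):
                continue
            for y, w in graph[x]:
                if d + w < dist.get(y, float("inf")):
                    dist[y] = d + w
                    heapq.heappush(heap, (d + w, y))
        return max(dist.values())

    return min(eccentricity(v) for v in range(n))

# Example: a path 0-1-2; cutting edge (1, 2) lowers the 1-center value.
edges = [(0, 1, 4.0), (1, 2, 6.0)]
print(one_center_value(3, edges, {}))             # 6.0 (center at node 1)
print(one_center_value(3, edges, {(1, 2): 2.0}))  # 4.0 after the reduction
```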


2014 ◽  
Vol 6 (2) ◽  
pp. 190-205
Author(s):  
Tibor Gregorics

Abstract The A** algorithm is a well-known heuristic path-finding algorithm. In this paper, its different definitions are first analyzed, and then its memory complexity is investigated. On the one hand, the well-known concept of better information is extended to compare the different heuristics used in the A** algorithm. On the other hand, a new proof is given showing that no deterministic graph-search algorithm has better memory complexity than A**. Finally, the time complexity of A** is discussed.


Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 211 ◽  
Author(s):  
Pierluigi Crescenzi ◽  
Clémence Magnien ◽  
Andrea Marino

The harmonic closeness centrality measure associates, with each node of a graph, the average of the inverses of its distances to all other nodes (assuming that unreachable nodes are at infinite distance). This notion has been adapted to temporal graphs (that is, graphs in which edges can appear and disappear over time), and in this paper we address the question of finding the top-k nodes for this metric. Computing the temporal closeness of one node can be done in O(m) time, where m is the number of temporal edges, so computing the closeness of all nodes exactly, in order to find those with top closeness, would require O(nm) time, where n is the number of nodes; this is intractable for large temporal graphs. Instead, we show how this measure can be efficiently approximated by using a "backward" temporal breadth-first search algorithm and a classical sampling technique. Our experimental results show that the approximation is excellent for nodes with high closeness, allowing us to detect them in practice in a fraction of the time needed to compute the exact closeness of all nodes. We validate our approach with an extensive set of experiments.
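A minimal sketch of the two ingredients, under simplifying assumptions of ours (every walk starts at time 0, the distance d(u, w) is the earliest arrival time at w, and timestamps along a walk are non-decreasing); it illustrates the spirit of the approach rather than the paper's exact algorithm.

```python
# Illustration only (not the paper's exact algorithm). We estimate harmonic
# temporal closeness by sampling target nodes; for each sampled target one
# "backward" scan over the temporal edges yields, for every node u, the
# earliest arrival time at the target. Assumptions (ours): directed temporal
# edges (u, v, t) are crossed instantaneously, timestamps along a walk are
# non-decreasing, and d(u, w) is the earliest arrival time at w from u.
import random

def backward_earliest_arrival(edges, target):
    """Scan edges by decreasing timestamp. When (u, v, t) is processed,
    every already-processed edge has timestamp >= t, so arr[v] reflects a
    continuation from v departing no earlier than t. (Equal timestamps are
    resolved by sort order; a full implementation handles ties explicitly.)"""
    INF = float("inf")
    arr = {}  # node -> earliest arrival time at target
    for u, v, t in sorted(edges, key=lambda e: -e[2]):
        if v == target:
            cand = t       # direct edge into the target
        elif v in arr:
            cand = arr[v]  # best continuation from v
        else:
            continue
        if cand < arr.get(u, INF):
            arr[u] = cand
    return arr

def estimate_closeness(nodes, edges, k, seed=0):
    """Estimate c(u) = sum over w != u of 1/d(u, w) from k sampled targets."""
    rng = random.Random(seed)
    targets = rng.sample(sorted(nodes), k)
    est = {u: 0.0 for u in nodes}
    for w in targets:
        arr = backward_earliest_arrival(edges, w)
        for u, d in arr.items():
            if u != w and d > 0:
                est[u] += 1.0 / d
    scale = (len(nodes) - 1) / k  # rescale the sample to the population
    return {u: scale * c for u, c in est.items()}

# Toy temporal graph: edges are (u, v, t).
edges = [(0, 1, 1), (1, 2, 2), (0, 2, 3), (2, 3, 4)]
print(estimate_closeness({0, 1, 2, 3}, edges, k=2))
```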


2019 ◽  
Vol 67 (3-4) ◽  
pp. 185-195
Author(s):  
Kazuhiro Ohnishi

Which choice will a player make when he can choose between two options that yield him equal payoffs but give his rival unequal payoffs, one large and one small? By examining this question, this paper introduces non-altruistic equilibria for normal-form games and extensive-form non-altruistic equilibria for extensive-form games as equilibrium concepts for non-cooperative games, and examines the connections between these concepts and Nash and subgame perfect equilibria, two important and frequently encountered equilibrium concepts.
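To make the motivating question concrete, here is a small sketch (ours; it does not implement the paper's non-altruistic equilibrium concepts): it enumerates the pure-strategy Nash equilibria of a bimatrix game and flags own-payoff ties that give the rival different payoffs, which is exactly where a tie-breaking attitude toward the rival matters.

```python
# Sketch (ours): enumerate pure-strategy Nash equilibria of a bimatrix game
# and report own-payoff ties that differ in the rival's payoff -- the exact
# situation a non-altruistic tie-breaking rule addresses. This does NOT
# implement the paper's non-altruistic equilibrium concepts.
A = [[2, 2],  # row player's payoffs
     [2, 3]]
B = [[5, 1],  # column player's payoffs
     [4, 4]]

rows, cols = len(A), len(A[0])

def pure_nash():
    """All (row, column) profiles where both actions are best responses."""
    return [(i, j)
            for i in range(rows) for j in range(cols)
            if all(A[i][j] >= A[k][j] for k in range(rows))
            and all(B[i][j] >= B[i][l] for l in range(cols))]

def row_ties_against(j):
    """Best-response rows vs. column j that tie for the row player's payoff
    but give the column player different payoffs."""
    best = max(A[i][j] for i in range(rows))
    tied = [i for i in range(rows) if A[i][j] == best]
    return tied if len({B[i][j] for i in tied}) > 1 else []

print("pure Nash equilibria:", pure_nash())
for j in range(cols):
    tied = row_ties_against(j)
    if tied:
        print(f"vs column {j}: rows {tied} are equally good for the row "
              f"player, but give the column player {[B[i][j] for i in tied]}")
```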


2010 ◽  
Vol 23 (2) ◽  
pp. 33-52 ◽  
Author(s):  
Sanjay Goel ◽  
Eitel J.M. Lauría

In this paper, the authors present a quantitative model for estimating a firm's security risk exposure. The model includes a formulation for the optimization of controls, as well as for determining the sensitivity of asset exposure to different threats. The model uses a series of matrices to organize the data into groups of assets, vulnerabilities, threats, and controls. The matrices are then linked such that data is aggregated within each matrix and cascaded across the other matrices. The computations are reversible and transparent, allowing analysts to answer what-if questions about the data. The exposure formulation is based on the Annualized Loss Expectancy (ALE) model, and uncertainties in the data are captured via Monte Carlo simulation. A mock case study based on a government agency is used to illustrate the methodology.
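The classical ALE formulation combines a single loss expectancy with an annualized rate of occurrence (ALE = SLE × ARO); a minimal Monte Carlo sketch of that idea follows (our illustration, with invented distributions and parameters, not the authors' matrix model).

```python
# Minimal Monte Carlo sketch of an ALE-style exposure estimate (ours; the
# paper's matrix-based model is more elaborate). Classic ALE combines a
# single loss expectancy (SLE) with an annualized rate of occurrence (ARO):
# ALE = SLE * ARO. Uncertainty is captured by sampling both factors.
import random

def simulate_ale(n_trials=100_000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(n_trials):
        # Invented parameters for illustration: loss per incident is
        # lognormal, incident frequency is triangular.
        sle = rng.lognormvariate(mu=11.0, sigma=0.8)       # dollars/incident
        aro = rng.triangular(low=0.1, high=2.0, mode=0.5)  # incidents/year
        samples.append(sle * aro)
    samples.sort()
    mean = sum(samples) / n_trials
    p95 = samples[int(0.95 * n_trials)]
    return mean, p95

mean, p95 = simulate_ale()
print(f"expected annual loss ~ ${mean:,.0f}; 95th percentile ~ ${p95:,.0f}")
```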


Algorithms ◽  
2019 ◽  
Vol 12 (6) ◽  
pp. 124
Author(s):  
Sukhpal Ghuman ◽  
Emanuele Giaquinta ◽  
Jorma Tarhio

We present two modifications of Duval's algorithm for computing the Lyndon factorization of a string. The first algorithm is designed for strings containing runs of the smallest character. It works best for small alphabets, and it is able to skip a significant number of characters of the string; moreover, it can be engineered to have linear time complexity in the worst case. Given a run-length encoded string R of length ρ, the second algorithm computes the Lyndon factorization of R in O(ρ) time and constant space. Experimental results show that the new variations are faster than Duval's original algorithm in many scenarios.
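For reference, here is a compact version of Duval's original algorithm, the O(n)-time, constant-space baseline that the two variants modify:

```python
# Duval's original algorithm: computes the Lyndon factorization of s in
# O(n) time and constant extra space. The paper's two variants modify this
# baseline (run-skipping, and a run-length encoded input, respectively).
def duval(s):
    n, i = len(s), 0
    factors = []
    while i < n:
        j, k = i + 1, i
        # Extend while s[i..j) remains a prefix of repetitions of a Lyndon
        # word: s[k] is compared against s[j], one period ahead.
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        # Emit the repetitions of the Lyndon word of length j - k.
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

print(duval("banana"))  # ['b', 'an', 'an', 'a']
```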


Author(s):  
Xueyong Yu ◽  
Weiran Lin ◽  
Jinling Wei ◽  
Shuoping Wang ◽  
...  

We developed two models in this study: one to show the distribution of heat for pans of different shapes, and the other to select the type of pan that maximizes both the number of pans that fit in the oven and the evenness of heat distribution within them. We first constructed a model of heat distribution; the uneven distribution of heat is mainly caused by heat conduction. We established a differential equation for heat conduction according to Fourier's law; the finite-difference method and Gauss-Seidel iteration were used to solve the equation, and MATLAB was used to draw the corresponding heat-distribution chart. We then built a quantitative model of shape optimization with an evaluation equation, and used the permutation-and-combination method to calculate the maximum number of pans and the utilization rate of the oven area. Finally, we determined that the optimal pan shape is a rounded square, which performed best under our evaluation equation.
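A minimal sketch of the numerical core (ours; the grid size, boundary data, and tolerance are invented for illustration): steady-state heat conduction on a square pan cross-section, discretized with finite differences and solved by Gauss-Seidel iteration.

```python
# Illustrative sketch (ours): steady-state heat conduction (Laplace's
# equation) on a square pan cross-section. Finite differences give
# T[i][j] = average of the four neighbours; Gauss-Seidel updates the grid
# in place until the largest change falls below a tolerance. Boundary data
# are invented: the edge nearest the heating element is hotter.
N = 40                             # grid resolution (invented)
T_BOTTOM, T_OTHER = 220.0, 180.0   # boundary temperatures, deg C (invented)
TOL = 1e-3

# Boundary rows/columns hold fixed temperatures; the interior starts cold.
T = [[T_BOTTOM if i == N - 1
      else T_OTHER if i == 0 or j in (0, N - 1)
      else 25.0
      for j in range(N)] for i in range(N)]

while True:
    max_change = 0.0
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new = 0.25 * (T[i - 1][j] + T[i + 1][j] +
                          T[i][j - 1] + T[i][j + 1])
            max_change = max(max_change, abs(new - T[i][j]))
            T[i][j] = new          # in-place update = Gauss-Seidel
    if max_change < TOL:
        break

# The centre sits between the cooler sides and the hotter bottom edge.
print(f"centre temperature: {T[N // 2][N // 2]:.1f} deg C")
```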


2010 ◽  
Vol 44-47 ◽  
pp. 3365-3369
Author(s):  
Heng Wu Li

Pseudoknots have generally been excluded from the prediction of RNA secondary structures due to the difficulty of modeling them. Here we present an algorithm, dynamic iterated matching, to predict RNA secondary structures including pseudoknots in O(n⁴) time. The method can use thermodynamic information, comparative information, or both, and is thus able to predict pseudoknots for both aligned and individual sequences. We have tested the algorithm on a number of RNA families. Comparisons show that our algorithm and the loop-matching method have similar accuracy and time complexity, and both are more sensitive than the maximum weighted matching method and the Rivas algorithm. Among the four methods, our algorithm has the best prediction specificity. The results show that our algorithm is more reliable and efficient than the other methods.
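For contrast, the classical pseudoknot-free baseline is the O(n³) Nussinov base-pair maximization; the compact sketch below (ours, not the authors' dynamic iterated matching) shows the nested recursion whose limitation motivates pseudoknot-capable methods.

```python
# Sketch (ours) of the classical Nussinov dynamic program: maximizes the
# number of nested (pseudoknot-free) base pairs in O(n^3) time. Pseudoknots
# are exactly what this recursion cannot form, which is why methods such as
# the paper's dynamic iterated matching are needed to recover them.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
         ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]  # dp[i][j]: best pairing in seq[i..j]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]       # position j left unpaired
            for k in range(i, j - min_loop):
                if (seq[k], seq[j]) in PAIRS:  # pair position k with j
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(nussinov("GGGAAAUCC"))  # a small hairpin: 3 nested pairs
```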

