Towards a mathematical definition of Coulomb branches of $3$-dimensional $\mathcal{N}=4$ gauge theories, I

2016, Vol. 20(3), pp. 595–669
Author(s): Hiraku Nakajima
2011, Vol. 133(1)
Author(s): Steven Turek, Sam Anand

Digital measurement devices, such as coordinate measuring machines, laser scanning devices, and digital imaging systems, can provide highly accurate and precise coordinate data representing the sampled surface. However, this discrete measurement process can only account for measured data points, not the entire continuous form, and is heavily influenced by the algorithm that interprets the measured data. The definition of cylindrical size for an external feature as specified by ASME Y14.5.1M-1994 [The American Society of Mechanical Engineers, 1995, Dimensioning and Tolerancing, ASME Standard Y14.5M-1994, ASME, New York, NY; The American Society of Mechanical Engineers, 1995, Mathematical Definition of Dimensioning and Tolerancing Principles, ASME Standard Y14.5.1M-1994, ASME, New York, NY] matches the analytical definition of a minimum circumscribing cylinder (MCC) when rule no. 1 [The American Society of Mechanical Engineers, 1995, Dimensioning and Tolerancing, ASME Standard Y14.5M-1994, ASME, New York, NY; The American Society of Mechanical Engineers, 1995, Mathematical Definition of Dimensioning and Tolerancing Principles, ASME Standard Y14.5.1M-1994, ASME, New York, NY] is applied to ensure a linear axis. Even though the MCC is a logical choice for size determination, it is highly sensitive to the sampling method and any uncertainties encountered in that process. Determining the least-sum-of-squares solution is an alternative method commonly used for size determination. However, the least-squares formulation seeks an optimal solution not based on the cylindrical size definition [The American Society of Mechanical Engineers, 1995, Dimensioning and Tolerancing, ASME Standard Y14.5M-1994, ASME, New York, NY; The American Society of Mechanical Engineers, 1995, Mathematical Definition of Dimensioning and Tolerancing Principles, ASME Standard Y14.5.1M-1994, ASME, New York, NY] and has thus been shown to be biased [Hopp, 1993, "Computational Metrology," Manuf. Rev., 6(4), pp. 295–304; Nassef and ElMaraghy, 1999, "Determination of Best Objective Function for Evaluating Geometric Deviations," Int. J. Adv. Manuf. Technol., 15, pp. 90–95]. This work builds upon previous research in which the hull normal method was presented to determine the size of cylindrical bosses when rule no. 1 is applied [Turek and Anand, 2007, "A Hull Normal Approach for Determining the Size of Cylindrical Features," ASME, Atlanta, GA]. A thorough analysis of the hull normal method's performance in various circumstances is presented here to validate it as a superior alternative to the least-squares and MCC solutions for size evaluation. The goal of the hull normal method is to recreate the sampled surface using computational geometry methods and to determine the cylinder's axis and radius from it. Based on repeated analyses of random samples of data from several measured parts and generated forms, it was concluded that the hull normal method outperformed all traditional solution methods. The hull normal method proved robust, exhibiting lower bias and radius distributions skewed toward the true value, regardless of the amount of form error.
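As a point of comparison, the following is a minimal Python sketch of the least-squares baseline discussed in the abstract (not the hull normal method itself): the axis is estimated by PCA of the sampled points and the radius by an algebraic (Kasa) circle fit in the plane normal to that axis. All function and variable names are illustrative, and the sketch assumes the sampled cylinder is longer than its diameter so that the principal component tracks the axis.

```python
import numpy as np

def least_squares_cylinder(points):
    """Rough least-squares cylinder fit: axis via PCA, radius via a Kasa circle fit.

    Assumes the cylinder is longer than its diameter, so the direction of
    largest variance approximates the axis. Illustrative sketch only.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    centered = points - centroid

    # Axis direction: principal component with the largest variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]

    # Coordinates of the points in the plane orthogonal to the axis.
    u, v = vt[1], vt[2]
    xy = np.column_stack((centered @ u, centered @ v))

    # Algebraic (Kasa) circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c,
    # with c = r^2 - a^2 - b^2.
    A = np.column_stack((2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))))
    rhs = (xy ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    center = centroid + a * u + b * v  # a point on the estimated axis
    return center, axis, radius

# Example: noisy samples from a cylinder of radius 5 about the z-axis.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
z = rng.uniform(0.0, 20.0, 200)
pts = np.column_stack((5.0 * np.cos(theta), 5.0 * np.sin(theta), z))
pts += rng.normal(scale=0.01, size=pts.shape)

_, axis_est, r_est = least_squares_cylinder(pts)
print("estimated axis:", axis_est, "radius:", round(float(r_est), 3))
```

Because this fit minimizes squared radial residuals rather than implementing the Y14.5.1M size definition, it illustrates why a least-squares radius can be biased relative to the minimum circumscribing cylinder, which is the abstract's motivation for the hull normal alternative.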


1992, Vol. 37(9), pp. 691–694
Author(s): V.F. Ferrario, C. Sforza, A. Miani, A. Colombo, G. Tartaglia

Author(s): Mark Colyvan, Kenny Easwaran

There is general agreement in mathematics about what continuity is. In this paper we examine how well the mathematical definition lines up with common sense notions. We use a recent paper by Hud Hudson as a point of departure. Hudson argues that two objects moving continuously can coincide for all but the last moment of their histories and yet be separated in space at the end of this last moment. It turns out that Hudson’s construction does not deliver mathematically continuous motion, but the natural question then is whether there is any merit in the alternative definition of continuity that he implicitly invokes.
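For orientation, the mathematical definition at issue is ordinary epsilon-delta continuity of a trajectory; the formulation below is the textbook one and is supplied here for reference, not taken from Hudson's paper or from Colyvan and Easwaran.

```latex
% Textbook epsilon-delta continuity for a trajectory f : \mathbb{R} \to \mathbb{R}^3,
% i.e. the usual sense of "mathematically continuous motion".
f \text{ is continuous at } t_0 \iff
  \forall \varepsilon > 0 \;\exists \delta > 0 \;\forall t :
  |t - t_0| < \delta \implies \lVert f(t) - f(t_0) \rVert < \varepsilon .
```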


1976, Vol. 8(4), pp. 375–384
Author(s): F. Harary, J. Rockey

In 1965 Christopher Alexander took the original step of analysing the city in graph theoretical terms and concluded that its historical or natural form is a semilattice and that urban planners of the future should adhere to this model. The idea was well received in architectural circles and has passed without serious challenge. In this paper, the value of such analysis is once again emphasized, although some of Alexander's arguments and his conclusions are refuted. Beginning with an exposition of the relationship between the graph theoretical concept of a tree, and the representation of a tree by a family of sets, we present a mathematical definition of a semilattice and discuss the ‘points’ and ‘lines’ of a graph in terms of a city, concluding that it is neither a tree nor a semilattice. This clears the ground for future graphical analysis. It seems that even general structural configurations, such as graphs or digraphs with certain specified properties, will fail to characterize a city, whose complexity, at this stage, may well continue to be understood more readily through negative rather than positive descriptions.
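For reference, the standard definitions behind the terms used above are sketched below; the paper presents its own precise formulation in terms of families of sets, which may differ in detail.

```latex
% Definitions stated for orientation only.
% Order-theoretic form: a meet-semilattice is a poset with all binary meets.
(S, \le) \text{ is a meet-semilattice} \iff
  \forall x, y \in S,\ \inf\{x, y\} \text{ exists in } S.
% Set-family form (in the spirit of Alexander's essay): a family of sets is a
% semilattice if the intersection of any two overlapping members also belongs
% to the family; it is a tree if any two overlapping members are nested.
```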


Author(s): Vincent G. Potter

This chapter deals with “continuity” as a key mathematical notion and a central philosophical category to which Peirce gives much attention. From 1880 to 1911, his attempts to give continuity a precise mathematical expression show a clear development marked by several significant changes. Four main periods may be identified: pre-Cantorian: until 1884; Cantorian: 1884–1894; Kantistic: 1895–1908; and post-Cantorian: 1908–1911. According to Peirce, the accepted mathematical definition of continuity describes an “imperfect continuum,” but the “true continuum” is something other than any metrical or even ordinal relation of elements. The true continuum has no actual element.


Author(s): Rohit Parikh

Church’s theorem, published in 1936, states that the set of valid formulas of first-order logic is not effectively decidable: there is no method or algorithm for deciding which formulas of first-order logic are valid. Church’s paper exhibited an undecidable combinatorial problem P and showed that P was representable in first-order logic. If first-order logic were decidable, P would also be decidable. Since P is undecidable, first-order logic must also be undecidable. Church’s theorem is a negative solution to the decision problem (Entscheidungsproblem), the problem of finding a method for deciding whether a given formula of first-order logic is valid, or satisfiable, or neither. The great contribution of Church (and, independently, Turing) was not merely to prove that there is no method but also to propose a mathematical definition of the notion of ‘effectively solvable problem’, that is, a problem solvable by means of a method or algorithm.
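The contrapositive at the heart of this argument can be made concrete with a short sketch. The function names below (`encode_instance`, `decides_validity`, `decides_P`) are hypothetical placeholders introduced for illustration; they are not part of Church's paper.

```python
# Sketch of the reduction pattern described above, using hypothetical
# placeholders: if a total decision procedure for first-order validity
# existed, composing it with an encoding of the undecidable problem P
# would yield a decision procedure for P, which is impossible.

def encode_instance(p_instance):
    """Hypothetical: map an instance of P to a first-order formula that is
    valid exactly when the instance has a positive answer (Church's paper
    supplies such an encoding)."""
    raise NotImplementedError

def decides_validity(formula) -> bool:
    """Hypothetical: a total algorithm deciding first-order validity.
    Church's theorem says no such algorithm can exist."""
    raise NotImplementedError

def decides_P(p_instance) -> bool:
    # If decides_validity existed, this composition would decide P,
    # contradicting the undecidability of P.
    return decides_validity(encode_instance(p_instance))
```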

