Probabilities over rich languages, testing and randomness

1982 ◽  
Vol 47 (3) ◽  
pp. 495-548 ◽  
Author(s):  
Haim Gaifman ◽  
Marc Snir

The basic concept underlying probability theory and statistics is a function assigning numerical values (probabilities) to events. An “event” in this context is any conceivable state of affairs, including the so-called “empty event”—an a priori impossible state. Informally, events are described in everyday language (e.g. “by playing this strategy I shall win $1000 before going broke”). But in the current mathematical framework (first proposed by Kolmogoroff [Ko 1]) they are identified with subsets of some all-inclusive set Q. The family of all events constitutes a field, or σ-field, and the logical connectives ‘and’, ‘or’ and ‘not’ are translated into the set-theoretical operations of intersection, union and complementation. The points of Q can be regarded as possible worlds and an event as the set of all worlds in which it takes place. The concept of a field of sets is wide enough to accommodate all cases and to allow for a general abstract foundation of the theory. On the other hand, it does not reflect distinctions that arise out of the linguistic structure which goes into the description of our events. Since events are always described in some language, they can be identified with the sentences that describe them, and the probability function can be regarded as an assignment of values to sentences. The extensive accumulated knowledge concerning formal languages makes such a project feasible. The study of probability functions defined over the sentences of a rich enough formal language yields interesting insights in more than one direction.

Our present approach is not an alternative to the accepted Kolmogoroff axiomatics. In fact, given some formal language L, we can consider a rich enough set, say Q, of models for L (also called “worlds” in this work) and associate with every sentence the set of all worlds in Q in which the sentence is true. Thus our probabilities can also be considered as measures over some field of sets.
But the introduction of the language adds mathematical structure and makes for distinctions expressing basic intuitions that cannot be otherwise expressed. As an example we mention here the concept of a random sequence or, more generally, a random world, or a world which is typical to a certain probability distribution.
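The identification of sentences with sets of worlds can be illustrated on a toy finite model (a sketch of the standard picture, not anything from the paper itself): each sentence is represented by its truth set, and the connectives become set operations under a uniform measure.

```python
from fractions import Fraction

# Toy illustration: a finite set Q of "possible worlds", each a
# (coin, die) outcome, with a uniform probability measure.
Q = {(c, d) for c in ("H", "T") for d in range(1, 7)}

def P(event):
    """Probability of an event, i.e. a subset of Q."""
    return Fraction(len(event), len(Q))

# Sentences are identified with the set of worlds where they hold.
heads = {w for w in Q if w[0] == "H"}      # "the coin shows heads"
even = {w for w in Q if w[1] % 2 == 0}     # "the die shows an even number"

# Logical connectives translate into set-theoretical operations:
assert P(heads & even) == Fraction(1, 4)   # 'and' -> intersection
assert P(heads | even) == Fraction(3, 4)   # 'or'  -> union
assert P(Q - heads) == Fraction(1, 2)      # 'not' -> complementation
assert P(set()) == 0                       # the "empty event"
```

The linguistic structure the authors add is invisible at this level: two distinct sentences with the same truth set collapse to one event here, which is precisely the distinction a language-based treatment can preserve.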

Author(s):  
John L. Pollock

Much of the usefulness of probability derives from its rich logical and mathematical structure. That structure comprises the probability calculus. The classical probability calculus is familiar and well understood, but it will turn out that the calculus of nomic probabilities differs from the classical probability calculus in some interesting and important respects. The purpose of this chapter is to develop the calculus of nomic probabilities and, at the same time, to investigate the logical and mathematical structure of nomic generalizations. The mathematical theory of nomic probability is formulated in terms of possible worlds. Possible worlds can be regarded as maximally specific possible ways things could have been. This notion can be filled out in various ways, but the details are not important for present purposes. I assume that a proposition is necessarily true iff it is true at all possible worlds, and I assume that the modal logic of necessary truth and necessary exemplification is a quantified version of S5. States of affairs are things like Mary’s baking pies, 2 being the square root of 4, Martha’s being smarter than John, and the like. For present purposes, a state of affairs can be identified with the set of all possible worlds at which it obtains. Thus if P is a state of affairs and w is a possible world, P obtains at w iff w ∊ P. Similarly, we can regard monadic properties as sets of ordered pairs ⟨w, x⟩ of possible worlds and possible objects. For example, the property of being red is the set of all pairs ⟨w, x⟩ such that w is a possible world and x is red at w. More generally, an n-place property will be taken to be a set of (n+1)-tuples ⟨w, x1, ..., xn⟩. Given any n-place concept α, the corresponding property of exemplifying α is the set of (n+1)-tuples ⟨w, x1, ..., xn⟩ such that x1, ..., xn exemplify α at the possible world w.
States of affairs and properties can be constructed out of one another using logical operators like conjunction, negation, quantification, and so on.
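This set-theoretic construal can be mimicked in a small finite model; the worlds, objects, and the “red” property below are invented for illustration, not taken from the text.

```python
# Toy model: a state of affairs is a set of worlds; a monadic
# property is a set of (world, object) pairs.
worlds = {"w1", "w2", "w3"}
objects = {"a", "b"}

# The property "red": all pairs (w, x) such that x is red at w.
red = {("w1", "a"), ("w2", "a"), ("w2", "b")}

def obtains(state, w):
    return w in state                 # P obtains at w iff w is in P

def exemplifies(prop, w, x):
    return (w, x) in prop             # x has the property at world w

# Conjunction of states of affairs is intersection; negation is
# complementation relative to the set of all worlds.
rainy = {"w1", "w2"}
windy = {"w2", "w3"}
assert obtains(rainy & windy, "w2")
assert not obtains(worlds - rainy, "w1")

# Universal quantification over a property at a world:
def all_objects_have(prop, w):
    return all(exemplifies(prop, w, x) for x in objects)

assert all_objects_have(red, "w2")
assert not all_objects_have(red, "w1")
```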


Author(s):  
Paolo Dulio ◽  
Andrea Frosini ◽  
Simone Rinaldi ◽  
Lama Tarsissi ◽  
Laurent Vuillon

A remarkable family of discrete sets which has recently attracted the attention of the discrete geometry community is the family of convex polyominoes, the discrete counterpart of Euclidean convex sets, which combine the constraints of convexity and connectedness. In this paper we study the problem of their reconstruction from orthogonal projections, relying on the approach defined by Barcucci et al. (Theor Comput Sci 155(2):321–347, 1996). In particular, during the reconstruction process it may be necessary to expand a convex subset of the interior part of the polyomino, called the polyomino kernel, by adding points at specific positions of its contour without losing its convexity. To reach this goal we consider convexity in terms of certain combinatorial properties of the boundary word encoding the polyomino. We first show some conditions that allow us to extend the kernel while maintaining convexity. Then, we provide examples where the addition of one or two points causes a loss of convexity, which can be restored by adding other points, whose number and positions cannot be determined a priori.
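One half of the convexity constraint (every row and every column of the polyomino is a single contiguous run of cells, i.e. HV-convexity) is easy to check directly on a cell set. The sketch below is only an illustrative test of that property, not the boundary-word machinery or the reconstruction algorithm of the paper; full convexity additionally requires connectedness.

```python
# Illustrative check: a cell set is HV-convex iff every row and
# every column forms a single contiguous interval of cells.
def is_hv_convex(cells):
    """cells: set of (row, col) pairs occupied by the polyomino."""
    for axis in (0, 1):
        lines = {}
        for c in cells:
            lines.setdefault(c[axis], []).append(c[1 - axis])
        for coords in lines.values():
            # a line is convex iff its cells form an interval
            if max(coords) - min(coords) + 1 != len(coords):
                return False
    return True

# An L-shaped (HV-convex) polyomino...
assert is_hv_convex({(0, 0), (1, 0), (1, 1)})
# ...and a cell set with a gap in a row, hence not convex.
assert not is_hv_convex({(0, 0), (0, 2)})
```

The paper's expansion step must preserve exactly this kind of invariant each time a point is added to the kernel's contour.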


Author(s):  
Colin McGinn

This chapter explores philosophical issues in metaphysics. It begins by distinguishing between de re and de dicto necessity. All necessity is uniformly de re; there is simply no such thing as de dicto necessity. Indeed, in the glory days of positivism, all necessity was understood as uniformly the same: a necessary truth was always an a priori truth, while contingent truths were always a posteriori. The chapter then assesses the concept of antirealism. Antirealism is always an error theory: there is some sort of mistake or distortion or sloppiness embedded in the usual discourse. The chapter also considers paradoxes, causation, conceptual analysis, scientific mysteries, the possible worlds theory of modality, the concept of a person, the nature of existence, and logic and propositions.


Author(s):  
Paul J. du Plessis

This chapter is devoted to the Roman law of persons and family. As in modern legal studies, so in Roman law, it is the first branch of private law that students are taught, primarily in order to understand the concept of ‘legal personhood’. This chapter covers the paterfamilias (head of the household); marriage and divorce; adoption; and guardianship. The head of the household was the eldest living male ancestor of a specific family. He had in his power (potestas) all descendants traced through the male line (and also exercised forms of control over other members of the household). Roman law accorded the head of the household extensive legal entitlements, not only vis-à-vis the members of the household, but also its property. The motivation for this state of affairs lies in Roman law’s recognition of the family unit as a legally significant entity.


Mathematics ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. 799 ◽  
Author(s):  
Won-Kwang Park

It is well known that subspace migration is a stable and effective non-iterative imaging technique for inverse scattering problems. For a proper application, however, a priori information about the shape of the target must be estimated; without it, subspace migration cannot retrieve good results. In this paper, we identify the mathematical structure of single- and multi-frequency subspace migration without any a priori information about the unknown targets, and we explore certain of its properties. This is based on the fact that the elements of the so-called multi-static response (MSR) matrix can be represented by an asymptotic expansion formula. Furthermore, based on the examined structure, we improve subspace migration and consider multi-frequency subspace migration. Various numerical simulation results with noisy data support our investigation.
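In the spirit of this setting (though with an invented geometry, a single point scatterer, and a Born-type approximation, none of which are taken from the paper), a subspace-migration-style imaging functional can be sketched by projecting steering vectors onto the signal subspace obtained from the SVD of the MSR matrix:

```python
import numpy as np

# Hedged sketch: sensors on a line, one point scatterer, scalar
# phase-only Green's function exp(ik|x - z|). All values illustrative.
k = 2.0                                          # wavenumber
sensors = np.array([[j, 0.0] for j in range(8)])
scatterer = np.array([3.5, 2.0])                 # true (unknown) target

def steering(z):
    r = np.linalg.norm(sensors - z, axis=1)
    d = np.exp(1j * k * r)
    return d / np.linalg.norm(d)

# MSR matrix for one point scatterer: K[j, l] ~ g(x_j, z0) g(x_l, z0).
a = np.exp(1j * k * np.linalg.norm(sensors - scatterer, axis=1))
K = np.outer(a, a)

# Signal subspace from the SVD; rank 1 here (one scatterer).
U, s, Vh = np.linalg.svd(K)

def image(z):
    d = steering(z)
    # project the test vector onto the left/right signal subspaces
    return abs((U[:, 0].conj() @ d) * (Vh[0] @ d.conj()))

# The functional peaks at the true scatterer location.
assert image(scatterer) > image(np.array([0.0, 5.0]))
assert abs(image(scatterer) - 1.0) < 1e-9
```

With noisy data, the practical difficulty the paper addresses appears here as the choice of how many singular vectors span the signal subspace, which normally requires a priori knowledge of the target.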


2019 ◽  
Vol 20 (01) ◽  
pp. 1950014
Author(s):  
Noam Greenberg ◽  
Joseph S. Miller ◽  
André Nies

We study the sets that are computable from both halves of some (Martin–Löf) random sequence, which we call 1/2-bases. We show that the collection of such sets forms an ideal in the Turing degrees that is generated by its c.e. elements. It is a proper subideal of the K-trivial sets. We characterize the 1/2-bases as the sets computable from both halves of Chaitin’s Ω, and as the sets that obey the cost function c(x, s) = √(Ω_s − Ω_x). Generalizing these results yields a dense hierarchy of subideals in the K-trivial degrees: for k < n, let B_{k/n} be the collection of sets that are below any k out of n columns of some random sequence. As before, this is an ideal generated by its c.e. elements, and the random sequence in the definition can always be taken to be Ω. Furthermore, the corresponding cost function characterization reveals that B_{k/n} is independent of the particular representation of the rational k/n, and that B_p is properly contained in B_q for rational numbers p < q. These results are proved using a generalization of the Loomis–Whitney inequality, which bounds the measure of an open set in terms of the measures of its projections. The generality allows us to analyze arbitrary families of orthogonal projections. As it turns out, these do not give us new subideals of the K-trivial sets; we can calculate from the family which B_p it characterizes. We finish by studying the union of the B_p for p < 1; we prove that this ideal consists of the sets that are robustly computable from some random sequence. This class was previously studied by Hirschfeldt et al. [D. R. Hirschfeldt, C. G. Jockusch, R. Kuyper and P. E. Schupp, Coarse reducibility and algorithmic randomness, J. Symbolic Logic 81(3) (2016) 1028–1046], who showed that it is a proper subclass of the K-trivial sets. We prove that all such sets are robustly computable from Ω, and that they form a proper subideal of the sets computable from every (weakly) LR-hard random sequence. We also show that the ideal cannot be characterized by a cost function, giving the first such example of a Σ⁰₃ subideal of the K-trivial sets.


Author(s):  
Alvaro J. Rojas Arciniegas ◽  
Harrison M. Kim

Multiple factors affect the decision of which components to share in product family design. Among the challenges designers face are maintaining uniqueness and the desired performance in each variant while taking advantage of a common structure. In this paper, the sharing decision-making process is analyzed for the case when a firm knows a priori that some components contain sensitive information that could be exposed to the user, third-party manufacturers, or undesired agents; hence, it is important to enclose and protect that information. Defining the architecture of the product and protecting the sensitive information are two aspects that must be considered together. This paper proposes tools to help designers identify components that are candidates for sharing across the family, and finds the most desirable component arrangement that facilitates sharing while protecting the sensitive information that has been previously identified. The proposed framework is applied to three printers whose architectures for the ink cartridges and printheads are significantly different. Third-party manufacturers and remanufacturers offer their own alternatives for these subsystems, since customers for printer supplies are always looking for a cheaper alternative; meanwhile, the OEMs attempt to secure their products and retain their customers with original supplies. Having identified the sensitive components for each printer, the optimal clustering strategy is found, as well as the set of components that are candidates for sharing, according to their connectivity and the security considerations.


1991 ◽  
Vol 56 (1) ◽  
pp. 276-294 ◽  
Author(s):  
Arnon Avron

Many-valued logics in general, and 3-valued logics in particular, are an old subject which had its beginning in the work of Łukasiewicz [Łuk]. Recently there has been a revival of interest in this topic, both for its own sake (see, for example, [Ho]) and because of its potential applications in several areas of computer science, such as proving correctness of programs [Jo], knowledge bases [CP] and artificial intelligence [Tu]. There is, however, a huge number of 3-valued systems which logicians have studied throughout the years. The motivation behind them and their properties are not always clear, and their proof theory is frequently not well developed. This state of affairs makes both the use of 3-valued logics and fruitful research on them rather difficult.

Our first goal in this work is, accordingly, to identify and characterize a class of 3-valued logics which might be called natural. For this we use the general framework for characterizing and investigating logics which we have developed in [Av1]. Not many 3-valued logics appear as natural within this framework, but it turns out that those that do include some of the best known ones: the 3-valued logics of Łukasiewicz, Kleene and Sobociński, the logic LPF used in the VDM project, the logic RM3 from the relevance family, and the paraconsistent 3-valued logic of [dCA]. Our presentation provides justifications for the introduction of certain connectives in these logics which are often regarded as ad hoc. It also shows that they are all closely related to each other. It is shown, for example, that Łukasiewicz 3-valued logic and RM3 (the strongest logic in the family of relevance logics) are in a strong sense dual to each other, and that both are derivable by the same general construction from, respectively, Kleene's 3-valued logic and the 3-valued paraconsistent logic.
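The truth tables that generate these systems are easy to state concretely. The sketch below uses the standard numeric encoding of the three values and shows the one input pair on which the strong Kleene and Łukasiewicz conditionals disagree; the negation, conjunction, and disjunction shown are common to both logics. (This is a textbook illustration, not the framework of [Av1].)

```python
# Three truth values in the standard numeric encoding:
# true = 1, undefined = 1/2, false = 0.
t, u, f = 1.0, 0.5, 0.0

def neg(a):     return 1 - a          # shared by Kleene and Łukasiewicz
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

def kleene_imp(a, b):                 # strong Kleene: a -> b as not-a or b
    return max(1 - a, b)

def lukasiewicz_imp(a, b):            # Łukasiewicz: min(1, 1 - a + b)
    return min(1.0, 1 - a + b)

# The two conditionals agree everywhere except when both antecedent
# and consequent are undefined: Kleene leaves u -> u undefined,
# Łukasiewicz makes it true.
assert kleene_imp(u, u) == u
assert lukasiewicz_imp(u, u) == t
assert conj(t, u) == u and disj(f, u) == u and neg(u) == u
```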


2016 ◽  
Vol 10 (01) ◽  
pp. 1750006
Author(s):  
Shaurya Jauhari ◽  
S. A. M. Rizvi

Various algorithms have been devised to mathematically model the dynamics of gene expression data. Gillespie's stochastic simulation algorithm (GSSA) has been a seminal method for simulating chemical reaction kinetics, with many subsequent refinements. Several other mathematical techniques, such as differential equations, thermodynamic models and Boolean models, have been used to represent gene function effectively. We present a novel mathematical framework for gene expression that models the transcription and translation phases directly, a departure from conventional modeling approaches. These subprocesses are inherent to every instance of gene expression, which is implicitly an experimental outcome. We foresee that basal transcription or translation values corresponding to a particular assay can be modeled in a general way.
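A minimal version of Gillespie's SSA for a two-stage transcription–translation model can be sketched as follows; the reaction network and rate constants are illustrative assumptions, not values from the paper.

```python
import random

# Minimal Gillespie SSA for a two-stage gene expression model:
#   gene    -> gene + mRNA      (transcription, rate k_tx)
#   mRNA    -> mRNA + protein   (translation,   rate k_tl * mRNA)
#   mRNA    -> 0                (decay,         rate d_m * mRNA)
#   protein -> 0                (decay,         rate d_p * protein)
def gillespie(k_tx=2.0, k_tl=5.0, d_m=1.0, d_p=0.5, t_end=50.0, seed=1):
    random.seed(seed)
    t, mRNA, protein = 0.0, 0, 0
    while t < t_end:
        rates = [k_tx, k_tl * mRNA, d_m * mRNA, d_p * protein]
        total = sum(rates)
        t += random.expovariate(total)     # exponential waiting time
        r, pick = random.uniform(0, total), 0
        while r > rates[pick]:             # choose a reaction with
            r -= rates[pick]               # probability rate/total
            pick += 1
        mRNA += (pick == 0) - (pick == 2)
        protein += (pick == 1) - (pick == 3)
    return mRNA, protein

m, p = gillespie()
assert m >= 0 and p >= 0                   # copy numbers stay non-negative
```

Deterministic rate-equation or thermodynamic models replace this event-by-event sampling with averages, which is exactly the distinction the stochastic approach is meant to capture at low copy numbers.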

