The model of set theory generated by countably many generic reals

1981 ◽  
Vol 46 (4) ◽  
pp. 732-752 ◽  
Author(s):  
Andreas Blass

Abstract: Adjoin, to a countable standard model M of Zermelo-Fraenkel set theory (ZF), a countable set A of independent Cohen generic reals. If one attempts to construct the model generated over M by these reals (not necessarily containing A as an element) as the intersection of all standard models that include M ∪ A, the resulting model fails to satisfy the power set axiom, although it does satisfy all the other ZF axioms. Thus, there is no smallest ZF model including M ∪ A, but there are minimal such models. These are classified by their sets of reals, and there is one minimal model whose set of reals is the smallest possible. We give several characterizations of this model, we determine which weak axioms of choice it satisfies, and we show that some better known models are forcing extensions of it.

1975 ◽  
Vol 40 (2) ◽  
pp. 167-170
Author(s):  
George Metakides ◽  
J. M. Plotkin

The following is a classical result: Theorem 1.1. A complete atomic Boolean algebra is isomorphic to a power set algebra [2, p. 70]. One of the consequences of [3] is: If M is a countable standard model of ZF and 𝔄 is a countable (in M) model of a complete ℵ0-categorical theory T, then there is a countable standard model N of ZF and a Λ ∈ N such that the Boolean algebra of definable (in T with parameters from 𝔄) subsets of 𝔄 is isomorphic to the power set algebra of Λ in N. In particular, if T is the theory of equality with additional axioms asserting the existence of at least n distinct elements for each n < ω and 𝔄 is a countable model of T, then there is an N and Λ ∈ N with 〈PN(Λ), ⊆〉 isomorphic to the countable, atomic, incomplete Boolean algebra of the finite and cofinite subsets of ω. From the above we see that some incomplete Boolean algebras can be realized as power sets in standard models of ZF. Definition 1.1. A countable Boolean algebra 〈B, ≤〉 is a pseudo-power set if there is a countable standard model of ZF, N, and a set Λ ∈ N such that 〈B, ≤〉 is isomorphic to 〈PN(Λ), ⊆〉. It is clear that a pseudo-power set is atomic.


2016 ◽  
Vol 81 (2) ◽  
pp. 605-628 ◽  
Author(s):  
SEAN WALSH

Abstract: Frege's Grundgesetze was one of the 19th-century forerunners to contemporary set theory which was plagued by the Russell paradox. In recent years, it has been shown that subsystems of the Grundgesetze formed by restricting the comprehension schema are consistent. One aim of this paper is to ascertain how much set theory can be developed within these consistent fragments of the Grundgesetze, and our main theorem (Theorem 2.9) shows that there is a model of a fragment of the Grundgesetze which defines a model of all the axioms of Zermelo–Fraenkel set theory with the exception of the power set axiom. The proof of this result appeals to Gödel's constructible universe of sets and to Kripke and Platek's idea of the projectum, as well as to a weak version of uniformization (which does not involve knowledge of Jensen's fine structure theory). The axioms of the Grundgesetze are examples of abstraction principles, and the other primary aim of this paper is to articulate a sufficient condition for the consistency of abstraction principles with limited amounts of comprehension (Theorem 3.5). As an application, we resolve an analogue of the joint consistency problem in the predicative setting.


1995 ◽  
Vol 60 (2) ◽  
pp. 512-516 ◽  
Author(s):  
James H. Schmerl

The κ-isomorphism property (IPκ) for nonstandard universes was introduced by Henson in [4]. There has been some recent effort aimed at more fully understanding this property. Jin and Shelah in [7] have shown that for κ < ℶω, IPκ is equivalent to what we will refer to as the κ-resplendence property. Earlier, in [6], Jin asked if IPκ is equivalent to IPℵ0 plus κ-saturation. He answered this question positively for κ = ℵ1. In this note we extend this answer to all κ. We also extend the result of Jin and Shelah to all κ. (Jin also observed this could be done.) In order to strike a balance between the generalities of model theory and the specifics of nonstandard analysis, we will consider models of Zermelo set theory with the Axiom of Choice; we denote this theory by ZC. The axioms of ZC are just those of ZFC but without the replacement scheme. Thus, among the axioms of ZC are the power set axiom, the infinity axiom, the separation axioms and the axiom of choice. Let (V, E) ⊨ ZC. If a ∈ V, we let *a = {x ∈ V: (V, E) ⊨ x ∈ a}. In particular, i ∈ *ω iff i ∈ V and (V, E) ⊨ (i is a natural number). A subset A ⊆ V is internal if A = *a for some a ∈ V. The standard model of ZC consists of those sets of rank at most ω + ω. In other words, if we let V0 be the set of hereditarily finite sets and Vn+1 = 𝒫(Vn) for n < ω, then (Vω, ∈) is the standard model of ZC, where Vω = ⋃n<ω Vn.


Author(s):  
Michael Potter

The various attitudes that have been taken to mathematics can be split into two camps according to whether they take mathematical theorems to be true or not. Mathematicians themselves often label the former camp realist and the latter formalist. (Philosophers, on the other hand, use both these labels for more specific positions within the two camps.) Formalists have no special difficulty with set theory as opposed to any other branch of mathematics; for that reason we shall not consider their view further here. For realists, on the other hand, set theory is peculiarly intractable: it is very difficult to give an unproblematic explanation of its subject matter. The reason this difficulty is not of purely local interest is an aftereffect of logicism. Logicism, in the form in which Frege and Russell tried to implement it, was a two-stage project. The first stage was to embed arithmetic (Frege) or, more ambitiously, the whole of mathematics (Russell) in the theory of sets; the second was to embed this in turn in logic. The hope was that this would palm off all the philosophical problems of mathematics onto logic. The second stage is generally agreed to have failed: set theory is not part of logic. But the first stage succeeded: almost all of mathematics can be embedded in set theory. So the logicist aim of explaining mathematics in terms of logic metamorphoses into one of explaining it in terms of set theory. Various systems of set theory are available, and for most of mathematics the method of embedding is fairly insensitive to the exact system that we choose. The main exceptions to this are category theory, whose embedding is awkward if the theory chosen does not distinguish between sets and proper classes; and the theory of sets of real numbers, where there are a few arguments that depend on very strong axioms of infinity (also known as large cardinal axioms) not present in some of the standard axiomatizations of set theory.
All the systems agree that sets are extensional entities, so that they satisfy the axiom of extensionality: ∀x(x ∈ a ↔ x ∈ b) → a = b. What differs between the systems is which sets they take to exist. A property F is said to be set-forming if {x : Fx} exists: the issue to be settled is which properties are set-forming and which are not. What the philosophy of set theory has to do is to provide an illuminating explanation for the various cases of existence. The most popular explanation nowadays is the so-called iterative conception of set. This conceives of sets as arranged in a hierarchy of stages (sometimes known as levels). The bottom level is a set whose members are the non-set-theoretic entities (sometimes known as Urelemente) to which the theory is intended to be applicable. (This set is often taken by mathematicians to be empty, thus restricting attention to what are known as pure sets, although this runs the danger of cutting set theory off from its intended application.) Each succeeding level is then obtained by forming the power set of the preceding one. For this conception three questions are salient: Why should there not be any sets other than these? How rich is the power-set operation? How many levels are there? An alternative explanation which was for a time popular among mathematicians is limitation of size. This is the idea that a property is set-forming provided that there are not too many objects satisfying it. How many is too many is open to debate. In order to prevent the system from being contradictory, we need only insist that the universe is too large to form a set, but this is not very informative in itself: we also need to be told how large the universe is.
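The two ideas just described, extensionality and the iterative hierarchy of stages, can be written out compactly. This is a standard textbook formulation (including the usual clause for limit stages, which the prose above leaves implicit), not a rendering of any one of the particular systems mentioned:

```latex
% Extensionality: sets with exactly the same members are identical
\forall x \, (x \in a \leftrightarrow x \in b) \;\rightarrow\; a = b

% The cumulative hierarchy over a set U of Urelemente
% (take U = \varnothing to restrict attention to pure sets)
V_0 = U, \qquad
V_{\alpha+1} = V_\alpha \cup \mathcal{P}(V_\alpha), \qquad
V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \quad (\lambda \text{ a limit})
```

The three salient questions then correspond to: whether every set appears in some \(V_\alpha\), how much of \(\mathcal{P}(V_\alpha)\) is really there, and how far the ordinal index \(\alpha\) runs.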


1988 ◽  
Vol 2 (3) ◽  
pp. 45-50 ◽  
Author(s):  
Hayne Leland ◽  
Mark Rubinstein

Six months after the market crash of October 1987, we are still sifting through the debris searching for its cause. Two theories of the crash sound plausible -- one based on a market panic and the other based on large trader transactions -- though each faces other evidence that is difficult to reconcile. If we are to believe the market panic theory or the Brady Commission's theory that the crash was primarily caused by a few large traders, we must strongly reject the standard model. We need to build models of financial equilibrium which are more sensitive to real-life trading mechanisms, which account more realistically for the formation of expectations, and which recognize that, at any one time, there is a limited pool of investors available with the ability to evaluate stocks and take appropriate action in the market.


1965 ◽  
Vol 30 (1) ◽  
pp. 1-7 ◽  
Author(s):  
Gaisi Takeuti

In this paper, by a function of ordinals we understand a function which is defined for all ordinals and each of whose values is an ordinal. In [7] (also cf. [8] or [9]) we defined recursive functions and predicates of ordinals, following Kleene's definition on natural numbers. A predicate will be called arithmetical if it is obtained from a recursive predicate by prefixing a sequence of alternating quantifiers. A function will be called arithmetical if its representing predicate is arithmetical. The cardinals are identified with those ordinals a which have larger power than all ordinals smaller than a. For any given ordinal a, we denote by ā the cardinal of a and by 2^a the cardinal which is of the same power as the power set of a. Let χ be the function such that χ(a) is the least cardinal which is greater than a. Now there are functions of ordinals which are easily defined in set theory, but which it seems impossible to define as arithmetical ones; χ is such a function. If we define χ making use of only the language on ordinals, it seems necessary to use the notion of all the functions from ordinals, e.g., as in [6].


1996 ◽  
Vol 118 (1) ◽  
pp. 121-124 ◽  
Author(s):  
S. Quin ◽  
G. E. O. Widera

Of the quantitative approaches applied to in-service inspection, the failure modes, effects, and criticality analysis (FMECA) methodology is recommended. FMECA can provide a straightforward illustration of how risk can be used to prioritize components for inspection (ASME, 1991). But, at present, it has two limitations. One is that it cannot be used in situations where components have multiple failure modes. The other is that it cannot be used in situations where the uncertainties in the component data have nonuniform distributions. In engineering practice, these two situations arise in many cases. In this paper, two methods based on fuzzy set theory are presented to treat these problems. The methods proposed here can be considered a supplement to FMECA, thus extending its range of applicability.
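To illustrate the kind of extension the abstract describes, here is a minimal sketch, not the paper's actual method, of ranking components for inspection when each component has several failure modes and the uncertain scores are represented as triangular fuzzy numbers. All component names and numbers are illustrative assumptions:

```python
# Illustrative sketch only: fuzzy prioritization of components with
# multiple failure modes. Scores are triangular fuzzy numbers (low, mode, high).

def tri_mul(a, b):
    """Product of two triangular fuzzy numbers, using the common
    (low*low, mode*mode, high*high) triangular approximation."""
    return tuple(x * y for x, y in zip(a, b))

def defuzzify(t):
    """Centroid (mean of the three vertices) of a triangular fuzzy number."""
    return sum(t) / 3.0

def component_risk(failure_modes):
    """Aggregate risk over a component's failure modes.

    Each mode is a pair (likelihood, consequence), both triangular
    fuzzy numbers; mode risks are defuzzified and summed."""
    return sum(defuzzify(tri_mul(lik, con)) for lik, con in failure_modes)

# Two hypothetical components; the pump has two failure modes.
pump = [((0.1, 0.2, 0.4), (3, 5, 7)), ((0.05, 0.1, 0.2), (6, 8, 9))]
valve = [((0.2, 0.3, 0.5), (1, 2, 3))]

scores = {"pump": component_risk(pump), "valve": component_risk(valve)}
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # components ordered for inspection, highest risk first
```

Note that the product of two triangular fuzzy numbers is not exactly triangular; the sketch uses the common vertex-wise approximation and centroid defuzzification, which is one of several reasonable design choices here.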


2020 ◽  
Vol 2020 (3) ◽  
Author(s):  
Junichi Haruna ◽  
Hikaru Kawai

Abstract: In the standard model, the weak scale is the only parameter with mass dimensions. This means that the standard model itself cannot explain the origin of the weak scale. On the other hand, from the results of recent accelerator experiments, except for some small corrections, the standard model has increased the possibility of being an effective theory up to the Planck scale. From these facts, it is naturally inferred that the weak scale is determined by some dynamics from the Planck scale. To explain how this could happen, we rely on the multiple point criticality principle as a clue and consider the classically conformal $\mathbb{Z}_2\times \mathbb{Z}_2$ invariant two-scalar model as a minimal model in which the weak scale is generated dynamically from the Planck scale. This model contains only two real scalar fields and does not contain any fermions or gauge fields. In this model, due to a Coleman–Weinberg-like mechanism, one of the scalar fields spontaneously breaks the $\mathbb{Z}_2$ symmetry with a vacuum expectation value connected with the cutoff momentum. We investigate this using the one-loop effective potential, the renormalization group and the large-$N$ limit. We also investigate whether it is possible to reproduce the mass term and vacuum expectation value of the Higgs field by coupling this model with the standard model in the Higgs portal framework. In this case, the scalar field that does not break $\mathbb{Z}_2$ can be a candidate for dark matter and has a mass of about several TeV for appropriate parameters. On the other hand, the other scalar field breaks $\mathbb{Z}_2$ and has a mass of several tens of GeV. These results will be verifiable in near-future experiments.


2018 ◽  
Vol 8 ◽  
pp. 258-262
Author(s):  
Kamil Zdanikowski ◽  
Beata Pańczyk

The article presents a comparison of hosting models for ASP.NET Core applications. The available hosting models are described and compared, and then a performance comparison is carried out. For each model the same test scenarios were executed, and performance was measured by the number of requests per second the host was able to process. The results obtained show that the standard model is the least efficient one, and that using one of the other configurations, for example IIS with Kestrel (in-process), Kestrel alone, or HTTP.sys, can provide several times better performance compared to the standard model.

