Teorias da Aleatoriedade (Theories of Randomness)

2004 ◽  
Vol 11 (2) ◽  
pp. 75-98
Author(s):  
Carlos A. P. Campani ◽  
Paulo Blauth Menezes

This work is a survey of definitions of “random sequence”. We emphasize the definition of Martin-Löf and the definition based on incompressibility (Kolmogorov complexity). Kolmogorov complexity is a profound and sophisticated theory of information and randomness based on Turing machines. These two definitions overcome the problems of earlier approaches, satisfy our intuitive concept of randomness, and are both mathematically sound. Furthermore, we present Schnorr’s approach, which adds a requirement of effectiveness (computability) to the definition. We critically examine the relations between all of these definitions.

Keywords: randomness, Kolmogorov complexity, Turing machine, computability, probability.
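For reference, the two central notions this survey compares can be stated compactly. The notation below is the standard one from the algorithmic-randomness literature, not necessarily the paper's own:

```latex
% Prefix Kolmogorov complexity of a finite string x,
% relative to a universal prefix machine U:
K(x) = \min \{\, |p| : U(p) = x \,\}

% Levin–Schnorr theorem: an infinite binary sequence \omega is
% Martin-Löf random iff its prefixes are incompressible up to a constant:
\exists c \;\forall n \quad K(\omega_1 \dots \omega_n) \ge n - c
```

The second line is exactly the bridge between the two definitions the abstract emphasizes: Martin-Löf's test-based notion coincides with incompressibility of prefixes.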

1987 ◽  
Vol 52 (3) ◽  
pp. 725-755 ◽  
Author(s):  
Michiel van Lambalgen

Abstract. We briefly review the attempts to define random sequences (§0). These attempts suggest two theorems: one concerning the number of subsequence selection procedures that transform a random sequence into a random sequence (§§1–3 and 5); the other concerning the relationship between definitions of randomness based on subsequence selection and those based on statistical tests (§4).
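For readers meeting the terminology for the first time, a subsequence selection procedure in the von Mises tradition can be written as follows (our notation, not the paper's):

```latex
% A selection rule s decides, from the prefix read so far,
% whether to select the next bit of \omega:
s : \{0,1\}^{*} \to \{\text{select}, \text{skip}\}

% It selects the positions n_1 < n_2 < \dots at which
% s(\omega_1 \dots \omega_{n_k - 1}) = \text{select}. The sequence \omega
% is stochastic for a class of admissible rules if every infinite
% selected subsequence has limiting relative frequency
\lim_{m \to \infty} \frac{1}{m} \sum_{k=1}^{m} \omega_{n_k} = \tfrac{1}{2}
```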


Author(s):  
Antony Eagle

Early work on the frequency theory of probability made extensive use of the notion of randomness, conceived of as a property possessed by disorderly collections of outcomes. Growing out of this work, a rich mathematical literature on algorithmic randomness and Kolmogorov complexity developed through the twentieth century, but largely lost contact with the philosophical literature on physical probability. The present chapter begins with a clarification of the notions of randomness and probability, conceiving of the former as a property of a sequence of outcomes, and the latter as a property of the process generating those outcomes. A discussion follows of the nature and limits of the relationship between the two notions, with largely negative verdicts on the prospects for any reduction of one to the other, although the existence of an apparently random sequence of outcomes is good evidence for the involvement of a genuinely chancy process.


2020 ◽  
Vol 63 (1) ◽  
pp. 53-67
Author(s):  
Paula Quinon

Abstract. The core of the problem discussed in this paper is the following: the Church-Turing Thesis states that Turing Machines formally explicate the intuitive concept of computability. The description of Turing Machines requires a description of the notation used for the input and for the output. Providing a general definition of the notations acceptable in computations is problematic, because a notation, or an encoding suitable for a computation, has to itself be computable. Yet using the concept of computation in a definition of notation, which is then used in a definition of the concept of computation, yields an obvious vicious circle. This circularity makes it difficult to distinguish, on the theoretical level, acceptable notations from unacceptable ones, usually referred to in the literature as “deviant encodings”.

Deviant encodings appear explicitly in discussions about what constitutes an adequate or correct conceptual analysis of the concept of computation. In this paper, I focus on philosophical examples where the phenomenon appears implicitly, in a “disguised” version. In particular, I present its use in the analysis of the concept of natural number. I also point to additional phenomena related to deviant encodings: conceptual fixed points and the apparent “computability” of uncomputable functions. In parallel, I develop the idea that Carnapian explications provide a much more adequate framework for understanding the concept of computation than classical philosophical analysis.
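To make the "apparent computability" phenomenon concrete, here is a toy sketch. It is not from the paper; the encoding and all names are our own invention for exposition. If numerals are allowed to smuggle in the answer, an uncomputable function looks like a trivial lookup:

```python
# Toy illustration of a "deviant encoding" (hypothetical, for exposition).
# Standard encoding: the number n is the string "1" * n (unary).
# Deviant encoding: the numeral for n also carries a bit that is *defined*
# to equal True iff Turing machine number n halts on empty input.
# Relative to such numerals, the halting function becomes a field lookup --
# the uncomputable work has been hidden inside the notation itself.

from dataclasses import dataclass

@dataclass(frozen=True)
class DeviantNumeral:
    n: int          # the number being denoted
    halts: bool     # smuggled-in, uncomputable information about machine n

def apparently_computable_halting(numeral: DeviantNumeral) -> bool:
    """Looks computable: a constant-time lookup. The circularity is that
    no computable process can *produce* DeviantNumeral values in the
    first place -- which is exactly the deviant-encoding problem."""
    return numeral.halts

# With honest (computable) numerals such as "1" * n, no such shortcut
# exists: deciding halting from n alone is impossible by Turing's theorem.
```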


Author(s):  
Roger Penrose ◽  
Martin Gardner

What precisely is an algorithm, or a Turing machine, or a universal Turing machine? Why should these concepts be so central to the modern view of what could constitute a ‘thinking device’? Are there any absolute limitations to what an algorithm could in principle achieve? In order to address these questions adequately, we shall need to examine the idea of an algorithm and of Turing machines in some detail. In the various discussions which follow, I shall sometimes need to refer to mathematical expressions. I appreciate that some readers may be put off by such things, or perhaps find them intimidating. If you are such a reader, I ask your indulgence, and recommend that you follow the advice I have given in my ‘Note to the reader’ on p. viii! The arguments given here do not require mathematical knowledge beyond that of elementary school, but to follow them in detail, some serious thought would be required. In fact, most of the descriptions are quite explicit, and a good understanding can be obtained by following the details. But much can also be gained even if one simply skims over the arguments in order to obtain merely their flavour. If, on the other hand, you are an expert, I again ask your indulgence. I suspect that it may still be worth your while to look through what I have to say, and there may indeed be a thing or two to catch your interest.

The word ‘algorithm’ comes from the name of the ninth-century Persian mathematician Abu Ja’far Mohammed ibn Mûsâ al-Khowârizm, who wrote an influential mathematical textbook, in about 825 AD, entitled ‘Kitab al-jabr wa’l-muqabala’. The way that the name ‘algorithm’ has now come to be spelt, rather than the earlier and more accurate ‘algorism’, seems to have been due to an association with the word ‘arithmetic’. (It is noteworthy, also, that the word ‘algebra’ comes from the Arabic ‘al-jabr’ appearing in the title of his book.) Instances of algorithms were, however, known very much earlier than al-Khowârizm’s book.


Author(s):  
Songsong Dai

In this paper, we give a definition of quantum information distance. In the classical setting, the information distance between two strings is defined in terms of classical Kolmogorov complexity: it is the length of a shortest transition program between the two strings on a universal Turing machine. We define quantum information distance based on Berthiaume et al.’s quantum Kolmogorov complexity: the quantum information distance between two qubit strings is the length of the shortest quantum transition program between them on a universal quantum Turing machine. We show that our definition of quantum information distance is invariant under the choice of the underlying universal quantum Turing machine.
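The classical information distance max{K(x|y), K(y|x)} is uncomputable, but it is commonly approximated in practice by real compressors. Below is a minimal sketch of the normalized compression distance, our choice of illustration rather than a construction from this paper, with zlib standing in for the ideal compressor:

```python
# Normalized compression distance (NCD), a practical proxy for the
# (uncomputable) information distance max{K(x|y), K(y|x)}.
# C(s) approximates Kolmogorov complexity by compressed length.
import zlib

def C(s: bytes) -> int:
    """Compressed length of s -- a computable upper-bound proxy for K(s)."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    Close to 0 for very similar strings, near 1 for unrelated ones."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

if __name__ == "__main__":
    a = b"the quick brown fox jumps over the lazy dog" * 10
    b = b"the quick brown fox jumps over the lazy cat" * 10
    c = bytes(range(256)) * 2
    print(ncd(a, b))   # small: a and b share almost all structure
    print(ncd(a, c))   # larger: little shared structure
```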


2002 ◽  
Vol 13 (04) ◽  
pp. 587-612 ◽  
Author(s):  
JÜRGEN SCHMIDHUBER

The traditional theory of Kolmogorov complexity and algorithmic probability focuses on monotone Turing machines with one-way write-only output tape. This naturally leads to the universal enumerable Solomonoff-Levin measure. Here we introduce more general, nonenumerable but cumulatively enumerable measures (CEMs) derived from Turing machines with lexicographically nondecreasing output and random input, and even more general approximable measures and distributions computable in the limit. We obtain a natural hierarchy of generalizations of algorithmic probability and Kolmogorov complexity, suggesting that the "true" information content of some (possibly infinite) bitstring x is the size of the shortest nonhalting program that converges to x and nothing but x on a Turing machine that can edit its previous outputs. Among other things we show that there are objects computable in the limit yet more random than Chaitin's "number of wisdom" Omega, that any approximable measure of x is small for any x lacking a short description, that there is no universal approximable distribution, that there is a universal CEM, and that any nonenumerable CEM of x is small for any x lacking a short enumerating program. We briefly mention consequences for universes sampled from such priors.
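For orientation, here is the enumerable Solomonoff-Levin measure that the monotone-machine setting yields, together with the shape of the generalized description size the abstract describes. The notation is standard for the first formula; the symbol K^G in the second is our own shorthand, not the paper's:

```latex
% Solomonoff-Levin measure induced by a universal monotone machine U
% fed with fair coin flips: the probability that U's output begins with x
% (the sum ranges over minimal programs p whose output starts with x).
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}

% Generalized, limit-computable description size suggested by the
% abstract: the shortest nonhalting program whose editable output
% converges to x on a machine G that may revise its previous outputs.
K^{G}(x) = \min \{\, |p| : G(p) \to x \text{ in the limit} \,\}
```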


Axioms ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 304
Author(s):  
Florin Manea

In this paper we propose and analyse, from the computational complexity point of view, several new variants of nondeterministic Turing machines. In the first such variant, a machine accepts a given input word if and only if one of its shortest possible computations on that word is accepting; on the other hand, the machine rejects the input word when all the shortest computations performed by the machine on that word are rejecting. We are able to show that the class of languages decided in polynomial time by such machines is P^NP[log]. When we consider machines that decide a word according to the decision taken by the lexicographically first shortest computation, we obtain a new characterization of P^NP. A series of other ways of deciding a language with respect to the shortest computations of a Turing machine are also discussed.
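One way to see the first acceptance rule concretely: breadth-first search over the machine's configuration graph stops at the first level containing halting configurations, and that level is exactly the set of shortest computations. A minimal sketch over an abstract step function follows; all names here are ours, not the paper's:

```python
# Decide by shortest computations: BFS the configuration graph of a
# nondeterministic machine. The first BFS level containing any halting
# configuration holds exactly the endpoints of the shortest computations;
# accept iff one of those configurations accepts (the first variant above).
from typing import Callable, Hashable, Iterable, Optional

Config = Hashable

def decide_by_shortest(
    start: Config,
    successors: Callable[[Config], Iterable[Config]],  # nondeterministic step
    verdict: Callable[[Config], Optional[bool]],       # True/False if halted, else None
) -> bool:
    frontier = [start]
    seen = {start}
    while frontier:
        # Verdicts of all configurations that halt at this depth.
        halted = [v for c in frontier if (v := verdict(c)) is not None]
        if halted:
            # All shortest computations end here; accept iff any accepts.
            return any(halted)
        nxt = []
        for c in frontier:
            for s in successors(c):
                if s not in seen:
                    seen.add(s)
                    nxt.append(s)
        frontier = nxt
    return False  # no halting computation exists at all
```

Note that this brute-force search is only an illustration of the acceptance rule; the paper's complexity results concern what such machines decide in polynomial time, not how to simulate them efficiently.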


Author(s):  
Marco Giunti

The definition of a computational system that I proposed in chapter 1 (definition 3) employs the concept of Turing computability. In this chapter, however, I will show that this concept is not absolute, but instead depends on the relational structure of the support on which Turing machines operate. Ordinary Turing machines operate on a linear tape divided into a countably infinite number of adjacent squares. But one can also think of Turing machines that operate on different supports. For example, we can let a Turing machine work on an infinite checkerboard or, more generally, on some n-dimensional infinite array. I call an arbitrary support on which a Turing machine can operate a pattern field. Depending on the pattern field F we choose, we in fact obtain different concepts of computability. At the end of this chapter (section 6), I will thus propose a new definition of a computational system (a computational system on pattern field F) that takes into account the relativity of the concept of Turing computability. If F is a doubly infinite tape, however, computational systems on F reduce to computational systems.

Turing (1965) presented his machines as an idealization of a human being that transforms symbols by means of a specified set of rules. Turing based his analysis on four hypotheses:

1. The capacity to recognize, transform, and memorize symbols and rules is finite. It thus follows that any transformation of a complex symbol must always be reduced to a series of simpler transformations. These operations on elementary symbols are of three types: recognizing a symbol, replacing a symbol, and shifting the attention to a symbol that is contiguous to the symbol which has been considered earlier.

2. The series of elementary operations that are in fact executed is determined by three factors: first, the subject's mental state at a given time; second, the symbol which the subject considers at that time; third, a rule chosen from a finite number of alternatives.
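To make the pattern-field idea concrete, here is a small sketch of our own construction, not Giunti's formalism: the same finite control table can be run on different supports by swapping out the move function, e.g. a doubly infinite tape versus an infinite checkerboard.

```python
# A Turing-machine step parametrized by its "pattern field": the support
# is abstracted into a cell type plus a move function, so the same finite
# control runs unchanged on a 1-D tape or a 2-D checkerboard.
from typing import Callable, Dict, Tuple

Cell = Tuple[int, ...]                 # a position in the pattern field
Move = Callable[[Cell, str], Cell]     # how a direction name moves the head

def tape_move(cell: Cell, d: str) -> Cell:
    """Doubly infinite tape: cells are (i,), moves are L/R."""
    (i,) = cell
    return (i + 1,) if d == "R" else (i - 1,)

def checkerboard_move(cell: Cell, d: str) -> Cell:
    """Infinite checkerboard: cells are (i, j), moves are L/R/U/D."""
    i, j = cell
    return {"R": (i + 1, j), "L": (i - 1, j),
            "U": (i, j + 1), "D": (i, j - 1)}[d]

def step(state: str, head: Cell, field: Dict[Cell, str], move: Move,
         table: Dict[Tuple[str, str], Tuple[str, str, str]]):
    """One step of the control table: (state, symbol) -> (state', symbol', direction).
    The field is a sparse dict; absent cells read as the blank symbol "0"."""
    sym = field.get(head, "0")
    new_state, new_sym, d = table[(state, sym)]
    field[head] = new_sym
    return new_state, move(head, d), field

# The same `table` (restricted to the moves its field supports) runs with
# move=tape_move on (i,) cells or move=checkerboard_move on (i, j) cells.
```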


1991 ◽  
Vol 02 (05) ◽  
pp. 551-562 ◽  
Author(s):  
RAPHAEL M. ROBINSON

Marvin L. Minsky constructed a 4-symbol 7-state universal Turing machine in 1962. It was first announced in a postscript to [2] and is also described in [3, Sec. 14.8]. This paper contains everything that is needed for an understanding of his machine, including a complete description of its operation. Minsky's machine remains one of the minimal known universal Turing machines; that is, there is no known such machine which decreases one parameter without increasing the other. However, Rogozhin [6], [7] has constructed seven universal machines with the following parameters: [Formula: see text]. His 4-symbol 7-state machine is somewhat different from Minsky's, but all of his machines use a construction similar to that used by Minsky. The following corrections should be noted: first machine, for q6 0 0 L q1 read q6 0 0 L q7; second machine, for q4 1 1 R q4 read q4 1 1 R q10; last machine, for q2 b2 b L q2 read [Formula: see text]. A generalized Turing machine with 4 symbols and 7 states, closely related to Minsky's, was constructed and used in [5].

