computability theory
Recently Published Documents


TOTAL DOCUMENTS: 145 (five years: 32)

H-INDEX: 12 (five years: 2)

2021 ◽  
Author(s):  
Michael McInerney

This thesis establishes results in several different areas of computability theory.

The first chapter is concerned with algorithmic randomness. A well-known approach to the definition of a random infinite binary sequence is via effective betting strategies. A betting strategy is called integer-valued if it can bet only in integer amounts. We consider integer-valued random sets, which are infinite binary sequences such that no effective integer-valued betting strategy wins arbitrarily much money betting on the bits of the sequence. This notion is much weaker than those normally considered in algorithmic randomness. It is sufficiently weak to allow interesting interactions with topics from classical computability theory, such as genericity and the computably enumerable degrees. We investigate the computational power of the integer-valued random sets in terms of standard notions from computability theory.

In the second chapter we extend the technique of forcing with bushy trees. We use this to construct an increasing ω-sequence ⟨a_n⟩ of Turing degrees which forms an initial segment of the Turing degrees, and such that each a_{n+1} is diagonally noncomputable relative to a_n. This shows that the DNR₀ principle of reverse mathematics does not imply the existence of Turing-incomparable degrees.

In the final chapter, we introduce a new notion of genericity which we call ω-change genericity. This lies between the well-studied notions of 1- and 2-genericity. We give several results about the computational power required to compute these generics, as well as other results which compare and contrast their behaviour with that of 1-generics.
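
To illustrate the notion being restricted here, the sketch below shows the capital dynamics of an integer-valued betting strategy; the function names and the trivial predictor are illustrative inventions, not the thesis's notation. A sequence is then integer-valued random if no effective strategy of this shape earns unboundedly much capital along its bits.

```python
# Illustrative sketch (not from the thesis): capital evolution of an
# integer-valued betting strategy playing against the bits of a sequence.
def run_strategy(bits, predict):
    """predict(prefix) -> (guessed_bit, stake); the stake must be a
    non-negative integer no larger than the current capital."""
    capital = 1
    prefix = []
    for b in bits:
        guess, stake = predict(prefix)
        assert isinstance(stake, int) and 0 <= stake <= capital
        # Fair even-odds payoff: win the stake on a correct guess, lose it otherwise.
        capital += stake if guess == b else -stake
        prefix.append(b)
    return capital

# A naive predictor (our invention) that always stakes 1 on the next bit being 0.
always_zero = lambda prefix: (0, 1)
print(run_strategy([0, 1, 0, 0], always_zero))  # -> 3
```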


2021 ◽  
Author(s):  
Adam Richard Day

This thesis establishes significant new results in the area of algorithmic randomness. These results elucidate the deep relationship between randomness and computability.

A number of results focus on randomness for finite strings. Levin introduced two functions which measure the randomness of finite strings: one derived from a universal monotone machine, the other from an optimal computably enumerable semimeasure. Gacs proved that, infinitely often, the gap between these two functions exceeds the inverse Ackermann function (applied to the string length). This thesis improves this result to show that infinitely often the difference between these two functions exceeds the double logarithm. Another separation result is proved for two different kinds of process machine. Information about the randomness of finite strings can be used as a computational resource; this information is contained in the overgraph. Muchnik and Positselsky asked whether there exists an optimal monotone machine whose overgraph is not truth-table complete. This question is answered in the negative, and related results are also established.

The thesis also makes advances in the theory of randomness for infinite binary sequences. A variant of process machines is used to characterise computable randomness, Schnorr randomness and weak randomness, and this result is extended to give characterisations of these notions via truth-table reducibility. The computable Lipschitz reducibility measures both the relative randomness and the relative computational power of real numbers. It is proved that the computable Lipschitz degrees of computably enumerable sets are not dense.

Infinite binary sequences can be regarded as elements of Cantor space. Most research in randomness for Cantor space has been conducted using the uniform measure, but the study of non-computable measures has led to interesting results. This thesis shows that two approaches used to define randomness on Cantor space for non-computable measures are equivalent: that of Reimann and Slaman, and the uniform test approach first introduced by Levin and also used by Gacs, Hoyrup and Rojas. Levin established the existence of probability measures for which all infinite sequences are random; these measures are termed neutral measures. It is shown that every PA degree computes a neutral measure. Work of Miller is used to show that the set of atoms of a neutral measure is a countable Scott set, and in fact any countable Scott set is the set of atoms of some neutral measure. Neutral measures are used to prove new results in computability theory; for example, it is shown that the low computably enumerable sets are precisely the computably enumerable sets bounded by PA degrees strictly below the halting problem.

Finally, the thesis applies ideas developed in the study of randomness to computability theory by examining indifferent sets for comeager classes in Cantor space. A number of results are proved; for example, it is shown that there exist 1-generic sets that can compute their own indifferent sets.
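
In notation standard in this area (our choice of symbols, not necessarily the thesis's), Levin's two functions and the improved gap stated in the abstract can be written as follows, where U is a universal monotone machine and M an optimal computably enumerable semimeasure:

```latex
% Symbols assumed for illustration: \lambda_U(x) is the probability that
% U produces an output extending x; |x| is the length of the string x.
\[
  \mathit{KM}(x) = -\log \lambda_U(x), \qquad \mathit{KA}(x) = -\log M(x).
\]
% Gacs: the gap exceeds an inverse-Ackermann bound infinitely often;
% the thesis raises this bound to a double logarithm:
\[
  \mathit{KM}(x) - \mathit{KA}(x) > \log \log |x| \quad \text{for infinitely many strings } x.
\]
```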


10.53733/133 ◽  
2021 ◽  
Vol 52 ◽  
pp. 175-231
Author(s):  
Rod Downey ◽  
Noam Greenberg ◽  
Ellen Hammatt

A transfinite hierarchy of Turing degrees of c.e. sets has been used to calibrate the dynamics of families of constructions in computability theory, and yields natural definability results. We review the main results of the area, and discuss splittings of c.e. degrees and the problem of finding maximal degrees in upper cones.


2021 ◽  
Vol. 17, Issue 3 ◽  
Author(s):  
Rod Downey ◽  
Alexander Melnikov ◽  
Keng Meng Ng

We introduce a framework for online structure theory. Our approach generalises notions arising independently in several areas of computability theory and complexity theory. We suggest a unifying approach using operators, where we allow the input to be a countable object of arbitrary complexity. We give a new framework which (i) ties online algorithms to computable analysis, (ii) shows how to use modifications of notions from computable analysis, such as Weihrauch reducibility, to analyse finite but uniform combinatorics, (iii) shows how to finitize reverse mathematics so as to suggest a fine structure of finite analogues of infinite combinatorial problems, and (iv) shows how similar ideas can be amalgamated from areas such as EX-learning, computable analysis, distributed computing and the like. One of the key ideas is that online algorithms can be viewed as a sub-area of computable analysis. Conversely, we also get an enrichment of computable analysis from classical online algorithms.
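
As a concrete instance of the online pattern such a framework abstracts (the example is ours, not the paper's), consider First-Fit graph colouring: the input graph is revealed one vertex at a time, and each colouring decision is irrevocable, mirroring an operator acting on a countable object presented in finite pieces.

```python
# Illustrative only: First-Fit online graph colouring. Each vertex arrives
# with the list of its neighbours among the vertices already seen, and is
# assigned the least colour not used by those neighbours; decisions are
# committed immediately and never revised.
def first_fit_colouring(stream):
    colours = []                      # colours[i] = colour of vertex i
    for neighbours in stream:         # neighbours: indices of earlier vertices
        used = {colours[j] for j in neighbours}
        c = 0
        while c in used:
            c += 1
        colours.append(c)             # irrevocable commitment
    return colours

# A path 0-1-2-3 presented online, each vertex listing its earlier neighbours.
print(first_fit_colouring([[], [0], [1], [2]]))  # -> [0, 1, 0, 1]
```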


2021 ◽  
Vol 52 (2) ◽  
pp. 7-9
Author(s):  
Erick Galinkin

Computability theory forms the foundation for much of theoretical computer science. Many of our great unsolved questions stem from the need to understand what problems can even be solved. The greatest question of computer science, P vs. NP, even sidesteps this entirely, asking instead how efficiently we can find solutions to the problems that we know are solvable. For many students, both at the undergraduate and the graduate level, a first exposure to computability theory follows a standard sequence on data structures and algorithms, and students often marvel at the first results they see on undecidability: how could we possibly prove that a problem can never be solved? This book, in contrast with other books that are often used as first exposures to computability, finite automata, Turing machines and the like, focuses very specifically on the notion of what is computable and on how computability theory, as a science unto itself, fits into the grander scheme. The book is appropriate for advanced undergraduates and beginning graduate students in computer science or mathematics who are interested in theoretical computer science. Robič sidesteps the standard theoretical computer science progression (understanding finite automata and pushdown automata before moving on to Turing machines) by setting the stage with Hilbert's program and the mathematical prerequisites, introducing the Turing machine without the usual preliminaries, and then presenting advanced topics often absent from introductory texts. Most chapters are relatively short and contain problem sets, making the book appropriate both as a classroom text and for self-study.


2021 ◽  
Vol 70 ◽  
pp. 65-76
Author(s):  
Manuel Alfonseca ◽  
Manuel Cebrian ◽  
Antonio Fernandez Anta ◽  
Lorenzo Coviello ◽  
Andrés Abeliuk ◽  
...  

Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment would require simulating such a program, which is theoretically (and practically) impossible. This article is part of the special track on AI and Society.
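
The impossibility argument has the familiar diagonal shape. The sketch below is our rendition with invented names (is_safe, adversary), not the paper's formal construction; it shows only why no total containment test can exist.

```python
# Our rendition of the diagonal argument behind the containment claim;
# is_safe and adversary are illustrative names, not the paper's formalism.
def is_safe(program_source: str, data: str) -> bool:
    """Hypothetical total containment test: True iff running the program
    on the data never causes harm."""
    raise NotImplementedError  # the argument below shows it cannot be total

def adversary(own_source: str) -> None:
    if is_safe(own_source, own_source):
        raise RuntimeError("harm")   # do precisely what was certified against
    while True:                      # otherwise remain harmlessly idle
        pass

# Applied to its own source, adversary refutes either verdict of is_safe,
# so strict containment inherits the undecidability of the halting problem.
```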


Computability ◽  
2020 ◽  
pp. 1-18
Author(s):  
Edgar G. Daylight

The term ‘Halting Problem’ arguably refers to computer science’s most celebrated impossibility result and to the core notion underlying the language-theoretic approach to security. However, computer professionals often ignore the Halting Problem. In retrospect, this is not too surprising, given that several advocates of computability theory implicitly follow Christopher Strachey’s alleged 1965 proof of his Halting Problem (which is about executable, i.e. hackable, programs) rather than Martin Davis’s correct 1958 version or his 1994 account (each of which is solely about mathematical objects). For the sake of conceptual clarity, particularly for researchers pursuing a coherent science of cybersecurity, I scrutinize Strachey’s 1965 line of reasoning, which is widespread today, both from a charitable, historical angle and from a critical, engineering perspective.
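
In modern dress, the line of reasoning at issue runs roughly as follows; this is our Python paraphrase with illustrative names, not Strachey's original routine or Davis's formal treatment.

```python
# Our paraphrase of the 1965-style argument about executable routines.
# Suppose a total termination tester for runnable routines existed:
def terminates(routine) -> bool:
    """Hypothetical: True iff calling routine() eventually returns."""
    raise NotImplementedError  # the reductio shows no such total test exists

def P():
    if terminates(P):    # if the tester claims P halts...
        while True:      # ...P loops forever,
            pass
    return               # ...and if the tester claims P loops, P halts at once.

# Either verdict about P is refuted by P itself. Note the contrast drawn in
# the abstract: Strachey's version concerns executable (hackable) programs,
# whereas Davis's concerns the corresponding mathematical objects.
```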

