Universal Turing Machine
Recently Published Documents

TOTAL DOCUMENTS: 74 (five years: 9)
H-INDEX: 9 (five years: 1)

Author(s):  
Songsong Dai

In this paper, we give a definition of quantum information distance. In the classical setting, the information distance between two classical strings is built on classical Kolmogorov complexity: it is defined as the length of a shortest transition program between the two strings on a universal Turing machine. We define quantum information distance based on Berthiaume et al.'s quantum Kolmogorov complexity: the quantum information distance between two qubit strings is defined as the length of a shortest quantum transition program between the two qubit strings on a universal quantum Turing machine. We show that our definition of quantum information distance is invariant under the choice of the underlying universal quantum Turing machine.
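
For orientation, the classical definition alluded to here, in the formulation of Bennett, Gács, Li, Vitányi, and Zurek (a sketch of the standard notion; the paper's exact variant may differ), can be written as:

```latex
% Information distance between strings x and y, relative to a fixed
% universal Turing machine U: the length of a shortest program p that
% converts each of the two strings into the other.
E(x, y) = \min \{\, |p| \;:\; U(p, x) = y \ \text{and}\ U(p, y) = x \,\}

% Up to an additive logarithmic term, this equals the larger of the two
% conditional Kolmogorov complexities:
E(x, y) = \max \{\, K(x \mid y),\; K(y \mid x) \,\}
```

Invariance under the choice of U follows from the usual simulation argument: any two universal machines can simulate one another at the cost of an additive constant, and the result stated above is the analogous claim for universal quantum Turing machines.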


2021 ◽  
pp. 026327642110485
Author(s):  
Luciana Parisi

What is algorithmic thought? It is not possible to address this question without first reflecting on how the Universal Turing Machine transformed symbolic logic and brought to a halt both the universality of mathematical formalism and the biocentric speciation of thought. The article draws on Sylvia Wynter’s discussion of the sociogenic principle to argue that both neurocognitive and formal models of automated cognition constitute epistemological explanations of the origin of the human and of human sapience. Wynter’s argument is related to Gilbert Simondon’s reflections on ‘technical mentality’ to consider how socio-techno-genic assemblages can challenge the biocentrism and formalism of modern epistemology. The article turns to ludic logic as one possible example of techno-semiotic languages that speculatively overturn sociogenic programming. Algorithmic rules become technique-signs that coincide not with classic formalism but with interactive localities, without re-originating the universality of colonial and patriarchal cosmogony.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Naoto Shiraishi ◽  
Keiji Matsumoto

The investigation of thermalization in isolated quantum many-body systems has a long history, dating back to the development of statistical mechanics. Most quantum many-body systems in nature are considered to thermalize, while some never achieve thermal equilibrium. The central problem is to clarify whether a given system thermalizes, a question that has been addressed previously but never resolved. Here, we show that this problem is undecidable. The resulting undecidability applies even when the system is restricted to one-dimensional shift-invariant systems with nearest-neighbour interactions and the initial state is a fixed product state. We construct a family of Hamiltonians encoding the dynamics of a reversible universal Turing machine, where the fate of a relaxation process changes considerably depending on whether the Turing machine halts. Our result indicates that there is no general theorem, algorithm, or systematic procedure that determines the presence or absence of thermalization in an arbitrary given Hamiltonian.
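
The logical shape of the result, as the abstract describes it (my paraphrase, not the authors' formal statement), is a reduction from the halting problem:

```latex
% Sketch: from a Turing machine M, build a one-dimensional,
% shift-invariant, nearest-neighbour Hamiltonian H_M whose relaxation
% process encodes M's computation, arranged so that (up to the
% orientation of the construction, which the abstract does not specify)
\text{the system governed by } H_M \text{ thermalizes} \iff M \text{ runs forever}
% Any general procedure deciding thermalization would then decide the
% halting problem, which is impossible.
```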


2021 ◽  
Vol 70 ◽  
pp. 65-76
Author(s):  
Manuel Alfonseca ◽  
Manuel Cebrian ◽  
Antonio Fernandez Anta ◽  
Lorenzo Coviello ◽  
Andrés Abeliuk ◽  
...  

Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers, and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulating such a program, which is theoretically (and practically) impossible. This article is part of the special track on AI and Society.
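
The flavour of such impossibility arguments can be conveyed by a classic halting-style diagonalization (a toy sketch with hypothetical names such as `would_harm`; the paper's actual argument is more involved):

```python
# Toy diagonalization sketch. Suppose, for contradiction, that a total
# containment procedure would_harm(prog, inp) existed, returning True
# exactly when running prog(inp) eventually causes harm.

def cause_harm(inp):
    pass  # stand-in for any behaviour the containment policy forbids

def would_harm(prog, inp):
    """Hypothetical perfect harm-decider, assumed for contradiction;
    no such total decider is implementable."""
    raise NotImplementedError("no such total decider can exist")

def contrarian(inp):
    # Harms exactly when the decider predicts it will not harm, so
    # would_harm(contrarian, inp) can be neither True nor False.
    if not would_harm(contrarian, inp):
        cause_harm(inp)
```

The contradiction shows that a strict, fully general containment check cannot exist, which is the computational core of the paper's claim.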


2020 ◽  
Vol 63 (1) ◽  
pp. 69-86
Author(s):  
Paweł Stacewicz

Analogicity in computer science is understood in two, not mutually exclusive, ways: 1) with regard to the continuity feature (of data or computations), and 2) with regard to the analogousness feature (i.e., similarity between certain natural processes and computations). Continuous computations are the subject of three methodological questions considered in the paper: 1a) to what extent do their theoretical models go beyond the model of the universal Turing machine (which defines digital computations); 1b) is their computational power greater than that of the universal Turing machine; and 1c) under what conditions are continuous computations realizable in practice? Analogue-analogical computations lead to two further issues: 2a) in what sense, and to what extent, does their accuracy depend on the adequacy of certain theories of the empirical sciences; and 2b) are there analogue-analogical computations in nature that are also continuous? These issues are an important element of the philosophical discussion on the limitations of contemporary computer science.


2020 ◽  
Vol 17 (04) ◽  
pp. 2050016
Author(s):  
Juyang Weng

The universal Turing machine (TM) is a model for von Neumann computers — general-purpose computers. A human brain, linked with its biological body, can autonomously learn a universal TM inside the skull, so that a person acts as a general-purpose computer and can write a computer program for any practical purpose. It is unknown whether a robot can accomplish the same. This theoretical work shows how the Developmental Network (DN), linked with its robot body, can accomplish this. Unlike a traditional TM, the TM learned by a DN is a super TM — Grounded, Emergent, Natural, Incremental, Skulled, Attentive, Motivated, and Abstractive (GENISAMA). A DN is free of any central controller (e.g., Master Map, convolution, or error back-propagation). Its learning from a teacher TM proceeds one transition observation at a time, is immediate, and is error-free until all its neurons have been initialized by early observed teacher transitions. From that point on, the DN is no longer error-free but is always optimal at every time instant in the maximum-likelihood sense, conditioned on its limited computational resources and its learning experience. This paper extends the Church–Turing thesis to a stronger version — a GENISAMA TM is capable of Autonomous Programming for General Purposes (APFGP) — and proves both the Church–Turing thesis and its stronger version.
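
As a concrete illustration of the error-free learning phase described here, the following is a minimal sketch of my own (learning a plain transition table one observation at a time, not Weng's DN architecture):

```python
# Minimal sketch: an agent that learns a Turing machine's transition
# table from one observed teacher transition at a time. Predictions are
# error-free for every (state, symbol) pair already observed, mirroring
# the "error-free until all neurons are initialized" phase above.

class TransitionLearner:
    def __init__(self):
        # (state, symbol) -> (next_state, written_symbol, head_move)
        self.table = {}

    def observe(self, state, symbol, next_state, written, move):
        # One transition observation; learning is immediate.
        self.table[(state, symbol)] = (next_state, written, move)

    def predict(self, state, symbol):
        # Returns the learned transition, or None if not yet observed.
        return self.table.get((state, symbol))

# Usage: observe one teacher transition, then recall it exactly.
learner = TransitionLearner()
learner.observe("q0", "1", "q1", "0", "R")
assert learner.predict("q0", "1") == ("q1", "0", "R")
```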


Most interactions between a user and an augmented reality system (ARS) follow the same pattern: the user presents a marker to the ARS, and the ARS responds to that marker. In this scheme a marker is mapped to one ARS response or, more generally, an array of markers is mapped to an array of responses. Such an interaction has constant or linear complexity, since there is only a bijective mapping between the set of markers and the set of responses. In this research, we propose expanding the complexity of the user-ARS interaction to polynomial: instead of one marker per response (or an array of markers for an array of responses), the user provides a string of markers, i.e., a combination of multiple markers forming a word, for a single ARS response. The set of marker strings provided by users forms a regular language, so the complexity of the user-ARS interaction becomes polynomial. We implemented this interaction by specifying the user's language with a generalization of finite state automata (gFSA) and placing a universal Turing machine (UTM) between the user and the ARS, where the UTM acts as an interpreter translating, or mapping, the user language to ARS responses. In summary, we apply the idea of a formal language to the interaction between the user and the ARS, thereby raising the complexity of the interaction to polynomial, with a possible extension to nondeterministic polynomial.
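
A minimal sketch of the core idea (hypothetical marker names and response; a plain finite automaton standing in for the paper's gFSA/UTM pipeline): a whole word of markers, rather than a single marker, selects one ARS response.

```python
# Finite automaton over a marker alphabet: the ARS responds only when a
# full marker word is accepted, so responses are keyed to strings of
# markers rather than to individual markers.

TRANSITIONS = {
    ("q0", "marker_A"): "q1",
    ("q1", "marker_B"): "q2",  # the word (marker_A, marker_B) reaches q2
}
RESPONSES = {"q2": "show_3d_model"}  # hypothetical ARS response

def respond(marker_word):
    state = "q0"
    for marker in marker_word:
        state = TRANSITIONS.get((state, marker))
        if state is None:
            return None  # word is not in the marker language
    return RESPONSES.get(state)

print(respond(["marker_A", "marker_B"]))  # -> "show_3d_model"
print(respond(["marker_B"]))              # -> None (word rejected)
```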


2019 ◽  
Vol 769 ◽  
pp. 43-62 ◽  
Author(s):  
Shlomi Dolev ◽  
Juan A. Garay ◽  
Niv Gilboa ◽  
Vladimir Kolesnikov ◽  
Muni Venkateswarlu Kumaramangalam

Author(s):  
Vlatko Vedral

In Chapter 9 we discussed the idea of a universal Turing machine. This machine is capable of simulating any other machine, given sufficient time and energy. For example, we discussed how your fridge's microprocessor could be programmed to run Microsoft Windows; then we described Moore's law, the observation that computers are becoming ever faster and smaller. Therefore, one day, a single atom may be able to simulate fully what a present-day PC can do. This leads us to the fascinating possibility that every little constituent of our Universe may be able to simulate any other, given enough time and energy. The Universe therefore consists of a great number of little universal quantum computers. But this surely makes the Universe itself the largest quantum computer. So how powerful is our largest quantum computer? How many bits, how many computational steps? What is the total amount of information that the computer can hold? Since our view is that everything in reality is composed of information, it would be useful to know how much information there is in total and whether this total amount is growing or shrinking. The Second Law already tells us that the physical entropy in the Universe is always increasing. Since physical entropy has the same form as Shannon's information, the Second Law also tells us that the information content of the Universe can only ever increase too. But what does this mean for us? If we consider our objective to be a full understanding of the Universe, then we have to accept that the finish line is always moving further and further away from us. We define our reality through the laws and principles that we establish from the information that we gather. Quantum mechanics, for example, gives us a very different reality from the one classical mechanics described. In the Stone Age, the caveman's perception of reality and of what was possible was also markedly different from what Newton would have understood. In this way we process information from the Universe to create our reality. We can think of the Universe as a large balloon, within which there is a smaller balloon, our reality.

