Turing Award lecture on computational complexity and the nature of computer science

1994 ◽  
Vol 37 (10) ◽  
pp. 37-43 ◽  
Author(s):  
Juris Hartmanis


2021 ◽  
Vol 64 (6) ◽  
pp. 120
Author(s):  
Leah Hoffmann

ACM A.M. Turing Award recipients Alfred Aho and Jeffrey Ullman discuss their early work, the 'Dragon Book,' and the future of 'live' computer science education.


Author(s):  
Maciej Liskiewicz ◽  
Ulrich Wölfel

This chapter provides an overview, based on current research, of theoretical aspects of digital steganography, a relatively new field of computer science that deals with hiding secret data in unsuspicious cover media. We focus on a formal analysis of the security of steganographic systems from a computational complexity point of view and provide models of secure systems that make realistic assumptions about the limited computational resources of the parties involved. This allows us to treat steganographic secrecy on the basis of reasonable complexity assumptions, similar to those commonly accepted in modern cryptography. We also expand the analysis of stego-systems beyond security aspects to the question of why provably secure systems are so difficult (if not impossible) for practitioners to implement, and what makes these systems different from those used in practice.
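As a concrete illustration of hiding data in a cover medium, here is a minimal sketch of least-significant-bit (LSB) embedding, a textbook baseline scheme rather than one of the provably secure constructions the chapter analyzes; all function names are ours.

```python
# Minimal LSB steganography sketch: a textbook baseline, not one of the
# complexity-theoretically secure systems discussed in the chapter.

def embed(cover: bytes, message: bytes) -> bytes:
    """Hide `message` in the least significant bits of `cover` (1 bit/byte)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite the low bit
    return bytes(stego)

def extract(stego: bytes, length: int) -> bytes:
    """Recover `length` bytes of hidden message from a stego object."""
    out = bytearray()
    for j in range(length):
        byte = 0
        for i in range(8):
            byte |= (stego[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = bytes(range(256))      # stand-in for pixel or audio samples
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
```

Schemes like this are exactly the practically used systems the chapter contrasts with provably secure ones: embedding is easy, but the low-bit changes alter the statistics of the cover and are detectable, which is what complexity-theoretic security models are designed to rule out.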


2019 ◽  
Vol 27 (3) ◽  
pp. 381-439
Author(s):  
Walter Dean

Computational complexity theory is a subfield of computer science originating in computability theory and the study of algorithms for solving practical mathematical problems. Amongst its aims is classifying problems by their degree of difficulty — i.e., how hard they are to solve computationally. This paper highlights the significance of complexity theory relative to questions traditionally asked by philosophers of mathematics while also attempting to isolate some new ones — e.g., about the notion of feasibility in mathematics, the $\mathbf{P} \neq \mathbf{NP}$ problem and why it has proven hard to resolve, and the role of non-classical modes of computation and proof.
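For reference, the standard definitions behind the $\mathbf{P} \neq \mathbf{NP}$ question (textbook notation, not specific to this paper) are:

```latex
% Standard textbook definitions, not notation taken from Dean's paper.
\[
  \mathbf{P} \;=\; \bigcup_{k \ge 1} \mathrm{DTIME}\!\left(n^{k}\right),
  \qquad
  \mathbf{NP} \;=\; \bigcup_{k \ge 1} \mathrm{NTIME}\!\left(n^{k}\right),
\]
% the problems decidable in polynomial time by deterministic (resp.
% nondeterministic) Turing machines. "Feasible" is commonly identified
% with membership in P (the Cobham--Edmonds thesis).
```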


2018 ◽  
pp. 94-109
Author(s):  
I. Petik

The paper centers on constructing the semantics of a modal metatheory for studying the classes of algorithmic complexity. The effectiveness of this calculus is then examined on a famous problem of computational complexity theory: the question of whether the classes P and NP are equal. A new theoretical and methodological approach to the problem is provided. The author develops an original semantics that can describe the relations between classes of algorithmic complexity; on the basis of this semantics, a complete calculus for the logic of computational complexity can be developed in the future. This is the first time modal logic has been used to study the relations between classes of algorithmic complexity. New theoretical and methodological approaches to the classical problems of complexity theory are proposed. The paper matters for computer science, the philosophy of mathematics, logic and the theory of algorithms, and cryptography.
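As an illustrative sketch of how a modal semantics over complexity classes could look (our own example, not necessarily the semantics constructed in the paper), consider a Kripke frame whose worlds are complexity classes and whose accessibility relation is class inclusion:

```latex
% Illustrative sketch only; not necessarily the paper's construction.
\[
  \mathcal{F} = (W, R), \qquad
  W = \{\mathbf{L}, \mathbf{P}, \mathbf{NP}, \mathbf{PSPACE}, \dots\}, \qquad
  C \mathrel{R} C' \iff C \subseteq C'.
\]
% Here \(\Box\varphi\) holds at a class \(C\) iff \(\varphi\) holds at every
% class containing \(C\). Since \(\mathbf{P} \subseteq \mathbf{NP}\) is known,
% \(\mathbf{P} \mathrel{R} \mathbf{NP}\) holds, and the open question
% \(\mathbf{P} = \mathbf{NP}\) is the question of whether
% \(\mathbf{NP} \mathrel{R} \mathbf{P}\) holds as well.
```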


1987 ◽  
Vol 52 (1) ◽  
pp. 1-43 ◽  
Author(s):  
Larry Stockmeyer

One of the more significant achievements of twentieth century mathematics, especially from the viewpoints of logic and computer science, was the work of Church, Gödel and Turing in the 1930s, which provided a precise and robust definition of what it means for a problem to be computationally solvable, or decidable, and which showed that there are undecidable problems which arise naturally in logic and computer science. Indeed, when one is faced with a new computational problem, one of the first questions to be answered is whether the problem is decidable or undecidable. A problem is usually defined to be decidable if and only if it can be solved by some Turing machine, and the class of decidable problems defined in this way remains unchanged if “Turing machine” is replaced by any of a variety of other formal models of computation. The division of all problems into two classes, decidable or undecidable, is very coarse, and refinements have been made on both sides of the boundary. On the undecidable side, work in recursive function theory, using tools such as effective reducibility, has exposed much additional structure, such as degrees of unsolvability. The main purpose of this survey article is to describe a branch of computational complexity theory which attempts to expose more structure within the decidable side of the boundary.

Motivated in part by practical considerations, the additional structure is obtained by placing upper bounds on the amounts of computational resources which are needed to solve the problem. Two common measures of the computational resources used by an algorithm are time, the number of steps executed by the algorithm, and space, the amount of memory used by the algorithm.
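In standard notation (ours, not quoted from the article), these two resource measures give rise to the time- and space-bounded complexity classes:

```latex
% Standard textbook definitions of resource-bounded classes.
\[
  \mathrm{DTIME}(t(n)) = \{\, A \mid A \text{ is decided by some Turing
  machine within } t(n) \text{ steps on every input of length } n \,\},
\]
\[
  \mathrm{DSPACE}(s(n)) = \{\, A \mid A \text{ is decided by some Turing
  machine using at most } s(n) \text{ tape cells on every input of length } n \,\}.
\]
```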


AI Magazine ◽  
2020 ◽  
Vol 41 (1) ◽  
pp. 90-100
Author(s):  
Sven Koenig

Begin with the end in mind! PhD students in artificial intelligence can start to prepare for their career after the PhD degree immediately upon joining graduate school, and probably in many more ways than they think. To help them with that, I asked current PhD students and recent computer-science PhD graduates from the University of Southern California, as well as my own PhD students, to recount the important lessons they learned (perhaps too late), and added the advice of Nobel Prize and Turing Award winners and many other researchers (including my own reflections) to create this article.


2021 ◽  
Vol 22 (1) ◽  
pp. 520-536
Author(s):  
Vladimir Nikolaevich Chubarikov ◽  
Nikolai Nikolaevich Dobrovol’skii ◽  
Irina Yurievna Rebrova ◽  
Nikolai Mihailovich Dobrovol’skii

2005 ◽  
Vol 95 (5) ◽  
pp. 1355-1368 ◽  
Author(s):  
Enriqueta Aragones ◽  
Itzhak Gilboa ◽  
Andrew Postlewaite ◽  
David Schmeidler

People may be surprised to notice certain regularities that hold in knowledge they have had for some time. That is, they may learn without getting new factual information. We argue that this can be partly explained by computational complexity. We show that, given a knowledge base, finding a small set of variables that obtains a certain value of $R^2$ is computationally hard, in the sense in which this term is used in computer science. We discuss some of the implications of this result and of fact-free learning in general.
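To see why such a problem invites brute-force search, here is a small sketch (our own illustration, not the paper's reduction) that looks for the smallest subset of variables whose least-squares fit reaches a target $R^2$; the number of candidate subsets grows exponentially with the number of variables, which is the combinatorial core of the hardness result.

```python
# Illustration of subset selection for R^2 (our example, not the paper's
# NP-hardness proof): brute force must search exponentially many subsets.
from itertools import combinations
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def smallest_subset(X: np.ndarray, y: np.ndarray, target: float):
    """Smallest set of columns whose fit reaches the target R^2."""
    n_vars = X.shape[1]
    for k in range(1, n_vars + 1):          # C(n_vars, k) subsets at each size
        for cols in combinations(range(n_vars), k):
            if r_squared(X[:, list(cols)], y) >= target:
                return cols
    return None

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
y = X[:, 2] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=100)
print(smallest_subset(X, y, target=0.95))   # finds the pair (2, 7)
```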

