On computational complexity and the nature of computer science

1995 ◽  
Vol 27 (1) ◽  
pp. 7-16 ◽  
Author(s):  
Juris Hartmanis


Author(s):  
Maciej Liskiewicz ◽  
Ulrich Wölfel

This chapter provides an overview, based on current research, of theoretical aspects of digital steganography, a relatively new field of computer science that deals with hiding secret data in unsuspicious cover media. We focus on the formal analysis of the security of steganographic systems from a computational complexity point of view and provide models of secure systems that make realistic assumptions about the limited computational resources of the parties involved. This allows us to reason about steganographic secrecy under complexity assumptions similar to those commonly accepted in modern cryptography. We then expand the analysis of stego-systems beyond security aspects, which practitioners find difficult (if not impossible) to realize, to ask why such systems are so hard to implement and what distinguishes them from the systems used in practice.
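
As a concrete, if deliberately naive, illustration of what "hiding secret data in unsuspicious cover media" means, the sketch below embeds a bit string into the least significant bits of a byte-valued cover. The function names and the LSB scheme are illustrative assumptions; they are not the provably secure constructions the chapter analyzes, and plain LSB embedding is well known to be detectable.

```python
# Illustrative LSB steganography: hide a bit string in the low-order bits
# of a byte-valued cover (e.g., raw pixel intensities). This is a toy
# sketch, not a secure stego-system of the kind studied in the chapter.

def embed(cover: bytes, message_bits: str) -> bytes:
    if len(message_bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | int(bit)   # overwrite least-significant bit
    return bytes(stego)

def extract(stego: bytes, n_bits: int) -> str:
    return "".join(str(b & 1) for b in stego[:n_bits])

cover = bytes(range(32))          # hypothetical cover data
secret = "101100111000"
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
```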


2019 ◽  
Vol 27 (3) ◽  
pp. 381-439
Author(s):  
Walter Dean

Computational complexity theory is a subfield of computer science originating in computability theory and the study of algorithms for solving practical mathematical problems. Amongst its aims is classifying problems by their degree of difficulty — i.e., how hard they are to solve computationally. This paper highlights the significance of complexity theory relative to questions traditionally asked by philosophers of mathematics while also attempting to isolate some new ones — e.g., about the notion of feasibility in mathematics, the $\mathbf{P} \neq \mathbf{NP}$ problem and why it has proven hard to resolve, and the role of non-classical modes of computation and proof.
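
For readers who want the formal backdrop to the feasibility and P versus NP discussion, the standard textbook definitions can be written as follows (a routine amsmath formulation, not quoted from the paper itself):

```latex
% Textbook statements of the two classes behind the problem discussed above.
\[
  \mathbf{P} = \{\, L \mid \text{some deterministic Turing machine decides } L
    \text{ in time } O(n^{k}) \text{ for a fixed } k \,\}
\]
\[
  \mathbf{NP} = \{\, L \mid \exists \text{ a polynomial-time verifier } V
    \text{ and a polynomial } p:\;
    x \in L \iff \exists w\, \big( |w| \le p(|x|) \wedge V(x,w)=1 \big) \,\}
\]
% The Cobham--Edmonds reading of "feasibly decidable" identifies it with
% membership in P; whether P = NP asks if efficient verifiability already
% implies efficient solvability.
```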


2018 ◽  
pp. 94-109
Author(s):  
I. Petik

The paper centers on building the semantics of a modal metatheory for studying classes of algorithmic complexity. The effectiveness of this calculus is then examined on a famous problem of computational complexity theory: the question of whether the classes P and NP are equal. A new theoretical and methodological approach to the problem is provided. An original semantics is developed that can be used to describe relations between the complexity classes of complexity theory. On the basis of this semantics, a complete calculus for the logic of computational complexity can be developed in the future. This is the first time modal logic has been used to study the relations between classes of algorithmic complexity. New theoretical and methodological approaches to classical problems of complexity theory are proposed. The paper is relevant to computer science, the philosophy of mathematics, logic and the theory of algorithms, and cryptography.


1987 ◽  
Vol 52 (1) ◽  
pp. 1-43 ◽  
Author(s):  
Larry Stockmeyer

One of the more significant achievements of twentieth century mathematics, especially from the viewpoints of logic and computer science, was the work of Church, Gödel and Turing in the 1930's which provided a precise and robust definition of what it means for a problem to be computationally solvable, or decidable, and which showed that there are undecidable problems which arise naturally in logic and computer science. Indeed, when one is faced with a new computational problem, one of the first questions to be answered is whether the problem is decidable or undecidable. A problem is usually defined to be decidable if and only if it can be solved by some Turing machine, and the class of decidable problems defined in this way remains unchanged if “Turing machine” is replaced by any of a variety of other formal models of computation. The division of all problems into two classes, decidable or undecidable, is very coarse, and refinements have been made on both sides of the boundary. On the undecidable side, work in recursive function theory, using tools such as effective reducibility, has exposed much additional structure such as degrees of unsolvability. The main purpose of this survey article is to describe a branch of computational complexity theory which attempts to expose more structure within the decidable side of the boundary.

Motivated in part by practical considerations, the additional structure is obtained by placing upper bounds on the amounts of computational resources which are needed to solve the problem. Two common measures of the computational resources used by an algorithm are time, the number of steps executed by the algorithm, and space, the amount of memory used by the algorithm.
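
A toy sketch of the two resource measures described above, counting elementary steps (time) and extra memory cells (space) for a simple decision procedure; the palindrome test and the particular counters are illustrative choices, not drawn from the survey.

```python
# "Time" as the number of elementary comparisons executed and "space" as the
# extra memory used beyond the read-only input, illustrated on a toy problem.

def decides_palindrome(s: str):
    steps = 0          # time: elementary comparisons performed
    extra_cells = 2    # space: the two index variables beyond the input
    i, j = 0, len(s) - 1
    while i < j:
        steps += 1
        if s[i] != s[j]:
            return False, steps, extra_cells
        i, j = i + 1, j - 1
    return True, steps, extra_cells

# Time grows linearly with input length while space stays constant:
for n in (8, 16, 32):
    half = "ab" * n
    word = half + half[::-1]          # a palindrome of length 4 * n
    accepted, t, sp = decides_palindrome(word)
    print(len(word), accepted, t, sp)
```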


2021 ◽  
Vol 22 (1) ◽  
pp. 520-536
Author(s):  
Vladimir Nikolaevich Chubarikov ◽  
Nikolai Nikolaevich Dobrovol’skii ◽  
Irina Yurievna Rebrova ◽  
Nikolai Mihailovich Dobrovol’skii

2005 ◽  
Vol 95 (5) ◽  
pp. 1355-1368 ◽  
Author(s):  
Enriqueta Aragones ◽  
Itzhak Gilboa ◽  
Andrew Postlewaite ◽  
David Schmeidler

People may be surprised to notice certain regularities that hold in existing knowledge they have had for some time. That is, they may learn without getting new factual information. We argue that this can be partly explained by computational complexity. We show that, given a knowledge base, finding a small set of variables that obtains a certain value of R² is computationally hard, in the sense in which this term is used in computer science. We discuss some of the implications of this result and of fact-free learning in general.
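
To make the combinatorial core of this hardness result concrete, the sketch below brute-forces the problem of finding a small subset of candidate variables whose least-squares fit reaches a target R². The data, names, and exhaustive-search strategy are illustrative assumptions; the paper's formal reduction is not reproduced here.

```python
# Brute-force version of the underlying problem: given data for many candidate
# variables, find a small subset whose least-squares fit of y reaches a target
# R^2. The search visits every subset of size <= k, exponentially many in k.

from itertools import combinations
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef
    ss_res = float(residuals @ residuals)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

def smallest_explaining_subset(X, y, target_r2, max_size):
    n_vars = X.shape[1]
    for size in range(1, max_size + 1):
        for subset in combinations(range(n_vars), size):   # exponential blow-up
            if r_squared(X[:, list(subset)], y) >= target_r2:
                return subset
    return None

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))                  # 12 hypothetical known "facts"
y = 2 * X[:, 3] - X[:, 7] + 0.1 * rng.normal(size=60)
print(smallest_explaining_subset(X, y, target_r2=0.9, max_size=3))
```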


Author(s):  
Olivier Bournez ◽  
Gilles Dowek ◽  
Rémi Gilleron ◽  
Serge Grigorieff ◽  
Jean-Yves Marion ◽  
...  

Author(s):  
Allon G. Percus ◽  
Gabriel Istrate

Computer science and physics have been closely linked since the birth of modern computing. This book is about that link. John von Neumann’s original design for digital computing in the 1940s was motivated by applications in ballistics and hydrodynamics, and his model still underlies today’s hardware architectures. Within several years of the invention of the first digital computers, the Monte Carlo method was developed, putting these devices to work simulating natural processes using the principles of statistical physics. It is difficult to imagine how computing might have evolved without the physical insights that nurtured it. It is impossible to imagine how physics would have evolved without computation.

While digital computers quickly became indispensable, a true theoretical understanding of the efficiency of the computation process did not emerge until twenty years later. In 1965, Hartmanis and Stearns [227] as well as Edmonds [139, 140] articulated the notion of computational complexity, categorizing algorithms according to how rapidly their time and space requirements grow with input size. The qualitative distinctions that computational complexity draws between algorithms form the foundation of theoretical computer science.

Chief among these distinctions is that of polynomial versus exponential time. A combinatorial problem belongs in the complexity class P (polynomial time) if there exists an algorithm guaranteeing a solution in a computation time, or number of elementary steps of the algorithm, that grows at most polynomially with input size. Loosely speaking, such problems are considered computationally feasible. An example might be sorting a list of n numbers: even a particularly naive and inefficient algorithm for this will run in a number of steps that grows as O(n²), and so sorting is in the class P. A problem belongs in the complexity class NP (non-deterministic polynomial time) if it is merely possible to test, in polynomial time, whether a specific presumed solution is correct. Of course, P ⊆ NP: for any problem whose solution can be found in polynomial time, one can surely verify the validity of a presumed solution in polynomial time.
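
The two notions used in this passage, a naive quadratic-time sorting algorithm (a problem in P) and polynomial-time verification of a presumed solution (the defining feature of NP), can be sketched as follows; the choice of selection sort and the verifier below are illustrative.

```python
def naive_sort(xs):
    """Selection sort: about n^2 / 2 comparisons for n items, so O(n^2) time."""
    xs = list(xs)
    for i in range(len(xs)):
        m = min(range(i, len(xs)), key=xs.__getitem__)   # index of the minimum
        xs[i], xs[m] = xs[m], xs[i]
    return xs

def verify_sorted_permutation(original, candidate):
    """Polynomial-time check that `candidate` is a sorted rearrangement of
    `original`; checking a presumed solution quickly is the NP-style notion
    (sorting itself is, of course, also solvable in polynomial time)."""
    in_order = all(candidate[k] <= candidate[k + 1]
                   for k in range(len(candidate) - 1))
    same_items = sorted(original) == sorted(candidate)
    return in_order and same_items

data = [5, 3, 8, 1, 9, 2]
print(naive_sort(data))                                   # [1, 2, 3, 5, 8, 9]
print(verify_sorted_permutation(data, naive_sort(data)))  # True
```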

