Computational complexity with experiments as oracles

Author(s):  
Edwin Beggs ◽  
José Félix Costa ◽  
Bruno Loff ◽  
John V Tucker

We discuss combining physical experiments with machine computations and introduce a form of analogue–digital (AD) Turing machine. We examine in detail a case study where an experimental procedure based on Newtonian kinematics is combined with a class of Turing machines. Three forms of AD machine are studied, in which physical parameters can be set exactly or only approximately. Using non-uniform complexity theory and some probability theory, we prove theorems showing that these machines can compute more than classical Turing machines.
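The oracle protocol can be pictured with a small simulation. The sketch below is our own illustration (the class name and error model are assumptions, not the authors' formalism): a hidden physical constant y is interrogated by dyadic test values, each oracle call reports on which side of y the experiment lands, and the three protocols differ in how precisely the test value can be set.

```python
import random

class ScatterOracle:
    """Toy model of a physical experiment used as a Turing-machine oracle.

    A hidden constant y in (0, 1) (e.g. the vertex position in a
    scatter-machine experiment) can only be probed by firing a test
    particle at position z and observing on which side of y it lands.
    """

    def __init__(self, y, protocol="exact", epsilon=2**-10):
        self.y = y                # hidden physical constant
        self.protocol = protocol  # 'exact', 'arbitrary' or 'fixed' precision
        self.epsilon = epsilon    # equipment error for the 'fixed' protocol

    def query(self, z, precision=2**-20):
        """One oracle call: report the side of y on which the shot falls."""
        if self.protocol == "arbitrary":      # any requested precision
            z += random.uniform(-precision, precision)
        elif self.protocol == "fixed":        # equipment-limited precision
            z += random.uniform(-self.epsilon, self.epsilon)
        return "right" if z > self.y else "left"

# A Turing machine extracts bits of y by binary search on dyadic rationals:
oracle = ScatterOracle(y=0.6180339887, protocol="exact")
lo, hi = 0.0, 1.0
for _ in range(20):               # 20 oracle calls -> about 20 bits of y
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if oracle.query(mid) == "left" else (lo, mid)
print(f"y lies in [{lo:.7f}, {hi:.7f}]")
```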

Author(s):  
Edwin Beggs ◽  
José Félix Costa ◽  
Bruno Loff ◽  
J.V. Tucker

Earlier, to explore the idea of combining physical experiments with algorithms, we introduced a new form of analogue–digital (AD) Turing machine. We examined in detail a case study where an experimental procedure, based on Newtonian kinematics, is used as an oracle with classes of Turing machines. The physical cost of oracle calls was counted and three forms of AD queries were studied, in which physical parameters can be set exactly or only approximately. Here, in this sequel, we complete the classification of the computational power of these AD Turing machines and determine precisely what they can compute, using non-uniform complexity classes and probabilities.
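The physical cost counted above can be made concrete with a toy schedule: in this line of work, answering a query to n bits of precision takes experimental time that grows exponentially with n, which is why a polynomial-time machine can extract only logarithmically many bits of the hidden parameter. A back-of-the-envelope sketch (the schedule T(n) = 2^n is our illustrative assumption, not the papers' exact protocol):

```python
def oracle_time_cost(n_bits, base=2):
    """Illustrative schedule: answering a query that agrees with the
    hidden constant to n bits takes base**n units of experimental time."""
    return base ** n_bits

budget = 10**6                    # total time budget of the machine
bits = 0
while oracle_time_cost(bits + 1) <= budget:
    bits += 1
print(f"a budget of {budget} steps buys about {bits} bits of the constant")
```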


2001 ◽  
Vol 11 (02n03) ◽  
pp. 353-361 ◽  
Author(s):  
STEFAN D. BRUDA ◽  
SELIM G. AKL

We assume the multitape real-time Turing machine as a formal model for parallel real-time computation. Then, we show that, for any positive integer k, there is at least one language Lk which is accepted by a k-tape real-time Turing machine but cannot be accepted by a (k - 1)-tape real-time Turing machine. It follows that the languages accepted by real-time Turing machines form an infinite hierarchy with respect to the number of tapes used. Although this result was previously obtained elsewhere, our proof is considerably shorter and explicitly builds the languages Lk. The ability of the real-time Turing machine to model practical real-time and/or parallel computations is open to debate. Nevertheless, our result shows how a complexity theory based on a formal model can yield results of a more general nature than those derived from examples. We thus hope to offer a motivation for looking into realistic parallel real-time models of computation.
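The defining constraint of the model, namely that one transition is performed per input symbol and the run ends when the input is exhausted, is easy to state operationally. A toy stepper for a k-tape real-time machine (our illustration of the model; the paper's construction of the languages Lk is not reproduced here):

```python
def run_real_time(transition, input_word, k, start="q0"):
    """Run a k-tape real-time Turing machine: exactly one transition per
    input symbol, so the whole run takes |input_word| steps. `transition`
    maps (state, input symbol, k read symbols) to (new state, k written
    symbols, k head moves); this interface is an illustrative assumption.
    """
    tapes = [dict() for _ in range(k)]    # sparse work tapes
    heads = [0] * k
    state = start
    for a in input_word:                  # real-time: no pausing allowed
        read = tuple(tapes[i].get(heads[i], "_") for i in range(k))
        state, writes, moves = transition(state, a, read)
        for i in range(k):
            tapes[i][heads[i]] = writes[i]
            heads[i] += moves[i]          # each head move is -1, 0 or +1
    return state

# Example: a 1-tape machine tracking the parity of the letter 'a'.
def parity(state, a, read):
    new = {"even": "odd", "odd": "even"}[state] if a == "a" else state
    return new, read, (0,)

print(run_real_time(parity, "abaa", k=1, start="even"))   # -> 'odd'
```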


2010 ◽  
Vol 20 (6) ◽  
pp. 1019-1050 ◽  
Author(s):  
EDWIN J. BEGGS ◽  
JOSÉ FÉLIX COSTA ◽  
JOHN V. TUCKER

We pose the following question: If a physical experiment were to be completely controlled by an algorithm, what effect would the algorithm have on the physical measurements made possible by the experiment? In a programme to study the nature of computation possible by physical systems, and by algorithms coupled with physical systems, we have begun to analyse: (i) the algorithmic nature of experimental procedures; and (ii) the idea of using a physical experiment as an oracle to Turing machines. To answer the question, we will extend our theory of experimental oracles so that we can use Turing machines to model the experimental procedures that govern the conduct of physical experiments. First, we specify an experiment that measures mass via collisions in Newtonian dynamics and examine its properties in preparation for its use as an oracle. We begin the classification of the computational power of polynomial-time Turing machines with this experimental oracle using non-uniform complexity classes. Second, we show that modelling an experimenter and experimental procedure algorithmically imposes a limit on what can be measured using equipment. Indeed, the theorems suggest a new form of uncertainty principle for our knowledge of physical quantities measured in simple physical experiments. We argue that the results established here are representative of a huge class of experiments.
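The collision experiment works by comparison: fire a test particle of known mass at the unknown stationary mass, and the sign of the projectile's post-collision velocity reveals which of the two is heavier, so each collision yields one bit in a bisection search. A minimal sketch under the standard one-dimensional elastic-collision formula (the single-trial protocol and the assumed mass range are our illustrative choices):

```python
def recoil_sign(test_mass, unknown_mass):
    """Elastic head-on collision of a unit-velocity test particle with a
    stationary unknown mass: the projectile's final velocity is
    (m - M) / (m + M), so its sign compares the two masses."""
    v_final = (test_mass - unknown_mass) / (test_mass + unknown_mass)
    return 1 if v_final > 0 else -1

def measure_mass(unknown_mass, trials):
    lo, hi = 0.0, 1.0         # assume the unknown mass lies in (0, 1)
    for _ in range(trials):   # each collision halves the interval: 1 bit
        mid = (lo + hi) / 2
        if recoil_sign(mid, unknown_mass) > 0:
            hi = mid          # test mass was heavier, so M < mid
        else:
            lo = mid
    return (lo + hi) / 2

print(measure_mass(0.337, trials=20))   # ~20 bits of the unknown mass
```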


Axioms ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 304
Author(s):  
Florin Manea

In this paper we propose and analyse, from the computational complexity point of view, several new variants of nondeterministic Turing machines. In the first such variant, a machine accepts a given input word if and only if one of its shortest possible computations on that word is accepting; on the other hand, the machine rejects the input word when all the shortest computations performed by the machine on that word are rejecting. We are able to show that the class of languages decided in polynomial time by such machines is P^NP[log]. When we consider machines that decide a word according to the decision taken by the lexicographically first shortest computation, we obtain a new characterization of P^NP. A series of other ways of deciding a language with respect to the shortest computations of a Turing machine are also discussed.
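The first acceptance mode can be phrased as a breadth-first search over the machine's configuration graph: explore configurations level by level, stop at the first level containing halting configurations, and accept iff one of them accepts. A minimal sketch, assuming an abstract interface (successors, is_halting, accepts) to the machine's step relation:

```python
def decide_by_shortest_runs(start, successors, is_halting, accepts):
    """Explore configurations level by level; the first level containing
    halting configurations holds exactly the endpoints of the shortest
    computations. Accept iff one of them accepts. `successors`,
    `is_halting` and `accepts` are assumed interfaces to the machine."""
    frontier, seen = [start], {start}
    while frontier:
        halting = [c for c in frontier if is_halting(c)]
        if halting:                        # minimal halting length reached
            return any(accepts(c) for c in halting)
        nxt = []
        for c in frontier:
            for s in successors(c):
                if s not in seen:          # keep only shortest occurrences
                    seen.add(s)
                    nxt.append(s)
        frontier = nxt
    return False                           # no halting computation at all
```

For the lexicographically-first variant, one would instead order the nondeterministic choices and follow the least path of minimal length before reading off its verdict.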


2015 ◽  
Vol 23 (3) ◽  
pp. 205-213
Author(s):  
Hiroyuki Okazaki ◽  
Yuichi Futa

Abstract In this article, we formalize polynomially bounded sequences, which play an important role in computational complexity theory. Class P is a fundamental computational complexity class that contains all polynomial-time decision problems [11], [12]; a deterministic Turing machine solves such problems within a polynomially bounded amount of computation time. Moreover, we formalize polynomial sequences [5].
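The definition being formalized is short: a sequence f is polynomially bounded when f(n) <= c·n^k + c for some constants c and k and every n. A finite sanity check of that inequality (illustrative only; it samples finitely many n and proves nothing, unlike the Mizar formalization):

```python
def poly_bounded_witness(f, c, k, n_max=10_000):
    """Finite sanity check of f(n) <= c*n**k + c for n < n_max; a test,
    not a proof (the Mizar article formalizes the real definition)."""
    return all(f(n) <= c * n**k + c for n in range(n_max))

print(poly_bounded_witness(lambda n: 3*n*n + 7, c=10, k=2))   # True
print(poly_bounded_witness(lambda n: 2**n, c=10, k=3))        # False
```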


2021 ◽  
Vol 7 (4) ◽  
pp. 64
Author(s):  
Tanguy Ophoff ◽  
Cédric Gullentops ◽  
Kristof Van Beeck ◽  
Toon Goedemé

Object detection models are usually trained and evaluated on highly complicated, challenging academic datasets, which results in deep networks requiring lots of computations. However, a lot of operational use-cases consist of more constrained situations: they have a limited number of classes to be detected, less intra-class variance, less lighting and background variance, constrained or even fixed camera viewpoints, etc. In these cases, we hypothesize that smaller networks could be used without deteriorating the accuracy. However, there are multiple reasons why this does not happen in practice. Firstly, overparameterized networks tend to learn better, and secondly, transfer learning is usually used to reduce the necessary amount of training data. In this paper, we investigate how much we can reduce the computational complexity of a standard object detection network in such constrained object detection problems. As a case study, we focus on a well-known single-shot object detector, YoloV2, and combine three different techniques to reduce the computational complexity of the model without reducing its accuracy on our target dataset. To investigate the influence of the problem complexity, we compare two datasets: a prototypical academic one (Pascal VOC) and a real-life operational one (LWIR person detection). The three optimization steps we exploited are: swapping all convolutions for depth-wise separable convolutions, pruning, and weight quantization. The results of our case study indeed substantiate our hypothesis that the more constrained a problem is, the more the network can be optimized. On the constrained operational dataset, combining these optimization techniques allowed us to reduce the computational complexity by a factor of 349, compared to only a factor of 9.8 on the academic dataset. When running a benchmark on an Nvidia Jetson AGX Xavier, our fastest model runs more than 15 times faster than the original YoloV2 model, whilst increasing the accuracy by 5% Average Precision (AP).
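The first of the three optimization steps, swapping standard convolutions for depth-wise separable ones, can be shown in isolation. A minimal PyTorch-style sketch of the replacement block (our illustration; the layer layout and activation choice are assumptions, not the paper's YoloV2 code):

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, k=3):
    """Replace a standard Conv2d(in_ch, out_ch, k) by a depth-wise
    convolution (one filter per channel, groups=in_ch) followed by a
    1x1 point-wise convolution. Multiply-accumulates drop by roughly
    k*k*out_ch / (k*k + out_ch), i.e. ~9x for k=3 and wide layers."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.LeakyReLU(0.1, inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),   # point-wise channel mix
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

block = depthwise_separable(256, 512)   # drop-in for a 3x3, 256->512 conv
```

Pruning and weight quantization are then applied on top of the retrained model, which is where the constrained dataset allows the largest additional gains.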


4OR ◽  
2021 ◽  
Author(s):  
Gerhard J. Woeginger

Abstract We survey optimization problems that allow natural simple formulations with one existential and one universal quantifier. We summarize the theoretical background from computational complexity theory, and we present a multitude of illustrating examples. We discuss the connections to robust optimization and to bilevel optimization, and we explain the reasons why the operational research community should be interested in the theoretical aspects of this area.
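A prototypical formulation with one existential and one universal quantifier is robust feasibility under interval uncertainty; the instance below is our own illustration rather than one of the survey's examples:

```latex
% Is there a decision x that stays within budget B for every cost
% vector c the adversary may pick from an interval uncertainty set?
\exists\, x \in X \subseteq \{0,1\}^n \;\;
\forall\, c \in [c^-, c^+] : \quad c^{\top} x \le B .
```

Deciding statements of this exists-forall shape is typically complete for the second level of the polynomial hierarchy, which is the theoretical backdrop the survey summarizes.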


1996 ◽  
Vol 27 (4) ◽  
pp. 3-7
Author(s):  
E. Allender ◽  
J. Feigenbaum ◽  
J. Goldsmith ◽  
T. Pitassi ◽  
S. Rudich

2018 ◽  
Vol 196 ◽  
pp. 04051
Author(s):  
Agnes Iringová

This paper reviews the current state of waste production and management in Slovakia and the relevant legislative regulations. It analyses the use of recycled waste products in the construction of sustainable buildings as a substitute for non-renewable materials, and compares the physical parameters of recycled and non-renewable materials in terms of thermal and fire protection. A construction solution is presented for lightweight building envelopes with a timber supporting system, using thermal insulation and facing made of recycled materials, together with a model solution of a wood-based family house using recycled waste materials. Finally, the environmental burden of a standard lightweight sandwich peripheral wall is compared with that of a recycled-waste wall.

