NP-complete problems
Recently Published Documents


TOTAL DOCUMENTS

408
(FIVE YEARS 51)

H-INDEX

32
(FIVE YEARS 3)

2022 ◽  
Vol 6 (POPL) ◽  
pp. 1-29
Author(s):  
Zi Wang ◽  
Aws Albarghouthi ◽  
Gautam Prakriya ◽  
Somesh Jha

To verify safety and robustness of neural networks, researchers have successfully applied abstract interpretation, primarily using the interval abstract domain. In this paper, we study the theoretical power and limits of the interval domain for neural-network verification. First, we introduce the interval universal approximation (IUA) theorem. IUA shows not only that neural networks can approximate any continuous function f (universal approximation), as has been known for decades, but also that we can find a neural network, using any well-behaved activation function, whose interval bounds are an arbitrarily close approximation of the set semantics of f (the result of applying f to a set of inputs). We call this notion of approximation interval approximation. Our theorem generalizes the recent result of Baader et al. from ReLUs to a rich class of activation functions that we call squashable functions. Additionally, the IUA theorem implies that we can always construct provably robust neural networks under the ℓ∞-norm using almost any practical activation function. Second, we study the computational complexity of constructing neural networks that are amenable to precise interval analysis. This is a crucial question, as our constructive proof of IUA is exponential in the size of the approximation domain. We boil this question down to the problem of approximating the range of a neural network with squashable activation functions. We show that the range approximation problem (RA) is a Δ2-intermediate problem, which is strictly harder than NP-complete problems, assuming coNP ⊄ NP. As a result, IUA is an inherently hard problem: no matter what abstract domain or computational tools we consider to achieve interval approximation, there is no efficient construction of such a universal approximator. This implies that it is hard to construct a provably robust network, even if we have a robust network to start with.
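To make the interval abstract domain concrete, here is a minimal sketch of interval propagation through a tiny ReLU network. The weights and input box are made-up illustrative values; this is not the paper's IUA construction, only the standard interval-analysis primitive it studies.

```python
# Interval propagation through an affine layer followed by ReLU:
# the interval abstract domain in its simplest form.

def interval_affine(bounds, weights, bias):
    """Propagate per-input intervals through y_j = sum_i w_ji * x_i + b_j."""
    out = []
    for row, b in zip(weights, bias):
        lo = hi = b
        for (l, u), w in zip(bounds, row):
            # A positive weight maps the lower endpoint to the lower bound;
            # a negative weight swaps the endpoints.
            lo += w * l if w >= 0 else w * u
            hi += w * u if w >= 0 else w * l
        out.append((lo, hi))
    return out

def interval_relu(bounds):
    """ReLU is monotone, so it maps an interval endpoint-wise."""
    return [(max(0.0, l), max(0.0, u)) for l, u in bounds]

# One hidden layer, two inputs ranging over the box [-1, 1] x [0, 2].
x = [(-1.0, 1.0), (0.0, 2.0)]
h = interval_relu(interval_affine(x, [[1.0, -2.0], [0.5, 1.0]], [0.0, -1.0]))
y = interval_affine(h, [[1.0, 1.0]], [0.0])
print(y)  # → [(0.0, 2.5)] — sound (possibly loose) bounds on the output
```

The bounds are sound but not exact: interval analysis over-approximates the true range, which is precisely the gap the IUA theorem is about.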


2021 ◽  
Author(s):  
Pooja Chaturvedi ◽  
Ajai Kumar Daniel ◽  
Vipul Narayan

Abstract Mathematical programming techniques are widely used to determine the optimal functional configuration of a wireless sensor network (WSN). However, these techniques usually have high computational complexity, as the underlying problems are often NP-complete. Machine learning (ML) techniques can therefore be utilized to predict the WSN parameters with high accuracy and lower computational complexity than mathematical programming. This paper focuses on developing a prediction model that determines whether a node should be included in a set cover, based on the coverage probability and trust values of the nodes. The set covers are defined as subsets of nodes which are scheduled to monitor the region of interest with the desired coverage level. Several machine learning techniques have been used to determine the node activation status, from which the set covers are obtained. The results show that the random forest based prediction model yields the highest accuracy for the considered network setting.
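A hypothetical sketch of the pipeline described above: a node's activation status is decided from its coverage probability and trust value, and the eligible nodes are then assembled into a set cover greedily. The score, threshold, node data, and greedy rule are illustrative assumptions, not the paper's trained model.

```python
# Toy node table: node -> (coverage probability, trust, targets it covers).
# All values are invented for illustration.
nodes = {
    "n1": (0.9, 0.8, {"t1", "t2"}),
    "n2": (0.6, 0.9, {"t2", "t3"}),
    "n3": (0.4, 0.5, {"t3"}),
    "n4": (0.8, 0.7, {"t1", "t3"}),
}
targets = {"t1", "t2", "t3"}

def eligible(nodes, threshold=0.5):
    """Activation status: a node qualifies if coverage * trust passes a threshold."""
    return {n for n, (p, t, _) in nodes.items() if p * t >= threshold}

def greedy_set_cover(nodes, targets, active):
    """Standard greedy cover over the eligible nodes (sorted for determinism)."""
    uncovered, cover = set(targets), []
    remaining = sorted(active)
    while uncovered and remaining:
        best = max(remaining, key=lambda n: len(nodes[n][2] & uncovered))
        remaining.remove(best)
        cover.append(best)
        uncovered -= nodes[best][2]
    return cover

active = eligible(nodes)
print(sorted(active), greedy_set_cover(nodes, targets, active))
```

In the paper's setting the eligibility decision is made by a trained classifier (random forest performing best) rather than a fixed threshold; the sketch only shows where that prediction plugs into set-cover construction.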


Author(s):  
Ayyappasamy Sudalaiyadum Perumal ◽  
Zihao Wang ◽  
Falco C M J M van Delft ◽  
Giulia Ippoliti ◽  
Lila Kari ◽  
...  

Abstract All known algorithms to solve Nondeterministic Polynomial (NP) complete problems, relevant to many real-life applications, require the exploration of a space of potential solutions that grows exponentially with the size of the problem. Since electronic computers can implement only limited parallelism, their use for solving NP-complete problems is impractical for very large instances, and consequently alternative massively parallel computing approaches have been proposed to address this challenge. We present a scaling analysis of two such alternative computing approaches, DNA Computing (DNA-C) and Network Biocomputing with Agents (NB-C), compared with Electronic Computing (E-C). The Subset Sum Problem (SSP), a known NP-complete problem, was used as a computational benchmark to compare the volume, the computing time, and the energy required by each type of computation, relative to the input size. Our analysis shows that the sequentiality of E-C translates into a very small volume compared to that required by DNA-C and NB-C, at the cost of the E-C computing time being outperformed first by DNA-C (linear run time) and then by NB-C. Finally, NB-C appears to be more energy-efficient than DNA-C for some types of input sets, while being less energy-efficient for others, with E-C always being an order of magnitude less energy-efficient than DNA-C. This scaling study suggests that at present none of these computing approaches wins, even theoretically, on all three key performance criteria, and that all require breakthroughs to overcome their limitations, with potential solutions including hybrid computing approaches.
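The Subset Sum Problem used as the benchmark above can be stated in a few lines. This brute-force sketch makes the exponential scaling explicit: a sequential electronic computer must, in the worst case, examine all 2^n subsets, which is exactly the space the DNA and agent-based approaches explore in parallel.

```python
from itertools import combinations

def subset_sum(values, target):
    """Return the first subset summing to target, or None.

    Worst case examines all 2^n subsets -- the exponential blow-up
    that motivates massively parallel approaches.
    """
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # → (8, 7)
```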


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 573
Author(s):  
Davide Orsucci ◽  
Vedran Dunjko

Quantum algorithms for solving the Quantum Linear System (QLS) problem are among the most investigated quantum algorithms of recent times, with potential applications including the solution of computationally intractable differential equations and speed-ups in machine learning. A fundamental parameter governing the efficiency of QLS solvers is κ, the condition number of the coefficient matrix A, as it has been known since the inception of the QLS problem that for worst-case instances the runtime scales at least linearly in κ [Harrow, Hassidim and Lloyd, PRL 103, 150502 (2009)]. However, for positive-definite matrices, classical algorithms can solve linear systems with a runtime scaling as √κ, a quadratic improvement compared to the indefinite case. It is then natural to ask whether QLS solvers may admit an analogous improvement. In this work we answer the question in the negative, showing that solving a QLS entails a runtime linear in κ even when A is positive definite. We then identify broad classes of positive-definite QLS instances where this lower bound can be circumvented and present two new quantum algorithms featuring a quadratic speed-up in κ: the first is based on efficiently implementing a matrix-block-encoding of A⁻¹, while the second constructs a decomposition of the form A = LL† to precondition the system. These methods are widely applicable, and both allow one to efficiently solve BQP-complete problems.
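The condition number κ that governs these runtimes is the ratio of the largest to the smallest eigenvalue (for a symmetric positive-definite A). A small stdlib-only sketch, using the closed-form eigenvalues of a 2×2 symmetric matrix; the example matrix is arbitrary and unrelated to the paper's instances.

```python
import math

def kappa_2x2_spd(a, b, c):
    """kappa = lambda_max / lambda_min for the SPD matrix [[a, b], [b, c]].

    For 2x2 symmetric matrices the eigenvalues are
    (a + c)/2 +- sqrt(((a - c)/2)^2 + b^2).
    """
    mean = (a + c) / 2
    half_gap = math.hypot((a - c) / 2, b)
    lam_max, lam_min = mean + half_gap, mean - half_gap
    return lam_max / lam_min

print(kappa_2x2_spd(4.0, 1.0, 2.0))  # ≈ 2.78 for [[4, 1], [1, 2]]
```

A "quadratic speed-up in κ" then means runtime scaling as √κ instead of κ, which matters because κ can grow rapidly with problem size in discretized differential equations.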


2021 ◽  
Author(s):  
Sébastien Plutniak

Community detection is a major issue in network analysis. This paper combines a socio-historical approach with an experimental reconstruction of programs to investigate the early automation of clique detection, a problem that remains NP-complete today. The research led by the archaeologist Jean-Claude Gardin from the 1950s onward on non-numerical information and graph analysis is retraced to demonstrate the early contributions of the social sciences and humanities. The limited recognition and reception of Gardin's innovative computer applications in the humanities are explained by two factors, in addition to the effects of historiography and bibliographies on the recording, discoverability, and reuse of scientific productions: (1) funding policies, evidenced by the transfer of research effort on graph applications from temporary interdisciplinary spaces to disciplinary organizations related to the then-emerging field of computer science; and (2) the erratic careers of algorithms, in which efficiency, flaws, corrections, and authors' status were determining factors.
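For readers unfamiliar with the computational problem at the center of this history: a clique is a set of vertices that are all pairwise connected, and finding a largest one is the canonical hard search problem. A brute-force sketch (this is a generic textbook formulation, not a reconstruction of Gardin's algorithm):

```python
from itertools import combinations

def largest_clique(vertices, edges):
    """Check vertex subsets from largest to smallest; exponential in |V|,
    which is why clique detection resists efficient automation."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            if all(frozenset(pair) in edge_set
                   for pair in combinations(subset, 2)):
                return subset
    return ()

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (2, 4)]
print(largest_clique([1, 2, 3, 4], edges))  # → (1, 2, 3)
```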


2021 ◽  
Vol 21 (15&16) ◽  
pp. 1296-1306
Author(s):  
Seyed Mousavi

Our computers today, from sophisticated servers to small smartphones, operate on the same computing model, which requires running a sequence of discrete instructions specified as an algorithm. This sequential computing paradigm has not yet led to a fast algorithm for an NP-complete problem, despite numerous attempts over the past half-century. Unfortunately, even after the introduction of quantum mechanics to the world of computing, we still followed a similar sequential paradigm, which has not yet yielded such an algorithm either. Here, a completely different model of computing is proposed, replacing the sequential paradigm of algorithms with the inherent parallelism of physical processes. Using the proposed model, instead of writing algorithms to solve NP-complete problems, we construct physical systems whose equilibrium states correspond to the desired solutions and let them evolve to search for those solutions. The main requirements of the model are identified, and quantum circuits are proposed for its potential implementation.
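The "equilibrium states correspond to solutions" idea can be illustrated classically: encode the satisfying assignments of a small formula as the zero-energy states of a cost function, then let a random walk relax the system toward a ground state. This toy is only an illustration of the principle; it is not the paper's quantum-circuit construction, and the formula is invented for the example.

```python
import random

CLAUSES = [(1, 2, -3), (-1, 3, 4), (2, -4, 1)]  # 3-SAT literals over vars 1..4

def holds(lit, assign):
    """True if the literal is satisfied under the assignment."""
    return assign[abs(lit)] if lit > 0 else not assign[abs(lit)]

def energy(assign):
    """Number of unsatisfied clauses; 0 marks an equilibrium (a solution)."""
    return sum(not any(holds(l, assign) for l in c) for c in CLAUSES)

def relax(seed=0, steps=1000):
    """Repair a random unsatisfied clause until the energy reaches 0."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, 5)}
    for _ in range(steps):
        unsat = [c for c in CLAUSES if not any(holds(l, assign) for l in c)]
        if not unsat:
            break
        lit = rng.choice(rng.choice(unsat))  # flip one variable in a bad clause
        assign[abs(lit)] = not assign[abs(lit)]
    return assign, energy(assign)

assignment, e = relax()
print(assignment, e)  # e == 0: an equilibrium state encoding a solution
```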


2021 ◽  
Vol 1208 (1) ◽  
pp. 012032
Author(s):  
Fatka Kulenović ◽  
Azra Hošić

Abstract The Travelling Salesman Problem (TSP) is an NP-complete combinatorial optimization problem. As the number of cities grows, it becomes unsolvable by exact methods in reasonable time. Genetic algorithms are evolutionary techniques used for optimization, based on the survival-of-the-fittest principle. These methods do not guarantee optimal solutions, but they usually give good approximations in reasonable time. Studies have shown that the proposed genetic algorithm can find a shorter route in real time compared with the existing manipulator model of path selection. The genetic algorithm depends on the selection criteria and on the crossover and mutation operators described in detail in this paper. Possible settings of the genetic algorithm are listed and described, as well as the influence of the mutation and crossover operators on its efficiency. The optimization results are presented graphically in the MATLAB software package for different cases, after which the efficiency of the genetic algorithm is compared with respect to the given parameters.


Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2764
Author(s):  
Rasul Kochkarov

NP-complete problems in graphs, such as the enumeration and selection of subgraphs with given characteristics, become especially relevant for large graphs and networks. Herein, particular problem statements with constraints are proposed to solve such problems, and subclasses of graphs are distinguished. We propose the class of prefractal graphs and review particular statements of NP-complete problems. As an example, algorithms for finding spanning trees and packing bipartite graphs are proposed. The developed algorithms are polynomial, are based on well-known algorithms, and use them as procedures. We propose the class of prefractal graphs as a tool for studying NP-complete problems and for identifying conditions under which they become solvable. By modeling large graphs and networks with prefractal graphs, it is possible to obtain approximate solutions, and some exact solutions, for problems on natural objects such as social networks and transport networks.
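One of the well-known polynomial procedures such algorithms build on is breadth-first spanning-tree search. A generic sketch (not the paper's prefractal-specific algorithm):

```python
from collections import deque

def spanning_tree(n, edges):
    """Tree edges of a BFS spanning tree of a connected graph on vertices 0..n-1.

    Runs in O(n + |edges|) time -- the kind of polynomial subroutine
    reusable as a procedure inside larger algorithms.
    """
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, tree, queue = {0}, [], deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                tree.append((v, w))
                queue.append(w)
    return tree

print(spanning_tree(5, [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]))
```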


2021 ◽  
pp. 13-29
Author(s):  
Hiro Ito

Abstract Constant-time algorithms are powerful tools, since they run by reading only a constant-sized part of each input. Property testing is the most popular research framework for constant-time algorithms. In property testing, an algorithm determines, with high probability and by reading a constant-sized part of the input, whether a given instance satisfies some predetermined property or is far from satisfying it. A property is said to be testable if there is a constant-time testing algorithm for it. This chapter covers property testing on graphs and games. The fields of graph algorithms and property testing are two of the main streams of research on discrete algorithms and computational complexity. In the section on graphs, we present some important results, particularly on the characterization of testable graph properties. At the end of that section, we show results that we published in 2020 on a complete characterization (a necessary and sufficient condition) of testable monotone or hereditary properties of bounded-degree digraphs. In the section on games, we present results that we published in 2019 showing that generalized chess, Shogi (Japanese chess), and Xiangqi (Chinese chess) are all testable. We believe these are the first results on testable EXPTIME-complete problems.

