polynomial algorithms
Recently Published Documents

TOTAL DOCUMENTS: 173 (five years: 24)
H-INDEX: 24 (five years: 1)

Mathematics, 2021, Vol 10 (1), pp. 113
Author(s): Sergey Goncharov, Andrey Nechesov

The construction of computer programs of polynomial complexity requires new techniques and approaches from mathematicians. One such approach is to represent a class of polynomial algorithms as a class of special logical programs. Goncharov and Sviridenko described a logical programming language L0 whose programs are obtained inductively from the set of Δ0-formulas using special terms. Their work proposed the new idea of viewing a term as a program; the computational complexity of such programs is polynomial. Around the same time, a number of other logical languages with similar properties were created. However, the following question remained open: can all polynomial algorithms be described in these languages? This is a long-standing problem, and it was previously unclear how to describe an arbitrary polynomial algorithm in a logical programming language that is not Turing complete. In this paper, special types of terms and formulas are introduced to solve this problem. One of the main contributions is the construction of p-iterative terms that simulate the work of a Turing machine. Using p-iterative terms, the paper shows that the class P coincides with the class of algorithms expressible in L, the language obtained by extending L0 with p-iterative terms. Thus, L is quite expressive and is free of the halting problem that arises in Turing-complete high-level programming languages. For these reasons, the logical language L can be used to create fast and reliable programs. The main limitation of L is that it can only implement algorithms of at most polynomial complexity.
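
As a rough illustration of why cutting off iteration with an explicit polynomial bound keeps every program total and polynomial-time, here is a minimal Python sketch; the names p_iter and close_once and the bound c * n**d are illustrative assumptions, not the paper's formal definition of p-iterative terms.

```python
def p_iter(step, state, n, c=1, d=2):
    """Illustrative 'p-iterative term': apply `step` repeatedly, but never
    more than c * n**d times, so termination and a polynomial running-time
    bound hold by construction (assuming `step` itself runs in polynomial
    time)."""
    for _ in range(c * n ** d):
        nxt = step(state)
        if nxt == state:          # fixed point reached: stop early
            return state
        state = nxt
    return state                  # bound exhausted: termination is forced

# Example: a transitive-closure-style fixed point on a small relation.
edges = {(0, 1), (1, 2), (2, 3)}

def close_once(rel):
    # one round of composing the relation with itself
    return rel | {(a, d) for (a, b) in rel for (c, d) in rel if b == c}

print(sorted(p_iter(close_once, frozenset(edges), n=4)))
```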


Author(s): Rohan Ghuge, Viswanath Nagarajan

We consider the following general network design problem. The input is an asymmetric metric (V, c), a root r ∈ V, a monotone submodular function [Formula: see text], and a budget B. The goal is to find an r-rooted arborescence T of cost at most B that maximizes f(T). Our main result is a simple quasi-polynomial-time [Formula: see text]-approximation algorithm for this problem, where [Formula: see text] is the number of vertices in an optimal solution. As a consequence, we obtain an [Formula: see text]-approximation algorithm for directed (polymatroid) Steiner tree in quasi-polynomial time. We also extend our main result to a setting with additional length bounds at vertices, which leads to improved [Formula: see text]-approximation algorithms for the single-source buy-at-bulk and priority Steiner tree problems. For the usual directed Steiner tree problem, our result matches the best previous approximation ratio but improves significantly on the running time. For polymatroid Steiner tree and single-source buy-at-bulk, our result improves prior approximation ratios by a logarithmic factor. For directed priority Steiner tree, our result seems to be the first nontrivial approximation ratio. Under certain complexity assumptions, our approximation ratios are the best possible (up to constant factors).
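
To make the problem statement concrete, the sketch below only checks feasibility of a candidate solution and evaluates a coverage-style objective (one simple example of a monotone submodular f); it is not the paper's approximation algorithm, and the helper names and example data are invented for illustration.

```python
from itertools import chain

def is_r_arborescence(edges, r):
    """True if `edges` forms an arborescence rooted at r: every non-root
    vertex touched by the edges has exactly one incoming edge and is
    reachable from r along the edges."""
    nodes = set(chain.from_iterable(edges)) | {r}
    indeg = {v: 0 for v in nodes}
    children = {v: [] for v in nodes}
    for u, v in edges:
        indeg[v] += 1
        children[u].append(v)
    if indeg[r] != 0 or any(indeg[v] != 1 for v in nodes if v != r):
        return False
    seen, stack = {r}, [r]          # every node must be reachable from r
    while stack:
        u = stack.pop()
        for v in children[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

def objective(edges, groups):
    """Assumed monotone submodular objective: how many 'groups' are covered
    by the vertices the arborescence reaches."""
    covered = set(chain.from_iterable(edges))
    return sum(1 for g in groups if g & covered)

def feasible_value(edges, cost, r, B, groups):
    total = sum(cost[e] for e in edges)
    if total > B or not is_r_arborescence(edges, r):
        return None
    return objective(edges, groups)

# Tiny example instance.
cost = {('r', 'a'): 2, ('a', 'b'): 1, ('r', 'b'): 4}
groups = [{'a'}, {'b'}, {'z'}]
print(feasible_value([('r', 'a'), ('a', 'b')], cost, 'r', B=4, groups=groups))  # 2
```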


Author(s): Abderrahim Sahli, Jacques Carlier, Aziz Moukrim

This paper deals with the Extended Resource Constrained Project Scheduling Problem (ERCPSP), which is defined by events, nonrenewable resources, and precedence constraints between pairs of events. The availability of a resource is depleted and replenished at the occurrence times of a set of events. The decision problem of ERCPSP consists of determining whether an instance has a feasible schedule or not. When there is only one nonrenewable resource, this problem is equivalent to finding a feasible schedule that minimizes the number of resource units initially required. It generalizes the maximum cumulative cost problem and the two-machine maximum completion time flow-shop problem. In this paper, we consider this problem under specific precedence constraints: parallel chains, series-parallel, and interval-order precedence constraints. For the first two cases, polynomial algorithms based on a linear decomposition of chains are proposed. For the third case, a polynomial algorithm is introduced in which the priority between events is defined using the properties of interval orders.
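
For the single-resource case, the quantity being minimized is easy to state for a fixed event order: the minimum initial stock is the largest shortfall of the running resource level. A minimal Python sketch (the function name and example data are assumptions, not taken from the paper):

```python
from itertools import accumulate

def initial_units_required(deltas):
    """Minimum initial stock of a single nonrenewable resource so that the
    running level never drops below zero when the events occur in the given
    order (negative delta = consumption, positive = replenishment)."""
    prefix = list(accumulate(deltas))
    return max(0, -min(prefix))

# Events in a fixed order: consume 3, consume 2, replenish 4, consume 1.
print(initial_units_required([-3, -2, 4, -1]))  # 5
```

ERCPSP then asks for an event order consistent with the precedence constraints that minimizes this quantity.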


Author(s): Max Klimm, Philipp Warode

We develop algorithms solving parametric flow problems with separable, continuous, piecewise quadratic, and strictly convex cost functions. The parameter to be considered is a common multiplier on the demand of all nodes. Our algorithms compute a family of flows that are each feasible for the respective demand and minimize the costs among the feasible flows for that demand. For single-commodity networks with homogeneous cost functions, our algorithm requires one matrix multiplication for the initialization, a rank-1 update for each nondegenerate step, and the solution of a convex quadratic program for each degenerate step. For nonhomogeneous cost functions, the initialization requires the solution of a convex quadratic program instead. For multi-commodity networks, both the initialization and every step of the algorithm require the solution of a convex program. As each step is mirrored by a breakpoint in the output, this yields output-polynomial algorithms in every case.
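
For a fixed value of the demand multiplier, the single-commodity problem with strictly convex quadratic edge costs is an equality-constrained convex quadratic program, which can be solved through its KKT linear system. The NumPy sketch below (with an assumed three-node example; this is not the authors' parametric algorithm) computes the optimal flow for a few multiplier values; because the example costs are homogeneous, the resulting flows scale linearly with the multiplier.

```python
import numpy as np

def parametric_flow(a, b, A, d, lam):
    """Minimize sum_e (a_e/2 * x_e**2 + b_e * x_e) subject to A @ x = lam * d
    by solving the KKT linear system (a > 0, A with full row rank)."""
    m = len(a)
    Q = np.diag(a)
    k = A.shape[0]
    kkt = np.block([[Q, A.T], [A, np.zeros((k, k))]])
    rhs = np.concatenate([-b, lam * d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:m]        # the flow; sol[m:] are the Lagrange multipliers

# Nodes 0, 1, 2 and arcs (0,1), (1,2), (0,2); node 2's conservation row is
# dropped because it is redundant (demands sum to zero).
A = np.array([[-1.0,  0.0, -1.0],    # node 0 (source)
              [ 1.0, -1.0,  0.0]])   # node 1 (transit)
d = np.array([-1.0, 0.0])            # node 0 sends lam units towards node 2
a = np.array([2.0, 2.0, 1.0])        # strictly convex quadratic coefficients
b = np.array([0.0, 0.0, 0.0])        # homogeneous costs

for lam in (0.5, 1.0, 2.0):
    print(lam, parametric_flow(a, b, A, d, lam).round(3))
```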


Author(s): Evangelos Markakis, Georgios Papasotiropoulos

Our work focuses on a generalization of the classic Minisum approval voting rule, introduced by Barrot and Lang (2016) and referred to as Conditional Minisum (CMS), for multi-issue elections. Although the CMS rule provides much higher levels of expressiveness, this comes at the expense of increased computational complexity. In this work, we further study the question of efficient algorithms for CMS, and we identify bounded treewidth (of an appropriate graph that emerges from the provided ballots) as the necessary and sufficient condition for polynomial algorithms, under common complexity assumptions. Additionally, we investigate the complexity of problems related to the strategic control of such elections by adding or deleting either voters or alternatives. We show that in most variants of these problems, CMS is resistant to control.
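
For contrast, the classic (unconditional) Minisum rule that CMS generalizes is simple to compute: with approval ballots over independent binary issues, minimizing the total Hamming distance to the ballots decomposes issue by issue into a majority vote. A minimal sketch (the function name and tie-breaking rule are assumptions):

```python
def minisum(ballots):
    """Classic Minisum approval voting over binary issues: for each issue,
    choose the value approved by a majority of voters, which minimizes the
    total Hamming distance to the ballots (ties broken towards 1 here)."""
    n_issues = len(ballots[0])
    outcome = []
    for i in range(n_issues):
        ones = sum(ballot[i] for ballot in ballots)
        outcome.append(1 if 2 * ones >= len(ballots) else 0)
    return outcome

ballots = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
]
print(minisum(ballots))  # [1, 1, 1]
```

CMS departs from this by letting a voter's approval on one issue depend on the outcomes of other issues, which is what creates the dependency graph among issues whose treewidth governs tractability.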


Author(s): Yacine Izza, Joao Marques-Silva

Random Forests (RFs) are among the most widely used Machine Learning (ML) classifiers. Even though RFs are not interpretable, there are no dedicated non-heuristic approaches for computing explanations of RFs. Moreover, there is recent work on polynomial algorithms for explaining ML models, including naive Bayes classifiers. Hence, one question is whether finding explanations of RFs can also be solved in polynomial time. This paper answers this question negatively, by proving that computing one PI-explanation of an RF is D^P-hard. Furthermore, the paper proposes a propositional encoding for computing explanations of RFs, thus enabling PI-explanations to be found with a SAT solver. This contrasts with earlier work on explaining boosted trees (BTs) and neural networks (NNs), which requires encodings based on SMT/MILP. Experimental results, obtained on a wide range of publicly available datasets, demonstrate that the proposed SAT-based approach scales to RFs of sizes common in practical applications. Perhaps more importantly, the experimental results demonstrate that, for the vast majority of examples considered, the SAT-based approach proposed in this paper significantly outperforms existing heuristic approaches.
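
A PI-explanation of an instance is a subset-minimal set of feature assignments that, once fixed, forces the classifier's prediction no matter how the remaining features are set. The sketch below recovers one by brute force on a toy majority-vote ensemble over binary features; the names and data are assumptions, and the paper's contribution is precisely a SAT encoding that scales far beyond such enumeration.

```python
from itertools import combinations, product

def entails(clf, instance, fixed, n_features):
    """True if fixing the features in `fixed` to their values in `instance`
    forces clf to predict instance's class for every completion."""
    target = clf(instance)
    free = [i for i in range(n_features) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        point = list(instance)
        for i, v in zip(free, values):
            point[i] = v
        if clf(tuple(point)) != target:
            return False
    return True

def one_pi_explanation(clf, instance, n_features):
    """Smallest-cardinality feature set that entails the prediction (hence in
    particular subset-minimal, i.e. a PI-explanation). Brute force: only
    sensible for tiny feature spaces."""
    for k in range(n_features + 1):
        for fixed in combinations(range(n_features), k):
            if entails(clf, instance, set(fixed), n_features):
                return set(fixed)

# A toy 'forest' of three stumps aggregated by majority vote.
def toy_rf(x):
    votes = [x[0], x[1], x[0] and x[2]]
    return int(sum(votes) >= 2)

print(one_pi_explanation(toy_rf, (1, 1, 0), 3))  # {0, 1}
```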


Author(s): Dan Dadush, László A. Végh, Giacomo Zambelli

We present a new class of polynomial-time algorithms for submodular function minimization (SFM) as well as a unified framework to obtain strongly polynomial SFM algorithms. Our algorithms are based on simple iterative methods for the minimum-norm problem, such as the conditional gradient and Fujishige–Wolfe algorithms. We exhibit two techniques to turn simple iterative methods into polynomial-time algorithms. First, we adapt the geometric rescaling technique, which has recently gained attention in linear programming, to SFM and obtain a weakly polynomial bound [Formula: see text]. Second, we exhibit a general combinatorial black box approach to turn [Formula: see text]-approximate SFM oracles into strongly polynomial exact SFM algorithms. This framework can be applied to a wide range of combinatorial and continuous algorithms, including pseudo-polynomial ones. In particular, we can obtain strongly polynomial algorithms by a repeated application of the conditional gradient or of the Fujishige–Wolfe algorithm. Combined with the geometric rescaling technique, the black box approach provides an [Formula: see text] algorithm. Finally, we show that one of the techniques we develop in the paper can also be combined with the cutting-plane method of Lee et al., yielding a simplified variant of their [Formula: see text] algorithm.
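
As background for the minimum-norm viewpoint, here is a hedged Python sketch of plain conditional gradient (Frank–Wolfe) over the base polytope, with Edmonds' greedy algorithm as the linear minimization oracle. This is only the basic iterative method the paper starts from, not its rescaled or strongly polynomial variants, and the example submodular function is an assumption chosen so that thresholding the approximate minimum-norm point at zero reads off the minimizer.

```python
import numpy as np

def greedy_vertex(f, order):
    """Edmonds' greedy algorithm: one vertex of the base polytope B(f)."""
    q = np.zeros(len(order))
    prefix, prev = [], f(frozenset())
    for i in order:
        prefix.append(i)
        cur = f(frozenset(prefix))
        q[i] = cur - prev
        prev = cur
    return q

def min_norm_point_fw(f, n, iters=5000):
    """Basic conditional gradient for the minimum-norm point in B(f); the
    linear minimization oracle sorts the elements by ascending coordinates."""
    y = greedy_vertex(f, list(range(n)))       # start at an arbitrary vertex
    for t in range(iters):
        q = greedy_vertex(f, list(np.argsort(y)))
        gamma = 2.0 / (t + 2)                  # standard step-size rule
        y = (1 - gamma) * y + gamma * q
    return y

def minimize_submodular(f, n):
    """Threshold the approximate minimum-norm point at zero; reliable here
    because the toy instance has a unique, well-separated minimizer."""
    y = min_norm_point_fw(f, n)
    return {i for i in range(n) if y[i] < -1e-6}, y

# Example: f(S) = (weight of edges cut by S) - (modular weight of S); this is
# submodular, with unique minimizer {0, 3} for the data below.
edges = {(0, 1): 1.0, (1, 2): 4.0, (2, 3): 1.0}
w = [3.0, -2.0, -2.0, 3.0]

def f(S):
    cut = sum(c for (u, v), c in edges.items() if (u in S) != (v in S))
    return cut - sum(w[i] for i in S)

S, y = minimize_submodular(f, 4)
print(S, y.round(2))   # expect {0, 3} and y close to [-2, 1, 1, -2]
```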


2021
Author(s): Mahboubeh Shamsi, Abdolreza Rasouli Kenari, Roghayeh Aghamohammadi

Abstract: On a graph with a negative cost cycle, the shortest path is undefined, but the number of edges of the shortest negative cost cycle can be computed. This quantity is called the Negative Cost Girth (NCG). The NCG problem arises in many optimization settings, such as scheduling and model verification. Existing polynomial algorithms suffer from high computation time and memory consumption. In this paper, a Map-Reduce framework is implemented to find the NCG of a graph. The proposed algorithm runs in O(log k) parallel rounds with O(n^3) work on each Hadoop node, where n and k are the size of the graph and the value of the NCG, respectively. The Hadoop implementation of the algorithm shows that the total execution time is reduced by 50% compared with sequential polynomial algorithms, especially on large networks, as the number of Hadoop nodes increases. The results demonstrate the efficiency of the approach for solving the NCG problem when processing big data in a parallel and distributed way.
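
A sequential Python sketch of the standard idea behind the stated bounds (not the authors' Map-Reduce implementation): with min-plus matrix products, D_m[i][j] is the cheapest walk from i to j using at most m edges, so the NCG is the smallest m for which D_m has a negative diagonal entry; repeated squaring plus a binary-lifting refinement uses O(log k) products of O(n^3) each.

```python
INF = float('inf')

def min_plus(A, B):
    """Min-plus (tropical) matrix product: O(n^3)."""
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for l in range(n):
            if A[i][l] == INF:
                continue
            for j in range(n):
                if B[l][j] == INF:
                    continue
                v = A[i][l] + B[l][j]
                if v < C[i][j]:
                    C[i][j] = v
    return C

def has_negative_diag(A):
    return any(A[i][i] < 0 for i in range(len(A)))

def negative_cost_girth(n, weighted_edges):
    """Number of edges of the shortest negative-cost cycle, or None if the
    graph has no negative cycle."""
    # D[i][j] = cheapest walk using at most one edge; the 0-weight self-loops
    # give the 'at most m edges' semantics under min-plus squaring.
    D = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in weighted_edges:
        D[u][v] = min(D[u][v], w)
    powers, m = [D], 1
    while not has_negative_diag(powers[-1]):
        if m >= n:
            return None                      # no negative cycle at all
        powers.append(min_plus(powers[-1], powers[-1]))
        m *= 2
    # Binary-lifting refinement using the stored power-of-two matrices.
    k = 0
    C = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for p, Dp in reversed(list(enumerate(powers[:-1]))):
        cand = min_plus(C, Dp)
        if not has_negative_diag(cand):
            k, C = k + 2 ** p, cand
    return k + 1

# Cycle 0 -> 1 -> 2 -> 0 with total cost -1, plus a non-negative chord.
edges = [(0, 1, 2), (1, 2, 1), (2, 0, -4), (0, 2, 5)]
print(negative_cost_girth(3, edges))  # 3
```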

