Piecewise Linear Valued CSPs Solvable by Linear Programming Relaxation

2022 ◽  
Vol 23 (1) ◽  
pp. 1-35
Author(s):  
Manuel Bodirsky ◽  
Marcello Mamino ◽  
Caterina Viola

Valued constraint satisfaction problems (VCSPs) are a large class of combinatorial optimisation problems. The computational complexity of VCSPs depends on the set of allowed cost functions in the input. Recently, the computational complexity of all VCSPs for finite sets of cost functions over finite domains has been classified. Many natural optimisation problems, however, cannot be formulated as VCSPs over a finite domain. We initiate the systematic investigation of the complexity of infinite-domain VCSPs with piecewise linear homogeneous (PLH) cost functions. Such VCSPs can be solved in polynomial time if the cost functions are improved by fully symmetric fractional operations of all arities. We show this by reducing the problem to a finite-domain VCSP that can be solved using the basic linear programming relaxation. It follows that VCSPs for submodular PLH cost functions can be solved in polynomial time; in fact, we show that submodular PLH functions form a maximally tractable class of PLH cost functions.
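To make the reduction's target concrete, here is a minimal sketch of the basic linear programming relaxation for a finite-domain VCSP with binary cost functions. The instance encoding, the function name blp_value, and the restriction to binary cost tables are illustrative assumptions of this sketch, not the paper's construction.

```python
import numpy as np
from scipy.optimize import linprog

def blp_value(num_vars, dom, constraints):
    """Optimal value of the basic LP relaxation of a binary VCSP instance.

    constraints: list of ((u, v), cost) with cost a |dom| x |dom| array;
    the LP value is a lower bound on the VCSP optimum.
    """
    d = len(dom)
    n_unary = num_vars * d
    n_total = n_unary + len(constraints) * d * d

    obj = np.zeros(n_total)
    A_eq, b_eq = [], []

    def mu(v, a):                     # column index of mu_v(a)
        return v * d + a

    for ci, ((u, v), cost) in enumerate(constraints):
        base = n_unary + ci * d * d
        obj[base:base + d * d] = np.asarray(cost, dtype=float).ravel()
        # marginal consistency: sum_b mu_c(a, b) = mu_u(a) for every a
        for a in range(d):
            row = np.zeros(n_total)
            row[base + a * d: base + (a + 1) * d] = 1.0
            row[mu(u, a)] = -1.0
            A_eq.append(row); b_eq.append(0.0)
        # marginal consistency: sum_a mu_c(a, b) = mu_v(b) for every b
        for b in range(d):
            row = np.zeros(n_total)
            row[base + b: base + d * d: d] = 1.0
            row[mu(v, b)] = -1.0
            A_eq.append(row); b_eq.append(0.0)

    # each mu_v must be a probability distribution over the domain
    for v in range(num_vars):
        row = np.zeros(n_total)
        row[mu(v, 0): mu(v, 0) + d] = 1.0
        A_eq.append(row); b_eq.append(1.0)

    res = linprog(obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, 1), method="highs")
    return res.fun

# Example: two variables over {0, 1} with one "disagreement costs 1" table;
# the LP (and the VCSP) optimum is 0, achieved by assigning both the same.
cost = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blp_value(2, [0, 1], [((0, 1), cost)]))
```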

Author(s):  
Roberto Barbuti ◽  
Anna Bernasconi ◽  
Roberta Gori ◽  
Paolo Milazzo

Abstract In reaction systems, preimages and nth ancestors are sets of reactants leading to the production of a target set of products in 1 or n steps, respectively. Many computational problems on preimages and ancestors, such as finding all minimum-cardinality nth ancestors, computing their size, or counting them, are intractable. In this paper, we characterize all nth ancestors using a Boolean formula that can be computed in polynomial time. Once simplified, this formula can be exploited to easily solve all preimage and ancestor problems. This allows us to directly relate the difficulty of ancestor problems to the cost of the simplification, so that new insights into computational complexity investigations can be achieved. In particular, we focus on two problems: (i) deciding whether a preimage/nth ancestor exists, and (ii) finding a preimage/nth ancestor of minimal size. Our approach is constructive: it aims at finding classes of reaction systems for which the ancestor problems can be solved in polynomial time, exactly or approximately.
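As a point of reference for the brute-force baseline that the paper's Boolean-formula characterisation improves on, here is a minimal sketch of a reaction system step and an exhaustive minimum-cardinality preimage search. The encoding of reactions as set triples and the exact-production reading of "leading to the target" are assumptions made for illustration.

```python
from itertools import combinations

# A reaction is a triple (R, I, P) of sets: reactants, inhibitors, products.
def result(reactions, state):
    """One step of a reaction system: a reaction fires when all its reactants
    are present and none of its inhibitors is; products accumulate by union."""
    out = set()
    for R, I, P in reactions:
        if R <= state and not (I & state):
            out |= P
    return out

def min_preimages(reactions, background, target):
    """Exhaustive minimum-cardinality preimage search (exponential in the
    background set; the paper's Boolean formula avoids this enumeration)."""
    for k in range(len(background) + 1):
        hits = [set(s) for s in combinations(sorted(background), k)
                if result(reactions, set(s)) == target]
        if hits:
            return hits
    return []

# Toy system over background {a, b, c}: a, in the absence of b, produces c.
rs = [({"a"}, {"b"}, {"c"})]
print(min_preimages(rs, {"a", "b", "c"}, {"c"}))   # [{'a'}]
```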


2021 ◽  
Vol 13 (1) ◽  
pp. 1-32
Author(s):  
Peter Jonsson ◽  
Victor Lagerkvist ◽  
Biman Roy

We study the constraint satisfaction problem (CSP) parameterized by a constraint language Γ (CSP(Γ)) and how the choice of Γ affects its worst-case time complexity. Under the exponential-time hypothesis (ETH), we rule out the existence of subexponential algorithms for finite-domain NP-complete CSP(Γ) problems. This extends to certain infinite-domain CSPs and structurally restricted problems. For CSPs with finite domain D and where all unary relations are available, we identify a relation S_D such that the time complexity of the NP-complete problem CSP({S_D}) is a lower bound for all NP-complete CSPs of this kind. We also prove that the time complexity of CSP({S_D}) strictly decreases when |D| increases (unless the ETH is false) and provide stronger complexity results in the special case when |D| = 3.


1986 ◽  
Vol 9 (3) ◽  
pp. 323-342
Author(s):  
Joseph Y.-T. Leung ◽  
Burkhard Monien

We consider the computational complexity of finding an optimal deadlock recovery. It is known that for an arbitrary number of resource types the problem is NP-hard even when the total cost of deadlocked jobs and the total number of resource units are “small” relative to the number of deadlocked jobs. It is also known that for one resource type the problem is NP-hard when the total cost of deadlocked jobs and the total number of resource units are “large” relative to the number of deadlocked jobs. In this paper we show that for one resource type the problem is solvable in polynomial time when the total cost of deadlocked jobs or the total number of resource units is “small” relative to the number of deadlocked jobs. For fixed m ⩾ 2 resource types, we show that the problem is solvable in polynomial time when the total number of resource units is “small” relative to the number of deadlocked jobs. On the other hand, when the total number of resource units is “large”, the problem becomes NP-hard even when the total cost of deadlocked jobs is “small” relative to the number of deadlocked jobs. The results in the paper, together with previously known ones, give a complete delineation of the complexity of this problem under various assumptions on the input parameters.


2021 ◽  
Vol 11 (15) ◽  
pp. 7007
Author(s):  
Janusz P. Paplinski ◽  
Aleksandr Cariow

This article presents an efficient algorithm for computing a 10-point DFT. The proposed algorithm reduces the number of multiplications at the cost of a slight increase in the number of additions in comparison with the known algorithms. Using a 10-point DFT for harmonic power system analysis can improve accuracy and reduce errors caused by spectral leakage. This paper compares the computational complexity of an L×10^M-point DFT with that of a 2^M-point DFT.
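The paper's specific 10-point factorisation is not reproduced here, but a standard radix-2 split of a 10-point DFT into two 5-point DFTs plus twiddle factors illustrates the kind of decomposition that trades multiplications for structure. This sketch is a textbook Cooley-Tukey decomposition, not the authors' algorithm.

```python
import numpy as np

def dft5(x):
    """Direct 5-point DFT (matrix form, for clarity rather than speed)."""
    n = np.arange(5)
    return np.exp(-2j * np.pi * np.outer(n, n) / 5) @ x

def dft10_radix2(x):
    """10-point DFT as two 5-point DFTs of the even/odd samples plus
    twiddle factors: X[k] = E[k mod 5] + W_10^k * O[k mod 5]."""
    E, O = dft5(x[0::2]), dft5(x[1::2])
    k = np.arange(10)
    return E[k % 5] + np.exp(-2j * np.pi * k / 10) * O[k % 5]

x = np.random.default_rng(0).normal(size=10)
print(np.allclose(dft10_radix2(x), np.fft.fft(x)))   # True
```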


2010 ◽  
Vol 56 (No. 5) ◽  
pp. 201-208 ◽  
Author(s):  
M. Beranová ◽  
D. Martinovičová

Cost functions are mentioned mostly in relation to break-even analysis, where they are presented in linear form. But several different types and forms of cost functions exist. First of all, it is necessary to distinguish between the short-run and the long-run cost function; both are very important tools of managerial decision making, even if each one is used on a different level of management. Several methods of estimating a cost function's parameters are also elaborated in the literature. But all these methods are based on past data taken from financial accounting, while financial accounting is not able to separate fixed and variable costs and, in many companies, is also strongly adjusted to taxation. As a tool supporting managerial decision making, cost functions should provide a vision of the future, where many factors of risk and uncertainty influence economic results. Consequently, these random factors should be considered in the construction of cost functions, especially in the long run. In order to quantify the influence of these risks and uncertainties, the authors propose the application of Bayes' theorem.
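As an illustration of the kind of Bayesian updating the authors advocate, the sketch below updates a discrete prior over an uncertain unit variable cost from a single observed period. The cost model, the numbers, and the normal noise assumption are all hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical long-run cost model C = F + v * q + noise, with known fixed
# cost F and an uncertain unit variable cost v; all numbers are made up.
F = 50_000.0
v_grid = np.array([18.0, 20.0, 22.0, 24.0])    # candidate unit costs
prior = np.array([0.2, 0.4, 0.3, 0.1])         # managerial prior belief
sigma = 5_000.0                                # assumed cost noise (risk)

q_obs, C_obs = 4_000.0, 135_000.0              # one observed period

# Bayes' theorem: posterior is proportional to likelihood times prior
like = norm.pdf(C_obs, loc=F + v_grid * q_obs, scale=sigma)
post = like * prior / (like * prior).sum()
print(dict(zip(v_grid.tolist(), post.round(3))))
```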


Energies ◽  
2021 ◽  
Vol 14 (10) ◽  
pp. 2885
Author(s):  
Daniel Losada ◽  
Ameena Al-Sumaiti ◽  
Sergio Rivera

This article presents the development, simulation, and validation of uncertainty cost functions for a commercial building with climate-dependent controllable loads, located in Florida, USA. For their development, statistical data on the energy consumption of the building in 2016 were used, along with a kernel density estimator to characterize its probabilistic behavior. For validation of the uncertainty cost functions, the Monte Carlo simulation method was used to compare the analytical results with those obtained by simulation. The cost functions showed differential errors of less than 1% compared to the Monte Carlo results. This provides an analytical approach to the uncertainty costs of the building that can be used in the development of optimal energy dispatches, as well as a complementary method for the probabilistic characterization of the stochastic behavior of agents in the electricity sector.
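A minimal sketch of the two ingredients described above, a kernel density estimate of consumption and a Monte Carlo estimate of an expected cost, is shown below. The synthetic load sample and the penalty prices are placeholders: the paper's 2016 building data and its specific uncertainty cost functions are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic hourly consumption sample standing in for the 2016 data (kWh)
load = rng.normal(450, 60, 8760).clip(min=0)

kde = gaussian_kde(load)          # probabilistic model of consumption

# Hypothetical uncertainty cost: penalties for deviating from the schedule
scheduled, c_under, c_over = 450.0, 0.30, 0.12   # $/kWh, assumed

draws = kde.resample(200_000)[0]
cost = np.where(draws > scheduled,
                c_under * (draws - scheduled),   # shortfall bought at a premium
                c_over * (scheduled - draws))    # paid for unused energy
print(f"Monte Carlo expected uncertainty cost: {cost.mean():.2f} $/h")
```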


2012 ◽  
Vol 239-240 ◽  
pp. 1522-1527
Author(s):  
Wen Bo Wu ◽  
Yu Fu Jia ◽  
Hong Xing Sun

The bottleneck assignment (BA) and generalized assignment (GA) problems and their exact solutions are explored in this paper. First, a determinant elimination (DE) method is proposed, based on a discussion of the time and space complexity of the enumeration method for both BA and GA problems. The optimization algorithm for the pre-assignment problem is then discussed, and adjustment and transformation of the cost matrix are adopted to reduce the computational complexity of the DE method. Finally, a synthesis method for both BA and GA problems is presented. Numerical experiments are carried out, and the results indicate that the proposed method is feasible and highly efficient.
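The determinant elimination method itself is not reproduced here, but for orientation the sketch below solves the bottleneck assignment problem exactly by the standard threshold approach: binary search over the distinct cost values, with a bipartite perfect-matching feasibility test at each candidate threshold.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def bottleneck_assignment(cost):
    """Exact min-max assignment value: binary search over the distinct cost
    values, keeping only edges below the threshold and testing feasibility."""
    values = np.unique(cost)
    lo, hi, best = 0, len(values) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        graph = csr_matrix(cost <= values[mid])       # feasible edges only
        match = maximum_bipartite_matching(graph, perm_type='column')
        if (match >= 0).all():                        # perfect matching found
            best, hi = values[mid], mid - 1
        else:
            lo = mid + 1
    return best

cost = np.array([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
print(bottleneck_assignment(cost))    # 2, e.g. pairs (0,1), (1,0), (2,2)
```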


2012 ◽  
Vol 67 (12) ◽  
pp. 665-673 ◽  
Author(s):  
Kourosh Parand ◽  
Mehran Nikarya ◽  
Jamal Amani Rad ◽  
Fatemeh Baharifard

In this paper, a new numerical algorithm is introduced to solve the Blasius equation, a third-order nonlinear ordinary differential equation arising in the problem of two-dimensional steady-state laminar viscous flow over a semi-infinite flat plate. The proposed approach is based on a collocation method using Bessel functions of the first kind. The Bessel function of the first kind is an infinite series defined on ℝ that converges for every x ∈ ℝ. In this work, we solve the problem on the semi-infinite domain without any domain truncation, variable transformation, special basis functions, or transformation of the problem domain to a finite domain. This method reduces the solution of a nonlinear problem to the solution of a system of nonlinear algebraic equations. To illustrate the reliability of this method, we compare its numerical results with some well-known results in order to show the applicability and efficiency of our method.
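For comparison only, a standard way to solve the same Blasius problem numerically is collocation on a truncated domain with scipy's solve_bvp, which relies on precisely the truncation that the paper's Bessel collocation method avoids; the cutoff at η = 10 is an assumption of this sketch.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Blasius: f''' + 0.5 f f'' = 0, f(0) = f'(0) = 0, f'(inf) = 1,
# with the far-field condition imposed at an assumed cutoff eta = 10.
def rhs(eta, y):
    f, fp, fpp = y
    return np.vstack([fp, fpp, -0.5 * f * fpp])

def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0, 10, 50)
y0 = np.zeros((3, eta.size))
y0[1] = eta / 10                     # crude initial guess for f'
sol = solve_bvp(rhs, bc, eta, y0)
print(sol.sol(0.0)[2])               # wall shear f''(0), approx. 0.332
```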


2018 ◽  
Vol 2018 ◽  
pp. 1-14
Author(s):  
José Carlos Ortiz-Bayliss ◽  
Ivan Amaya ◽  
Santiago Enrique Conant-Pablos ◽  
Hugo Terashima-Marín

When solving constraint satisfaction problems (CSPs), it is common practice to rely on heuristics to decide which variable should be instantiated at each stage of the search. This ordering influences the search cost. Even so, and to the best of our knowledge, no earlier work has dealt with how first variable orderings affect the overall cost. In this paper, we explore the cost of finding high-quality orderings of variables within constraint satisfaction problems. We also study differences among the orderings produced by some commonly used heuristics and the way bad first decisions affect the search cost. One of the most important findings of this work confirms the paramount importance of first decisions. Another is the evidence that many of the existing variable ordering heuristics fail to appropriately select the first variable to instantiate. We propose a simple method to improve the early decisions of heuristics; by using it, the performance of these heuristics increases.
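To make the effect concrete, this toy sketch runs the same chronological backtracking solver under two static variable orderings and counts the nodes visited. The tiny CSP and the node-counting convention are illustrative only, not the paper's experimental setup.

```python
def consistent(var, val, assignment, constraints):
    """Check binary constraints ({(u, v): allowed pairs}) against partials."""
    for (u, v), allowed in constraints.items():
        if u == var and v in assignment and (val, assignment[v]) not in allowed:
            return False
        if v == var and u in assignment and (assignment[u], val) not in allowed:
            return False
    return True

def backtrack(domains, constraints, order, assignment, stats):
    stats["nodes"] += 1
    if len(assignment) == len(domains):
        return dict(assignment)
    var = order[len(assignment)]
    for val in domains[var]:
        if consistent(var, val, assignment, constraints):
            assignment[var] = val
            sol = backtrack(domains, constraints, order, assignment, stats)
            if sol is not None:
                return sol
            del assignment[var]
    return None

# Same toy CSP, two static orderings: the first variable chosen changes
# how many nodes the identical solver visits before finding a solution.
doms = {"x0": range(5), "x1": range(2), "x2": range(2)}
cons = {("x0", "x1"): {(4, 0)}, ("x0", "x2"): {(4, 1)}}
for order in (["x0", "x1", "x2"], ["x1", "x2", "x0"]):
    stats = {"nodes": 0}
    backtrack(doms, cons, order, {}, stats)
    print(order, stats["nodes"])
```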


2013 ◽  
Vol 2013 ◽  
pp. 1-8
Author(s):  
Konstantinos Koufos ◽  
Riku Jäntti

The key bottleneck for secondary spectrum usage is the aggregate interference to the primary system receivers due to simultaneous secondary transmissions. Existing power allocation algorithms for multiple secondary transmitters in the TV white space either fail to protect the TV service in all cases or allocate extremely low power levels to some of the transmitters. In this paper, we propose a power allocation algorithm that treats the secondary transmitters equally and is able to protect the TV service in all cases. When the number of secondary transmitters is large, the computational complexity of the proposed algorithm becomes high as well. We show how the algorithm can be modified to reduce its computational complexity at the cost of a negligible performance loss. The modified algorithm could permit a spectrum allocation database to allocate near-optimal transmit power levels to tens of thousands of secondary transmitters in real time. In addition, we describe how the modified algorithm could be applied to allow decentralized power allocation for mobile secondary transmitters. In that case, the proposed algorithm outperforms the existing algorithms because it reduces the communication signalling overhead between mobile secondary transmitters and the spectrum allocation database.
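The proposed algorithm itself is not reproduced here, but the equal-treatment idea can be illustrated in its simplest form: the largest common transmit power that keeps the aggregate interference within the limit at every TV protection test point. The gains and limits below are made-up numbers.

```python
import numpy as np

def equal_power(g, I_max):
    """Largest common transmit power P such that the aggregate interference
    sum_j g[i, j] * P stays within the limit I_max[i] at every test point i."""
    return float((I_max / g.sum(axis=1)).min())

# Made-up gains from 3 secondary transmitters to 2 TV protection test points
g = np.array([[1e-7, 5e-8, 2e-8],
              [3e-8, 8e-8, 6e-8]])
I_max = np.array([1e-3, 1e-3])       # assumed aggregate interference limits
print(equal_power(g, I_max))         # common power, same units as I_max / g
```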

