Optimizing Ratio of Monotone Set Functions

Author(s):  
Chao Qian ◽  
Jing-Cheng Shi ◽  
Yang Yu ◽  
Ke Tang ◽  
Zhi-Hua Zhou

This paper considers the problem of minimizing the ratio of two set functions, i.e., $f/g$. Previous work assumed both functions to be monotone and submodular, whereas we consider a more general situation where $g$ is not necessarily submodular. We show that the greedy approach GreedRatio, as a fixed-time algorithm, achieves a $\frac{|X^*|}{(1+(|X^*|-1)(1-\kappa_f))\gamma(g)}$ approximation ratio, which also improves the previous bound for submodular $g$. If more time can be spent, we present the PORM algorithm, an anytime randomized iterative approach that minimizes $f$ and $-g$ simultaneously. We show that PORM within reasonable time achieves the same general approximation guarantee as GreedRatio, but can obtain better solutions in illustrative cases and applications.
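
The following is a minimal Python sketch of the GreedRatio idea: repeatedly add the element with the smallest marginal ratio $(f(S \cup \{v\}) - f(S))/(g(S \cup \{v\}) - g(S))$ and return the best prefix encountered. The callables `f` and `g` and the toy coverage instance are illustrative assumptions, not the paper's experimental setup.

```python
def greed_ratio(ground_set, f, g):
    """Greedy minimization of f(S)/g(S) over nonempty subsets (sketch)."""
    S, best_S, best_ratio = set(), None, float("inf")
    remaining = set(ground_set)
    while remaining:
        def marginal_ratio(v):
            df = f(S | {v}) - f(S)
            dg = g(S | {v}) - g(S)
            return df / dg if dg > 0 else float("inf")
        v = min(remaining, key=marginal_ratio)
        if marginal_ratio(v) == float("inf"):
            break  # no remaining element increases g
        S = S | {v}
        remaining.discard(v)
        if f(S) / g(S) < best_ratio:
            best_ratio, best_S = f(S) / g(S), set(S)
    return best_S, best_ratio

if __name__ == "__main__":
    # Toy instance: modular cost f and a coverage-style monotone g.
    items = {"a": (3, {1, 2}), "b": (1, {2, 3}), "c": (2, {4})}
    f = lambda S: sum(items[v][0] for v in S)
    g = lambda S: len(set().union(*(items[v][1] for v in S))) if S else 0
    print(greed_ratio(items.keys(), f, g))  # -> ({'b'}, 0.5)
```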

Author(s):  
Chao Qian ◽  
Jing-Cheng Shi ◽  
Yang Yu ◽  
Ke Tang

This paper considers the subset selection problem with a monotone objective function and a monotone cost constraint, which relaxes the submodularity assumption of previous studies. We first show that the approximation ratio of the generalized greedy algorithm is $\frac{\alpha}{2}(1 - \frac{1}{e^{\alpha}})$ (where $\alpha$ is the submodularity ratio); we then propose POMC, an anytime randomized iterative approach that can utilize more time to find better solutions than the generalized greedy algorithm. We show that POMC obtains the same general approximation guarantee as the generalized greedy algorithm, but can achieve better solutions in illustrative cases and applications.
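
A hedged sketch of the generalized greedy rule described above: pick the element with the best marginal-gain-to-marginal-cost ratio that still fits the budget, then return the better of the greedy set and the best feasible singleton (a standard safeguard). The modular `f`, `c`, and the budget in the demo are illustrative assumptions.

```python
def generalized_greedy(ground_set, f, c, budget):
    """Greedy maximization of f(S) subject to c(S) <= budget (sketch)."""
    S = set()
    candidates = set(ground_set)
    while candidates:
        def density(v):
            dc = c(S | {v}) - c(S)
            return (f(S | {v}) - f(S)) / dc if dc > 0 else float("inf")
        v = max(candidates, key=density)
        candidates.discard(v)
        if c(S | {v}) <= budget:
            S.add(v)
    # Safeguard: compare against the best feasible singleton.
    singles = [{v} for v in ground_set if c({v}) <= budget]
    best_single = max(singles, key=f, default=set())
    return S if f(S) >= f(best_single) else best_single

if __name__ == "__main__":
    costs = {"a": 2, "b": 3, "c": 4}
    vals = {"a": 3, "b": 5, "c": 6}
    f = lambda S: sum(vals[v] for v in S)   # modular objective (illustrative)
    c = lambda S: sum(costs[v] for v in S)  # modular cost (illustrative)
    print(generalized_greedy(costs.keys(), f, c, budget=5))
```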


Author(s):  
Chao Qian ◽  
Chao Feng ◽  
Ke Tang

The problem of selecting a sequence of items from a universe so as to maximize a given objective function arises in many real-world applications. In this paper, we propose an anytime randomized iterative approach, POSeqSel, which maximizes the given objective function and minimizes the sequence length simultaneously. We prove that for any previously studied objective function, POSeqSel using a reasonable time can always match or improve the best-known approximation guarantee. Empirical results exhibit the superior performance of POSeqSel.
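
The Pareto-optimization pattern behind POSeqSel can be sketched as follows: maintain an archive of sequences that are mutually non-dominated with respect to (objective value, sequence length), and in each iteration mutate a randomly chosen archived sequence by inserting or deleting one item. The toy objective and the mutation scheme below are assumptions for illustration; the paper's operators and analysis are more refined.

```python
import random

def dominates(a, b, obj):
    """a dominates b: no worse in objective and length, strictly better in one."""
    return (obj(a) >= obj(b) and len(a) <= len(b)
            and (obj(a) > obj(b) or len(a) < len(b)))

def po_seq_sel(universe, obj, iterations=2000, seed=0):
    rng = random.Random(seed)
    archive = [[]]                        # start from the empty sequence
    for _ in range(iterations):
        child = list(rng.choice(archive))
        if child and rng.random() < 0.5:  # delete a random position...
            del child[rng.randrange(len(child))]
        else:                             # ...or insert a random item
            child.insert(rng.randrange(len(child) + 1), rng.choice(universe))
        if child not in archive and \
                not any(dominates(a, child, obj) for a in archive):
            archive = [a for a in archive if not dominates(child, a, obj)]
            archive.append(child)
    return max(archive, key=obj)

if __name__ == "__main__":
    # Toy objective: position-discounted reward for each new distinct item.
    universe = list(range(10))
    def obj(seq):
        seen, total = set(), 0.0
        for pos, x in enumerate(seq):
            if x not in seen:
                total += 1.0 / (1 + pos)
                seen.add(x)
        return total
    best = po_seq_sel(universe, obj)
    print(len(best), round(obj(best), 3))
```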


2020 ◽  
Vol 34 (04) ◽  
pp. 3267-3274
Author(s):  
Chao Bian ◽  
Chao Feng ◽  
Chao Qian ◽  
Yang Yu

In this paper, we study the problem of selecting a subset from a ground set to maximize a monotone objective function $f$ such that a monotone cost function $c$ is bounded by an upper limit. State-of-the-art algorithms include the generalized greedy algorithm and POMC. The former is an efficient fixed-time algorithm, but its performance is limited by its greedy nature. The latter is an anytime algorithm that can find better subsets using more time, but it lacks a polynomial-time approximation guarantee. We propose a new anytime algorithm, EAMC, which employs a simple evolutionary algorithm to optimize a surrogate objective integrating $f$ and $c$. We prove that EAMC achieves the best-known approximation guarantee in polynomial expected running time. Experimental results on the applications of maximum coverage, influence maximization and sensor placement show the excellent performance of EAMC.
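
As a rough illustration of the surrogate idea, the sketch below runs a (1+1)-style evolutionary algorithm on bitstrings and scores candidates by a penalized surrogate that rewards $f$ while smoothly discounting budget violations. Both the surrogate form and the toy instance are assumptions; EAMC's actual surrogate and its population structure (one solution kept per cost range) are more involved.

```python
import math
import random

def surrogate(f_val, c_val, budget):
    # Assumed illustrative surrogate: reward f, smoothly discount
    # solutions whose cost exceeds the budget.
    return f_val / (1.0 + math.exp(c_val - budget))

def simple_eamc(n, f, c, budget, iterations=5000, seed=0):
    rng = random.Random(seed)
    x = [0] * n                                # start from the empty subset
    best_x, best_val = None, float("-inf")
    for _ in range(iterations):
        # standard bitwise mutation: flip each bit with probability 1/n
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        if surrogate(f(y), c(y), budget) >= surrogate(f(x), c(x), budget):
            x = y
        if c(x) <= budget and f(x) > best_val:  # track best feasible subset
            best_x, best_val = list(x), f(x)
    return best_x, best_val

if __name__ == "__main__":
    # Toy instance: concave value of the chosen weights, modular cost.
    n, budget = 8, 9
    weights = list(range(1, n + 1))
    f = lambda x: sum(w for w, bit in zip(weights, x) if bit) ** 0.5
    c = lambda x: 3 * sum(x)
    print(simple_eamc(n, f, c, budget))
```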


2020 ◽  
Vol 92 (1) ◽  
pp. 107-132 ◽  
Author(s):  
Britta Schulze ◽  
Michael Stiglmayr ◽  
Luís Paquete ◽  
Carlos M. Fonseca ◽  
David Willems ◽  
...  

Abstract In this article, we introduce the rectangular knapsack problem as a special case of the quadratic knapsack problem, which consists of maximizing the product of two separate knapsack profits subject to a cardinality constraint. We propose a polynomial-time algorithm for this problem that provides a constant approximation ratio of 4.5. Our experimental results on a large number of artificially generated problem instances show that the average ratio attained in practice is far from the theoretical guarantee, i.e., considerably better. In addition, we suggest refined versions of this approximation algorithm with the same time complexity and approximation ratio that lead to even better experimental results.
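
A hedged sketch in the spirit of such split-and-greedy schemes: for each split $k_1 + k_2 = k$, take the $k_1$ best items by the first profit, fill up with the best remaining items by the second profit, and keep the split with the largest profit product. This is an illustrative heuristic for the rectangular knapsack problem, not a verbatim transcription of the published 4.5-approximation algorithm.

```python
def rectangular_knapsack(a, b, k):
    """Pick k items (approximately) maximizing (sum of a) * (sum of b)."""
    n = len(a)
    by_a = sorted(range(n), key=lambda i: -a[i])
    by_b = sorted(range(n), key=lambda i: -b[i])
    best_set, best_val = None, -1.0
    for k1 in range(k + 1):
        chosen = list(by_a[:k1])
        taken = set(chosen)
        for i in by_b:                 # fill up with best remaining b-items
            if len(chosen) == k:
                break
            if i not in taken:
                chosen.append(i)
                taken.add(i)
        val = sum(a[i] for i in chosen) * sum(b[i] for i in chosen)
        if val > best_val:
            best_set, best_val = chosen, val
    return best_set, best_val

if __name__ == "__main__":
    a = [5, 1, 4, 2, 3]
    b = [1, 5, 2, 4, 3]
    print(rectangular_knapsack(a, b, 2))  # -> ([0, 1], 36)
```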


2005 ◽  
Vol DMTCS Proceedings vol. AE,... (Proceedings) ◽  
Author(s):  
Gordana Manić ◽  
Yoshiko Wakabayashi

We consider the problems of finding the maximum number of vertex-disjoint triangles (VTP) and edge-disjoint triangles (ETP) in a simple graph. Both problems are NP-hard. The algorithm with the best approximation guarantee known so far for these problems has ratio $3/2 + \varepsilon$, a result that follows from a more general algorithm for set packing obtained by Hurkens and Schrijver in 1989. We present improvements on the approximation ratio for restricted cases of VTP and ETP that are known to be APX-hard: we give an approximation algorithm for VTP on graphs with maximum degree 4 with ratio slightly less than 1.2, and for ETP on graphs with maximum degree 5 with ratio 4/3. We also present an exact linear-time algorithm for VTP on the class of indifference graphs.
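
For orientation, the sketch below implements the trivial greedy baseline for VTP: scan all triangles and take any whose vertices are still unused, which yields a maximal packing (a factor-3 approximation, since each chosen triangle can block at most three optimal ones). The paper's improved ratios rely on considerably more involved local-search arguments; the edge-list input format is an assumption.

```python
from itertools import combinations

def greedy_vtp(vertices, edges):
    """Maximal vertex-disjoint triangle packing via a simple greedy scan."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    used, packing = set(), []
    for u, v, w in combinations(sorted(vertices), 3):
        if {u, v, w} & used:
            continue
        if v in adj[u] and w in adj[u] and w in adj[v]:
            packing.append((u, v, w))
            used |= {u, v, w}
    return packing

if __name__ == "__main__":
    # Two edge-sharing triangles: the greedy can pack only one of them.
    edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
    print(greedy_vtp(range(4), edges))  # -> [(0, 1, 2)]
```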


Author(s):  
Xingnan Wen ◽  
Sitian Qin

Abstract Multi-agent systems are widely studied due to their ability to solve complex tasks in many fields, especially in deep reinforcement learning. Recently, the distributed optimization problem over multi-agent systems has drawn much attention because of its extensive applications. This paper presents a projection-based continuous-time algorithm for solving convex distributed optimization problems with equality and inequality constraints over multi-agent systems. The distinguishing feature of such problems lies in the fact that each agent, with a private local cost function and constraints, can only communicate with its neighbors. All agents aim to cooperatively optimize a sum of local cost functions. With the aid of a penalty method, the states of the proposed algorithm enter the equality constraint set in fixed time and ultimately converge to an optimal solution of the objective problem. In contrast to some existing approaches, the continuous-time algorithm has fewer state variables, and the proof of consensus is incorporated into the proof of convergence. Finally, two simulations are given to show the viability of the algorithm.
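
An Euler-discretized sketch of the consensus-plus-gradient pattern behind such continuous-time dynamics: each agent follows its private local gradient while being pulled toward its neighbors' states. The projection and penalty terms that handle the equality and inequality constraints in the paper are omitted here for brevity, and the quadratic instance is an illustrative assumption; with a large consensus gain the states only approximately reach consensus, whereas the paper's dynamics converge exactly.

```python
def distributed_gradient(local_grads, neighbors, x0,
                         gain=20.0, step=0.01, iters=5000):
    """Euler discretization of x_i' = gain*sum_j (x_j - x_i) - grad_i(x_i)."""
    x = list(x0)
    for _ in range(iters):
        x = [x[i] + step * (gain * sum(x[j] - x[i] for j in neighbors[i])
                            - local_grads[i](x[i]))
             for i in range(len(x))]
    return x

if __name__ == "__main__":
    # Three agents on a line graph cooperatively minimize sum_i (x - t_i)^2;
    # the consensus optimum is the mean of the targets t_i.
    targets = [1.0, 2.0, 6.0]
    grads = [lambda x, t=t: 2.0 * (x - t) for t in targets]
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    print(distributed_gradient(grads, neighbors, x0=[0.0, 0.0, 0.0]))
```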


Author(s):  
Felix Happach

Abstract We consider a variant of the NP-hard problem of assigning jobs to machines to minimize the completion time of the last job. Usually, precedence constraints are given by a partial order on the set of jobs, and each job requires all its predecessors to be completed before it can start. In this paper, we consider a different type of precedence relation that has not been discussed as extensively and is called OR-precedence. In order for a job to start, we require that at least one of its predecessors is completed, in contrast to all its predecessors. Additionally, we assume that each job has a release date before which it must not start. We prove that a simple List Scheduling algorithm due to Graham (Bell Syst Tech J 45(9):1563–1581, 1966) has an approximation guarantee of 2 and show that obtaining an approximation factor of $4/3 - \varepsilon$ is NP-hard. Further, we present a polynomial-time algorithm that solves the problem to optimality if preemptions are allowed. The latter result is in contrast to classical precedence constraints, where the preemptive variant is already NP-hard. Our algorithm generalizes previous results for unit processing time jobs subject to OR-precedence constraints, but without release dates. The running time of our algorithm is $O(n^2)$ for arbitrary processing times, and it can be reduced to $O(n)$ for unit processing times, where $n$ is the number of jobs. The performance guarantees presented here match the best-known ones for special cases where classical precedence constraints and OR-precedence constraints coincide.
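
A hedged sketch of List Scheduling adapted to OR-precedence and release dates: whenever a machine is idle, start any job that is released and has no predecessors or at least one completed predecessor. The instance encoding is an assumption made for illustration, and the simulation presumes a feasible instance.

```python
import heapq

def list_schedule(jobs, m):
    """jobs: {name: (proc_time, release, or_preds)} -> (start times, makespan).

    Assumes a feasible instance: every job is eventually released and has
    an empty predecessor set or some completable OR-predecessor.
    """
    pending = dict(jobs)
    completed, starts = set(), {}
    running = []                       # min-heap of (finish_time, job)
    free, time, makespan = m, 0.0, 0.0
    while pending or running:
        # start every currently available job while a machine is idle
        for j in sorted(pending):
            if free == 0:
                break
            p, r, preds = pending[j]
            if r <= time and (not preds or preds & completed):
                del pending[j]
                starts[j] = time
                heapq.heappush(running, (time + p, j))
                free -= 1
        # jump to the next event: a job completion or a future release
        nxt = [t for t, _ in running[:1]]
        nxt += [r for (_, r, _) in pending.values() if r > time]
        time = min(nxt)
        while running and running[0][0] <= time:
            t, j = heapq.heappop(running)
            completed.add(j)
            makespan = max(makespan, t)
            free += 1
    return starts, makespan

if __name__ == "__main__":
    # Job "b" may start as soon as EITHER "a" or "c" completes.
    jobs = {"a": (3, 0, frozenset()),
            "c": (1, 0, frozenset()),
            "b": (2, 0, frozenset({"a", "c"}))}
    print(list_schedule(jobs, m=2))    # b starts at time 1, makespan 3
```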


Author(s):  
Chunying Ren ◽  
Dachuan Xu ◽  
Donglei Du ◽  
Min Li

Abstract In the $k$-means problem with penalties, we are given a data set ${\cal D} \subseteq \mathbb{R}^\ell$ of $n$ points, where each point $j \in {\cal D}$ is associated with a penalty cost $p_j$, and an integer $k$. The goal is to choose a center set $CS \subseteq \mathbb{R}^\ell$ with $|CS| \le k$ and a penalized subset ${\cal D}_p \subseteq {\cal D}$ so as to minimize the sum of the total squared distance from the points in ${\cal D} \setminus {\cal D}_p$ to $CS$ and the total penalty cost of the points in ${\cal D}_p$, namely $\sum_{j \in {\cal D} \setminus {\cal D}_p} d^2(j, CS) + \sum_{j \in {\cal D}_p} p_j$. We employ the primal-dual technique to give a pseudo-polynomial time algorithm with an approximation ratio of $(6.357+\varepsilon)$ for the $k$-means problem with penalties, improving the previous best approximation ratio of $(19.849+\varepsilon)$ for this problem given by Feng et al. in Proceedings of FAW (2019).
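
To make the objective concrete, the sketch below runs a simple Lloyd-style local search for the penalized objective: each point is either served by its nearest center (paying its squared distance) or left out (paying its penalty $p_j$). This illustrates the cost function only; it is not the paper's primal-dual $(6.357+\varepsilon)$-approximation algorithm, and the toy data are assumptions.

```python
import random

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def penalized_kmeans(points, penalties, k, iters=50, seed=0):
    """Lloyd-style heuristic for k-means with penalties (sketch)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for j, pt in enumerate(points):
            d, i = min((sq_dist(pt, c), i) for i, c in enumerate(centers))
            if d <= penalties[j]:          # cheaper to serve than to pay p_j
                clusters[i].append(pt)
        for i, cl in enumerate(clusters):  # recompute centroids
            if cl:
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    cost = sum(min(min(sq_dist(pt, c) for c in centers), penalties[j])
               for j, pt in enumerate(points))
    return centers, cost

if __name__ == "__main__":
    pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9), (50.0, 50.0)]
    pens = [10.0, 10.0, 10.0, 10.0, 1.0]   # the outlier is cheap to pay off
    print(penalized_kmeans(pts, pens, k=2))
```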


2004 ◽  
Vol 14 (01n02) ◽  
pp. 85-104 ◽  
Author(s):  
XIAODONG WU ◽  
DANNY Z. CHEN ◽  
JAMES J. MASON ◽  
STEVEN R. SCHMID

Data clustering is an important theoretical topic and a sharp tool for various applications. It is a task frequently arising in geometric computing. The main objective of data clustering is to partition a given data set into clusters such that the data items within the same cluster are "more" similar to each other with respect to certain measures. In this paper, we study the pairwise data clustering problem with pairwise similarity/dissimilarity measures that need not satisfy the triangle inequality. Using a criterion called the minimum normalized cut, we model the general pairwise data clustering problem as a graph partition problem. The graph partition problem based on minimizing the normalized cut is known to be NP-hard. For an undirected weighted graph of $n$ vertices, we present a $((4+o(1))\ln n)$-approximation polynomial-time algorithm for the minimum normalized cut problem; this is the first provably good polynomial-time approximation algorithm for the problem. We also give a more efficient algorithm for this problem by sacrificing the approximation ratio slightly. Further, our scheme achieves a $((2+o(1))\ln n)$-approximation polynomial-time algorithm for computing the sparsest cuts in edge-weighted and vertex-weighted undirected graphs, improving the previously best known approximation ratio by a constant factor. Some applications and implementation work of our approximation normalized cut algorithms are also discussed.
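
The quantity being approximated can be stated compactly: for a bipartition $(S, \bar{S})$ of the vertex set $V$, the normalized cut is $\mathrm{cut}(S,\bar{S})/\mathrm{assoc}(S,V) + \mathrm{cut}(S,\bar{S})/\mathrm{assoc}(\bar{S},V)$. The helper below evaluates it for a dictionary-encoded weighted graph; the encoding is an assumption made for illustration.

```python
def normalized_cut(weights, S, V):
    """weights: {(u, v): w} with u < v; S: one side of the bipartition."""
    S, T = set(S), set(V) - set(S)
    def w(u, v):
        return weights.get((min(u, v), max(u, v)), 0.0)
    cut = sum(w(u, v) for u in S for v in T)
    # assoc(A, V) counts internal edges of A twice, cut edges once,
    # matching the standard definition sum_{u in A, t in V} w(u, t).
    assoc_S = sum(w(u, v) for u in S for v in V if u != v)
    assoc_T = sum(w(u, v) for u in T for v in V if u != v)
    return cut / assoc_S + cut / assoc_T

if __name__ == "__main__":
    V = [0, 1, 2, 3]
    weights = {(0, 1): 2.0, (2, 3): 2.0, (1, 2): 0.1}
    print(normalized_cut(weights, S={0, 1}, V=V))  # small value: a good cut
```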


Risks ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 105
Author(s):  
Lane P. Hughston ◽  
Leandro Sánchez-Betancourt

In the information-based pricing framework of Brody, Hughston & Macrina, the market filtration $\{\mathcal{F}_t\}_{t \ge 0}$ is generated by an information process $\{\xi_t\}_{t \ge 0}$ defined in such a way that at some fixed time $T$ an $\mathcal{F}_T$-measurable random variable $X_T$ is "revealed". A cash flow $H_T$ is taken to depend on the market factor $X_T$, and one considers the valuation of a financial asset that delivers $H_T$ at time $T$. The value of the asset $S_t$ at any time $t \in [0,T)$ is the discounted conditional expectation of $H_T$ with respect to $\mathcal{F}_t$, where the expectation is under the risk-neutral measure and the interest rate is constant. Then $S_{T^-} = H_T$, and $S_t = 0$ for $t \ge T$. In the general situation one has a countable number of cash flows, and each cash flow can depend on a vector of market factors, each associated with an information process. In the present work we introduce a new process, which we call the normalized variance-gamma bridge. We show that the normalized variance-gamma bridge and the associated gamma bridge are jointly Markovian. From these processes, together with the specification of a market factor $X_T$, we construct a so-called variance-gamma information process. The filtration is then taken to be generated by the information process together with the gamma bridge. We show that the resulting extended information process has the Markov property and hence can be used to develop pricing models for a variety of different financial assets, several examples of which are discussed in detail.
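
A small simulation sketch of the gamma bridge, one of the paper's two building blocks: simulate independent gamma increments on a grid and normalize by the terminal value, so the bridge runs from 0 at time 0 to 1 at time $T$. The grid size and rate parameter are illustrative choices; the paper's normalized variance-gamma bridge adds a Brownian component and a specific normalization not reproduced here.

```python
import random

def gamma_bridge(T=1.0, steps=100, rate=5.0, seed=0):
    """Simulate a gamma subordinator on a grid and normalize to a bridge."""
    rng = random.Random(seed)
    dt = T / steps
    # gamma subordinator: independent Gamma(rate*dt, 1) increments
    increments = [rng.gammavariate(rate * dt, 1.0) for _ in range(steps)]
    path, total = [0.0], 0.0
    for inc in increments:
        total += inc
        path.append(total)
    return [x / total for x in path]      # gamma bridge: ends exactly at 1

if __name__ == "__main__":
    bridge = gamma_bridge()
    print(bridge[0], round(bridge[50], 3), bridge[-1])  # 0.0, ~mid, 1.0
```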

