Greedy Algorithm for Maximization of Non-submodular Functions Subject to Knapsack Constraint

Author(s):  
Zhenning Zhang ◽  
Bin Liu ◽  
Yishui Wang ◽  
Dachuan Xu ◽  
Dongmei Zhang


Author(s):
Jing Tang ◽  
Xueyan Tang ◽  
Andrew Lim ◽  
Kai Han ◽  
Chongshou Li ◽  
...  

Monotone submodular maximization with a knapsack constraint is NP-hard. Various approximation algorithms have been devised to address this optimization problem. In this paper, we revisit the widely known modified greedy algorithm. First, we show that this algorithm can achieve an approximation factor of 0.405, which significantly improves the known factors of 0.357 given by Wolsey and $(1-1/e)/2 \approx 0.316$ given by Khuller et al. More importantly, our analysis closes a gap in Khuller et al.'s proof of the extensively cited approximation factor of $(1-1/\sqrt{e}) \approx 0.393$, clarifying a long-standing misconception on this issue. Second, we enhance the modified greedy algorithm to derive a data-dependent upper bound on the optimum. We empirically demonstrate the tightness of our upper bound with a real-world application. The bound enables us to obtain a data-dependent ratio, typically much higher than 0.405, between the solution value of the modified greedy algorithm and the optimum. It can also be used to significantly improve the efficiency of algorithms such as branch and bound.
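
For concreteness, the modified greedy algorithm discussed above admits a compact implementation: run the cost-effectiveness greedy within the budget, then return the better of the greedy solution and the best feasible singleton. The sketch below is a minimal illustration under assumed interfaces (a monotone submodular value oracle f over sets, positive costs); the function and variable names are ours, not the paper's.

    # Minimal sketch of the modified greedy for monotone submodular
    # maximization under a knapsack constraint. Assumptions: f is a
    # monotone submodular value oracle on sets; all costs are positive.
    def modified_greedy(ground_set, f, cost, budget):
        # Phase 1: cost-effectiveness greedy, skipping unaffordable items.
        selected, spent = set(), 0.0
        remaining = set(ground_set)
        while True:
            best, best_ratio = None, -1.0
            for e in remaining:
                if spent + cost[e] <= budget:
                    gain = f(selected | {e}) - f(selected)
                    if gain / cost[e] > best_ratio:
                        best, best_ratio = e, gain / cost[e]
            if best is None:
                break  # nothing affordable remains
            selected.add(best)
            spent += cost[best]
            remaining.remove(best)
        # Phase 2: compare with the best feasible singleton; this guard is
        # what lifts the plain greedy to a constant-factor guarantee.
        feasible = [e for e in ground_set if cost[e] <= budget]
        best_single = max(feasible, key=lambda e: f({e}), default=None)
        if best_single is not None and f({best_single}) > f(selected):
            return {best_single}
        return selected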


2013 ◽  
Vol 30 (4) ◽  
pp. 1107-1124
Author(s):  
Zengfu Wang ◽  
Bill Moran ◽  
Xuezhi Wang ◽  
Quan Pan

2020 ◽  
Vol 34 (03) ◽  
pp. 2611-2620
Author(s):  
Abir De ◽  
Paramita Koley ◽  
Niloy Ganguly ◽  
Manuel Gomez-Rodriguez

Decisions are increasingly taken by both humans and machine learning models. However, machine learning models are currently trained for full automation: they are not aware that some of the decisions may still be taken by humans. In this paper, we take a first step towards the development of machine learning models that are optimized to operate under different automation levels. More specifically, we first introduce the problem of ridge regression under human assistance and show that it is NP-hard. Then, we derive an alternative representation of the corresponding objective function as a difference of nondecreasing submodular functions. Building on this representation, we further show that the objective is nondecreasing and satisfies α-submodularity, a recently introduced notion of approximate submodularity. These properties allow a simple and efficient greedy algorithm to enjoy approximation guarantees when solving the problem. Experiments on synthetic and real-world data from two important applications, medical diagnosis and content moderation, demonstrate that the greedy algorithm beats several competitive baselines.
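
The guarantee above applies to the standard cardinality-constrained greedy, so a generic sketch suffices to convey the method. Below, g is an assumed set-function oracle (e.g., the negative regularized loss when the model handles the chosen samples and humans handle the rest); the names and interface are illustrative, not taken from the paper.

    # Minimal sketch of the cardinality-constrained greedy whose guarantee
    # degrades gracefully with the alpha-submodularity of g (assumed oracle).
    def greedy(ground_set, g, k):
        S, candidates = set(), set(ground_set)
        for _ in range(min(k, len(ground_set))):
            # Take the element with the largest marginal gain.
            best = max(candidates, key=lambda e: g(S | {e}) - g(S))
            S.add(best)
            candidates.remove(best)
        return S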


Algorithmica ◽  
2019 ◽  
Vol 82 (4) ◽  
pp. 1006-1032
Author(s):  
Chien-Chung Huang ◽  
Naonori Kakimura ◽  
Yuichi Yoshida

2021 ◽  
Author(s):  
Saeed Alaei ◽  
Ali Makhdoumi ◽  
Azarakhsh Malekian

Motivated by applications in online advertising, we consider a class of maximization problems where the objective is a function of the sequence of actions and the running duration of each action. For these problems, we introduce the concepts of sequence-submodularity and sequence-monotonicity, which extend the notions of submodularity and monotonicity from functions defined over sets to functions defined over sequences. We establish that if the objective function is sequence-submodular and sequence-nondecreasing, then there exists a greedy algorithm that achieves [Formula: see text] of the optimal solution. We apply our algorithm and analysis to two applications in online advertising: online ad allocation and query rewriting. We first show that both problems can be formulated as maximizing nondecreasing sequence-submodular functions. We then apply our framework to these two problems, leading to simple greedy approaches with performance guarantees. In particular, for the online ad allocation problem, the performance of our algorithm is [Formula: see text], which matches the best known existing performance, and for the query rewriting problem, the performance of our algorithm is [Formula: see text], which improves on the best known existing performance in the literature. This paper was accepted by Chung Piaw Teo, optimization.
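
Because the objective is defined over sequences rather than sets, the greedy step appends to the current prefix instead of adding to an unordered solution. The sketch below illustrates that pattern under assumed interfaces (f is a value oracle over action sequences; actions may repeat); the names are ours, not the paper's.

    # Minimal sketch of a sequence greedy for a sequence-nondecreasing,
    # sequence-submodular objective f (assumed oracle over lists of actions).
    def sequence_greedy(actions, f, horizon):
        seq = []
        for _ in range(horizon):
            # Order matters: gains are measured by appending to the prefix.
            gains = {a: f(seq + [a]) - f(seq) for a in actions}
            best = max(gains, key=gains.get)
            if gains[best] <= 0:
                break  # no appended action improves the objective
            seq.append(best)
        return seq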


2019 ◽  
Vol 12 (01) ◽  
pp. 2050007 ◽  
Author(s):  
Shuyang Gu ◽  
Ganquan Shi ◽  
Weili Wu ◽  
Changhong Lu

We study the problem of maximizing a non-monotone diminishing-return (DR)-submodular function on the bounded integer lattice, which is a generalization of submodular set functions. DR-submodular functions capture the case where we can choose multiple copies of each element in the ground set. This generalization has many applications in machine learning. In this paper, we propose a [Formula: see text]-approximation algorithm with a running time of [Formula: see text], where [Formula: see text] is the size of the ground set and [Formula: see text] is the upper bound of the integer lattice. By establishing important properties of DR-submodular functions, we propose a fast double greedy algorithm that improves the running time.
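
To fix ideas, the natural unit-step double greedy on the lattice extends the deterministic set-function double greedy: a lower solution x climbs from all zeros while an upper solution y descends from the all-B vector, one coordinate unit at a time. The sketch below shows this baseline (which scans every unit, so its cost grows with both the ground-set size and the lattice bound); it is an illustration under assumed interfaces, not the paper's fast variant, and f is an assumed DR-submodular oracle on integer vectors.

    # Minimal sketch of a unit-step double greedy on the bounded integer
    # lattice for unconstrained non-monotone DR-submodular f (assumed
    # oracle on length-n integer vectors with entries in [0, B]).
    def double_greedy_lattice(n, B, f):
        x, y = [0] * n, [B] * n  # x climbs, y descends; always x <= y
        for i in range(n):
            while x[i] < y[i]:
                up = x[:]; up[i] += 1
                down = y[:]; down[i] -= 1
                a = f(up) - f(x)    # gain from raising x_i by one unit
                b = f(down) - f(y)  # gain from lowering y_i by one unit
                if a >= b:
                    x = up
                else:
                    y = down
        return x  # x == y coordinate-wise at termination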

