Submodular Maximization
Recently Published Documents


TOTAL DOCUMENTS: 140 (FIVE YEARS: 82)
H-INDEX: 17 (FIVE YEARS: 4)

2021
Author(s):  
Eric Balkanski
Aviad Rubinstein
Yaron Singer

An Exponentially Faster Algorithm for Submodular Maximization Under a Matroid Constraint

This paper studies the problem of submodular maximization under a matroid constraint. It has been known since the 1970s that the greedy algorithm obtains a constant-factor approximation guarantee for this problem. Twelve years ago, a breakthrough result by Vondrák obtained the optimal 1 − 1/e approximation. Previous algorithms for this fundamental problem all have linear parallel runtime, which until recently was considered impossible to accelerate. The main contribution of this paper is a novel algorithm that provides an exponential speedup in the parallel runtime of submodular maximization under a matroid constraint, without loss in the approximation guarantee.
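
As a point of reference for the sequential baseline mentioned above, the classical greedy algorithm for monotone submodular maximization under a matroid constraint can be sketched as follows. This is the textbook baseline, not the paper's parallel algorithm; `f` and `is_independent` are placeholder names for a value oracle and a matroid independence oracle.

```python
# Classical greedy for monotone submodular maximization under a matroid
# constraint (the constant-factor baseline referred to in the abstract).
# `f` and `is_independent` are placeholder oracles.

def greedy_matroid(ground_set, f, is_independent):
    solution = set()
    candidates = set(ground_set)
    while candidates:
        # Choose the feasible element with the largest marginal gain.
        best, best_gain = None, 0.0
        for e in candidates:
            if not is_independent(solution | {e}):
                continue
            gain = f(solution | {e}) - f(solution)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break
        solution.add(best)
        candidates.discard(best)
    return solution
```

Each pick depends on the one before it, so the number of sequential rounds grows linearly with the solution size; this is the linear parallel runtime that the paper's algorithm speeds up exponentially.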


Author(s):  
Zhicheng Liu
Hong Chang
Ran Ma
Donglei Du
Xiaoyan Zhang

Abstract: We consider a two-stage submodular maximization problem subject to a cardinality constraint and k matroid constraints, where the objective function is the expected difference of a nonnegative monotone submodular function and a nonnegative monotone modular function. We give two bi-factor approximation algorithms for this problem. The first is a deterministic $\left(\frac{1}{k+1}\left(1-\frac{1}{e^{k+1}}\right),\,1\right)$-approximation algorithm, and the second is a randomized $\left(\frac{1}{k+1}\left(1-\frac{1}{e^{k+1}}\right)-\varepsilon,\,1\right)$-approximation algorithm with improved time efficiency.
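
For orientation, a bi-factor $(\alpha, 1)$-guarantee for maximizing a difference of a monotone submodular function $g$ and a monotone modular function $\ell$ is conventionally read as follows; this is the standard convention in this line of work, stated here as an assumption rather than quoted from the paper. The returned solution $S$ satisfies
$$g(S) - \ell(S) \;\ge\; \frac{1}{k+1}\left(1 - \frac{1}{e^{k+1}}\right) g(S^{*}) - \ell(S^{*}),$$
where $S^{*}$ is an optimal solution (with the randomized algorithm losing an additional $\varepsilon$ in the first factor).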


Author(s):  
Christopher Harshaw
Ehsan Kazemi
Moran Feldman
Amin Karbasi

We propose subsampling as a unified algorithmic technique for submodular maximization in centralized and online settings. The idea is simple: independently sample elements from the ground set, and use simple combinatorial techniques (such as greedy or local search) on these sampled elements. We show that this approach leads to optimal/state-of-the-art results despite being much simpler than existing methods. In the usual offline setting, we present SampleGreedy, which obtains a [Formula: see text]-approximation for maximizing a submodular function subject to a p-extendible system using [Formula: see text] evaluation and feasibility queries, where k is the size of the largest feasible set. The approximation ratio improves to p + 1 and p for monotone submodular and linear objectives, respectively. In the streaming setting, we present Sample-Streaming, which obtains a [Formula: see text]-approximation for maximizing a submodular function subject to a p-matchoid using O(k) memory and [Formula: see text] evaluation and feasibility queries per element, where m is the number of matroids defining the p-matchoid. The approximation ratio improves to 4p for monotone submodular objectives. We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
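
The subsampling idea described above admits a very short sketch. The following is a minimal illustration rather than the paper's exact procedure: `f`, `is_feasible`, and `sample_prob` are placeholder names, and the paper's specific choice of sampling probability as a function of p is not reproduced here.

```python
import random

# Minimal sketch of the subsampling idea behind SampleGreedy: sample each
# element independently, then run plain greedy on the sample subject to the
# independence system. `f`, `is_feasible`, and `sample_prob` are placeholders.

def sample_greedy(ground_set, f, is_feasible, sample_prob):
    sample = [e for e in ground_set if random.random() < sample_prob]
    solution = set()
    while True:
        # Greedily add the feasible sampled element with the largest gain.
        best, best_gain = None, 0.0
        for e in sample:
            if e in solution or not is_feasible(solution | {e}):
                continue
            gain = f(solution | {e}) - f(solution)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            return solution
        solution.add(best)
```

Because only a fraction of the ground set survives the subsampling step, the greedy pass touches far fewer elements, which is where the savings in evaluation and feasibility queries come from.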


Author(s):  
Hongchang Gao
Hanzi Xu
Slobodan Vucetic

Continuous DR-submodular maximization is an important machine learning problem that covers numerous popular applications. With the emergence of large-scale distributed data, developing efficient algorithms for continuous DR-submodular maximization, such as the decentralized Frank-Wolfe method, has become an important challenge. However, existing decentralized Frank-Wolfe methods for this kind of problem have a sample complexity of $\mathcal{O}(1/\epsilon^3)$, incurring a large computational overhead. In this paper, we propose two novel sample-efficient decentralized Frank-Wolfe methods to address this challenge. Our theoretical results demonstrate that the sample complexity of the two proposed methods is $\mathcal{O}(1/\epsilon^2)$, which is better than the $\mathcal{O}(1/\epsilon^3)$ of existing methods. To the best of our knowledge, this is the first published result achieving such a favorable sample complexity. Extensive experimental results confirm the effectiveness of the proposed methods.
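
For context, the single-machine Frank-Wolfe (continuous greedy) update that such decentralized methods build on can be sketched as follows. This is a minimal sketch under stated assumptions: `grad_estimate` is a stochastic gradient oracle and `lmo` a linear maximization oracle over the constraint set, both hypothetical names; the paper's decentralized, sample-efficient variants are not reproduced here.

```python
import numpy as np

# Frank-Wolfe / continuous-greedy style update for monotone continuous
# DR-submodular maximization: repeatedly move a 1/T-sized step toward the
# point that maximizes the inner product with a (stochastic) gradient.
# `grad_estimate` and `lmo` are placeholder oracles.

def frank_wolfe_dr_submodular(dim, grad_estimate, lmo, num_iters=100):
    x = np.zeros(dim)
    for _ in range(num_iters):
        g = grad_estimate(x)      # stochastic gradient of the objective at x
        v = lmo(g)                # argmax over the constraint set of <v, g>
        x = x + v / num_iters     # step size 1/T keeps x a convex combination
    return x
```

The decentralized variants replace this single gradient oracle with local estimates exchanged over a network; the paper's contribution is reducing the number of stochastic samples such methods require.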


Author(s):  
Victoria G. Crawford

In this paper, the monotone submodular maximization problem (SM) is studied. SM asks for a subset of size κ from a universe of size n that maximizes a monotone submodular objective function f. Using a novel analysis, we show that the Pareto optimization algorithm achieves a worst-case ratio of (1 − ε)(1 − 1/e) in expectation for every cardinality constraint κ < P, where P ≤ n + 1 is an input, in O(nP ln(1/ε)) queries of f. In addition, a novel evolutionary algorithm, called the biased Pareto optimization algorithm, is proposed that achieves a worst-case ratio of (1 − ε)(1 − 1/e − ε) in expectation for every cardinality constraint κ < P in O(n ln(P) ln(1/ε)) queries of f. Further, the biased Pareto optimization algorithm can be modified to achieve a worst-case ratio of (1 − ε)(1 − 1/e − ε) in expectation for cardinality constraint κ in O(n ln(1/ε)) queries of f. An empirical evaluation corroborates our theoretical analysis, as the algorithms exceed the stochastic greedy solution value at roughly the point one would expect based on our analysis.
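
For orientation, the generic Pareto optimization (GSEMO/POSS-style) loop that both algorithms build on can be sketched as follows. This is a minimal sketch with placeholder names (`f`, `size_limit`, `iterations`); the biased mutation operator and the budget parameter P from the paper are not reproduced here.

```python
import random

# Generic Pareto optimization loop for subset selection: keep an archive of
# solutions that are mutually non-dominated with respect to (f value, size),
# and repeatedly mutate a random archived solution by flipping each element
# with probability 1/n. Placeholder names throughout.

def pareto_optimization(n, f, size_limit, iterations):
    def weakly_dominates(a, b):
        # a is at least as good as b: larger-or-equal value, smaller-or-equal size.
        return a[1] >= b[1] and a[2] <= b[2]

    archive = [(frozenset(), f(frozenset()), 0)]
    for _ in range(iterations):
        parent = random.choice(archive)[0]
        child = set(parent)
        for e in range(n):
            if random.random() < 1.0 / n:
                child.symmetric_difference_update({e})
        if len(child) > size_limit:
            continue
        cand = (frozenset(child), f(child), len(child))
        if any(weakly_dominates(a, cand) for a in archive):
            continue
        archive = [a for a in archive if not weakly_dominates(cand, a)] + [cand]
    # Return the archived set with the best objective value.
    return max(archive, key=lambda a: a[1])[0]
```

Because the archive simultaneously tracks good solutions of every size up to the limit, a single run yields guarantees for every cardinality constraint κ < P at once.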


Author(s):  
Tasuku Soma
Yuichi Yoshida

We present a polynomial-time online algorithm for maximizing the conditional value at risk (CVaR) of a monotone stochastic submodular function. Given T i.i.d. samples from an underlying distribution arriving online, our algorithm produces a sequence of solutions that converges to a (1 − 1/e)-approximate solution with a convergence rate of O(T^{-1/4}) for monotone continuous DR-submodular functions. Compared with previous offline algorithms, which require Ω(T) space, our online algorithm only requires O(√T) space. We extend our online algorithm to portfolio optimization for monotone submodular set functions under a matroid constraint. Experiments conducted on real-world datasets demonstrate that our algorithm can rapidly achieve CVaRs that are comparable to those obtained by existing offline algorithms.
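
For reference, the CVaR objective in question can be written in the Rockafellar–Uryasev form $\mathrm{CVaR}_\alpha(Z) = \max_\tau \{\tau - \frac{1}{\alpha}\,\mathbb{E}[(\tau - Z)_+]\}$ and estimated from samples as below. This is a minimal sketch of the objective only, not the paper's online algorithm; `sample_values` is a placeholder for i.i.d. draws of the stochastic submodular value.

```python
import numpy as np

# Empirical CVaR (Rockafellar-Uryasev form) of a reward distribution from
# i.i.d. samples: CVaR_alpha(Z) = max_tau  tau - (1/alpha) * E[(tau - Z)_+].
# The maximum of this piecewise-linear concave function in tau is attained at
# one of the sample values, so scanning the samples suffices.

def empirical_cvar(sample_values, alpha):
    z = np.asarray(sample_values, dtype=float)
    best = -np.inf
    for tau in z:
        best = max(best, tau - np.mean(np.maximum(tau - z, 0.0)) / alpha)
    return best
```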

