What Does a Deterministic Algorithm Need to Do to Locate a Global Optimizer?

Author(s):
M. Sun, X. Yang

Author(s):
Kai Han, Shuang Cui, Tianshuai Zhu, Enpei Zhang, Benwei Wu, ...

Data summarization, i.e., selecting representative subsets of manageable size out of massive data, is often modeled as a submodular optimization problem. Although a large body of algorithms exists for submodular optimization, many of them incur large computational overheads and hence are not suitable for mining big data. In this work, we consider the fundamental problem of (non-monotone) submodular function maximization under a knapsack constraint, and propose simple yet effective and efficient algorithms for it. Specifically, we propose a deterministic algorithm with approximation ratio 6 and a randomized algorithm with approximation ratio 4, and show that both of them can be accelerated to achieve nearly linear running time at the cost of weakening the approximation ratio by an additive factor of ε. We then consider a more restrictive setting without full access to the whole dataset, and propose streaming algorithms with approximation ratios of 8+ε and 6+ε that make one pass and two passes over the data stream, respectively. As a by-product, we also propose a two-pass streaming algorithm with an approximation ratio of 2+ε for the case where the submodular function is monotone. To the best of our knowledge, our algorithms achieve the best performance bounds among state-of-the-art approximation algorithms with efficient implementations for the same problem. Finally, we evaluate our algorithms in two concrete submodular data summarization applications, revenue maximization in social networks and image summarization, and the empirical results show that our algorithms outperform the existing ones in terms of both effectiveness and efficiency.
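
The sketch below illustrates the problem setup with a classic cost-benefit greedy baseline for submodular maximization under a knapsack constraint; it is not one of the algorithms proposed above (which target the harder non-monotone case with provable ratios), and the coverage objective, costs, and function names are illustrative assumptions.

```python
# Cost-benefit greedy baseline for submodular maximization under a knapsack
# constraint -- an illustration of the problem setup, not the paper's method.

def greedy_knapsack(elements, f, cost, budget):
    """Repeatedly add the element with the best marginal-gain-to-cost ratio."""
    S, remaining = set(), set(elements)
    while remaining:
        best, best_ratio = None, 0.0
        for e in remaining:
            if cost[e] <= budget:
                gain = f(S | {e}) - f(S)          # marginal gain of adding e
                if gain / cost[e] > best_ratio:
                    best, best_ratio = e, gain / cost[e]
        if best is None:
            break                                  # nothing affordable or useful
        S.add(best)
        budget -= cost[best]
        remaining.discard(best)
    return S

# Toy example: a coverage function (monotone submodular) over three sets.
subsets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
f = lambda S: len(set().union(*(subsets[i] for i in S))) if S else 0
print(greedy_knapsack(subsets, f, cost={1: 1.0, 2: 1.0, 3: 2.0}, budget=3.0))
```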


1985, Vol V (2), pp. 355-366
Author(s):
D. A. Taffs, M. W. Taffs, J. C. Rienzo, T. R. Hampson

2020
Author(s):
Alberto Bemporad, Dario Piga

This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but can only express a preference such as "this is better than that" between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing a new comparison to the decision-maker, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision-maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate, based on two possible criteria: minimize a combination of the surrogate and an inverse-distance-weighting function to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive: within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to solving a set of benchmark global optimization problems, to multi-objective optimization, and to the optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
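
A simplified sketch of the first acquisition criterion described above: an RBF surrogate combined with an inverse-distance-weighting (IDW) exploration term. For brevity the surrogate is fit by least squares to stand-in scores rather than by the paper's LP/QP preference formulation, so the function names, the inverse-multiquadric RBF, and the exploration weight `delta` are assumptions.

```python
import numpy as np

def rbf_surrogate(X, y, eps=1.0):
    """Fit an inverse-multiquadric RBF model to scores y at samples X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    coef = np.linalg.lstsq(1.0 / np.sqrt(1.0 + eps * d2), y, rcond=None)[0]
    return lambda x: (1.0 / np.sqrt(1.0 + eps * np.sum((X - x) ** 2, axis=1))) @ coef

def idw_exploration(X):
    """IDW term: zero at sampled points, large far away from all samples."""
    def z(x):
        r2 = np.sum((X - x) ** 2, axis=1)
        return 0.0 if np.any(r2 == 0) else np.arctan(1.0 / np.sum(1.0 / r2))
    return z

X = np.random.rand(8, 2)              # past decision vectors
y = np.random.rand(8)                 # stand-in preference-derived scores
s, z, delta = rbf_surrogate(X, y), idw_exploration(X), 2.0
acq = lambda x: s(x) - delta * z(x)   # balance exploitation vs. exploration
cand = np.random.rand(500, 2)         # crude random-search minimization
print(cand[np.argmin([acq(c) for c in cand])])   # next vector to compare
```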


2021, Vol 11 (10), pp. 4607
Author(s):
Xiaozhou Guo, Yi Liu, Kaijun Tan, Wenyu Mao, Min Jin, ...

In password guessing, the Markov model is still widely used due to its simple structure and fast inference speed. However, a Markov model that generates passwords by random sampling suffers from a high repetition rate, which leads to a low cover rate. A model based on enumeration has a lower cover rate for high-probability passwords and, being a deterministic algorithm that always generates the same passwords in the same order, is vulnerable to attack. We design a dynamic distribution mechanism based on the random sampling method. This mechanism dynamically adjusts the probability distribution of passwords during generation so that it tends strictly toward a uniform distribution. We apply the dynamic distribution mechanism to the Markov model and propose a dynamic Markov model. Through comparative experiments on the RockYou dataset, we determine the optimal adjustment degree α. Compared with the Markov model without the dynamic distribution mechanism, the dynamic Markov model reduced the repetition rate from 75.88% to 66.50% and increased the cover rate from 37.65% to 43.49%. In addition, the dynamic Markov model had the highest cover rate for high-probability passwords. Finally, the model avoids the weaknesses of a deterministic algorithm, and when run five times it reached almost the same cover rate as OMEN.
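
A toy character-level Markov generator with a crude version of the dynamic adjustment idea: after each sampled password, the transition weights it used are discounted so that repeated outputs become less likely. The discount rule and the value of `alpha` are illustrative assumptions, not the paper's exact mechanism.

```python
import random
from collections import defaultdict

def train(passwords):
    """Count first-order character transitions (^ = start, $ = end)."""
    counts = defaultdict(lambda: defaultdict(float))
    for pw in passwords:
        for a, b in zip("^" + pw, pw + "$"):
            counts[a][b] += 1.0
    return counts

def sample(counts, alpha=0.9, max_len=16):
    """Sample one password, discounting each used transition by alpha."""
    out, state = [], "^"
    while len(out) < max_len:
        chars, weights = zip(*counts[state].items())
        c = random.choices(chars, weights=weights)[0]
        counts[state][c] *= alpha        # dynamic adjustment of the used edge
        if c == "$":
            break
        out.append(c)
        state = c
    return "".join(out)

model = train(["password", "pass123", "word123"])
print([sample(model) for _ in range(5)])
```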


Author(s):
Marcin Bienkowski, Artur Kraska, Hsiang-Hsuan Liu, Paweł Schmidt

2013, Vol 24 (06), pp. 1350035
Author(s):
Janusz Malinowski, Jan W. Kantelhardt, Krzysztof Kułakowski

A few ant robots are placed in a labyrinth formed by a square lattice with a small number of corridors removed. The ants move according to a deterministic algorithm designed to explore all corridors. Each ant remembers the shape of the corridors it has visited, and whenever two ants meet, they share the information they have acquired. We evaluate how the time an ant needs to obtain complete information depends on the number of ants, and how the corridor length known to an ant grows with time. Numerical results are presented in the form of scaling relations.
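
A toy simulation of this setting (a minimal sketch: the movement rule, the full lattice with no removed corridors, and the lattice size are all assumptions): each ant deterministically prefers an incident edge it has not yet seen, remembers every traversed edge, and merges knowledge with any ant standing on the same node.

```python
import itertools, random

N = 8                                         # lattice side length
nodes = list(itertools.product(range(N), range(N)))

def neighbors(v):
    x, y = v
    return [(a, b) for a, b in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
            if 0 <= a < N and 0 <= b < N]

edges = {frozenset((v, w)) for v in nodes for w in neighbors(v)}
ants = [{"pos": random.choice(nodes), "known": set()} for _ in range(3)]

for t in range(2000):
    for ant in ants:
        v = ant["pos"]
        nbrs = sorted(neighbors(v))
        fresh = [w for w in nbrs if frozenset((v, w)) not in ant["known"]]
        w = fresh[0] if fresh else nbrs[t % len(nbrs)]   # deterministic rule
        ant["known"].add(frozenset((v, w)))
        ant["pos"] = w
    for a, b in itertools.combinations(ants, 2):         # meeting => share
        if a["pos"] == b["pos"]:
            a["known"] = b["known"] = a["known"] | b["known"]

print([len(a["known"]) / len(edges) for a in ants])      # explored fractions
```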


2018, Vol 8 (10), pp. 1945
Author(s):
Tarik Eltaeib, Ausif Mahmood

Differential evolution (DE) has been used extensively in optimization studies since its development in 1995 because of its reputation as an effective global optimizer. DE is a population-based metaheuristic that evolves a population of numerical vectors to solve optimization problems. The choice of DE strategy has a significant impact on performance and plays a vital role in achieving stochastic global optimization. However, DE is highly dependent on its control parameters, and in practice fine-tuning these parameters is not always easy. Here, we discuss the improvements and developments that have been made to DE algorithms. In particular, we present a state-of-the-art survey of the literature on DE and its recent advances, such as the development of adaptive, self-adaptive, and hybrid techniques.
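
For readers unfamiliar with the basic scheme the survey builds on, here is a minimal DE/rand/1/bin sketch; the control parameters F and CR shown are common textbook defaults, not recommendations from the survey.

```python
import numpy as np

def de(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimize f over box bounds with DE/rand/1/bin."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)    # mutation
            cross = rng.random(dim) < CR                 # binomial crossover
            cross[rng.integers(dim)] = True              # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                         # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = de(sphere, np.array([[-5.0, 5.0]] * 3))
print(best_x, best_f)
```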

