On molecular approximation algorithms for NP optimization problems

Author(s):  
Bin Fu ◽  
Richard Beigel


Author(s):  
Jing Tang ◽  
Xueyan Tang ◽  
Andrew Lim ◽  
Kai Han ◽  
Chongshou Li ◽  
...  

Monotone submodular maximization with a knapsack constraint is NP-hard. Various approximation algorithms have been devised to address this optimization problem. In this paper, we revisit the widely known modified greedy algorithm. First, we show that this algorithm can achieve an approximation factor of 0.405, which significantly improves the known factors of 0.357 given by Wolsey and $(1-1/e)/2 \approx 0.316$ given by Khuller et al. More importantly, our analysis closes a gap in Khuller et al.'s proof of the approximation factor $(1-1/\sqrt{e}) \approx 0.393$ that is extensively cited in the literature, clarifying a long-standing misconception on this issue. Second, we enhance the modified greedy algorithm to derive a data-dependent upper bound on the optimum. We empirically demonstrate the tightness of our upper bound with a real-world application. The bound enables us to obtain a data-dependent ratio, typically much higher than 0.405, between the solution value of the modified greedy algorithm and the optimum. It can also be used to significantly improve the efficiency of algorithms such as branch and bound.
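
As a point of reference for the algorithm discussed above, here is a minimal sketch of the modified greedy strategy (greedy selection by marginal-gain-to-cost ratio, followed by a comparison against the best feasible singleton), run on a toy weighted-coverage instance. The objective `f`, the costs, and the budget are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of the modified greedy algorithm for monotone submodular
# maximization under a knapsack constraint (toy instance, not the paper's data).

def modified_greedy(elements, cost, budget, f):
    """Greedy by marginal-gain-to-cost ratio, then return the better of the
    greedy set and the best feasible singleton."""
    selected, remaining, spent = [], set(elements), 0.0
    while remaining:
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + cost[e] > budget:
                continue
            gain = f(selected + [e]) - f(selected)
            ratio = gain / cost[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:
            break
        selected.append(best)
        spent += cost[best]
        remaining.remove(best)
    # Compare against the best single affordable element.
    singletons = [[e] for e in elements if cost[e] <= budget]
    best_single = max(singletons, key=f, default=[])
    return selected if f(selected) >= f(best_single) else best_single

# Toy instance: f is a coverage function (monotone submodular).
universe_of = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 2, 3, 4, 5}}
f = lambda S: len(set().union(*(universe_of[e] for e in S))) if S else 0
print(modified_greedy(list(universe_of), {"a": 2, "b": 1, "c": 1, "d": 4}, 3, f))
```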


1993 ◽  
Vol 04 (02) ◽  
pp. 117-133
Author(s):  
IAIN A. STEWART

We look at well-known polynomial-time approximation algorithms for the optimization problem MAX-CLIQUE (“find the size of the largest clique in a graph”) with regard to how easy it is to compute the actual cliques yielded by these approximation algorithms. We show that even for two “pretty useless” deterministic polynomial-time approximation algorithms, it is unlikely that the resulting clique can be computed efficiently in parallel. We also show that, for each non-deterministic algorithm, it is unlikely that there is a deterministic polynomial-time algorithm that decides whether any given vertex appears in some clique yielded by that non-deterministic algorithm.
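
The abstract does not spell out the two deterministic approximation algorithms under study; for context only, the sketch below is a generic greedy clique heuristic of the kind such results concern, shown on a small hypothetical graph.

```python
# Generic greedy clique heuristic (illustrative; not necessarily one of the
# algorithms analyzed in the paper).

def greedy_clique(adj):
    """Return a maximal (not necessarily maximum) clique of the graph `adj`,
    given as a dict mapping each vertex to its set of neighbours."""
    clique = []
    # Consider vertices in decreasing degree order, adding any vertex that is
    # adjacent to everything chosen so far.
    for v in sorted(adj, key=lambda u: len(adj[u]), reverse=True):
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique

# Example: a triangle {0, 1, 2} plus a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_clique(adj))  # e.g. [2, 0, 1]
```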


Author(s):  
Haoming Li ◽  
Sujoy Sikdar ◽  
Rohit Vaish ◽  
Junming Wang ◽  
Lirong Xia ◽  
...  

Consider the following problem faced by an online voting platform: A user is provided with a list of alternatives, and is asked to rank them in order of preference using only drag-and-drop operations. The platform's goal is to recommend an initial ranking that minimizes the time spent by the user in arriving at her desired ranking. We develop the first optimization framework to address this problem, and make theoretical as well as practical contributions. On the practical side, our experiments on the Amazon Mechanical Turk platform provide two interesting insights about user behavior: First, that users' ranking strategies closely resemble selection or insertion sort, and second, that the time taken for a drag-and-drop operation depends linearly on the number of positions moved. These insights directly motivate our theoretical model of the optimization problem. We show that computing an optimal recommendation is NP-hard, and provide exact and approximation algorithms for a variety of special cases of the problem. Experimental evaluation on MTurk shows that, compared to a random recommendation strategy, the proposed approach reduces the (average) time-to-rank by up to 50%.
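
As an illustration of the cost model suggested by these findings (insertion-sort-like repair, with each drag-and-drop costing an amount linear in the distance moved), here is a hedged sketch; the constants `a` and `b` and the repair strategy are assumptions for illustration, not the paper's exact model.

```python
# Hypothetical time-to-rank estimate: the user repairs the recommended ranking
# with insertion-sort-style drags, and a drag over d positions costs a + b*d.

def time_to_rank(recommended, desired, a=1.0, b=0.5):
    """Estimate total ranking time under a linear per-drag cost model."""
    current = list(recommended)
    total = 0.0
    for target_pos, item in enumerate(desired):
        cur_pos = current.index(item)
        if cur_pos != target_pos:
            current.insert(target_pos, current.pop(cur_pos))
            total += a + b * abs(cur_pos - target_pos)
    return total

# One drag of 2 positions suffices here, costing a + 2*b = 2.0.
print(time_to_rank(["B", "C", "A", "D"], ["A", "B", "C", "D"]))
```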


Author(s):  
Zhenyu A. Liao ◽  
Charupriya Sharma ◽  
James Cussens ◽  
Peter Van Beek

A Bayesian network is a widely used probabilistic graphical model with applications in knowledge discovery and prediction. Learning a Bayesian network (BN) from data can be cast as an optimization problem using the well-known score-and-search approach. However, selecting a single model (i.e., the best-scoring BN) can be misleading or may not achieve the best possible accuracy. An alternative to committing to a single model is to perform some form of Bayesian or frequentist model averaging, where the space of possible BNs is sampled or enumerated in some fashion. Unfortunately, existing approaches to model averaging either severely restrict the structure of the Bayesian network or have only been shown to scale to networks with fewer than 30 random variables. In this paper, we propose a novel approach to model averaging inspired by performance guarantees in approximation algorithms. Our approach has two primary advantages. First, it considers only credible models, in the sense that they are optimal or near-optimal in score. Second, it is more efficient and scales to significantly larger Bayesian networks than existing approaches.
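
To make the "credible models" idea concrete, the following sketch averages an edge's presence over candidate networks whose scores fall within a fixed log-factor of the best score; the candidate networks, scores, and threshold are invented for illustration and do not reproduce the paper's algorithm.

```python
# Illustrative score-weighted edge averaging over near-optimal networks
# (invented inputs; not the paper's method).
import math

def edge_posterior(candidates, edge, log_bf=math.log(20)):
    """candidates: list of (log_score, set_of_edges). Average membership of
    `edge` over networks within `log_bf` of the best score, weighted by
    exp(log_score)."""
    best = max(s for s, _ in candidates)
    credible = [(s, E) for s, E in candidates if best - s <= log_bf]
    weights = [math.exp(s - best) for s, _ in credible]  # shift for stability
    total = sum(weights)
    return sum(w for w, (_, E) in zip(weights, credible) if edge in E) / total

nets = [(-100.0, {("A", "B"), ("B", "C")}),
        (-101.2, {("A", "B")}),
        (-109.0, {("C", "A")})]          # far from optimal; excluded
print(round(edge_posterior(nets, ("B", "C")), 3))  # ~0.77
```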


TAPPI Journal ◽  
2019 ◽  
Vol 18 (10) ◽  
pp. 607-618
Author(s):  
JÉSSICA MOREIRA ◽  
BRUNO LACERDA DE OLIVEIRA CAMPOS ◽  
ESLY FERREIRA DA COSTA JUNIOR ◽  
ANDRÉA OLIVEIRA SOUZA DA COSTA

The multiple-effect evaporator (MEE) is an energy-intensive step in the kraft pulping process. Exergetic analysis can be useful for locating irreversibilities in the process and identifying which equipment is least efficient, and it can also be the object of optimization studies. In the present work, each evaporator of a real kraft system has been individually described using mass balances and thermodynamic principles (the first and second laws). Real data from a kraft MEE were collected from a Brazilian plant and were used for the estimation of heat transfer coefficients in a nonlinear optimization problem, as well as for the validation of the model. An exergetic analysis was made for each effect individually, which showed effects 1A and 1B to be the least efficient and therefore to have the greatest potential for improvement. A sensitivity analysis was also performed, showing that steam temperature and liquor input flow rate are sensitive parameters.
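
Purely as an illustration of the parameter-estimation step mentioned above (the paper solves a coupled nonlinear problem over the full evaporator train), the toy sketch below fits a single overall heat transfer coefficient U in Q = U·A·ΔT to invented plant-style measurements.

```python
# Toy least-squares fit of an overall heat transfer coefficient U from
# (duty Q, area A, temperature difference dT) measurements (invented numbers).

def fit_heat_transfer_coefficient(measurements):
    """Least-squares fit of U in Q = U * A * dT over (Q, A, dT) tuples."""
    num = sum(Q * A * dT for Q, A, dT in measurements)
    den = sum((A * dT) ** 2 for Q, A, dT in measurements)
    return num / den

data = [(5.1e6, 1200.0, 4.2), (4.8e6, 1200.0, 3.9), (5.4e6, 1200.0, 4.5)]  # W, m^2, K
U = fit_heat_transfer_coefficient(data)
print(f"U ~ {U:.0f} W/(m^2*K)")
```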


2020 ◽  
Vol 2020 (14) ◽  
pp. 306-1-306-6
Author(s):  
Florian Schiffers ◽  
Lionel Fiske ◽  
Pablo Ruiz ◽  
Aggelos K. Katsaggelos ◽  
Oliver Cossairt

Imaging through scattering media finds applications in diverse fields, from biomedicine to autonomous driving. However, interpreting the resulting images is difficult due to blur caused by the scattering of photons within the medium. Transient information, captured with fast temporal sensors, can be used to significantly improve the quality of images acquired in scattering conditions. Photon scattering within a highly scattering medium is well modeled by the diffusion approximation of the Radiative Transport Equation (RTE). Its solution is easily derived and can be interpreted as a spatio-temporal point spread function (ST-PSF). In this paper, we first discuss the properties of the ST-PSF and subsequently use this knowledge to simulate transient imaging through highly scattering media. We then propose a framework to invert the forward model, which assumes Poisson noise, to recover a noise-free, unblurred image by solving an optimization problem.
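
The paper's solver is not reproduced here; as a minimal illustration of deblurring under a Poisson noise model with a known PSF, the sketch below runs the classical Richardson-Lucy iteration on a 1-D toy signal.

```python
# Classical Richardson-Lucy deconvolution (Poisson maximum likelihood) on a
# 1-D toy signal with a Gaussian PSF; illustrative, not the paper's framework.
import numpy as np

def richardson_lucy(measured, psf, iterations=50, eps=1e-12):
    """Iteratively maximize the Poisson likelihood of `measured = psf * x`."""
    x = np.full_like(measured, measured.mean(), dtype=float)
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(x, psf, mode="same")
        ratio = measured / (blurred + eps)
        x *= np.convolve(ratio, psf_flipped, mode="same")
    return x

rng = np.random.default_rng(0)
truth = np.zeros(64); truth[20] = 80.0; truth[40] = 50.0          # two point sources
psf = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2); psf /= psf.sum()
measured = rng.poisson(np.convolve(truth, psf, mode="same")).astype(float)
recovered = richardson_lucy(measured, psf)
print(int(recovered.argmax()))   # peak recovered near index 20
```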

