Quantum-Inspired Wolf Pack Algorithm to Solve the 0-1 Knapsack Problem

2018, Vol 2018, pp. 1-10
Author(s): Yangjun Gao, Fengming Zhang, Yu Zhao, Chao Li

This paper proposes a quantum-inspired wolf pack algorithm (QWPA) based on quantum encoding to enhance the performance of the wolf pack algorithm (WPA) on 0-1 knapsack problems. QWPA relies on two key operations: quantum rotation and quantum collapse. The first drives the population toward the global optimum, and the second helps individuals avoid becoming trapped in local optima. Ten classical and four high-dimensional knapsack problems are used to test the proposed algorithm, and the results are compared with those of other typical algorithms. The statistical results demonstrate the effectiveness and global search capability of QWPA on knapsack problems, especially the high-dimensional cases.
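A minimal sketch of the two operations described above, assuming a standard qubit-angle encoding in which bit i collapses to 1 with probability sin^2(theta_i); the rotation schedule, population size, and the toy knapsack instance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quantum_collapse(theta, rng):
    """Measure each qubit: bit i becomes 1 with probability sin^2(theta_i)."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def quantum_rotate(theta, bits, best_bits, delta=0.05 * np.pi):
    """Rotate each qubit angle toward the value observed in the best solution so far."""
    direction = np.where(best_bits == 1, 1.0, -1.0)      # push toward 1 or toward 0
    step = np.where(bits == best_bits, 0.0, delta)       # no rotation where bits already agree
    return np.clip(theta + direction * step, 0.0, np.pi / 2)

# Tiny 0-1 knapsack demo (weights, values, and capacity are made up for illustration).
rng = np.random.default_rng(0)
w = np.array([12, 7, 11, 8, 9]); v = np.array([24, 13, 23, 15, 16]); cap = 26

def fitness(bits):
    return bits @ v if bits @ w <= cap else 0            # infeasible solutions score zero

theta = np.full((20, 5), np.pi / 4)                      # population of 20 qubit-angle vectors
best_bits, best_fit = None, -1
for _ in range(100):
    pop = np.array([quantum_collapse(t, rng) for t in theta])          # collapse step
    fits = np.array([fitness(b) for b in pop])
    if fits.max() > best_fit:
        best_fit, best_bits = int(fits.max()), pop[fits.argmax()].copy()
    theta = np.array([quantum_rotate(t, b, best_bits) for t, b in zip(theta, pop)])  # rotation step
print(best_bits, best_fit)
```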

Mathematics, 2021, Vol 9 (11), pp. 1233
Author(s): Yule Wang, Wanliang Wang

The knapsack problem is one of the most widely researched NP-complete combinatorial optimization problems and has numerous practical applications. This paper proposes a quantum-inspired differential evolution algorithm with grey wolf optimizer (QDGWO) to enhance diversity and convergence performance on 0-1 knapsack problems, particularly in high-dimensional cases. The proposed algorithm adopts quantum computing principles such as quantum superposition states and quantum gates. It uses the adaptive mutation and crossover operations of differential evolution together with quantum observation to generate new solutions as trial individuals. Selection then keeps the better of the stored individual and the trial individual created by mutation and crossover. When a trial individual is worse than the current one, an adaptive grey wolf optimizer and the quantum rotation gate are used to preserve population diversity and speed up the search for the global optimal solution. Experimental results for 0-1 knapsack problems confirm the effectiveness and global search capability of QDGWO, especially in high-dimensional situations.
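A hedged sketch of the per-generation control flow described above: DE mutation and crossover on qubit-angle vectors, quantum observation to produce a binary trial, greedy selection, and a grey-wolf/rotation fallback when the trial loses. Function names, the DE/rand/1 variant, and all parameter values are illustrative placeholders rather than the authors' implementation.

```python
import numpy as np

def de_mutate(theta, i, F, rng):
    """DE/rand/1 mutation on qubit-angle vectors: v = x_r1 + F * (x_r2 - x_r3)."""
    r1, r2, r3 = rng.choice([k for k in range(len(theta)) if k != i], 3, replace=False)
    return theta[r1] + F * (theta[r2] - theta[r3])

def de_crossover(target, mutant, CR, rng):
    """Binomial crossover: take each mutant component with probability CR."""
    mask = rng.random(target.shape) < CR
    mask[rng.integers(target.size)] = True        # ensure at least one mutant gene survives
    return np.where(mask, mutant, target)

def observe(theta, rng):
    """Quantum observation: collapse the angle vector to a binary string."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def gwo_rotation_fallback(theta_i, theta_alpha, step=0.02 * np.pi):
    """Placeholder fallback: rotate the losing individual's angles toward the leader (alpha wolf)."""
    return theta_i + step * np.sign(theta_alpha - theta_i)

def one_generation(theta, fitness, F=0.5, CR=0.8, rng=np.random.default_rng(0)):
    fits = np.array([fitness(observe(t, rng)) for t in theta])
    alpha = int(fits.argmax())                     # current leader ("alpha wolf")
    for i in range(len(theta)):
        trial = de_crossover(theta[i], de_mutate(theta, i, F, rng), CR, rng)
        if fitness(observe(trial, rng)) >= fits[i]:
            theta[i] = trial                       # keep the better trial (DE selection)
        else:
            theta[i] = gwo_rotation_fallback(theta[i], theta[alpha])
    return theta
```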


Author(s): Prachi Agrawal, Talari Ganesh, Ali Wagdy Mohamed

Abstract: This article proposes a novel binary version of the recently developed Gaining Sharing knowledge-based optimization algorithm (GSK) to solve binary optimization problems. The GSK algorithm is based on the concept of how humans acquire and share knowledge during their life span. The binary version, the novel binary Gaining Sharing knowledge-based optimization algorithm (NBGSK), relies mainly on two binary stages: a binary junior gaining-sharing stage and a binary senior gaining-sharing stage with knowledge factor 1. These two stages enable NBGSK to explore and exploit the search space efficiently and effectively in binary space. Moreover, to enhance the performance of NBGSK and prevent solutions from becoming trapped in local optima, NBGSK with population size reduction (PR-NBGSK) is introduced; it decreases the population size gradually with a linear function. The proposed NBGSK and PR-NBGSK are applied to a set of knapsack instances with small and large dimensions, and the results show that NBGSK and PR-NBGSK are more efficient and effective in terms of convergence, robustness, and accuracy.
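The linear population-size reduction mentioned above follows a standard schedule; here is a minimal sketch in which the initial size, minimum size, and rounding are assumptions, since the abstract does not give the exact constants.

```python
def population_size(gen, max_gen, n_init=100, n_min=12):
    """Linearly shrink the population from n_init down to n_min over the run."""
    return round(n_init + (n_min - n_init) * gen / max_gen)

# Example: with 1000 generations the population drops from 100 to 12.
sizes = [population_size(g, 1000) for g in range(0, 1001, 250)]
print(sizes)   # [100, 78, 56, 34, 12]
```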


Author(s): Vladimir Nikulin, Tian-Hsiang Huang, Geoffrey J. McLachlan

The method presented in this paper is novel in that it is a natural combination of two mutually dependent steps. Feature selection is the key element (first step) of our classification system, which was employed during the 2010 International RSCTC data mining (bioinformatics) Challenge. The second step may be implemented using any suitable classifier, such as linear regression, a support vector machine, or a neural network. We conducted leave-one-out (LOO) experiments with several feature selection techniques and classifiers. Based on the LOO evaluations, we decided to use feature selection with the separation-type Wilcoxon-based criterion for all final submissions. The method was tested successfully during the RSCTC data mining Challenge, where we achieved the top score in the Basic track.
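A hedged illustration of Wilcoxon-based feature ranking inside a leave-one-out evaluation; the paper's exact "separation type" criterion is not reproduced here, so the rank-sum statistic, the toy data, and the logistic-regression classifier are stand-ins.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

def wilcoxon_rank_features(X, y, k=20):
    """Rank features by the absolute Wilcoxon rank-sum statistic between the two classes."""
    stats = np.array([abs(ranksums(X[y == 0, j], X[y == 1, j]).statistic)
                      for j in range(X.shape[1])])
    return np.argsort(stats)[::-1][:k]            # indices of the k most separating features

# Toy data standing in for a microarray matrix (samples x genes).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500)); y = rng.integers(0, 2, 60)
X[y == 1, :10] += 1.0                              # make the first 10 features informative

# NOTE: for an unbiased estimate, selection should be repeated inside each LOO fold;
# selecting once on the full data (as here) keeps the sketch short but is optimistic.
top = wilcoxon_rank_features(X, y, k=20)
loo_acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, top], y,
                          cv=LeaveOneOut()).mean()
print(top[:5], round(loo_acc, 3))
```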


Author(s): Anton Dries, Angelika Kimmig, Jesse Davis, Vaishak Belle, Luc de Raedt

The ability to solve probability word problems, such as those found in introductory discrete mathematics textbooks, is an important cognitive and intellectual skill. In this paper, we develop a two-step, end-to-end, fully automated approach that provides answers to exercises about probability formulated in natural language. In the first step, a question formulated in natural language is analysed and transformed into a high-level model specified in a declarative language. In the second step, a solution to the high-level model is computed using a probabilistic programming system. On a dataset of 2160 probability problems, our solver correctly answers 97.5% of the questions given a correct model. On the end-to-end evaluation, we are able to answer 12.5% of the questions (or 31.1% if we exclude examples not supported by design).
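A hedged illustration of the second step only: once a question such as "two fair dice are rolled; what is the probability that the sum is 7?" has been translated into a small declarative model, a probabilistic solver can answer it by exact enumeration. The model format below is invented for illustration and is not the declarative language used in the paper.

```python
from fractions import Fraction
from itertools import product

# A hand-written "high-level model" for: two fair dice, P(sum == 7)?
model = {
    "variables": {"d1": range(1, 7), "d2": range(1, 7)},   # uniform, independent
    "query": lambda env: env["d1"] + env["d2"] == 7,
}

def solve(model):
    """Enumerate all worlds of the discrete model and sum the probability of the query."""
    names = list(model["variables"])
    domains = [list(model["variables"][n]) for n in names]
    total = Fraction(0)
    for values in product(*domains):
        env = dict(zip(names, values))
        weight = Fraction(1)
        for d in domains:
            weight /= len(d)                 # each variable is uniform over its domain
        if model["query"](env):
            total += weight
    return total

print(solve(model))   # 1/6
```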


SPE Journal, 2018, Vol 23 (05), pp. 1496-1517
Author(s): Chaohui Chen, Guohua Gao, Ruijian Li, Richard Cao, Tianhong Chen, ...

Summary: Although it is possible to apply traditional optimization algorithms together with the randomized-maximum-likelihood (RML) method to generate multiple conditional realizations, the computational cost is high. This paper presents a novel method to enhance the global-search capability of the distributed-Gauss-Newton (DGN) optimization method and integrates it with the RML method to generate multiple realizations conditioned to production data synchronously.

RML generates samples from an approximate posterior by minimizing a large ensemble of perturbed objective functions in which the observed data and the prior mean values of uncertain model parameters have been perturbed with Gaussian noise. Rather than performing these minimizations in isolation, using large sets of simulations to evaluate the finite-difference approximations of the gradients used to optimize each perturbed realization, we use a concurrent implementation in which simulation results are shared among different minimization tasks whenever those results help a specific task converge to its global minimum. To improve the sharing of results, we relax the accuracy of the finite-difference approximations for the gradients by using more widely spaced simulation results. To avoid becoming trapped in local optima, a novel method to enhance the global-search capability of the DGN algorithm is developed and integrated seamlessly with the RML formulation. In this way, we can improve the quality of RML conditional realizations that sample the approximate posterior.

The proposed workflow is first validated on a toy problem and then applied to a real-field unconventional asset. Numerical results indicate that the new method is very efficient compared with traditional methods: hundreds of data-conditioned realizations can be generated in parallel within 20 to 40 iterations, and the computational cost (central-processing-unit usage) is reduced significantly compared with the traditional RML approach. The real-field case studies involve a history-matching study to generate history-matched realizations with the proposed method and an uncertainty quantification of production forecasting using those conditioned models. All conditioned models generate production forecasts that are consistent with real production data in both the history-matching period and the blind-test period. The new approach can therefore enhance the confidence level of the estimated-ultimate-recovery (EUR) assessment based on production forecasts generated from all conditional realizations, resulting in significant business impact.
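For reference, the perturbed objective that RML minimizes for each realization j typically takes the form below (the notation is assumed here, since the summary does not reproduce the paper's symbols): m is the vector of uncertain model parameters, g(m) the simulated data, and C_M and C_D the prior and data covariance matrices.

```latex
O_j(m) = \tfrac{1}{2}\,(m - m_{\mathrm{pr},j})^{\mathsf{T}} C_M^{-1} (m - m_{\mathrm{pr},j})
       + \tfrac{1}{2}\,\bigl(g(m) - d_{\mathrm{obs},j}\bigr)^{\mathsf{T}} C_D^{-1} \bigl(g(m) - d_{\mathrm{obs},j}\bigr),
\qquad
m_{\mathrm{pr},j} \sim \mathcal{N}(m_{\mathrm{pr}}, C_M), \quad
d_{\mathrm{obs},j} \sim \mathcal{N}(d_{\mathrm{obs}}, C_D)
```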


2021, Vol 12 (2), pp. 1-15
Author(s): Khadoudja Ghanem, Abdesslem Layeb

The backtracking search optimization algorithm is a recent stochastic global search algorithm for solving real-valued numerical optimization problems. In this paper, a binary version of the backtracking algorithm is proposed to deal with 0-1 optimization problems such as feature selection and knapsack problems. Feature selection is the process of selecting a subset of relevant features for use in model construction; irrelevant features can negatively impact model performance. The knapsack problem, on the other hand, is a well-known optimization problem used to assess discrete algorithms. The objective of this research is to evaluate the discrete version of the backtracking algorithm on the two mentioned problems and to compare the obtained results with those of other binary optimization algorithms, using four usual classifiers: logistic regression, decision tree, random forest, and support vector machine. An empirical study on biological microarray data and experiments on 0-1 knapsack problems show the effectiveness of the binary algorithm and its ability to achieve good-quality solutions for both problems.
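A hedged sketch of how a binary candidate produced by such an algorithm is commonly evaluated on a 0-1 knapsack instance, using greedy repair of infeasible solutions; the repair strategy and the toy instance are illustrative choices, since the abstract does not state how constraint violations are handled.

```python
import numpy as np

def evaluate_knapsack(bits, values, weights, capacity):
    """Return (fitness, repaired_bits): drop the least value-dense items until feasible."""
    bits = bits.copy()
    density_order = np.argsort(values / weights)       # least valuable per unit weight first
    for j in density_order:
        if bits @ weights <= capacity:
            break
        bits[j] = 0                                     # greedy repair
    return int(bits @ values), bits

# Example instance (numbers are illustrative).
values = np.array([10, 5, 15, 7, 6, 18, 3])
weights = np.array([2, 3, 5, 7, 1, 4, 1])
fitness, repaired = evaluate_knapsack(np.ones(7, dtype=int), values, weights, capacity=15)
print(fitness, repaired)
```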


2011, pp. 259-268
Author(s): M. V. Ramakrishna, S. Nepal, S. Sumanasekara, S. M. M. Tahaghoghi

Content-Based Image Retrieval (CBIR) systems that are able to “retrieve images of Clinton with Lewinsky” are unrealistic at present. However, this area has seen much research and development activity since IBM’s QBIC announcement in 1994. The CHITRA CBIR system, under development at RMIT and Monash Universities, addresses the need for a testbed system. Users can dynamically incorporate new features and similarity measures into the system, enabling it to act as a testbed for CBIR research. The system uses a 4-level data model we have developed and supports the definition and querying of high-level concepts such as MOUNTAIN and SUNSET. These advanced capabilities are supported by a powerful graphical query mechanism and a high-dimensional indexing structure based on linear mapping. In this paper, we describe the design of the system and our contributions to the state of the art, and provide some implementation details.


2015, Vol 25 (10), pp. 1550127
Author(s): Yong Wang, Peng Lei, Kwok-Wo Wong

Although chaotic maps possess properties useful for S-box design, such as high nonlinearity and pseudorandomness, the cryptographic performance of chaos-based substitution boxes (S-boxes) cannot reach a very high level, especially in nonlinearity. In this paper, two conditions for improving the nonlinearity of an S-box are first derived from the process of calculating nonlinearity. A novel method combining chaos and optimization operations is then proposed for constructing S-boxes with high nonlinearity. The method has three phases: in the first phase, the S-box is initialized by a chaotic map; in the second, its nonlinearity is enhanced by an optimization method; and in the final phase, some adjustments are made to avoid falling into local optima. Experimental results show that the S-boxes constructed by the proposed method have much higher nonlinearity than those based only on chaotic maps, confirming that our algorithm is effective in generating S-boxes with high cryptographic performance.
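The nonlinearity being optimized can be computed from the Walsh spectra of the S-box's component functions, NL(S) = 2^(n-1) - (1/2) max_{a, b != 0} |W_b(a)|; a short, hedged sketch for an 8x8 S-box follows, with a random permutation standing in for the chaos-initialized S-box of the first phase.

```python
import numpy as np

def walsh_hadamard(signs):
    """Iterative fast Walsh-Hadamard transform of a +-1 vector of length 2^n."""
    w = signs.astype(np.int64).copy()
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            a, b = w[i:i + h].copy(), w[i + h:i + 2 * h].copy()
            w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return w

def nonlinearity(sbox, n=8):
    """NL(S) = 2^(n-1) - (1/2) * max over nonzero output masks b of |WHT of (-1)^(b.S(x))|."""
    parity = np.array([bin(z).count("1") & 1 for z in range(2 ** n)])
    max_walsh = 0
    for b in range(1, 2 ** n):                                # every nonzero output mask
        signs = 1 - 2 * parity[np.bitwise_and(b, sbox)]       # (-1)^(b . S(x)) for all x
        max_walsh = max(max_walsh, int(np.abs(walsh_hadamard(signs)).max()))
    return 2 ** (n - 1) - max_walsh // 2

# Phase-1 stand-in: a random bijective 8x8 S-box instead of a chaos-initialized one.
rng = np.random.default_rng(1)
sbox = rng.permutation(256)
print(nonlinearity(sbox))    # typically well below the AES S-box's nonlinearity of 112
```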


Author(s): I Wayan Supriana

The knapsack problem is a problem we often encounter in everyday life: a person faces an optimization problem in selecting objects to place into a container that has limited space or capacity. The knapsack problem can be solved by various optimization algorithms, one of which is the genetic algorithm. Genetic algorithms solve problems by mimicking the theory of evolution of living creatures. The components of a genetic algorithm comprise a population consisting of a collection of individuals that are candidate solutions to the knapsack problem. The evolutionary process starts with selection, followed by crossover and mutation of each individual, in order to obtain a new population; it is repeated until the resulting solution meets an optimality criterion. The problem highlighted in this research is how to solve the knapsack problem by applying a genetic algorithm. The results obtained by testing the system that was built show that the genetic algorithm can optimize the placement of goods within the available container capacity, and that the optimization can be maximized with appropriate input parameters.
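A compact, hedged sketch of the GA loop described above (selection, crossover, and mutation repeated until a stopping criterion); the instance data, parameter values, and the zero-fitness penalty for infeasible individuals are illustrative choices rather than the paper's exact settings.

```python
import random

values   = [60, 100, 120, 80, 30]        # illustrative instance
weights  = [10, 20, 30, 25, 5]
capacity = 50

def fitness(ind):
    w = sum(wi for wi, bit in zip(weights, ind) if bit)
    v = sum(vi for vi, bit in zip(values, ind) if bit)
    return v if w <= capacity else 0     # infeasible individuals score zero

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))   # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.05):
    return [1 - b if random.random() < rate else b for b in ind]

pop = [[random.randint(0, 1) for _ in values] for _ in range(30)]
for _ in range(100):                     # fixed generation budget as the stopping criterion
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(len(pop))]
best = max(pop, key=fitness)
print(best, fitness(best))
```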

