A Case Study of Controlling Crossover in a Selection Hyper-heuristic Framework Using the Multidimensional Knapsack Problem

2016
Vol 24 (1)
pp. 113-141
Author(s):  
John H. Drake ◽  
Ender Özcan ◽  
Edmund K. Burke

Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion. However, little work has been undertaken to assess how best to utilise it. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level where no problem-specific information is required. Second, it is controlled at the problem domain level where problem-specific information is used to produce good-quality solutions to use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that allowing crossover to be managed at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain.
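As a concrete illustration of the hyper-heuristic-level variant, the following Python sketch shows a single-point selection hyper-heuristic that maintains a bounded list of good solutions as crossover partners. The heuristic sets, the `evaluate` objective, the crossover probability, and the acceptance rule are all illustrative placeholders, not the configurations studied in the paper.

    import random

    def selection_hyper_heuristic(initial, low_level_heuristics, crossover_ops,
                                  evaluate, iterations=1000, memory_size=5):
        """Single-point search; maximises `evaluate`."""
        current = best = initial
        # Crossover partner list kept at the hyper-heuristic level: no
        # problem-specific information, just previously accepted good solutions.
        memory = [initial]
        for _ in range(iterations):
            if crossover_ops and random.random() < 0.2:
                heuristic = random.choice(crossover_ops)
                partner = random.choice(memory)   # second parent from the list
                candidate = heuristic(current, partner)
            else:
                heuristic = random.choice(low_level_heuristics)
                candidate = heuristic(current)
            if evaluate(candidate) >= evaluate(current):  # naive acceptance rule
                current = candidate
            if evaluate(current) > evaluate(best):
                best = current
                memory = (memory + [best])[-memory_size:]  # bounded partner list
        return best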

Kybernetes
2014
Vol 43 (9/10)
pp. 1500-1511
Author(s):  
John H. Drake ◽ 
Matthew Hyde ◽ 
Khaled Ibrahim ◽ 
Ender Özcan

Purpose – Hyper-heuristics are a class of high-level search techniques which operate on a search space of heuristics rather than directly on a search space of solutions. The purpose of this paper is to investigate the suitability of using genetic programming as a hyper-heuristic methodology to generate constructive heuristics to solve the multidimensional 0-1 knapsack problem.

Design/methodology/approach – Early hyper-heuristics focused on selecting and applying a low-level heuristic at each stage of a search. Recent trends in hyper-heuristic research have led to a number of approaches being developed to automatically generate new heuristics from a set of heuristic components. A population of heuristics to rank knapsack items is trained on a subset of test problems and then applied to unseen instances.

Findings – The results over a set of standard benchmarks show that genetic programming can be used to generate constructive heuristics which yield human-competitive results.

Originality/value – In this work the authors show that genetic programming is suitable as a method to generate reusable constructive heuristics for the multidimensional 0-1 knapsack problem. This is classified as a hyper-heuristic approach as it operates on a search space of heuristics rather than a search space of solutions. To our knowledge, this is the first time in the literature that a GP hyper-heuristic has been used to solve the multidimensional 0-1 knapsack problem. The results suggest that using GP to evolve ranking mechanisms merits further research effort.
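To make the approach concrete, the sketch below shows how an evolved ranking heuristic would be deployed greedily to build an MKP solution. The `score` function stands in for a GP individual: it is a hand-written example over plausible primitives, not a heuristic evolved in the paper.

    def construct_solution(profits, weights, capacities, score):
        """Greedy constructive procedure; `weights[j]` is item j's m-vector."""
        n, m = len(profits), len(capacities)
        remaining = list(capacities)
        chosen = [0] * n
        # Rank all items once by the (evolved) scoring function, best first.
        order = sorted(range(n), reverse=True,
                       key=lambda j: score(profits[j], weights[j], remaining))
        for j in order:
            if all(weights[j][i] <= remaining[i] for i in range(m)):
                chosen[j] = 1
                for i in range(m):
                    remaining[i] -= weights[j][i]
        return chosen

    # Hand-written stand-in for a GP individual over primitives such as
    # {profit, weight, remaining capacity, +, -, *, /}:
    def score(profit, item_weights, remaining):
        tightness = sum(w / max(r, 1) for w, r in zip(item_weights, remaining))
        return profit / (1e-9 + tightness)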


Author(s):  
Yun Lu ◽  
Bryan McNally ◽  
Emre Shively-Ertas ◽  
Francis J. Vasko

The 0-1 Multidimensional Knapsack Problem (MKP) is an NP-hard problem that has important applications in business and industry. Approximate solution approaches for the MKP in the literature typically provide no guarantee on how close generated solutions are to the optimum. This article demonstrates how general-purpose integer programming software (Gurobi) is iteratively used to generate solutions for the 270 MKP test problems in Beasley's OR-Library such that, on average, the solutions are guaranteed to be within 0.094% of the optimum and are obtained in 88 seconds on a standard PC. This methodology, called the simple sequential increasing tolerance (SSIT) matheuristic, uses a sequence of increasing tolerances in Gurobi to generate a solution that is guaranteed to be close to the optimum in a short time. This solution strategy generates bounded solutions in a timely manner without requiring the coding of a problem-specific algorithm. Although only guaranteed to be within 0.094% of the optimum, the SSIT solutions, when compared to the known optima, deviated from them by only 0.006% on average, far better than any published results for these 270 MKP test instances.
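A minimal sketch of the SSIT idea using gurobipy follows: the same model is solved under a sequence of increasing MIP-gap tolerances, each with its own time limit, stopping as soon as some tolerance is proven. The tolerance/time schedule shown is illustrative, not the schedule used in the article.

    import gurobipy as gp
    from gurobipy import GRB

    def ssit(model_file, schedule=((0.0001, 30), (0.0005, 30), (0.002, 60))):
        model = gp.read(model_file)           # MKP instance in MPS/LP format
        for gap, seconds in schedule:         # increasing tolerances
            model.Params.MIPGap = gap         # acceptable relative optimality gap
            model.Params.TimeLimit = seconds  # time budget for this stage
            model.optimize()                  # incumbent carries over to the next stage
            if model.Status == GRB.OPTIMAL:   # gap proven within this tolerance
                break
        return model.ObjVal, model.MIPGap     # objective value and proven gap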


2019
Vol ahead-of-print (ahead-of-print)
Author(s):  
Soukaina Laabadi ◽  
Mohamed Naimi ◽  
Hassan El Amri ◽  
Boujemâa Achchab

Purpose – The purpose of this paper is to provide an improved genetic algorithm to solve the 0/1 multidimensional knapsack problem (0/1 MKP), by proposing new selection and crossover operators that cooperate to explore the search space.

Design/methodology/approach – The authors first present a new sexual selection strategy that significantly improves the one proposed by Varnamkhasti and Lee (2012), while working in phenotype space. They then propose two variants of the two-stage recombination operator of Aghezzaf and Naimi (2009), adapting the latter to the context of the 0/1 MKP. The authors evaluate the efficiency of both proposed operators on a large set of 0/1 MKP benchmark instances. The results obtained are compared against those of conventional selection and crossover operators, in terms of solution quality and computing time.

Findings – The paper shows that the proposed selection respects the two major factors of any metaheuristic: exploration and exploitation. Furthermore, the first variant of the two-stage recombination operator pushes the search towards exploitation, while the second variant increases genetic diversity. The paper then demonstrates that the improved genetic algorithm combining the two proposed operators is a competitive method for solving the 0/1 MKP.

Practical implications – Although only standard 0/1 MKP instances were tested in the empirical experiments, the improved genetic algorithm can be used as a powerful tool to solve many real-world applications of the 0/1 MKP, as the latter models several industrial and investment problems. Moreover, the proposed selection and crossover operators can be incorporated into other bio-inspired algorithms to improve their performance, and can be adapted to solve other binary combinatorial optimization problems.

Originality/value – This research study provides an effective solution to a well-known NP-hard combinatorial optimization problem, the 0/1 MKP, by tackling it with an improved genetic algorithm. The proposed evolutionary mechanism is based on two new genetic operators. The first is a new and substantially different variant of the so-called sexual selection, which has rarely been addressed in the literature. The second is an adaptation of the two-stage recombination operator to the 0/1 MKP context. This adaptation yields two variants of the two-stage recombination operator that aim to improve the quality of encountered solutions, while taking advantage of the sexual selection criteria to prevent premature convergence, a classical pitfall of genetic algorithms.
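For orientation, the sketch below shows a GA skeleton for the 0/1 MKP with the places where the proposed operators would plug in. `select_pair` and `recombine` are placeholders for the sexual selection and two-stage recombination operators, which are not reproduced here, and the drop-items repair is a common simple scheme that may differ from the paper's.

    import random

    def fitness(x, profits):
        return sum(p for p, bit in zip(profits, x) if bit)

    def repair(x, weights, capacities):
        """Drop items until all m capacity constraints hold;
        `weights[i]` is constraint i's n-vector."""
        used = [sum(w[j] for j, bit in enumerate(x) if bit) for w in weights]
        for j in range(len(x)):
            if all(u <= c for u, c in zip(used, capacities)):
                break
            if x[j]:
                x[j] = 0
                used = [u - w[j] for u, w in zip(used, weights)]
        return x

    def genetic_algorithm(profits, weights, capacities, select_pair, recombine,
                          pop_size=100, generations=500, p_mut=0.01):
        n = len(profits)
        pop = [repair([random.randint(0, 1) for _ in range(n)], weights, capacities)
               for _ in range(pop_size)]
        for _ in range(generations):
            nxt = []
            while len(nxt) < pop_size:
                mum, dad = select_pair(pop)     # e.g. the sexual selection
                child = recombine(mum, dad)     # e.g. two-stage recombination
                child = [b ^ (random.random() < p_mut) for b in child]  # bit-flip mutation
                nxt.append(repair(child, weights, capacities))
            pop = nxt
        return max(pop, key=lambda x: fitness(x, profits))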


2021
Author(s):  
Bryan Loh

Computational design tools enable designers to construct and manipulate representations of design artifacts to arrive at a solution. However, the constraints of deterministic programming impose a high cost of tedium and inflexibility when exploring design alternatives through these models: they require designers to express high-level design intent through sequences of low-level operations. Generative neural networks are able to construct generalised models of images which capture the principles implicit within them. The latent spaces of these models can be sampled to create novel images and to perform semantic operations. This presents the opportunity for more meaningful and efficient design experimentation, where designers express design intent through principles inferred by the model instead of sequences of low-level operations.

A general-purpose software prototype has been devised and evaluated to investigate the affordances of such a tool. This software, termed a SpaceSheet, takes the form of a spreadsheet interface and enables users to explore a latent space of fonts. User testing and observation of task-based evaluations revealed that the tool enabled a novel top-down approach to design experimentation. This mode of working required a new set of skills for users to derive meaning and navigate within the model effectively. Despite this, a rudimentary understanding was observed to be sufficient to enable designers and non-designers alike to explore design possibilities more effectively.
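The semantic operations described above reduce, in the simplest case, to arithmetic on latent vectors. The Python sketch below is a minimal illustration; the trained encoder/decoder it presupposes, and the function names, are hypothetical and not part of the thesis software.

    import numpy as np

    def interpolate(z_a, z_b, t):
        """Linear interpolation between two latent vectors, 0 <= t <= 1."""
        return (1 - t) * z_a + t * z_b

    def apply_attribute(z, z_with, z_without, strength=1.0):
        """Semantic operation: move z along an attribute direction, e.g. the
        difference between a bold and a regular font's latent vectors."""
        return z + strength * (z_with - z_without)

    # In a SpaceSheet-style interface each cell holds a latent vector; a formula
    # such as interpolate(A1, B1, 0.5) is decoded back into a glyph image by the
    # generative model for display.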


Author(s):  
Vipin Bondre ◽  
Amoli Belsare

Automated detection and segmentation of cell nuclei is an essential step in breast cancer histopathology, improving accuracy, speed, the level of automation, and adaptability to new applications. The goal of this paper is to develop efficient and accurate algorithms for detecting and segmenting cell nuclei in 2-D histological images. We demonstrate the utility of our nuclear segmentation algorithm in the accurate extraction of nuclear features for (a) automated grading of breast cancer and (b) distinguishing between cancerous and benign breast histology specimens. To address these tasks, the scheme integrates image information across three different scales: (1) low-level information based on pixel values, (2) high-level information based on relationships between pixels for object detection, and (3) domain-specific information based on relationships between histological structures. Low-level information is utilized by a Bayesian classifier to generate the likelihood that each pixel belongs to an object of interest. High-level information is extracted in two ways: (i) by a level-set algorithm, where a contour is evolved in the likelihood scenes generated by the Bayesian classifier to identify object boundaries, and (ii) by a template-matching algorithm, where shape models are used to identify glands and nuclei from the low-level likelihood scenes. Structural constraints are imposed via domain-specific knowledge to verify whether the detected objects do indeed belong to structures of interest. The efficiency of our segmentation algorithm is evaluated by comparing breast cancer grading and benign vs. cancer discrimination accuracies with the corresponding accuracies obtained via manual detection and segmentation of glands and nuclei.
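The low-level stage described above can be illustrated with a naive Bayesian pixel classifier that turns an intensity image into a likelihood scene. The sketch assumes class-conditional intensity histograms estimated from labelled training pixels; the paper's actual classifier and features may differ.

    import numpy as np

    def likelihood_scene(image, hist_nucleus, hist_background, prior_nucleus=0.1):
        """Posterior probability that each pixel belongs to a nucleus.

        image: 2-D uint8 array; hist_*: length-256 class-conditional PMFs
        estimated from labelled training pixels.
        """
        p_nuc = hist_nucleus[image]      # P(intensity | nucleus), per pixel
        p_bg = hist_background[image]    # P(intensity | background), per pixel
        evidence = prior_nucleus * p_nuc + (1 - prior_nucleus) * p_bg
        return prior_nucleus * p_nuc / np.maximum(evidence, 1e-12)  # Bayes' rule

    # The resulting likelihood scene is the input both to the level-set contour
    # evolution and to the template-matching stage.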


2019
Vol 1 (1)
pp. 31-39
Author(s):  
Ilham Safitra Damanik ◽  
Sundari Retno Andani ◽  
Dedi Sehendro

Milk is an important part of the nutritional intake of both children and adults. Indonesia has many producers of fresh milk, but production is not sufficient to meet national demand. Data mining is a branch of computer science that is widely used in research; one of its techniques is clustering, a method that groups data and that performs better when applied to large amounts of data. The data used are provincial data for Indonesia from 2000 to 2017, obtained from the Central Statistics Agency. The results of this study cluster the provinces into two milk-producing groups: high-producing and low-producing regions. Of the 27 provinces with fresh-milk production data, two fall into the high-production cluster, namely West Java and East Java, while the remaining 25, together with 7 provinces that could not be included in the K-Means clustering calculation, fall into the low-production cluster.
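For reference, the clustering step described above can be reproduced in outline with scikit-learn's KMeans and k = 2. The file name and column layout below are assumptions, as the paper does not specify its data format.

    import pandas as pd
    from sklearn.cluster import KMeans

    # One row per province; columns are annual fresh-milk production, 2000-2017.
    data = pd.read_csv("milk_production_by_province.csv", index_col="province")

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data.values)
    data["cluster"] = kmeans.labels_

    # The cluster whose centroid has the larger mean production is the
    # high-producer group.
    high = kmeans.cluster_centers_.mean(axis=1).argmax()
    print("High producers:", list(data.index[data["cluster"] == high]))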


Author(s):  
Margarita Khomyakova

The author analyzes definitions of the concept of determinants of crime given by various scholars and offers her own definition. In this study, the determinants of crime are understood as the set of its causes, the circumstances that contribute to its commission, and the dynamics of crime. It is noted that the Russian legislator, in Article 244 of the Criminal Code, defines the object of this criminal assault as public morality. Despite the use of evaluative concepts both in the disposition of this norm and in determining the specific object of the crime, the position of criminologists is unequivocal: crimes of this kind are immoral and are in irreconcilable conflict with generally accepted moral and legal norms. The paper considers some views on making value judgments that could hardly apply to legal norms. According to the author, the reasons for abuse of the bodies of the dead include the economic problems of the offender and a low level of culture and legal awareness; this list is not exhaustive. The main circumstances that contribute to abuse of the bodies of the dead and their burial places are the following: low income and unemployment, a low level of criminological prevention, and poor maintenance and protection of medical institutions and cemeteries due to the underperformance of state and municipal bodies. This list of circumstances is also open-ended. Owing to several factors, including a high level of latency, it is not possible to reflect the dynamics of such crimes objectively. At the same time, identifying the determinants of abuse of the bodies of the dead will help reduce the number of such crimes.


2021
pp. 002224372199837
Author(s):  
Walter Herzog ◽  
Johannes D. Hattula ◽  
Darren W. Dahl

This research explores how marketing managers can avoid the so-called false consensus effect—the egocentric tendency to project personal preferences onto consumers. Two pilot studies were conducted to provide evidence for the managerial importance of this research question and to explore how marketing managers attempt to avoid false consensus effects in practice. The results suggest that the debiasing tactic most frequently used by marketers is to suppress their personal preferences when predicting consumer preferences. Four subsequent studies show that, ironically, this debiasing tactic can backfire and increase managers’ susceptibility to the false consensus effect. Specifically, the results suggest that these backfire effects are most likely to occur for managers with a low level of preference certainty. In contrast, the results imply that preference suppression does not backfire but instead decreases false consensus effects for managers with a high level of preference certainty. Finally, the studies explore the mechanism behind these results and show how managers can ultimately avoid false consensus effects—regardless of their level of preference certainty and without risking backfire effects.

