problem instance
Recently Published Documents

Total documents: 126 (five years: 56)
H-index: 12 (five years: 3)

2021 ◽ Vol 8 (4) ◽ pp. 1-19
Author(s): Xuejiao Kang, David F. Gleich, Ahmed Sameh, Ananth Grama

As parallel and distributed systems scale, fault tolerance becomes an increasingly important problem, particularly on systems with limited I/O capacity and bandwidth. Erasure-coded computations address this problem by augmenting a given problem instance with redundant data and then solving the augmented problem in a fault-oblivious manner in a faulty parallel environment. In the event of faults, a computationally inexpensive procedure is used to compute the true solution from a potentially fault-prone solution. These techniques are significantly more efficient than conventional solutions to the fault tolerance problem. In this article, we show how to minimize, to optimality, the overhead associated with our problem augmentation techniques for linear system solvers. Specifically, we present a technique that adaptively augments the problem only when faults are detected. At any point in the execution, we only solve a system whose size is identical to that of the original input system. This has several advantages in terms of maintaining the size and conditioning of the system, and in adding only the minimal amount of computation needed to tolerate the observed faults. We present, in detail, the augmentation process, the parallel formulation, and an evaluation of the performance of our technique. Specifically, we show that the proposed adaptive fault tolerance mechanism has minimal FLOP-count overhead with respect to the original solver executing in a non-faulty environment, has good convergence properties, and yields excellent parallel performance. We also demonstrate that our approach significantly outperforms an optimized application-level checkpointing scheme that checkpoints only the needed data structures.
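To make the encoding idea concrete, here is a minimal sketch (our illustration, not the authors' adaptive scheme; the random coding matrix C is an assumed stand-in) that augments a linear system with redundant rows so the solution survives the loss of any single equation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)     # a well-conditioned system
x_true = rng.standard_normal(n)
b = A @ x_true

# Encoding: append r redundant rows formed as random combinations of A's rows.
r = 2
C = rng.standard_normal((r, n))                     # assumed coding coefficients
A_aug = np.vstack([A, C @ A])
b_aug = np.concatenate([b, C @ b])

# Fault: one arbitrary equation of the augmented system is lost.
surviving = [i for i in range(n + r) if i != 3]

# Recovery: any n independent surviving equations still determine x.
x_rec = np.linalg.lstsq(A_aug[surviving], b_aug[surviving], rcond=None)[0]
print(np.allclose(x_rec, x_true))                   # True
```

The paper's contribution goes further: redundancy is added adaptively, only after faults are observed, so the solved system never grows beyond the original size.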


2021 ◽ Vol 68 (3) ◽ pp. 16-40
Author(s): Grzegorz Koloch, Michał Lewandowski, Marcin Zientara, Grzegorz Grodecki, Piotr Matuszak, ...

We optimise a postal delivery problem with time and capacity constraints imposed on the vehicles and nodes of the logistic network. Time constraints relate to the duration of routes, whereas capacity constraints concern the technical characteristics of vehicles and postal operation outlets. We consider a method applicable to a brownfield scenario, in which the capacities of outlets can be relaxed and prospective hubs identified. As a solution we apply a genetic algorithm and test its properties both on small case studies and on a simulated problem instance of larger, real-world-comparable size. We show that the genetic operators we employ can switch between solutions based on direct origin-to-destination routes and solutions based on transfer connections, depending on which is more beneficial in a given problem instance. Moreover, within a single problem instance the algorithm correctly identifies the cases in which volumes should be shipped directly and those in which it is optimal to use transfer connections, whenever the instance requires such a selection for optimality. The algorithm is thus suitable for determining hub and satellite locations. All considerations presented in this paper are motivated by real-life problem instances experienced by the Polish Post, the largest postal service provider in Poland, in its daily plans for delivering postal packages, letters and pallets.
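As a rough illustration of the direct-versus-transfer decision such genetic operators make, the toy sketch below (all costs, capacities, and the binary encoding are our assumptions, not the paper's model) evolves a per-shipment choice between a direct route and a hub transfer under a hub capacity constraint:

```python
import random

random.seed(1)
n_shipments = 20
direct_cost = [random.uniform(5, 15) for _ in range(n_shipments)]
via_hub_cost = [random.uniform(4, 12) for _ in range(n_shipments)]
hub_capacity = 8            # shipments the hub can transfer per plan
PENALTY = 100.0             # per-shipment penalty for exceeding hub capacity

def cost(chrom):
    # chrom[i] == 1 routes shipment i via the hub, 0 ships it directly.
    base = sum(via_hub_cost[i] if g else direct_cost[i] for i, g in enumerate(chrom))
    overload = max(0, sum(chrom) - hub_capacity)
    return base + PENALTY * overload

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]           # uniform crossover

def mutate(chrom, p=0.05):
    return [1 - g if random.random() < p else g for g in chrom]  # flip direct/hub

pop = [[random.randint(0, 1) for _ in range(n_shipments)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=cost)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]
best = min(pop, key=cost)
print(sum(best), round(cost(best), 2))    # hub transfers used, total cost
```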


2021 ◽ Vol 7 ◽ pp. e832
Author(s): Barbara Pes, Giuseppina Lai

High dimensionality and class imbalance have long been recognized as important issues in machine learning. A vast amount of literature has investigated suitable approaches to the multiple challenges that arise when dealing with high-dimensional feature spaces (where each problem instance is described by a large number of features). Likewise, several learning strategies have been devised to cope with the adverse effects of imbalanced class distributions, which may severely impact the generalization ability of the induced models. Nevertheless, although both issues have been studied for several years, they have mostly been addressed separately, and their combined effects are yet to be fully understood. Little research has so far investigated which approaches are best suited to datasets that are, at the same time, high-dimensional and class-imbalanced. To make a contribution in this direction, our work presents a comparative study of learning strategies that leverage both feature selection, to cope with high dimensionality, and cost-sensitive learning methods, to cope with class imbalance. Specifically, different ways of incorporating misclassification costs into the learning process have been explored, along with different feature selection heuristics, both univariate and multivariate, to comparatively evaluate their effectiveness on imbalanced data. The experiments have been conducted on three challenging benchmarks from the genomic domain, yielding interesting insight into the beneficial impact of combining feature selection and cost-sensitive learning, especially in the presence of highly skewed data distributions.
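A minimal sketch of the kind of pipeline compared in such a study, assuming scikit-learn, a univariate filter, and misclassification costs expressed as class weights (the dataset and the 1:9 cost ratio are synthetic stand-ins):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# High-dimensional, class-imbalanced synthetic data (90% / 10% split).
X, y = make_classification(n_samples=200, n_features=2000, n_informative=20,
                           weights=[0.9, 0.1], random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=50)),              # univariate filter
    ("clf", LogisticRegression(class_weight={0: 1, 1: 9},  # misclassification costs
                               max_iter=1000)),
])
print(cross_val_score(pipe, X, y, scoring="balanced_accuracy", cv=5).mean())
```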


2021 ◽ Vol 2 (4)
Author(s): Magdalena A. K. Lang, Catherine Cleophas, Jan Fabian Ehmke

Attended home delivery requires offering narrow delivery time slots for online booking. Given a fixed fleet of delivery vehicles and uncertainty about the value of potential future customers, retailers have to decide about the offered delivery time slots for each individual order. To this end, dynamic slotting techniques compare the reward from accepting an order to the opportunity cost of not reserving the required delivery capacity for later orders. However, exactly computing this opportunity cost means solving a complex vehicle routing and scheduling problem. In this paper, we propose and evaluate several dynamic slotting approaches that rely on an anticipatory, simulation-based preparation phase ahead of the order horizon to approximate opportunity cost. Our approaches differ in their reliance on outcomes from the preparation phase (anticipation) versus decision making on request arrival (flexibility). For the preparation phase, we create anticipatory schedules by solving the Team Orienteering Problem with Multiple Time Windows. From stochastic demand streams and problem instance characteristics, we apply learning models to flexibly estimate the effort of accepting and delivering an order request. In an extensive computational study, we explore the behavior of the proposed solution approaches. Simulating scenarios of different sizes shows that all approaches require only negligible run times within the order horizon. Finally, an empirical scenario demonstrates the concept of estimating demand model parameters from sales observations and highlights the applicability of the proposed approaches in practice.
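The accept/reject logic can be sketched under heavy simplification (the demand model, slot set, and order values below are our assumptions, not the paper's): opportunity costs are estimated by simulation in a preparation phase, and at request time a slot is offered only if the order's reward exceeds that estimate:

```python
import random

random.seed(2)
SLOTS = range(5)

def simulated_opportunity_cost(slot, n_sims=1000):
    # Preparation phase: over simulated demand streams, estimate the value of
    # the future order displaced when one capacity unit of this slot is used.
    displaced = []
    for _ in range(n_sims):
        future = [random.uniform(5, 25) for _ in range(random.randint(0, 2 + slot))]
        displaced.append(max(future, default=0.0))
    return sum(displaced) / n_sims

opportunity_cost = {s: simulated_opportunity_cost(s) for s in SLOTS}

def offered_slots(order_reward, remaining_capacity):
    # Order horizon: offer a slot only if the reward beats its opportunity cost.
    return [s for s in SLOTS
            if remaining_capacity[s] > 0 and order_reward > opportunity_cost[s]]

capacity = {s: 3 for s in SLOTS}
print(offered_slots(order_reward=18.0, remaining_capacity=capacity))
```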


2021
Author(s): Mehmet Anıl Akbay, Christian Blum

Construct, Merge, Solve & Adapt (CMSA) is a recently developed algorithm for solving combinatorial optimization problems. It combines heuristic elements, such as the probabilistic generation of solutions, with an exact solver that is iteratively applied to sub-instances of the tackled problem instance. In this paper, we present the application of CMSA to an NP-hard problem from the family of dominating set problems in undirected graphs: the minimum positive influence dominating set problem, which has applications in social networks. The obtained results show that CMSA outperforms the current state-of-the-art metaheuristics from the literature. Moreover, on instances of small and medium size, CMSA finds many of the optimal solutions provided by CPLEX, while clearly outperforming CPLEX on the four largest, and hence most complicated, problem instances.
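The CMSA loop itself can be sketched generically; below, a toy minimum-cover problem stands in for the dominating set variant, and a brute-force search stands in for the exact ILP solver (the aging threshold and construction heuristic are our assumptions):

```python
import itertools, random

random.seed(3)
universe = set(range(12))
sets = {i: set(random.sample(range(12), 4)) for i in range(30)}   # toy instance

def construct():
    # Probabilistic greedy construction: pick among the best few candidates.
    sol, covered = set(), set()
    while covered < universe:
        ranked = sorted(sets, key=lambda s: -len(sets[s] - covered))
        cands = [s for s in ranked[:3] if sets[s] - covered]
        pick = random.choice(cands)
        sol.add(pick)
        covered |= sets[pick]
    return sol

def solve_exact(components):
    # Stand-in for the ILP solver: smallest covering subset of the sub-instance.
    for r in range(1, len(components) + 1):
        for combo in itertools.combinations(components, r):
            if set().union(*(sets[s] for s in combo)) >= universe:
                return set(combo)

age, best = {}, None
for _ in range(10):
    for comp in construct():                  # Construct + Merge into sub-instance
        age.setdefault(comp, 0)
    incumbent = solve_exact(sorted(age))      # Solve the sub-instance exactly
    if best is None or len(incumbent) < len(best):
        best = incumbent
    for comp in list(age):                    # Adapt: age and drop unused components
        age[comp] = 0 if comp in incumbent else age[comp] + 1
        if age[comp] > 2:
            del age[comp]
print(sorted(best), len(best))
```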


2021 ◽ Vol 8
Author(s): Radu Mariescu-Istodor, Pasi Fränti

The scalability of traveling salesperson problem (TSP) algorithms for large-scale problem instances has long been an open problem. We arranged a so-called Santa Claus challenge and invited people to submit algorithms to solve a TSP problem instance with more than one million nodes given only one hour of computing time. In this article, we analyze the results and show which design choices are decisive in providing the best solution to the problem under the given constraints. There were three valid submissions, all based on local search, including k-opt up to k = 5. The most important design choice turned out to be the localization of the operator using a neighborhood graph. The divide-and-merge strategy suffers a 2% loss in quality; however, via parallelization the result can be obtained in less than two minutes, which can make a key difference in real-life applications.
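The decisive design choice, localizing 2-opt to a k-nearest-neighbour graph instead of scanning all O(n²) exchanges, can be sketched as follows (the instance size and k are illustrative assumptions, far smaller than the challenge instance):

```python
import math, random

random.seed(4)
n, k = 300, 8
pts = [(random.random(), random.random()) for _ in range(n)]

def dist(a, b):
    return math.dist(pts[a], pts[b])

# Neighborhood graph: candidate moves come only from the k nearest neighbours.
nbrs = [sorted(range(n), key=lambda j: dist(i, j))[1:k + 1] for i in range(n)]

tour = list(range(n))
random.shuffle(tour)
pos = {c: idx for idx, c in enumerate(tour)}

def tour_length(t):
    return sum(dist(t[i], t[(i + 1) % n]) for i in range(n))

improved = True
while improved:
    improved = False
    for a in range(n):
        i = pos[a]
        for b in nbrs[a]:
            j = pos[b]
            if abs(i - j) < 2 or abs(i - j) == n - 1:   # adjacent in the tour
                continue
            i2, j2 = (i, j) if i < j else (j, i)
            removed = dist(tour[i2], tour[i2 + 1]) + dist(tour[j2], tour[(j2 + 1) % n])
            added = dist(tour[i2], tour[j2]) + dist(tour[i2 + 1], tour[(j2 + 1) % n])
            if added < removed - 1e-12:                 # improving 2-opt move
                tour[i2 + 1:j2 + 1] = reversed(tour[i2 + 1:j2 + 1])
                pos = {c: idx for idx, c in enumerate(tour)}
                i = pos[a]
                improved = True
print(round(tour_length(tour), 3))
```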


Author(s): Yu Du, Gary Kochenberger, Fred Glover, Haibo Wang, Mark Lewis, ...

Finding good solutions to clique partitioning problems remains a computational challenge. With rare exceptions, finding optimal solutions for all but small instances is not practically possible. However, choosing the most appropriate modeling structure can have a huge impact on what an exact solver can deliver within a reasonable amount of run time. Commercial solvers have improved tremendously in recent years, and the combination of the right solver and the right model can significantly increase our ability to compute acceptable solutions to modest-sized problems with solvers such as CPLEX, GUROBI and XPRESS. In this paper, we explore and compare the use of three commercial solvers on modest-sized test problems for clique partitioning. For each problem instance, a conventional linear model from the literature is compared against a relatively new quadratic model. Extensive computational experience indicates that the quadratic model outperforms the classic linear model as problem size grows.
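The quadratic (group-assignment) model can be illustrated on a toy instance; here brute force stands in for CPLEX/GUROBI/XPRESS, and the weights and group count are our assumptions:

```python
import itertools

# Edge weights of a small complete graph; negative weights discourage grouping.
w = {(0, 1): 4, (0, 2): -2, (0, 3): 1, (1, 2): 3, (1, 3): -1, (2, 3): 5}
n, k_max = 4, 3

def value(assign):
    # Quadratic objective: sum the weights of pairs placed in the same group
    # (assign[i] plays the role of the binary assignment variables x[i][k]).
    return sum(wij for (i, j), wij in w.items() if assign[i] == assign[j])

best = max(itertools.product(range(k_max), repeat=n), key=value)
print(best, value(best))
```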


Author(s): Alice Tarzariol, Martin Gebser, Konstantin Schekotihin

Efficient omission of symmetric solution candidates is essential for combinatorial problem solving. Most of the existing approaches are instance-specific and focus on the automatic computation of Symmetry Breaking Constraints (SBCs) for each given problem instance. However, the application of such approaches to large-scale instances or advanced problem encodings might be problematic. Moreover, the computed SBCs are propositional and, therefore, can neither be meaningfully interpreted nor transferred to other instances. To overcome these limitations, we introduce a new model-oriented approach for Answer Set Programming that lifts the SBCs of small problem instances into a set of interpretable first-order constraints using the Inductive Logic Programming paradigm. Experiments demonstrate the ability of our framework to learn general constraints from instance-specific SBCs for a collection of combinatorial problems. The obtained results indicate that our approach significantly outperforms a state-of-the-art instance-specific method as well as the direct application of a solver.
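A generic illustration of what a symmetry breaking constraint buys, independent of the authors' ASP/ILP machinery: in graph colouring, colour classes are interchangeable, and a lex-minimality condition, the effect a lifted first-order SBC achieves declaratively, prunes the symmetric duplicates:

```python
import itertools

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle; colours are symmetric
n, k = 4, 3

def proper(col):
    return all(col[u] != col[v] for u, v in edges)

def lex_minimal(col):
    # SBC: first occurrences of colours must appear in the order 0, 1, 2, ...
    first_seen = []
    for c in col:
        if c not in first_seen:
            first_seen.append(c)
    return first_seen == sorted(first_seen)

solutions = [c for c in itertools.product(range(k), repeat=n) if proper(c)]
canonical = [c for c in solutions if lex_minimal(c)]
print(len(solutions), len(canonical))      # symmetric duplicates are pruned
```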


2021 ◽ pp. 1-25
Author(s): Tobias Glasmachers, Oswin Krause

The class of algorithms called Hessian Estimation Evolution Strategies (HE-ESs) update the covariance matrix of their sampling distribution by directly estimating the curvature of the objective function. The approach is practically efficient, as attested by respectable performance on the BBOB testbed, even on rather irregular functions. In this paper we formally prove two strong guarantees for the (1+4)-HE-ES, a minimal elitist member of the family: stability of the covariance matrix update and, as a consequence, linear convergence on all convex quadratic problems at a rate that is independent of the problem instance.
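In hedged notation (ours, not necessarily the paper's), the second guarantee can be stated as follows for a convex quadratic objective:

```latex
% Assumed formalization: f(x) = \tfrac12 (x - x^*)^\top H (x - x^*), H positive definite.
% Linear convergence at an instance-independent rate means
\exists\, C > 0,\ \rho \in (0,1):\quad
\mathbb{E}\bigl[f(x_t) - f(x^*)\bigr] \;\le\; C\,\rho^{\,t}
\quad \text{for all } t,
\qquad \text{where } \rho \text{ does not depend on } H.
```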


Algorithms ◽ 2021 ◽ Vol 14 (6) ◽ pp. 187
Author(s): Aaron Barbosa, Elijah Pelofske, Georg Hahn, Hristo N. Djidjev

Quantum annealers, such as the devices built by D-Wave Systems, Inc., offer a way to compute solutions of NP-hard problems that can be expressed in Ising or quadratic unconstrained binary optimization (QUBO) form. Although such solutions are typically of very high quality, problem instances are usually not solved to optimality due to imperfections of the current generation of quantum annealers. In this contribution, we aim to understand some of the factors contributing to the hardness of a problem instance and to use machine learning models to predict the accuracy of the D-Wave 2000Q annealer for solving specific problems. We focus on the maximum clique problem, a classic NP-hard problem with important applications in network analysis, bioinformatics, and computational chemistry. By training a machine learning classification model on basic problem characteristics, such as the number of edges in the graph, and on annealing parameters, such as the D-Wave's chain strength, we are able to rank features by their contribution to solution hardness, and we present a simple decision tree that predicts whether a problem will be solvable to optimality with the D-Wave 2000Q. We extend these results by training a machine learning regression model that predicts the clique size found by the D-Wave.
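The classification step can be sketched with scikit-learn; the features below mirror those named in the abstract, but the data and the labeling rule are synthetic stand-ins for the D-Wave 2000Q measurements:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n = 500
edges = rng.integers(10, 500, n)           # number of edges in the graph
chain_strength = rng.uniform(0.5, 3.0, n)  # annealing parameter
density = rng.uniform(0.05, 0.9, n)
X = np.column_stack([edges, chain_strength, density])
# Synthetic label: small, adequately chained instances solve to optimality.
y = ((edges < 200) & (chain_strength > 1.0)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(Xtr, ytr)
print(tree.score(Xte, yte))                # held-out accuracy
```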

