Embarrassingly Parallel Search in Constraint Programming

2016 ◽  
Vol 57 ◽  
pp. 421-464 ◽  
Author(s):  
Arnaud Malapert ◽  
Jean-Charles Régin ◽  
Mohamed Rezgui

We introduce an Embarrassingly Parallel Search (EPS) method for solving constraint problems in parallel, and we show that this method matches or even outperforms state-of-the-art algorithms on a number of problems using various computing infrastructures. EPS is a simple method in which a master decomposes the problem into many disjoint subproblems which are then solved independently by workers. Our approach has three advantages: it is an efficient method; it involves almost no communication or synchronization between workers; and its implementation is made easy because the master and the workers rely on an underlying constraint solver, but do not require modifying it. This paper describes the method and its applications to various constraint problems (satisfaction, enumeration, optimization). We show that our method can be adapted to different underlying solvers (Gecode, Choco2, OR-tools) on different computing infrastructures (multi-core, data centers, cloud computing). The experiments cover unsatisfiable, enumeration, and optimization problems, but do not cover first-solution search because its high variability makes the results hard to analyze. The same variability can be observed for optimization problems, but to a lesser extent because the optimality proof is required. EPS offers good average performance, and matches or outperforms other available parallel implementations of Gecode as well as some solver portfolios. Moreover, we perform an in-depth analysis of the various factors that make this approach efficient, as well as the anomalies that can occur. Finally, we show that the decomposition is a key component for efficiency and load balancing.
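As a concrete illustration of the master/worker pattern the abstract describes, the following is a minimal sketch in Python: a master enumerates disjoint subproblems of n-queens by fixing the first two variables, and a pool of workers counts the completions of each subproblem independently. This is a toy stand-in, not the paper's Gecode/Choco2/OR-tools implementation; the problem, the decomposition depth, and the naive backtracking solver are all illustrative choices.

```python
# Toy Embarrassingly Parallel Search (EPS): the master statically
# decomposes the search space into many disjoint subproblems (partial
# assignments of the first two n-queens variables), and workers solve
# them independently with an ordinary sequential solver.
from itertools import product
from multiprocessing import Pool

N = 10  # board size

def consistent(assignment):
    """Check that no two queens placed so far attack each other."""
    for i, qi in enumerate(assignment):
        for j in range(i + 1, len(assignment)):
            qj = assignment[j]
            if qi == qj or abs(qi - qj) == j - i:
                return False
    return True

def count_solutions(prefix):
    """Sequentially count all completions of a partial assignment."""
    if not consistent(prefix):
        return 0
    if len(prefix) == N:
        return 1
    return sum(count_solutions(prefix + (v,)) for v in range(N))

if __name__ == "__main__":
    # Master: generate many more subproblems than workers, so faster
    # workers keep picking up new work (the load-balancing role of the
    # decomposition that the paper analyzes).
    subproblems = [p for p in product(range(N), repeat=2) if consistent(p)]
    with Pool() as workers:
        counts = workers.map(count_solutions, subproblems)
    print(f"{N}-queens solutions: {sum(counts)}")
```

Because the subproblems are disjoint, the only communication is distributing prefixes and summing counts, which is what makes the approach "embarrassingly" parallel.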

Author(s):  
Federico Larumbe ◽  
Brunilde Sansò

This chapter addresses a set of optimization problems that arise in cloud computing regarding the location and resource allocation of the cloud computing entities: the data centers, servers, software components, and virtual machines. The first problem is the location of new data centers and the selection of current ones, since those decisions have a major impact on network efficiency, energy consumption, Capital Expenditures (CAPEX), Operational Expenditures (OPEX), and pollution. The chapter also addresses the Virtual Machine Placement Problem: which server should host which virtual machine. The number of servers used, the cost, and the energy consumption depend strongly on those decisions. Network traffic between VMs and users, and between the VMs themselves, is also an important factor in the Virtual Machine Placement Problem. The third problem presented in this chapter is the dynamic provisioning of VMs to clusters, or auto scaling, to minimize cost and energy consumption while satisfying the Service Level Agreements (SLAs). This important feature of cloud computing requires predictive models that precisely anticipate future workloads. For each problem, the authors describe and analyze models that have been proposed in the literature and in industry, explain their advantages and disadvantages, and present challenging directions for future research.
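To make the Virtual Machine Placement Problem concrete, here is a hedged sketch of its simplest formulation: placement as two-dimensional bin packing, solved with a first-fit-decreasing heuristic that minimizes the number of powered-on servers. The capacities, demands, and single-objective focus are illustrative assumptions; the models surveyed in the chapter also account for network traffic, energy, and SLAs.

```python
# VM placement as 2-D bin packing: first-fit decreasing on CPU demand,
# packing VMs onto the fewest servers that satisfy both CPU and memory
# capacities. Illustrative only; real formulations are far richer.
from dataclasses import dataclass, field

@dataclass
class Server:
    cpu: float                       # remaining CPU capacity
    mem: float                       # remaining memory capacity
    vms: list = field(default_factory=list)

def place_vms(vms, cpu_cap=16.0, mem_cap=64.0):
    """vms: list of (name, cpu_demand, mem_demand). Returns open servers."""
    servers = []
    # Sort by CPU demand, largest first (first-fit decreasing).
    for name, cpu, mem in sorted(vms, key=lambda v: -v[1]):
        for s in servers:
            if s.cpu >= cpu and s.mem >= mem:
                s.cpu -= cpu; s.mem -= mem; s.vms.append(name)
                break
        else:                        # nothing fits: power on a new server
            servers.append(Server(cpu_cap - cpu, mem_cap - mem, [name]))
    return servers

# Hypothetical demands, purely for illustration.
demands = [("web", 4, 8), ("db", 8, 32), ("cache", 2, 16),
           ("batch", 6, 8), ("api", 4, 4)]
for i, s in enumerate(place_vms(demands)):
    print(f"server {i}: {s.vms}")
```

Each server left unopened directly saves CAPEX and energy, which is why this packing view underlies many of the models the chapter discusses.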


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional, and they are known as “large-scale global optimization (LSGO)” problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework for decomposing large-scale problems into smaller and easier subproblems by grouping objective variables. The efficiency of CC strongly depends on the size of the groups and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC’13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with several state-of-the-art LSGO metaheuristics. The experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
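A schematic sketch of the cooperative-coevolution loop may help: variables are partitioned into groups, each group is optimized against a shared context vector while the other variables stay fixed, and, echoing the iCC idea, the group size varies during the run. A plain random search stands in for SHADE here, and the fixed size schedule is an illustrative placeholder, not the paper's adaptation rule.

```python
# Schematic cooperative coevolution (CC) with a varying group size.
import random

def sphere(x):                        # toy LSGO objective
    return sum(xi * xi for xi in x)

def optimize_group(x, idx, f, iters=50, step=0.5):
    """Random-search improvement of the variables in idx, rest fixed."""
    best = f(x)
    for _ in range(iters):
        y = x[:]
        for i in idx:
            y[i] += random.gauss(0.0, step)
        fy = f(y)
        if fy < best:
            x, best = y, fy
    return x, best

def icc(f, dim=100, cycles=40, sizes=(5, 10, 25)):
    x = [random.uniform(-5, 5) for _ in range(dim)]
    for c in range(cycles):
        g = sizes[c % len(sizes)]            # vary subcomponent size
        order = random.sample(range(dim), dim)  # random grouping
        for start in range(0, dim, g):
            x, best = optimize_group(x, order[start:start + g], f)
    return x, best

_, best = icc(sphere)
print(f"best objective found: {best:.4f}")
```

The key structural point is that each subcomponent only ever sees a low-dimensional slice of the problem, which is what lets CC-style methods sidestep the curse of dimensionality.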


2021 ◽  
Vol 26 (4) ◽  
Author(s):  
Mazen Mohamad ◽  
Jan-Philipp Steghöfer ◽  
Riccardo Scandariato

Security Assurance Cases (SAC) are a form of structured argumentation used to reason about the security properties of a system. After the successful adoption of assurance cases for safety, SAC have gained significant traction in recent years, especially in safety-critical industries (e.g., automotive), where there is increasing pressure to comply with several security standards and regulations. Accordingly, research in the field of SAC has flourished in the past decade, with different approaches being investigated. In an effort to systematize this active field of research, we conducted a systematic literature review (SLR) of the existing academic studies on SAC. Our review resulted in an in-depth analysis and comparison of 51 papers. Our results indicate that, while there are numerous papers discussing the importance of SAC and their usage scenarios, the literature is still immature with respect to concrete support for practitioners on how to build and maintain a SAC. More importantly, even though some methodologies are available, their validation and tool support are still lacking.


2021 ◽  
Vol 1 (2) ◽  
pp. 1-23
Author(s):  
Arkadiy Dushatskiy ◽  
Tanja Alderliesten ◽  
Peter A. N. Bosman

Surrogate-assisted evolutionary algorithms have the potential to be of high value for real-world optimization problems when fitness evaluations are expensive, limiting the number of evaluations that can be performed. In this article, we consider the domain of pseudo-Boolean functions in a black-box setting. Moreover, instead of using a surrogate model as an approximation of a fitness function, we propose to precisely learn the coefficients of the Walsh decomposition of a fitness function and use the Walsh decomposition as a surrogate. If the coefficients are learned correctly, then the Walsh decomposition values perfectly match the fitness function, and, thus, the optimal solution to the problem can be found by optimizing the surrogate without any additional evaluations of the original fitness function. It is known that the Walsh coefficients can be efficiently learned for pseudo-Boolean functions with k-bounded epistasis and known problem structure. We propose to learn dependencies between variables first and thereby substantially reduce the number of Walsh coefficients to be calculated. After the accurate Walsh decomposition is obtained, the surrogate model is optimized using GOMEA, which is considered to be a state-of-the-art binary optimization algorithm. We compare the proposed approach with standard GOMEA and two other Walsh decomposition-based algorithms. The benchmark functions in the experiments are well-known trap functions, NK-landscapes, MaxCut, and MAX3SAT problems. The experimental results demonstrate that the proposed approach scales with the expected complexity of O(ℓ log ℓ) function evaluations when the number of subfunctions is O(ℓ) and all subfunctions are k-bounded, outperforming all considered algorithms.
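The following toy sketch shows why known, k-bounded structure makes Walsh coefficients cheap to obtain: each subfunction contributes coefficients only for subsets of its small support, so 2^k evaluations per subfunction suffice, and the resulting surrogate reproduces the fitness function exactly. The example function, and the assumption that the dependency structure is given up front, are illustrative; the paper's contribution is learning those dependencies first.

```python
# Exact Walsh-decomposition surrogate for a k-bounded pseudo-Boolean
# function with known structure. Convention: psi_S(x) = (-1)^(sum of
# x_i for i in S), and w_S is the average of f * psi_S over the support.
from itertools import chain, combinations, product

def subsets(support):
    return chain.from_iterable(combinations(support, r)
                               for r in range(len(support) + 1))

def walsh_coefficients(g, support):
    """Walsh coefficients of subfunction g over its small support."""
    k = len(support)
    coeffs = {}
    for S in subsets(support):
        w = 0.0
        for bits in product((0, 1), repeat=k):   # only 2^k evaluations
            x = dict(zip(support, bits))
            sign = (-1) ** sum(x[i] for i in S)
            w += g(x) * sign
        coeffs[S] = w / 2 ** k
    return coeffs

# Toy 2-bounded function: f(x) = g1(x0, x1) + g2(x2, x3).
g1 = lambda x: x[0] ^ x[1]             # XOR subfunction
g2 = lambda x: x[2] and x[3]           # AND subfunction
surrogate = {}
for g, sup in ((g1, (0, 1)), (g2, (2, 3))):
    for S, w in walsh_coefficients(g, sup).items():
        surrogate[S] = surrogate.get(S, 0.0) + w

def predict(x):                        # evaluate the Walsh surrogate
    return sum(w * (-1) ** sum(x[i] for i in S)
               for S, w in surrogate.items())

x = (1, 0, 1, 1)
print(predict(x), g1(x) + g2(x))       # the two values match exactly
```

Since the surrogate is exact rather than approximate, optimizing it (with GOMEA, in the paper) costs no further evaluations of the true fitness function.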


2004 ◽  
Vol 1 (1) ◽  
pp. 131-142
Author(s):  
Ljupčo Todorovski ◽  
Sašo Džeroski ◽  
Peter Ljubič

Both equation discovery and regression methods aim at inducing models of numerical data. While equation discovery methods are usually evaluated in terms of the comprehensibility of the induced model, the emphasis in evaluating regression methods is on their predictive accuracy. In this paper, we present Ciper, an efficient method for the discovery of polynomial equations, and empirically evaluate its predictive performance on standard regression tasks. The evaluation shows that polynomials compare favorably to the linear and piecewise regression models induced by existing state-of-the-art regression methods, in terms of both degree of fit and complexity.
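As a loose illustration of the fit-versus-complexity trade-off that polynomial equation discovery navigates, the sketch below fits candidate polynomial structures by least squares and scores them with a complexity-penalized error. This is not the Ciper refinement operator itself; the data, the penalty, and the search over maximum degree are illustrative assumptions.

```python
# Search candidate polynomial structures, fit coefficients by least
# squares, and score each candidate by fit penalized by term count.
import numpy as np
from itertools import combinations_with_replacement

def monomials(n_vars, max_degree):
    """All exponent index tuples, e.g. (0, 1, 1) -> x0 * x1^2."""
    terms = [()]
    for d in range(1, max_degree + 1):
        terms += list(combinations_with_replacement(range(n_vars), d))
    return terms

def design(X, terms):
    cols = [np.prod(X[:, t], axis=1) if t else np.ones(len(X))
            for t in terms]
    return np.column_stack(cols)

def fit_score(X, y, terms, penalty=0.05):
    A = design(X, terms)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    mse = np.mean((A @ coef - y) ** 2)
    return mse + penalty * len(terms), coef   # complexity-penalized fit

# Hypothetical data generated from y = 3*x0^2 - 2*x0*x1 + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = 3 * X[:, 0] ** 2 - 2 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 200)

best = min((fit_score(X, y, monomials(2, d)) + (d,) for d in range(1, 4)),
           key=lambda r: r[0])
print(f"selected max degree {best[2]} with score {best[0]:.3f}")
```

The penalty term plays the role of the complexity criterion: among structures with similar fit, the simpler, more comprehensible polynomial wins.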


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Dengdeng Wanyan ◽  
Tong Shang

Purpose
The purpose of this paper is to investigate the significant advantages of cloud technology in digital cultural heritage construction by analyzing public culture cloud platforms in China. The authors hope to provide references for other countries and regions on the applications of cloud computing techniques in digital cultural construction.

Design/methodology/approach
The primary research methods were interviews and case analysis. A comprehensive understanding of cloud technology and China’s culture cloud platforms was gained through research into an extensive body of literature. Analyzing 21 culture cloud platforms offers a general understanding of culture clouds, while the Hunan Public Culture Cloud acts as a representative sample that gives detailed insight.

Findings
This paper explores the considerable advantages of cloud computing in digital cultural construction from four aspects: integration of decentralized heterogeneous resources, coordination and cooperation, accurate matching of user needs, and promotion of balanced service development.

Originality/value
Existing studies fall short of comprehensive investigations of culture cloud platforms and in-depth analysis of the advantages of cloud technology applications. This paper uses the construction of public culture cloud platforms in China as its research object and compares the construction status of different culture cloud platforms.


2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of powerful stochastic methods for global optimization, and they have proven more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy with good results on a variety of optimization problems. However, the algorithm's convergence to the global optimum remains slow. In order to accelerate convergence, a hybrid approach is proposed that combines the nonlinear simplex method (Nelder-Mead) with an adaptive scheme controlling when local search is applied, and the authors demonstrate that this combination yields significantly better convergence. The new method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that this hybridization improves performance in terms of both solution quality and convergence speed.
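A minimal sketch of the hybridization idea: a simple (mu, lambda) evolution strategy explores globally, and Nelder-Mead refines the best individual when progress stalls. The stagnation trigger below is a simplified stand-in for the paper's adaptive control scheme, and the objective and parameter values are illustrative.

```python
# (mu, lambda)-ES with occasional Nelder-Mead refinement of the best
# individual; scipy provides the simplex implementation.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def hybrid_es(f, dim=10, mu=5, lam=30, gens=200, sigma=0.3, patience=10):
    rng = np.random.default_rng(1)
    parents = rng.uniform(-5, 5, size=(mu, dim))
    best_x, best_f, stale = None, np.inf, 0
    for _ in range(gens):
        # (mu, lambda) selection: children replace parents entirely.
        idx = rng.integers(mu, size=lam)
        children = parents[idx] + rng.normal(0, sigma, size=(lam, dim))
        fit = np.apply_along_axis(f, 1, children)
        order = np.argsort(fit)
        parents = children[order[:mu]]
        if fit[order[0]] < best_f:
            best_x, best_f, stale = children[order[0]], fit[order[0]], 0
        else:
            stale += 1
        if stale >= patience:        # stagnation: apply local search
            res = minimize(f, best_x, method="Nelder-Mead")
            if res.fun < best_f:
                best_x, best_f = res.x, res.fun
                parents[0] = res.x   # inject refined point back
            stale = 0
    return best_x, best_f

_, best = hybrid_es(rastrigin)
print(f"best objective: {best:.4f}")
```

The division of labor is the point of the design: the ES supplies diverse starting points so the simplex method, which converges quickly but only locally, is spent where it pays off most.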

