Anti-unification in Constraint Logic Programming

2019 ◽  
Vol 19 (5-6) ◽  
pp. 773-789 ◽  
Author(s):  
Gonzague Yernaux ◽  
Wim Vanhoof

Abstract: Anti-unification refers to the process of generalizing two (or more) goals into a single, more general goal that captures some of the structure common to all the initial goals. One is typically interested in computing what is often called a most specific generalization, that is, a generalization that captures a maximal amount of shared structure. In this work we address the problem of anti-unification in CLP, where goals can be seen as unordered sets of atoms and/or constraints. We show that while the concept of a most specific generalization can easily be defined in this context, computing it becomes an NP-complete problem. We subsequently introduce a generalization algorithm that computes a well-defined abstraction whose computation can be bounded by a polynomial execution time. Initial experiments show that even a naive implementation of our algorithm produces acceptable generalizations efficiently.
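For illustration, below is a minimal Python sketch of classical first-order anti-unification (computing a least general generalization of two terms). It is not the CLP-specific algorithm of the paper, which operates on unordered sets of atoms and constraints; the term representation (tuples for compound terms, strings for constants) and the function anti_unify are assumptions made for this sketch.

# Minimal sketch of classical first-order anti-unification (least general
# generalization) over terms. Compound terms are tuples (functor, arg1, ...),
# constants are strings, and fresh variables are returned as "X1", "X2", ...
def anti_unify(t1, t2, store=None, counter=None):
    """Return a generalization of t1 and t2, reusing one fresh variable
    per distinct pair of disagreeing subterms (so shared structure is kept)."""
    if store is None:
        store, counter = {}, [0]
    # Identical subterms are kept as-is.
    if t1 == t2:
        return t1
    # Compound terms with the same functor and arity are generalized argument-wise.
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(anti_unify(a, b, store, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    # Disagreement pair: map it to a fresh variable, reused if seen before.
    if (t1, t2) not in store:
        counter[0] += 1
        store[(t1, t2)] = f"X{counter[0]}"
    return store[(t1, t2)]

# Example: p(f(a), b) and p(f(c), b) generalize to p(f(X1), b).
print(anti_unify(("p", ("f", "a"), "b"), ("p", ("f", "c"), "b")))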

2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Yueguo Luo ◽  
Zhongyang Xiong ◽  
Guanghua Zhang

Tissue P systems are a class of computing models inspired by intercellular communication, where the rules are applied in a nondeterministic, maximally parallel manner. Conventionally, every rule in such a system is assumed to take the same execution time. However, from a biochemical point of view, the execution time of biochemical reactions is hard to control. In this work, we construct, for the first time, a uniform and efficient solution to the SAT problem with tissue P systems in a time-free way. With the P systems constructed from the sizes of the instances, the execution time of the rules has no influence on the computation results. As a result, we prove that such a system can solve this NP-complete problem efficiently even in a time-free manner, using communication rules of length at most 3.
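For contrast with the massively parallel membrane-computing construction, the sketch below shows a plain sequential brute-force SAT check in Python. The clause encoding and the function brute_force_sat are illustrative assumptions; the exponential enumeration it performs is exactly the cost the tissue P system avoids by creating exponentially many cells in a polynomial (and here, time-free) number of steps.

from itertools import product

# Plain sequential brute-force SAT check, shown only to contrast with the
# tissue P system construction described in the abstract.
def brute_force_sat(num_vars, clauses):
    """clauses: list of clauses, each a list of signed ints (3 means x3, -3 means not x3)."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses):
            return True   # found a satisfying assignment
    return False

# (x1 or not x2) and (x2 or x3) is satisfiable, e.g. with x1 = x2 = x3 = True.
print(brute_force_sat(3, [[1, -2], [2, 3]]))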


Author(s):  
Komal ◽  
Gaurav Goel ◽  
Milanpreet Kaur

As a platform for offering on-demand services, cloud computing has grown in relevance and appeal. Its services follow a pay-per-use model. A cloud service provider's primary goal is to use resources efficiently by reducing execution time, cost, and other factors while increasing profit. Effective scheduling algorithms therefore remain a key issue in cloud computing, and the problem is categorized as NP-complete. Researchers have previously proposed several optimization techniques to address it, but more work is needed in this area. This paper provides an overview of a strategy for effective task scheduling, based on a hybrid heuristic approach, for both regular and larger workloads. The previous method handles jobs adequately, but its performance degrades as task sizes grow. The proposed optimal scheduling method employs two distinct techniques to select a suitable VM for a given job. First, it enhances the LJFP method by employing OSIG, an upgraded version of the genetic algorithm, to choose solutions with improved fitness factors, crossover, and mutation operators. This selection returns the best machines, and PSO then chooses one for the specific job. The appropriate machine is selected according to several factors, including expected execution time, current load, and energy usage. The proposed algorithm's performance is assessed in two distinct cloud scenarios with various VMs and tasks, measuring overall execution time and energy usage. The proposed algorithm outperforms existing techniques in terms of average execution time and energy usage in both scenarios.
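As an illustration only, the following Python sketch shows a weighted scoring rule of the kind the final selection stage could apply when picking a VM for a task, based on the three factors named above (expected execution time, current load, energy usage). The VM fields, the weights, and the select_vm function are assumptions made for this sketch, not the authors' OSIG/PSO formulation.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    mips: float         # processing capacity
    load: float         # current load in [0, 1]
    power_watts: float  # active power draw

def select_vm(task_length, vms, w_time=0.5, w_load=0.2, w_energy=0.3):
    # Combine the three factors into a single cost; lower is better.
    def score(vm):
        exec_time = task_length / vm.mips      # expected execution time
        energy = vm.power_watts * exec_time    # energy spent on this task
        return w_time * exec_time + w_load * vm.load + w_energy * energy
    return min(vms, key=score)

# Hypothetical VMs: the one with the lowest combined cost is chosen.
vms = [VM("vm1", 1000, 0.7, 120), VM("vm2", 1500, 0.4, 150)]
print(select_vm(40000, vms).name)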


Algorithms ◽  
2022 ◽  
Vol 15 (1) ◽  
pp. 22
Author(s):  
Virginia Niculescu ◽  
Robert Manuel Ştefănică

General crossword grid generation is considered an NP-complete problem, and in theory it could be a good candidate for use in cryptography algorithms. In this article, we propose a new algorithm for generating perfect crossword grids (with no black boxes) that relies on trie data structures, which are crucial for reducing the time needed to find solutions and which also offer good opportunities for parallelisation. The algorithm uses a special trie representation and is very efficient on its own; through parallelisation, performance is further improved to a level where solutions are obtained extremely quickly. The experiments were conducted using a dictionary of almost 700,000 words, and the parallelised version obtained solutions with an execution time on the order of minutes. We demonstrate here that a perfect crossword grid can be found faster than previously estimated when tries are used as supporting data structures together with parallelisation. Still, if the dictionary is greatly enlarged (e.g., by considering dictionaries for several languages rather than just one), or if the problem is generalised to 3D or higher-dimensional grids, it could still be investigated for possible use in cryptography.
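The pruning role of tries can be illustrated with the short Python sketch below (an assumption-level sketch, not the authors' implementation): a trie supports fast has_prefix queries, so while a row word is being placed, every partially built column can be checked against the dictionary and dead branches abandoned early.

# Minimal trie with the prefix test used to prune crossword search branches.
class TrieNode:
    __slots__ = ("children", "is_word")
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def has_prefix(self, prefix):
        """True if some dictionary word starts with this prefix --
        the key pruning test for partially filled crossword columns."""
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True

trie = Trie(["care", "cart", "core", "dare"])
print(trie.has_prefix("car"), trie.has_prefix("cor"), trie.has_prefix("cz"))
# -> True True False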


2001 ◽  
Vol 19 (3) ◽  
pp. 209-255 ◽  
Author(s):  
Agostino Dovier ◽  
Enrico Pontelli ◽  
Gianfranco Rossi
