Predicting SAT Solver Performance on Heterogeneous Hardware

10.29007/8m31 ◽  
2019 ◽  
Author(s):  
Zack Newsham ◽  
Vijay Ganesh ◽  
Sebastian Fischmeister

In recent years, considerable effort has been expended on determining whether SAT solver performance is predictable. However, work in this area invariably focuses on individual machines, and often on individual solvers. It is unclear whether predictions made for a specific solver and machine remain accurate when translated to other solvers and hardware. In this work we consider five state-of-the-art solvers, 26 machines, and 143 instances selected from the 2011 to 2014 SAT competitions. Using combinations of solvers, machines, and instances, we present four results. First, we show that UNSAT instances are more predictable than corresponding SAT instances. Second, we show that the number of cores in a machine has more impact on performance than L2 cache size. Third, we show that instances with fewer reused clauses are more CPU-bound than those where clause reuse is high. Finally, we make accurate predictions of solution time for each of the instances considered across a diverse set of machines.
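
The abstract does not describe the prediction model itself. The following is a minimal sketch of how runtime prediction from instance and machine features might look; the feature names, the synthetic data, and the choice of a scikit-learn random forest are assumptions made for illustration, not the authors' actual pipeline.

```python
# Minimal sketch: predicting SAT solver runtime from instance and machine
# features with a regression model. Feature names, data, and the choice of
# random forests are illustrative assumptions, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per (instance, machine) pair.
# Columns: num_vars, num_clauses, clause/var ratio, cores, L2 cache (KB).
X = rng.uniform(
    low=[1e3, 4e3, 2.0, 1, 256],
    high=[1e6, 4e6, 6.0, 64, 8192],
    size=(500, 5),
)
# Synthetic runtimes (seconds) standing in for measured solve times.
y = 0.002 * X[:, 1] / X[:, 3] + rng.normal(0, 50, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("MAE (s):", mean_absolute_error(y_test, pred))
```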


10.29007/hvqt ◽  
2018 ◽  
Author(s):  
Gilles Audemard ◽  
Benoît Hoessen ◽  
Saïd Jabbour ◽  
Cédric Piette

Over the years, parallel SAT solving has become more and more important. However, most state-of-the-art parallel SAT solvers are portfolio-based: they run the same solver several times with different parameters. In this paper, we propose a tool called Dolius, based mainly on the divide-and-conquer paradigm. In contrast to most current efficient parallel engines, Dolius does not need shared memory, can be distributed, and scales well when a large number of computing units is available. Furthermore, our tool provides an API that allows any SAT solver to be plugged in easily.
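
The abstract does not show how Dolius partitions the search space or what its plug-in API looks like. The sketch below is a generic divide-and-conquer illustration, not Dolius itself: the example formula, the naive choice of split variables, and the use of the PySAT package (pip install python-sat) are assumptions. Each branch is solved by an independent worker process with no shared memory.

```python
# Minimal divide-and-conquer sketch: split the search space on a few
# variables (a "guiding path") and solve each subproblem in a separate
# process. Purely illustrative; not the Dolius implementation.
from itertools import product
from multiprocessing import Pool

from pysat.solvers import Minisat22

CLAUSES = [[1, 2, -3], [-1, 3], [2, 3], [-2, -3, 4], [-4, 1]]
SPLIT_VARS = [1, 2]  # naive choice; real tools pick these heuristically


def solve_branch(assumptions):
    """Solve the formula under one fixed assignment of the split variables."""
    with Minisat22(bootstrap_with=CLAUSES) as solver:
        if solver.solve(assumptions=list(assumptions)):
            return solver.get_model()
    return None


if __name__ == "__main__":
    # One branch per sign combination of the split variables: 2^k subproblems.
    branches = [
        [v if bit else -v for v, bit in zip(SPLIT_VARS, bits)]
        for bits in product([0, 1], repeat=len(SPLIT_VARS))
    ]
    with Pool(processes=len(branches)) as pool:
        for model in pool.imap_unordered(solve_branch, branches):
            if model is not None:
                print("SAT, model:", model)
                break
        else:
            print("UNSAT")
```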



Author(s):  
Ram Prasad Mohanty ◽  
Ashok Kumar Turuk ◽  
Bibhudatta Sahoo

The growing number of cores increases the demand for a powerful memory subsystem, which leads to larger caches in multicore processors. Caches give the processing elements a faster, higher-bandwidth local memory to work with. In this chapter, an attempt has been made to analyze the impact of cache size on the performance of multicore processors by varying the L1 and L2 cache sizes on a multicore processor with an internal network (MPIN), referenced from the Niagara architecture. As the number of cores increases, traditional on-chip interconnects such as the bus and crossbar prove to be inefficient and suffer from poor scalability. To overcome the scalability and efficiency issues of these conventional interconnects, a ring-based design has been proposed. The effect of the interconnect on the performance of multicore processors has been analyzed, and a novel scalable on-chip interconnection mechanism (INoC) for multicore processors has been proposed. Benchmark results are presented using a full-system simulator. The results show that, compared with the MPIN, the proposed INoC significantly reduces execution time.
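
The chapter's numbers come from a full-system simulator. As a rough intuition for why cache size matters, here is a toy direct-mapped cache model run over a synthetic access trace; the trace, line size, and cache sizes are invented for illustration and are unrelated to the simulated architecture.

```python
# Toy illustration of how cache size affects hit rate: a direct-mapped
# cache model over a synthetic access trace. Not the full-system
# simulation used in the chapter, only an intuition aid.
import random

LINE_SIZE = 64  # bytes per cache line


def hit_rate(trace, cache_bytes):
    """Simulate a direct-mapped cache and return the fraction of hits."""
    num_lines = cache_bytes // LINE_SIZE
    tags = [None] * num_lines
    hits = 0
    for addr in trace:
        line = addr // LINE_SIZE
        idx = line % num_lines
        if tags[idx] == line:
            hits += 1
        else:
            tags[idx] = line  # miss: fill the line
    return hits / len(trace)


def synthetic_trace(n=200_000, working_set=256 * 1024, seed=0):
    """Addresses drawn from a fixed working set with some spatial locality."""
    rng = random.Random(seed)
    base = 0
    trace = []
    for _ in range(n):
        if rng.random() < 0.8:           # mostly sequential 16-byte strides
            base = (base + 16) % working_set
        else:                            # occasional jump within the working set
            base = rng.randrange(0, working_set, LINE_SIZE)
        trace.append(base)
    return trace


if __name__ == "__main__":
    trace = synthetic_trace()
    for kib in (16, 64, 256, 1024):
        print(f"{kib:5d} KiB cache: hit rate = {hit_rate(trace, kib * 1024):.3f}")
```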



Author(s):  
Edward Zulkoski ◽  
Ruben Martins ◽  
Christoph M. Wintersteiger ◽  
Jia Hui Liang ◽  
Krzysztof Czarnecki ◽  
...  


Author(s):  
Gregor Behnke ◽  
Daniel Höller ◽  
Susanne Biundo

HTN planning provides an expressive formalism to model complex application domains. It has been widely used in real-world applications. However, the development of domain-independent planning techniques for such models still lags behind. The need to be informed about both state transitions and the task hierarchy makes the realisation of search-based approaches difficult, especially with unrestricted partial ordering of tasks in HTN domains. Recently, a translation of HTN planning problems into propositional logic has shown promising empirical results. Such planners benefit from a unified representation of state and hierarchy, but until now have required very large formulae to represent partial order. In this paper, we introduce a novel encoding of HTN planning as SAT. In contrast to related work, most of the reasoning on ordering relations is not left to the SAT solver, but done beforehand. This results in much smaller formulae and, as shown in our evaluation, in a planner that outperforms previous SAT-based approaches as well as the state of the art in search-based HTN planning.
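
The abstract only states that most ordering reasoning is done before the encoding; the actual encoding is not given here. The fragment below illustrates that general idea on an invented four-task example: orderings implied by the declared partial order are closed transitively up front, so only genuinely undecided task pairs receive propositional variables, and no transitivity clauses need to be shipped to the SAT solver.

```python
# Illustrative fragment only: the paper's encoding is far more involved.
# Idea sketched: resolve ordering information up front (transitive closure
# of the tasks' partial order) so the CNF needs no transitivity clauses.
from itertools import product

tasks = ["t1", "t2", "t3", "t4"]
# Declared orderings of a hypothetical method: t1 < t2, t2 < t3, t1 < t4.
declared = {("t1", "t2"), ("t2", "t3"), ("t1", "t4")}


def transitive_closure(pairs):
    """Floyd-Warshall-style closure of the 'before' relation."""
    closure = set(pairs)
    for k, i, j in product(tasks, repeat=3):
        if (i, k) in closure and (k, j) in closure:
            closure.add((i, j))
    return closure


before = transitive_closure(declared)

# One Boolean variable per still-undecided task pair; pairs whose order is
# already fixed by the closure need no variable and no clauses at all.
var = {}
for i, j in product(tasks, repeat=2):
    if i >= j:
        continue
    if (i, j) in before or (j, i) in before:
        continue  # order decided beforehand -- nothing for the SAT solver
    var[(i, j)] = len(var) + 1  # fresh propositional variable

print("fixed orderings:", sorted(before))
print("free ordering variables:", var)
```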



2020 ◽  
Vol 14 (4) ◽  
pp. 1-21
Author(s):  
Noureddine Aribi ◽  
Yahia Lebbah

Cryptographic protocols form the backbone of digital society. They are concurrent multiparty communication protocols that use cryptography to achieve security goals such as confidentiality, authenticity, and integrity in the presence of adversaries. Unfortunately, protocol verification still represents a critical task and a major cost in engineering attack-free security protocols. Model checking and SAT-based techniques have proved quite effective in this context. This article proposes an efficient automatic model checking approach that exemplifies a security property violation. In this approach, protocol verification is abstracted as a compact planning problem, which is efficiently solved by a state-of-the-art SAT solver. Experiments performed on real-world cryptographic protocols succeeded in detecting new logical attacks that violate security properties. These attacks encompass both “type flaw” and “replay” attacks, which are difficult to tackle with existing planning-based approaches.
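
The article's actual encoding of protocol runs is not reproduced in the abstract. The sketch below only illustrates the general "verification as bounded planning as SAT" recipe on an invented two-action intruder model: fluent and action names are hypothetical, and the PySAT package (pip install python-sat) is assumed as the SAT back end.

```python
# Toy bounded planning problem unrolled to a fixed horizon and handed to a
# SAT solver. The 'protocol' model here is invented for illustration and is
# unrelated to the protocols analysed in the article.
from pysat.solvers import Minisat22

FLUENTS = ["intruder_has_key", "secret_leaked"]
# action -> (preconditions, effects); all positive fluents for simplicity
ACTIONS = {
    "steal_key": ([], ["intruder_has_key"]),
    "decrypt_message": (["intruder_has_key"], ["secret_leaked"]),
}
HORIZON = 2

ids = {}
def var(name, t):
    """One propositional variable per fluent/action and time step."""
    return ids.setdefault((name, t), len(ids) + 1)

clauses = []
# Initial state: every fluent is false at time 0.
for f in FLUENTS:
    clauses.append([-var(f, 0)])
# Goal: the security violation holds at the horizon.
clauses.append([var("secret_leaked", HORIZON)])

for t in range(HORIZON):
    for a, (pre, eff) in ACTIONS.items():
        for p in pre:   # an action implies its preconditions ...
            clauses.append([-var(a, t), var(p, t)])
        for e in eff:   # ... and its effects at the next step
            clauses.append([-var(a, t), var(e, t + 1)])
    # Explanatory frame axiom: a fluent that becomes true must be caused
    # by some action that has it as an effect.
    for f in FLUENTS:
        causes = [var(a, t) for a, (_, eff) in ACTIONS.items() if f in eff]
        clauses.append([var(f, t), -var(f, t + 1)] + causes)

with Minisat22(bootstrap_with=clauses) as solver:
    if solver.solve():
        model = set(solver.get_model())
        plan = [(t, name) for (name, t), v in ids.items()
                if name in ACTIONS and v in model]
        print("attack found (may include redundant actions):", sorted(plan))
    else:
        print("no attack up to horizon", HORIZON)
```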



AI Magazine ◽  
2012 ◽  
Vol 33 (1) ◽  
pp. 89-92 ◽  
Author(s):  
Matti Järvisalo ◽  
Daniel Le Berre ◽  
Olivier Roussel ◽  
Laurent Simon

The International SAT Solver Competition is today an established series of competitive events aiming at objectively evaluating the progress in state-of-the-art procedures for solving Boolean satisfiability (SAT) instances. Over the years, the competitions have significantly contributed to the fast progress in SAT solver technology that has made SAT a practical success story of computer science. This short article provides an overview of the SAT solver competitions.



2011 ◽  
Vol 6 (1) ◽  
pp. 50-59
Author(s):  
Bernardo C. Vieira ◽  
Fabrício V. Andrade ◽  
Antônio O. Fernandes

State-of-the-art SAT solvers usually share the same core techniques, for instance the watched-literals structure, conflict clause recording, and non-chronological backtracking. Nevertheless, they may differ in the elimination of learnt clauses as well as in the decision heuristic. This article presents a framework for generating configurable SAT solvers. The proposed framework is composed of the following components: a Base SAT Solver, a Perl Preprocessor, and XML files (Solver Description and Heuristics Description files) that describe each heuristic as well as the set of heuristics used by the generated solver. The generated solvers may use several techniques and heuristics, such as those implemented in BerkMin, in Equivalence Checking of Dissimilar Circuits, and in Minisat. To demonstrate the effectiveness of the proposed framework, this article also presents three distinct SAT solver instances generated by the framework to address a complex and challenging industrial problem: Combinational Equivalence Checking (CEC). The first instance is a SAT solver that uses the BerkMin and Dissimilar Circuits core techniques, except for the learnt-clause elimination heuristic, which is adapted from Minisat; the second combines the BerkMin and Minisat decision heuristics at run time; and the third changes the database-reduction heuristic at run time. The experiments demonstrate that the first generated SAT solver is faster than the state-of-the-art solver BerkMin on several instances, and faster than Minisat on almost every instance.
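
The abstract does not show the Solver Description or Heuristics Description schema. The fragment below only conveys the flavour of a declarative description driving heuristic selection, written in Python for illustration; all element and attribute names are invented and are not the framework's actual format.

```python
# Illustration only: parse a hypothetical solver-description XML and report
# which heuristic each slot should be taken from. Invented schema.
import xml.etree.ElementTree as ET

SOLVER_DESCRIPTION = """
<solver name="cec-solver-1">
  <heuristic slot="decision"        source="BerkMin"/>
  <heuristic slot="clause-deletion" source="Minisat"/>
  <heuristic slot="restarts"        source="BerkMin"/>
</solver>
"""

root = ET.fromstring(SOLVER_DESCRIPTION)
config = {h.get("slot"): h.get("source") for h in root.findall("heuristic")}

print(f"generating solver '{root.get('name')}' with:")
for slot, source in config.items():
    print(f"  {slot:<16} -> {source}")
```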



10.29007/5l47 ◽  
2018 ◽  
Author(s):  
Armin Biere ◽  
Ioan Dragan ◽  
Laura Kovács ◽  
Andrei Voronkov

In order to better understand how well a state-of-the-art SAT solver behaves within the framework of a first-order automated theorem prover, we decided to integrate Lingeling, a best-performing SAT solver, into Vampire's AVATAR framework. In this paper we propose two ways of integrating a SAT solver into Vampire and evaluate the overall performance of this combination. Our experiments show that by using a state-of-the-art SAT solver in Vampire we manage to solve more problems. Surprisingly, though, there are cases where the combination of the two solvers does not produce the best results.



10.29007/3vwv ◽  
2018 ◽  
Author(s):  
Norbert Manthey ◽  
Ari Saptawijaya

The paper presents our work on cache analysis of SAT solving. The aim is to study how resources are utilised by a SAT solver and to use this knowledge to improve resource usage in SAT solving. The analysis is performed mainly on our CDCL-based SAT solver and additionally on MiniSAT and PrecoSAT. The measurement is conducted using sample-based profiling on industrial benchmarks from the 2009 SAT competition. During the measurement, the following hardware events are traced: total cycles, stall cycles, L2 cache hits, and L2 cache misses. From the measurement results, our runtime and implementation analysis reveals that several improvements to resource usage are possible, for example in data structures and memory access. These improvements bring about a 60% runtime speedup for our solver.
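
The concrete data-structure changes are not spelled out in the abstract. As an illustration of the general kind of change that improves memory access patterns, here is a sketch of storing all clause literals contiguously with an offset table; the example formula is invented, and this is not the layout of any of the solvers named above.

```python
# Sketch of a cache-friendlier clause database: literals stored back to
# back in one flat array, with clause i spanning [start[i], start[i+1]),
# so walking a clause touches adjacent memory. Illustration only.
from array import array

clauses = [[1, -2, 3], [-1, 4], [2, -3, -4, 5]]  # tiny example formula

literals = array("i")
start = array("i", [0])
for clause in clauses:
    literals.extend(clause)
    start.append(len(literals))


def clause_view(i):
    """Return clause i as a slice of the contiguous literal array."""
    return literals[start[i]:start[i + 1]]


for i in range(len(clauses)):
    print(f"clause {i}: {list(clause_view(i))}")
```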


