Chapter 3. Complete Algorithms

Author(s):  
Adnan Darwiche ◽  
Knot Pipatsrisawat

Complete SAT algorithms form an important part of the SAT literature. From a theoretical perspective, complete algorithms can be used as tools for studying the complexities of different proof systems. From a practical point of view, these algorithms form the basis for tackling SAT problems arising from real-world applications. The practicality of modern, complete SAT solvers undoubtedly contributes to the growing interest in the class of complete SAT algorithms. We review these algorithms in this chapter, including Davis-Putnam resolution, Stålmarck's algorithm, symbolic SAT solving, the DPLL algorithm, and modern clause-learning SAT solvers. We also discuss the issue of certifying the answers of modern complete SAT solvers.
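
As a hedged illustration of the DPLL procedure reviewed in the chapter, the minimal Python sketch below performs unit propagation and recursive splitting on a CNF formula given as a list of integer-literal clauses. The representation and helper names are assumptions made for illustration only; real solvers add watched literals, clause learning and branching heuristics.

```python
# Minimal DPLL sketch: clauses are lists of non-zero ints (DIMACS-style literals).
def unit_propagate(clauses, assignment):
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause if l not in assignment and -l not in assignment]
            if not unassigned:
                return None  # conflict: every literal of the clause is falsified
            if len(unassigned) == 1:  # unit clause forces its remaining literal
                assignment.add(unassigned[0])
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None  # UNSAT under this partial assignment
    free = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
    if not free:
        return assignment  # every variable assigned without conflict: SAT
    v = next(iter(free))  # naive branching choice
    return dpll(clauses, assignment | {v}) or dpll(clauses, assignment | {-v})

# Example: (x1 or x2) and (not x1 or x2) and (not x2 or x3); prints a satisfying set of literals.
print(dpll([[1, 2], [-1, 2], [-2, 3]]))
```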

10.29007/tc7q ◽  
2018 ◽  
Author(s):  
Adrián Rebola-Pardo ◽  
Martin Suda

We study the semantics of propositional interference-based proof systems such as DRAT and DPR. These are characterized by modifying a CNF formula in ways that preserve satisfiability but not necessarily logical truth. We propose an extension of propositional logic called overwrite logic with a new construct which captures the meta-level reasoning behind interferences. We analyze this new logic from the point of view of expressivity and complexity, showing that while greater expressivity is achieved, the satisfiability problem for overwrite logic is essentially as hard as SAT, and can be reduced in a way that is well-behaved for modern SAT solvers. We also show that DRAT and DPR proofs can be seen as overwrite logic proofs which preserve logical truth. This much stronger invariant than the mere satisfiability preservation maintained by the traditional view gives us better understanding on these practically important proof systems. Finally, we showcase this better understanding by finding intrinsic limitations in interference-based proof systems.
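
As a small, hedged illustration of the distinction this work builds on, the brute-force Python check below adds a blocked clause to a tiny CNF formula and confirms that satisfiability is preserved while logical equivalence is not. The encoding and helper names are invented for illustration and are unrelated to the authors' formal development.

```python
from itertools import product

# Clauses are lists of non-zero ints; brute-force all assignments over the given variables.
def models(clauses, variables):
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield assignment

F = [[1, 2]]    # (x1 or x2)
C = [-1, 2]     # (not x1 or x2): blocked on literal 2, since no clause of F contains -2

variables = [1, 2]
sat_F  = any(True for _ in models(F, variables))
sat_FC = any(True for _ in models(F + [C], variables))
same_models = list(models(F, variables)) == list(models(F + [C], variables))

print(sat_F, sat_FC)   # True True  -> satisfiability is preserved by the addition
print(same_models)     # False      -> the set of models shrinks, so logical truth is not preserved
```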


2012 ◽  
Vol 22 (02) ◽  
pp. 1250024 ◽  
Author(s):  
HONGCHUN WANG ◽  
KEQING HE ◽  
BING LI ◽  
JINHU LÜ

Complex software networks, as a typical kind of man-made complex networks, have attracted more and more attention from various fields of science and engineering over the past ten years. With the dramatic increase in the scale and complexity of software systems, it is essential to develop a systematic approach to further investigate complex software systems by using the theories and methods of complex networks and complex adaptive systems. This paper briefly reviews some recent advances in complex software networks and also develops some novel tools to further analyze them, including modeling, analysis, evolution, measurement, and some potential real-world applications. More precisely, this paper first describes some effective modeling approaches for characterizing various complex software systems. Based on these theoretical and practical models, this paper introduces some recent advances in analyzing the static and dynamical behaviors of complex software networks. This is followed by further discussion of potential real-world applications of complex software networks. Finally, this paper offers an outlook on some future research topics from an engineering point of view.
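
As a minimal, hedged sketch of the kind of network modeling such reviews survey, the Python snippet below builds a toy module-dependency graph with networkx and reports a few simple structural measurements; the module names and edge list are invented for illustration.

```python
# A toy software-dependency network: nodes are modules, directed edges are "imports" relations.
# Illustrative only; real studies extract such graphs from large code bases.
import networkx as nx

edges = [
    ("app", "parser"), ("app", "db"), ("parser", "utils"),
    ("db", "utils"), ("db", "logging"), ("parser", "logging"),
]
G = nx.DiGraph(edges)

in_degrees = dict(G.in_degree())   # how many modules depend on each module
print("in-degree:", in_degrees)
print("avg clustering (undirected):", nx.average_clustering(G.to_undirected()))
print("is a DAG:", nx.is_directed_acyclic_graph(G))
```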


Author(s):  
Chunsheng Yang ◽  
Yanni Zou ◽  
Jie Liu ◽  
Kyle R Mulligan

In the past decades, machine learning techniques and algorithms, particularly classifiers, have been widely applied to various real-world applications such as prognostics and health management (PHM). In developing high-performance classifiers or machine learning-based predictive models for PHM, predictive model evaluation remains a challenge. Generic metrics such as accuracy may not fully meet the needs of model evaluation for prognostic applications. This paper addresses this issue from the point of view of PHM systems. Generic methods are first reviewed while outlining their limitations or deficiencies with respect to PHM. Then, two approaches developed for evaluating predictive models are presented with emphasis on the specificities and requirements of PHM. A real prognostic application is studied as a case to demonstrate the usefulness of the two proposed methods for predictive model evaluation. We argue that predictive models for PHM must be evaluated not only with generic methods, but also with domain-oriented approaches, in order to deploy the models in real-world applications.
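
As a hedged illustration of why a generic metric alone can mislead in prognostics (the paper's two domain-oriented approaches are not reproduced here), the Python sketch below contrasts mean absolute error with a simple asymmetric, prognostic-style score that penalises late failure predictions more heavily than early ones. The penalty constants and data are invented for illustration.

```python
import math

# Toy remaining-useful-life (RUL) evaluation, in cycles. Invented data for illustration.
true_rul      = [50, 40, 30, 20, 10]
predicted_rul = [55, 38, 33, 24, 12]

# A generic regression-style metric: mean absolute error.
mae = sum(abs(p - t) for p, t in zip(predicted_rul, true_rul)) / len(true_rul)

# An asymmetric prognostic-style score (in the spirit of common PHM benchmarks):
# overestimating RUL (predicting failure too late) is penalised more than underestimating it.
def asymmetric_score(errors, a_early=13.0, a_late=10.0):
    return sum(math.exp(-d / a_early) - 1 if d < 0 else math.exp(d / a_late) - 1
               for d in errors)

errors = [p - t for p, t in zip(predicted_rul, true_rul)]
print("MAE:", mae)
print("asymmetric score:", asymmetric_score(errors))
```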


Author(s):  
Dianhuan Lin ◽  
Jianzhong Chen ◽  
Hiroaki Watanabe ◽  
Stephen H. Muggleton ◽  
Pooja Jain ◽  
...  

1994 ◽  
Vol 6 (2) ◽  
pp. 150-154
Author(s):  
Shigeki Abe ◽  
Michitaka Kameyama ◽  
Tatsuo Higuchi ◽  
...  

To ensure the safety of an intelligent digital system in real-world applications, not only hardware faults in the processors but also other faults and errors related to the real world, such as sensor faults, actuator faults and human errors, must be removed. From this point of view, an intelligent fault-tolerant system for real-world applications is proposed based on triple-modular redundancy. The system consists of a master processor that performs the actual control operations and two redundant processors that simulate the real-world process together with the control operations using a knowledge-based inference strategy. To realize independence among the triplicated modules, the simulation for error detection and recovery is performed without the actual external sensor signals used in the master processor.
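
As a minimal, hedged sketch of the triple-modular-redundancy idea described above (not the authors' knowledge-based simulation scheme), the Python snippet below runs a master output alongside two redundant outputs and masks a disagreement by majority vote; the function name and values are invented for illustration.

```python
from collections import Counter

# Toy triple-modular redundancy: one master plus two redundant computations of a control output.
def vote(master_out, redundant1_out, redundant2_out):
    counts = Counter([master_out, redundant1_out, redundant2_out])
    value, n = counts.most_common(1)[0]
    if n >= 2:
        return value, (master_out != value)   # (agreed output, master-fault flag)
    raise RuntimeError("no majority: triple disagreement")

# Example: the master output is corrupted; the vote masks the error and flags the master.
print(vote(7, 3, 3))   # -> (3, True)
```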


Sensor Review ◽  
2016 ◽  
Vol 36 (3) ◽  
pp. 277-286 ◽  
Author(s):  
Wenhao Zhang ◽  
Melvyn Lionel Smith ◽  
Lyndon Neal Smith ◽  
Abdul Rehman Farooq

Purpose This paper aims to introduce an unsupervised modular approach for eye centre localisation in images and videos following a coarse-to-fine, global-to-regional scheme. The algorithm is designed for excellent accuracy, robustness and real-time performance for use in real-world applications.

Design/methodology/approach A modular approach has been designed that makes use of isophote and gradient features to estimate eye centre locations. This approach embraces two main modalities that progressively reduce global facial features to local levels for more precise inspection. A novel selective oriented gradient (SOG) filter has been specifically designed to remove strong gradients from eyebrows, eye corners and self-shadows, which undermine most eye centre localisation methods. The proposed algorithm, tested on the BioID database, has shown superior accuracy.

Findings The eye centre localisation algorithm has been compared with 11 other methods on the BioID database and six other methods on the GI4E database. The proposed algorithm outperformed all the other algorithms in terms of localisation accuracy while exhibiting excellent real-time performance. The method is also inherently robust against head poses, partial eye occlusions and shadows.

Originality/value The eye centre localisation method uses two mutually complementary modalities as a novel, fast, accurate and robust approach. In addition, beyond eye centre localisation, the SOG filter can address more general tasks involving the detection of curved shapes. From an applied point of view, the proposed method has great potential to benefit a wide range of real-world human-computer interaction (HCI) applications.
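
As a hedged sketch of the gradient-voting idea that eye-centre localisers of this kind build on (the paper's SOG filter and isophote modality are not reproduced here), the numpy snippet below scores every candidate centre by how well image gradients point away from it and returns the best-scoring pixel. It is illustrative only and far from real-time.

```python
import numpy as np

# Brute-force gradient voting for eye-centre estimation on a small grayscale patch.
def eye_centre(gray):
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > mag.mean())                    # keep only strong gradients
    gxn, gyn = gx[ys, xs] / mag[ys, xs], gy[ys, xs] / mag[ys, xs]

    h, w = gray.shape
    best, best_score = (0, 0), -1.0
    for cy in range(h):
        for cx in range(w):
            dy, dx = ys - cy, xs - cx
            norm = np.hypot(dx, dy)
            ok = norm > 0
            dot = (dx[ok] * gxn[ok] + dy[ok] * gyn[ok]) / norm[ok]
            score = np.mean(np.maximum(dot, 0.0) ** 2)       # gradients pointing away from a dark centre
            if score > best_score:
                best, best_score = (cx, cy), score
    return best

# Synthetic test: a dark disc (the "pupil") on a bright background.
yy, xx = np.mgrid[0:40, 0:40]
patch = 255 - 200 * ((xx - 25) ** 2 + (yy - 15) ** 2 < 36)
print(eye_centre(patch))   # expected to be near (25, 15)
```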


1998 ◽  
Vol 4 (3) ◽  
pp. 237-257 ◽  
Author(s):  
Moshe Sipper

The study of artificial self-replicating structures or machines has been taking place now for almost half a century. My goal in this article is to present an overview of research carried out in the domain of self-replication over the past 50 years, starting from von Neumann's work in the late 1940s and continuing to the most recent research efforts. I shall concentrate on computational models, that is, ones that have been studied from a computer science point of view, be it theoretical or experimental. The systems are divided into four major classes, according to the model on which they are based: cellular automata, computer programs, strings (or strands), or an altogether different approach. With the advent of new materials, such as synthetic molecules and nanomachines, it is quite possible that we shall see this somewhat theoretical domain of study producing practical, real-world applications.


Author(s):  
Kuan-Lun Hsu ◽  
Kwun-Lon Ting

Abstract This paper presents a family of over-constrained mechanisms with revolute and prismatic joints. They are constructed by concatenating a Bennett 4R and a spatial RPRP mechanism. This is a major breakthrough because, for the first time, an assembly of two different source modules is used in the modular construction. A Bennett 4R mechanism and a spatial RPRP mechanism are mated for the purpose of demonstration. Topological reconfigurations of the synthesized mechanisms are also discussed. The results indicate that the synthesized mechanisms can be topologically reconfigured with either a plane-symmetric structure or a spatial four-bar RCRC loop. These synthesized mechanisms, along with their reconfigurations, represent a first and unique contribution to theoretical and applied kinematics. Academically, the proposed methodology can be used to synthesize several families of over-constrained mechanisms. Each family of new mechanisms is unique and has its own academic significance because they are theoretical exceptions to the Chebychev–Grübler–Kutzbach criterion. The geometrical principles that address the combination of hybrid loops turn the topological synthesis of over-constrained mechanisms into a systematic approach instead of a random search. Industrially, such paradoxical mechanisms could also be valuable. The ambiguity of their structural synthesis has kept designers from being aware of these theoretical exceptions, so such mechanisms have rarely been implemented in real-world applications. The findings of this research help designers acquire the knowledge needed to configure such mechanisms with the desired mobility. From a practical point of view, over-constrained mechanisms can transmit motions with fewer links than general types require, which means engineers can achieve a compact design with fewer components. These features could be an attractive advantage in real-world applications.
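
For context on why such linkages sit outside the Chebychev–Grübler–Kutzbach criterion, the standard spatial mobility count applied to the Bennett 4R loop (four links, four revolute joints, one degree of freedom per joint) predicts a negative mobility even though the mechanism actually moves with one degree of freedom:

```latex
M = 6\,(n - 1 - j) + \sum_{i=1}^{j} f_i = 6\,(4 - 1 - 4) + 4 = -2
```

The discrepancy between the formula's prediction (M = -2) and the observed mobility (M = 1) is what makes the Bennett loop, and mechanisms built from it, over-constrained or "paradoxical".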


2020 ◽  
Vol 34 (02) ◽  
pp. 1428-1435
Author(s):  
Md Solimul Chowdhury ◽  
Martin Müller ◽  
Jia You

The efficiency of Conflict Driven Clause Learning (CDCL) SAT solving depends crucially on finding conflicts at a fast rate. State-of-the-art CDCL branching heuristics such as VSIDS, CHB and LRB are designed with this goal in mind. We take a closer look at the way in which conflicts are generated over the course of a CDCL SAT search. Our study of the VSIDS branching heuristic shows that conflicts are typically generated in short bursts, followed by what we call a conflict depression phase, in which the search fails to generate any conflicts over a span of decisions. The lack of conflicts indicates that the variables currently ranked highest by the branching heuristic fail to generate conflicts. Based on this analysis, we propose an exploration strategy, called expSAT, which randomly samples variable selection sequences in order to learn an updated heuristic from the generated conflicts. The goal is to escape from conflict depressions expeditiously. The branching heuristic deployed in expSAT combines these updates with the standard VSIDS activity scores. An extensive empirical evaluation with four state-of-the-art CDCL SAT solvers demonstrates good-to-strong performance gains with the expSAT approach.
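
As a hedged sketch of the kind of scoring this suggests (the exact expSAT update is not specified in the abstract, so the combination rule, decay and weight below are assumptions for illustration, not the authors' formula), a branching score might blend the standard VSIDS activity with an exploration bonus learned from random playouts:

```python
# Illustrative blend of VSIDS activity with an exploration bonus; not the authors' exact expSAT rule.
def pick_branching_variable(free_vars, vsids_activity, exploration_bonus, weight=0.5):
    """Choose the unassigned variable with the highest combined score."""
    return max(free_vars,
               key=lambda v: vsids_activity.get(v, 0.0) + weight * exploration_bonus.get(v, 0.0))

def update_exploration_bonus(exploration_bonus, sampled_sequence, conflicts_found, decay=0.9):
    """After a random playout of decisions, reward the sampled variables by the conflicts produced."""
    for v in sampled_sequence:
        exploration_bonus[v] = decay * exploration_bonus.get(v, 0.0) + conflicts_found

# Toy usage: variable 3 has low activity, but its random playout generated conflicts.
activity = {1: 2.0, 2: 1.5, 3: 0.2}
bonus = {}
update_exploration_bonus(bonus, sampled_sequence=[3], conflicts_found=4)
print(pick_branching_variable({1, 2, 3}, activity, bonus))   # 3 once the bonus dominates
```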

