OptCE: A Counterexample-Guided Inductive Optimization Solver

Author(s):  
Higo F. Albuquerque ◽  
Rodrigo F. Araújo ◽  
Iury V. Bessa ◽  
Lucas C. Cordeiro ◽  
Eddie B. de Lima Filho
2020 ◽  
Vol 34 (02) ◽  
pp. 1644-1651
Author(s):  
Yuki Satake ◽  
Hiroshi Unno ◽  
Hinata Yanagi

In this paper, we present a novel constraint solving method for a class of predicate Constraint Satisfaction Problems (pCSP) where each constraint is represented by an arbitrary clause of first-order predicate logic over predicate variables. The class of pCSP properly subsumes the well-studied class of Constrained Horn Clauses (CHCs), where each constraint is restricted to a Horn clause. The class of CHCs has been widely applied to verification of linear-time safety properties of programs in different paradigms. In this paper, we show that pCSP further widens the applicability to verification of branching-time safety properties of programs that exhibit finitely-branching non-determinism. Solving pCSPs (and CHCs), however, is challenging because the search space of solutions is often very large (or unbounded), high-dimensional, and non-smooth. To address these challenges, our method naturally combines techniques studied separately in different research communities: counterexample-guided inductive synthesis (CEGIS) and probabilistic inference in graphical models. We have implemented the presented method and obtained promising results on existing benchmarks as well as new ones that are beyond the scope of existing CHC solvers.
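The CEGIS loop underlying this method alternates between a synthesizer, which proposes a candidate from the examples seen so far, and a validator, which either accepts the candidate or returns a counterexample. A minimal, self-contained sketch of that loop (a toy linear constraint and brute-force search stand in for the paper's predicate constraints and probabilistic inference; all names here are illustrative):

```python
# Minimal CEGIS sketch: find integer parameters (a, b) with |a|, |b| <= 3
# such that a*x + b >= x holds for every x in the finite domain [-10, 10].
# (Illustrative only; the paper's synthesizer and validator are far richer.)

DOMAIN = range(-10, 11)
PARAMS = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]

def holds(a, b, x):
    return a * x + b >= x

def synthesize(examples):
    """Synthesizer: return any candidate consistent with the current examples."""
    for a, b in PARAMS:
        if all(holds(a, b, x) for x in examples):
            return (a, b)
    return None  # no candidate left in the search space

def validate(a, b):
    """Validator: return a counterexample, or None if the candidate is correct."""
    for x in DOMAIN:
        if not holds(a, b, x):
            return x
    return None

def cegis():
    examples = []
    while True:
        cand = synthesize(examples)
        if cand is None:
            return None            # unsatisfiable within this search space
        cex = validate(*cand)
        if cex is None:
            return cand            # validated solution
        examples.append(cex)       # the counterexample guides the next round

solution = cegis()
```

Each counterexample strictly shrinks the set of candidates consistent with the examples, so over a finite search space the loop is guaranteed to terminate.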


Author(s):  
Riitta-Liisa Valijärvi ◽  
Eszter Tarsoly

Abstract This article explores students’ perceptions of inductive and deductive methods of teaching reading in Finnish and Hungarian in a higher education setting. A guided inductive discovery method of reading involves independent work and minimal vocabulary and grammar explanation before the reading assignment is given. A deductive pre-taught method involves grammar, vocabulary and content explanation before a text is read. Structured focus group interviews revealed that the advantages of the discovery method, i.e. guided inductive reading, are that it helps to maintain curiosity, enhances memorisation, encourages independent and active learning, and prepares for real-life reading situations. The deductive pre-taught method, on the other hand, feels safe and helpful, can keep one’s confidence up, saves time and effort for other language-learning tasks, and ensures a correct understanding of the text. The interviewees wanted to be given information about which grammar to expect in advance; some felt the same way about vocabulary. They were not always aware of the difference between the two approaches. By using both methods the teacher can help to maintain motivation and cater for different student preferences. Mixing methods also reflects how we treat information in real life. There appears to be no ideal method in teaching L2 reading: both methods have their advantages and disadvantages from the students’ point of view. Explicit instruction is crucial for reading development either before or after a text is read.


Author(s):  
Alessandro Abate ◽  
Mirco Giacobbe ◽  
Diptarko Roy

Abstract We present the first machine learning approach to the termination analysis of probabilistic programs. Ranking supermartingales (RSMs) prove that probabilistic programs halt, in expectation, within a finite number of steps. While previously RSMs were directly synthesised from source code, our method learns them from sampled execution traces. We introduce the neural ranking supermartingale: we let a neural network fit an RSM over execution traces and then we verify it over the source code using satisfiability modulo theories (SMT); if the latter step produces a counterexample, we generate from it new sample traces and repeat learning in a counterexample-guided inductive synthesis loop, until the SMT solver confirms the validity of the RSM. The result is thus a sound witness of probabilistic termination. Our learning strategy is agnostic to the source code and its verification counterpart supports the widest range of probabilistic single-loop programs that any existing tool can handle to date. We demonstrate the efficacy of our method over a range of benchmarks that include linear and polynomial programs with discrete, continuous, state-dependent, multi-variate, hierarchical distributions, and distributions with undefined moments.
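The learn-then-verify loop can be sketched on a toy probabilistic program. In this illustrative sketch a linear template fitted by least squares stands in for the neural network, and an exhaustive check over a bounded state space stands in for the SMT call; the program, states, and epsilon margin are all assumptions, not the paper's benchmarks:

```python
import random

# Toy counterexample-guided RSM learning loop for the probabilistic program
#   while x > 0: x -= random.choice([1, 2])
# A candidate V(x) = a*x + b is fitted to sampled traces, then checked:
#   V(x) >= 0 and E[V(x')] <= V(x) - eps on every guard state.

random.seed(0)
STATES = range(0, 51)                     # bounded state space we verify over

def step(x):
    return x - random.choice([1, 2])

def sample_trace(x):
    """Run the program from x, recording (state, steps-to-termination) pairs."""
    states = []
    while x > 0:
        states.append(x)
        x = step(x)
    return [(s, len(states) - i) for i, s in enumerate(states)]

def fit_linear(data):
    """Least-squares fit of V(x) = a*x + b to the sampled pairs."""
    n = len(data)
    sx = sum(x for x, _ in data); sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data); sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def verify(a, b, eps=0.5):
    """Check the RSM conditions on every guard state; return a counterexample."""
    for x in STATES:
        if x <= 0:
            continue
        v = a * x + b
        exp_next = a * (x - 1.5) + b      # E[V(x')] under the fair coin flip
        if v < 0 or exp_next > v - eps:
            return x
    return None

data = [p for x0 in (5, 10) for p in sample_trace(x0)]
result = None
for _ in range(30):                       # bounded CEGIS-style loop
    a, b = fit_linear(data)
    cex = verify(a, b)
    if cex is None:
        result = (a, b)                   # sound witness within this sketch
        break
    data += sample_trace(cex)             # resample from the counterexample
```

A verified `(a, b)` certifies that V decreases by at least `eps` in expectation on every guard state, which is the supermartingale condition the paper's SMT step discharges over the actual source code.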


Author(s):  
Satoshi Kura ◽  
Hiroshi Unno ◽  
Ichiro Hasuo

Abstract We present a novel decision tree-based synthesis algorithm of ranking functions for verifying program termination. Our algorithm is integrated into the workflow of CounterExample Guided Inductive Synthesis (CEGIS). CEGIS is an iterative learning model where, at each iteration, (1) a synthesizer synthesizes a candidate solution from the current examples, and (2) a validator accepts the candidate solution if it is correct, or rejects it providing counterexamples as part of the next examples. Our main novelty is in the design of a synthesizer: building on top of a usual decision tree learning algorithm, our algorithm detects cycles in a set of example transitions and uses them for refining decision trees. We have implemented the proposed method and obtained promising experimental results on existing benchmark sets of (non-)termination verification problems that require synthesis of piecewise-defined lexicographic affine ranking functions.
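The role cycles play in the synthesizer can be illustrated on a finite set of example transitions: a ranking function consistent with the examples exists exactly when the example graph is acyclic, and ranks can then be read off as longest-path lengths. The sketch below shows only this core idea (the paper's synthesizer learns decision trees over program states, not explicit graphs):

```python
# Cycle-aware rank synthesis over a finite set of example transitions.
# A consistent ranking assigns rank[s] > rank[t] for every example (s, t);
# a cycle among the examples rules out any such ranking function.

def synthesize_ranks(transitions):
    """Return {state: rank} consistent with all example transitions,
    or None if the examples contain a cycle."""
    succs = {}
    for s, t in transitions:
        succs.setdefault(s, []).append(t)
        succs.setdefault(t, [])
    rank, on_path = {}, set()

    def visit(s):
        if s in on_path:
            raise ValueError("cycle among example transitions")
        if s in rank:
            return rank[s]
        on_path.add(s)
        # rank = 1 + longest chain of example transitions out of s
        rank[s] = 1 + max((visit(t) for t in succs[s]), default=0)
        on_path.discard(s)
        return rank[s]

    try:
        for s in list(succs):
            visit(s)
    except ValueError:
        return None
    return rank

acyclic = synthesize_ranks([(3, 2), (2, 1), (1, 0)])   # chain: ranks exist
cyclic = synthesize_ranks([(1, 2), (2, 1)])            # cycle: no ranking
```

In the paper's setting, a detected cycle is precisely the signal used to refine the decision tree rather than to keep fitting a ranking function to unsatisfiable examples.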


2020 ◽  
Vol 64 (7) ◽  
pp. 1523-1552
Author(s):  
Daniel Neider ◽  
P. Madhusudan ◽  
Shambwaditya Saha ◽  
Pranav Garg ◽  
Daejun Park

Abstract We propose a framework for synthesizing inductive invariants for incomplete verification engines, which soundly reduce logical problems in undecidable theories to decidable theories. Our framework is based on the counterexample guided inductive synthesis principle and allows verification engines to communicate non-provability information to guide invariant synthesis. We show precisely how the verification engine can compute such non-provability information and how to build effective learning algorithms when invariants are expressed as Boolean combinations of a fixed set of predicates. Moreover, we evaluate our framework in two verification settings, one in which verification engines need to handle quantified formulas and one in which verification engines have to reason about heap properties expressed in an expressive but undecidable separation logic. Our experiments show that our invariant synthesis framework based on non-provability information can both effectively synthesize inductive invariants and adequately strengthen contracts across a large suite of programs. This work is an extended version of a conference paper titled “Invariant Synthesis for Incomplete Verification Engines”.
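The shape of such a learner, when invariants are Boolean combinations of a fixed predicate set, can be sketched with a toy safety proof. This is a simplified conjunctive stand-in: the program, predicates, and bounded-domain verifier are all assumptions for illustration, and the sketch omits the framework's key ingredient of non-provability information from an undecidable-theory engine:

```python
# Toy CEGIS-style invariant learner over a fixed predicate set.
# Program: x := 0; while *: x := x + 2.   Safety: x != 7.
# The learner conjoins every predicate consistent with the positive examples;
# the verifier checks the invariant over a bounded integer domain.

PREDS = {
    "x >= 0":     lambda x: x >= 0,
    "x % 2 == 0": lambda x: x % 2 == 0,
    "x <= 5":     lambda x: x <= 5,       # too strong: not inductive
}
DOMAIN = range(-20, 21)
INIT, BAD, STEP = 0, 7, (lambda x: x + 2)

def learn(pos):
    """Conjoin every predicate that holds on all positive examples."""
    return [name for name, p in PREDS.items() if all(p(x) for x in pos)]

def check(inv_names):
    """Verifier: return a counterexample-to-induction, or None if valid."""
    inv = lambda x: all(PREDS[n](x) for n in inv_names)
    assert inv(INIT), "invariant must cover the initial state"
    assert not inv(BAD), "invariant too weak: admits the bad state"
    for x in DOMAIN:
        if inv(x) and STEP(x) in DOMAIN and not inv(STEP(x)):
            return STEP(x)                # invariant is not closed under STEP
    return None

pos = [INIT]
while True:
    inv = learn(pos)
    cex = check(inv)
    if cex is None:
        break                             # inductive invariant found
    pos.append(cex)                       # counterexample strengthens examples
```

Here the first candidate includes the over-strong predicate `x <= 5`; the counterexample-to-induction it produces becomes a new positive example, and the learner settles on the inductive invariant `x >= 0 ∧ x % 2 == 0`, which excludes the bad state.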


2020 ◽  
Vol 4 (2) ◽  
Author(s):  
Paul A. Malovrh ◽  
James F. Lee ◽  
Stephen Doherty ◽  
Alecia Nichols

The present study measured the effects of guided-inductive (GI) versus deductive computer-delivered instruction on the processing and retention of the Spanish true passive using a self-paced reading design. Fifty-four foreign language learners of Spanish participated in the study, which operationalised guided-inductive and deductive approaches using an adaptation of the PACE model and processing instruction (PI), respectively. Results revealed that each experimental group significantly improved after the pedagogical intervention, and that the GI group outperformed the PI group in terms of accuracy on an immediate post-test. Differences between the groups, however, did not persist; at the delayed post-test, each group performed the same. Additional analyses revealed that the GI group spent over twice as much time on task during instruction as the PI group, with no long-term advantages in processing, calling into question the pedagogical justification for implementing GI at a curricular level.


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 207485-207498
Author(s):  
Konstantin Chukharev ◽  
Dmitrii Suvorov ◽  
Daniil Chivilikhin ◽  
Valeriy Vyatkin
