A Flexible Proof Format for SAT Solver-Elaborator Communication

Author(s):  
Seulkee Baek ◽  
Mario Carneiro ◽  
Marijn J. H. Heule

Abstract We introduce FRAT, a new proof format for unsatisfiable SAT problems, and its associated toolchain. Compared to DRAT, the FRAT format allows solvers to include more information in proofs to reduce the computational cost of subsequent elaboration to LRAT. The format is easy to parse forward and backward, and it is extensible to future proof methods. The provision of optional proof steps allows SAT solver developers to balance implementation effort against elaboration time, with little to no overhead on solver time. We benchmark our toolchain against a comparable toolchain and confirm a >84% median reduction in elaboration time and a >94% median decrease in peak memory usage.
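Much of the elaboration cost discussed above comes from reconstructing the unit-propagation justification for each proof step. As a minimal illustration of what such checking involves (a sketch, not the paper's toolchain), the following verifies clausal proof steps in the DRAT style by reverse unit propagation (RUP): negate the candidate clause, propagate units, and require a conflict. Full DRAT also allows RAT additions and deletions, which are omitted here.

```python
def propagates_to_conflict(clauses, assumed):
    """Unit-propagate from the assumed literals; True iff a conflict arises."""
    assignment = set(assumed)
    while True:
        unit = None
        for c in clauses:
            if any(l in assignment for l in c):
                continue                        # clause already satisfied
            pending = [l for l in c if -l not in assignment]
            if not pending:
                return True                     # all literals falsified: conflict
            if len(pending) == 1:
                unit = pending[0]
                break
        if unit is None:
            return False                        # fixpoint reached, no conflict
        assignment.add(unit)

def check_drat_rup(cnf, proof):
    """Check RUP proof steps only (no RAT, no deletions): each added
    clause must be a RUP consequence of the clauses so far, and the
    proof must end with the empty clause."""
    clauses = [list(c) for c in cnf]
    for step in proof:
        if not propagates_to_conflict(clauses, [-l for l in step]):
            return False
        clauses.append(list(step))
    return bool(proof) and proof[-1] == []

cnf = [[1, 2], [1, -2], [-1, 2], [-1, -2]]      # unsatisfiable CNF
proof = [[1], []]                               # DRAT text: "1 0" then "0"
print(check_drat_rup(cnf, proof))               # True
```

Elaborated formats such as LRAT additionally record which clauses participate in each propagation, so a checker can skip the search above; that is the information a richer solver-side format lets the elaborator avoid recomputing.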

2016 ◽  
Vol 2016 ◽  
pp. 1-12
Author(s):  
Hua Hua ◽  
Xiaomin Yang ◽  
Binyu Yan ◽  
Kai Zhou ◽  
Wei Lu

Main challenges for image enlargement methods in embedded systems come from the requirements of good performance, low computational cost, and low memory usage. This paper proposes an efficient image enlargement method that meets these requirements in embedded systems. Firstly, to improve enlargement performance, the method extracts different kinds of features for different morphologies with different approaches. Then, various dictionaries based on these different kinds of features are learned, which represent the image more efficiently. Secondly, to accelerate enlargement and reduce memory usage, the method divides the atoms of each dictionary into several clusters. For each cluster, a separate projection matrix is calculated, reformulating the problem as a least-squares regression. The high-resolution (HR) images can then be reconstructed from a few projection matrices. Numerous experimental results show that the method is efficient and real-time and has a low memory cost. These advantages make it easy to implement in mobile embedded systems.
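The per-cluster projection matrices described above reduce reconstruction at runtime to one matrix-vector product per patch. A minimal sketch of the idea for a single cluster (hypothetical shapes and synthetic data, not the paper's trained dictionaries), using closed-form ridge regression from low-resolution features to high-resolution patches:

```python
import numpy as np

def learn_projection(lr_feats, hr_patches, lam=1e-3):
    """Closed-form ridge regression mapping LR features to HR patches.

    lr_feats:   (d_lr, n) LR feature vectors of one cluster, as columns.
    hr_patches: (d_hr, n) corresponding HR patches.
    Returns P of shape (d_hr, d_lr) such that hr ~= P @ lr.
    """
    d = lr_feats.shape[0]
    gram = lr_feats @ lr_feats.T + lam * np.eye(d)
    return hr_patches @ lr_feats.T @ np.linalg.inv(gram)

rng = np.random.default_rng(0)
P_true = rng.standard_normal((16, 8))        # hypothetical ground-truth map
lr = rng.standard_normal((8, 200))
hr = P_true @ lr                             # noiseless training pairs
P = learn_projection(lr, hr, lam=1e-8)
print(np.allclose(P, P_true, atol=1e-4))     # recovers the linear map: True
```

Because `P` is precomputed offline per cluster, online enlargement needs no sparse coding, which is what makes the approach fast and memory-light on embedded hardware.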


Author(s):  
Arturo Mascorro ◽  
Francisco Mesa ◽  
Jose Alvarez ◽  
Laura Cruz

ABSTRACT A comparative study of computational cost in Java and C applications was conducted. The computational routines consisted of matrix multiplication, the discrete cosine transform, and the bubble-sort algorithm. Memory usage and runtime were measured for each application. The runtime of matrix multiplication in Java was between 200 and 300 milliseconds, whereas the application developed in C was stable with an execution time of less than 20 milliseconds. For the bubble-sort algorithm, Java was observed to be very slow compared to C. In addition, memory usage was lower in most of the applications, with only a minimal difference. Applications were tested on both a mobile LG-E510f phone and a Toshiba Satellite laptop. The study reports the gains in both runtime and memory consumption achieved by a native C implementation relative to Java.


SPE Journal ◽  
2017 ◽  
Vol 22 (06) ◽  
pp. 1999-2011 ◽  
Author(s):  
Guohua Gao ◽  
Hao Jiang ◽  
Paul van Hagen ◽  
Jeroen C. Vink ◽  
Terence Wells

Summary Solving the Gauss-Newton trust-region subproblem (TRS) with traditional solvers involves solving a symmetric linear system whose dimension equals the number of uncertain parameters, which is extremely computationally expensive for history-matching problems with many uncertain parameters. A new trust-region (TR) solver is developed to save both memory usage and computational cost, and its performance is compared with the well-known direct TR solver using factorization and the iterative TR solver using the conjugate-gradient approach. By applying the matrix inverse lemma, the original TRS is transformed into a new problem that involves solving a linear system whose dimension equals the number of observed data. For history-matching problems in which the number of uncertain parameters is much larger than the number of observed data, both memory usage and central-processing-unit (CPU) time can be reduced significantly compared with solving the original problem directly. An auto-adaptive power-law transformation technique is developed to transform the original strongly nonlinear function into a new function that behaves more like a linear function. Finally, the Newton-Raphson method with some modifications is applied to solve the TRS. The proposed approach is applied to find best-match solutions in Bayesian-style assisted-history-matching (AHM) problems. It is first validated on a set of synthetic test problems with different numbers of uncertain parameters and observed data. In terms of efficiency, the new approach is shown to significantly reduce both the computational cost and memory usage compared with the direct TR solver of the GALAHAD optimization library (see http://www.galahad.rl.ac.uk/doc.html). In terms of robustness, the new approach significantly reduces the risk of failing to find the correct solution compared with the iterative TR solver of the GALAHAD optimization library.
Our numerical results indicate that the new solver can solve large-scale TRSs with reasonably small amounts of CPU time (in seconds) and memory (in MB). Compared with the CPU time and memory used to complete one reservoir-simulation run for the same problem (in hours and in GB), the cost of finding the best-match parameter values with our new TR solver is negligible. The proposed approach has been implemented in our in-house reservoir simulation and history-matching system, and has been validated on a real reservoir-simulation model. This illustrates the main result of this paper: the development of a robust Gauss-Newton TR approach that is applicable to large-scale history-matching problems with negligible extra cost in CPU time and memory.
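The matrix-inverse-lemma reduction at the heart of the approach can be sketched for a plain regularized Gauss-Newton step (the full TR solver also handles the trust-region constraint; this shows only the dimension reduction, with hypothetical sizes). With Jacobian J of shape (m, n) and m observations much smaller than n parameters, the identity (JᵀJ + λI)⁻¹Jᵀr = Jᵀ(JJᵀ + λI)⁻¹r replaces an n×n solve with an m×m solve:

```python
import numpy as np

def gn_step_dual(J, r, lam):
    """Regularized Gauss-Newton step via the matrix inverse lemma.

    Solves (J^T J + lam*I) dx = J^T r using only an m x m system,
    where J is (m, n) with m (observations) << n (parameters), since
    (J^T J + lam*I)^-1 J^T r == J^T (J J^T + lam*I)^-1 r.
    """
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), r)

rng = np.random.default_rng(1)
m, n = 20, 1500                   # few observed data, many parameters
J = rng.standard_normal((m, n))
r = rng.standard_normal(m)
lam = 0.5

dx_dual = gn_step_dual(J, r, lam)
dx_primal = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)
print(np.allclose(dx_dual, dx_primal))   # True: same step from an m x m solve
```

The n×n primal system needs O(n²) memory and O(n³) work, while the dual form needs only O(mn), which is why both CPU time and memory drop sharply when m ≪ n.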


10.29007/jhd7 ◽  
2018 ◽  
Author(s):  
Armin Biere

One of the design principles of the state-of-the-art SAT solver Lingeling is to use data structures that are as compact as possible. These reduce memory usage and increase cache efficiency, and thus improve run-time, particularly when running multiple solver instances on multi-core machines, as in our parallel portfolio solver Plingeling and our cube-and-conquer solver Treengeling. The scheduler of a dozen inprocessing algorithms is an important aspect of Lingeling as well. In this talk we explain these design and implementation aspects of Lingeling and discuss new directions in solver design.


2012 ◽  
Author(s):  
Todd Wareham ◽  
Robert Robere ◽  
Iris van Rooij

2020 ◽  
Vol 2020 (14) ◽  
pp. 378-1-378-7
Author(s):  
Tyler Nuanes ◽  
Matt Elsey ◽  
Radek Grzeszczuk ◽  
John Paul Shen

We present a high-quality sky-segmentation model for depth refinement and investigate residual-architecture performance to inform optimal shrinking of the network. We describe a model that runs in near real-time on a mobile device, present a new high-quality dataset, and detail a unique weighting to trade off false positives and false negatives in binary classifiers. We show how these optimizations improve bokeh rendering by correcting stereo depth mispredictions in sky regions. We detail techniques used to preserve edges, reject false positives, and ensure generalization to the diversity of sky scenes. Finally, we present a compact model and compare the performance of four popular residual architectures (ShuffleNet, MobileNetV2, ResNet-101, and ResNet-34-like) at constant computational cost.
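A false-positive/false-negative trade-off of the kind mentioned above is commonly implemented by weighting the two error types differently in the training loss. A schematic weighted binary cross-entropy (illustrative only; the paper's exact weighting scheme differs in detail, and the weights here are hypothetical):

```python
import math

def weighted_bce(y_true, p_pred, w_fp=4.0, w_fn=1.0):
    """Per-pixel binary cross-entropy with separate weights per error type:
    w_fp scales the loss on negative (non-sky) pixels, penalizing false
    positives such as sky predicted over foreground; w_fn scales the loss
    on positive (sky) pixels, penalizing false negatives."""
    eps = 1e-7
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)            # clamp for numerical safety
        total += -(w_fn * y * math.log(p) + w_fp * (1 - y) * math.log(1 - p))
    return total / len(y_true)

# With w_fp > w_fn, a confident false positive costs more than a miss:
print(weighted_bce([1, 0], [0.9, 0.9]) > weighted_bce([1, 0], [0.9, 0.1]))  # True
```

Raising `w_fp` pushes the trained classifier toward conservative sky predictions, which matters for bokeh rendering where blurring foreground pixels is more visible than leaving sky sharp.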


2012 ◽  
Vol 2 (1) ◽  
pp. 7-9 ◽  
Author(s):  
Satinderjit Singh

Median filtering is a commonly used technique in image processing. The main problem of the median filter is its high computational cost: sorting N pixels has temporal complexity O(N·log N) even with the most efficient sorting algorithms. When the median filter must be carried out in real time, a software implementation on general-purpose processors does not usually give good results. This paper presents an efficient algorithm for median filtering with a 3x3 filter kernel using only about 9 comparisons per pixel, exploiting spatial coherence between neighboring filter computations. The basic algorithm calculates two medians in one step and reuses sorted slices of three vertically neighboring pixels. An extension of this algorithm to 2D spatial coherence is also examined, which calculates four medians per step.
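For context, the classic branch-free baseline computes each 3x3 median independently with a fixed sorting network of 19 min/max exchanges; the coherence-based algorithm described above improves on this by reusing sorted three-pixel columns between neighboring windows to reach roughly 9 comparisons per pixel. A sketch of the baseline network (the well-known 19-exchange median-of-9 network, not the paper's algorithm):

```python
def median9(window):
    """Median of 9 values via a fixed 19-exchange sorting network.
    Each (i, j) pair leaves the smaller value at index i and the
    larger at index j; after all exchanges the median sits at index 4."""
    def sort2(a, b):
        return (a, b) if a <= b else (b, a)
    p = list(window)
    pairs = [(1, 2), (4, 5), (7, 8), (0, 1), (3, 4), (6, 7),
             (1, 2), (4, 5), (7, 8), (0, 3), (5, 8), (4, 7),
             (3, 6), (1, 4), (2, 5), (4, 7), (4, 2), (6, 4), (4, 2)]
    for i, j in pairs:
        p[i], p[j] = sort2(p[i], p[j])
    return p[4]

print(median9([7, 2, 9, 4, 5, 6, 3, 8, 1]))   # 5
```

Because the exchange sequence is fixed, the network maps directly to branch-free min/max instructions on embedded and SIMD hardware, which is why it is the usual starting point that coherence-based methods then beat.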


2020 ◽  
Author(s):  
Florencia Klein ◽  
Daniela Cáceres-Rojas ◽  
Monica Carrasco ◽  
Juan Carlos Tapia ◽  
Julio Caballero ◽  
...  

<p>Although molecular dynamics simulations allow for the study of interactions among virtually all biomolecular entities, metal ions still pose significant challenges to achieving an accurate structural and dynamical description of many biological assemblies. This is particularly the case for coarse-grained (CG) models. Although the reduced computational cost of CG methods often makes them the technique of choice for the study of large biomolecular systems, the parameterization of metal ions is still very crude or simply not available for the vast majority of CG force fields. Here, we show that incorporating statistical data retrieved from the Protein Data Bank (PDB) to set specific Lennard-Jones interactions can produce structurally accurate CG molecular dynamics simulations. Using this simple approach, we provide a set of interaction parameters for calcium, magnesium, and zinc ions, which cover more than 80% of the metal-bound structures reported in the PDB. Simulations performed using the SIRAH force field on several protein and DNA systems show that, with the present approach, it is possible to obtain non-bonded interaction parameters that obviate the use of topological constraints.</p>
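The step from PDB-derived distance statistics to Lennard-Jones parameters can be sketched as follows (hypothetical numbers for illustration, not the SIRAH parameter set): for a 12-6 potential, the energy minimum lies at r_min = 2^(1/6)·σ, so the most-probable ion-ligand distance observed in crystal structures fixes σ directly.

```python
import numpy as np

def lj_sigma_from_rmin(r_min):
    """A 12-6 Lennard-Jones potential has its minimum at
    r_min = 2**(1/6) * sigma, so an observed most-probable
    ion-ligand distance maps directly onto sigma."""
    return r_min / 2 ** (1 / 6)

def lj(r, sigma, eps):
    """12-6 Lennard-Jones energy: 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6)

# Hypothetical example: a Zn-O coordination peak near 2.1 A from PDB statistics.
sigma = lj_sigma_from_rmin(2.1)
r = np.linspace(1.8, 4.0, 2201)
e = lj(r, sigma, eps=1.0)
print(round(float(r[np.argmin(e)]), 2))   # 2.1, the minimum sits at the observed peak
```

The well depth ε would then be tuned separately (e.g., against coordination numbers or binding geometries), since the distance distribution alone only constrains σ.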


2020 ◽  
Author(s):  
Shi Jun Ang ◽  
Wujie Wang ◽  
Daniel Schwalbe-Koda ◽  
Simon Axelrod ◽  
Rafael Gomez-Bombarelli

<div>Modeling dynamical effects in chemical reactions, such as post-transition-state bifurcation, requires <i>ab initio</i> molecular dynamics simulations due to the breakdown of simpler static models like transition state theory. However, these simulations tend to be restricted to lower-accuracy electronic structure methods and scarce sampling because of their high computational cost. Here, we report the use of statistical learning to accelerate reactive molecular dynamics simulations by combining high-throughput <i>ab initio</i> calculations, graph-convolution interatomic potentials, and active learning. This pipeline was demonstrated on an ambimodal trispericyclic reaction involving 8,8-dicyanoheptafulvene and 6,6-dimethylfulvene. With a dataset size of approximately 31,000 M06-2X/def2-SVP quantum mechanical calculations, the computational cost of exploring the reactive potential energy surface was reduced by an order of magnitude. Thousands of virtually costless picosecond-long reactive trajectories suggest that post-transition-state bifurcation plays a minor role for the reaction in vacuum. Furthermore, a transfer-learning strategy effectively upgraded the potential energy surface to higher levels of theory ((SMD-)M06-2X/def2-TZVPD in vacuum and three other solvents, as well as the more accurate DLPNO-DSD-PBEP86 D3BJ/def2-TZVPD) using about 10% additional calculations for each surface. Since the larger basis set and the dynamic correlation capture intramolecular non-covalent interactions more accurately, they uncover longer lifetimes for the charge-separated intermediate on the more accurate potential energy surfaces. The character of the intermediate switches from entropic to thermodynamic upon including implicit solvation effects, with lifetimes increasing with solvent polarity. Analysis of 2,000 reactive trajectories on the chloroform PES shows qualitative agreement with the experimentally reported periselectivity for this reaction. This overall approach is broadly applicable and opens a door to the study of dynamical effects in larger, previously intractable reactive systems.</div>


Author(s):  
Yudong Qiu ◽  
Daniel Smith ◽  
Chaya Stern ◽  
Mudong Feng ◽  
Lee-Ping Wang

<div>The parameterization of torsional/dihedral angle potential energy terms is a crucial part of developing molecular mechanics force fields.</div><div>Quantum mechanical (QM) methods are often used to provide samples of the potential energy surface (PES) for fitting the empirical parameters in these force field terms.</div><div>To ensure that the sampled molecular configurations are thermodynamically feasible, constrained QM geometry optimizations are typically carried out, which relax the orthogonal degrees of freedom while fixing the target torsion angle(s) on a grid of values.</div><div>However, the quality of results and computational cost are affected by various factors on a non-trivial PES, such as dependence on the chosen scan direction and the lack of efficient approaches to integrate results started from multiple initial guesses.</div><div>In this paper we propose a systematic and versatile workflow called <i>TorsionDrive</i> to generate energy-minimized structures on a grid of torsion constraints by means of a recursive wavefront propagation algorithm, which resolves the deficiencies of conventional scanning approaches and generates higher quality QM data for force field development.</div><div>The capabilities of our method are presented for multi-dimensional scans and multiple initial guess structures, and an integration with the MolSSI QCArchive distributed computing ecosystem is described.</div><div>The method is implemented in an open-source software package that is compatible with many QM software packages and energy minimization codes.</div>
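The wavefront-propagation idea can be sketched on a toy one-dimensional torsion grid (this is not TorsionDrive's actual API; the energy function and the crude gradient-descent "optimizer" are stand-ins): each newly found minimum at a grid angle seeds constrained optimizations at the neighboring angles, and a grid point is re-optimized whenever a seed lowers its best-known energy, so results no longer depend on a single scan direction.

```python
import math

def relax(theta_deg, x0, steps=200, lr=0.01):
    """Toy stand-in for a constrained geometry optimization: gradient
    descent on a tilted double well in x with the torsion angle fixed.
    E(t, x) = (x^2 - 1)^2 + cos(t) * x."""
    t = math.radians(theta_deg)
    x = x0
    for _ in range(steps):
        x -= lr * (4 * x * (x * x - 1) + math.cos(t))   # dE/dx
    return (x * x - 1) ** 2 + math.cos(t) * x, x

def wavefront_scan(grid, seed_x=1.0):
    """Recursive wavefront over the torsion grid: keep the lowest energy
    per grid point, and seed both neighbors from every improvement."""
    best = {}                                   # angle -> (energy, structure)
    frontier = [(grid[0], seed_x)]
    while frontier:
        next_frontier = []
        for theta, x0 in frontier:
            e, x = relax(theta, x0)
            if theta not in best or e < best[theta][0] - 1e-9:
                best[theta] = (e, x)
                i = grid.index(theta)
                for j in (i - 1, i + 1):        # propagate to neighbors
                    if 0 <= j < len(grid):
                        next_frontier.append((grid[j], x))
        frontier = next_frontier
    return best

grid = list(range(0, 360, 15))
scan = wavefront_scan(grid)
print(len(scan) == len(grid))    # every grid point visited: True
```

Because every improvement re-seeds its neighbors, the scan converges to a self-consistent set of minima regardless of the starting angle, which is the property the recursive wavefront is designed to guarantee.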

