Extracting minimal unsatisfiable subformulas in satisfiability modulo theories

2011 ◽  
Vol 8 (3) ◽  
pp. 693-710 ◽  
Author(s):  
Jianmin Zhang ◽  
Shengyu Shen ◽  
Jun Zhang ◽  
Weixia Xu ◽  
Sikun Li

Explaining the causes of infeasibility of formulas has practical applications in various fields, such as formal verification and electronic design automation. A minimal unsatisfiable subformula provides a succinct explanation of infeasibility and is valuable for these applications. The problem of deriving minimal unsatisfiable cores from Boolean formulas has been addressed rather frequently in recent years, but little attention has been paid to the extraction of unsatisfiable subformulas in Satisfiability Modulo Theories (SMT). In this paper, we propose a depth-first-search algorithm and a breadth-first-search algorithm to compute minimal unsatisfiable cores in SMT, each adopting a different search strategy. We report and analyze experimental results obtained from an extensive evaluation on SMT-LIB benchmarks.
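For illustration, here is a minimal deletion-based sketch of unsatisfiable-core extraction using the z3 Python bindings; it is not the paper's DFS/BFS algorithms, and the constraint list and helper name are hypothetical.

```python
from z3 import Solver, Bool, Or, Not, unsat  # pip install z3-solver

def mus_deletion(constraints):
    """Deletion-based minimal unsatisfiable subformula: try dropping each
    constraint in turn and keep the drop only if the remaining set is still
    unsatisfiable; what survives is minimal."""
    core = list(constraints)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        s = Solver()
        s.add(*trial)
        if s.check() == unsat:
            core = trial        # constraint i was not needed for infeasibility
        else:
            i += 1              # constraint i is necessary; keep it
    return core

# Toy propositional usage; any theory constraints accepted by the solver work too.
x, y = Bool("x"), Bool("y")
print(mus_deletion([x, Not(x), Or(x, y), Not(y)]))   # -> [Not(x), Or(x, y), Not(y)]
```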

Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 211 ◽  
Author(s):  
Pierluigi Crescenzi ◽  
Clémence Magnien ◽  
Andrea Marino

The harmonic closeness centrality measure assigns to each node of a graph the average of the inverses of its distances to all other nodes (assuming that unreachable nodes are at infinite distance). This notion has been adapted to temporal graphs (that is, graphs in which edges can appear and disappear over time), and in this paper we address the question of finding the top-k nodes for this metric. Computing the temporal closeness of one node can be done in O(m) time, where m is the number of temporal edges. Therefore, computing the closeness exactly for all nodes, in order to find the ones with top closeness, would require O(nm) time, where n is the number of nodes. This time complexity is intractable for large temporal graphs. Instead, we show how this measure can be efficiently approximated by using a “backward” temporal breadth-first search algorithm and a classical sampling technique. Our experimental results show that the approximation is excellent for nodes with high closeness, allowing us to detect them in practice in a fraction of the time needed to compute the exact closeness of all nodes. We validate our approach with an extensive set of experiments.
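As a rough illustration of the sampling idea, simplified to a static graph rather than the paper's temporal setting (the function and argument names are hypothetical), harmonic closeness can be estimated by running a reverse BFS from each of a few sampled target nodes:

```python
import random
from collections import deque

def estimate_harmonic_closeness(adj_rev, nodes, k, seed=0):
    """Sampling estimator for harmonic closeness, simplified to a static graph.
    `adj_rev` maps each node to its in-neighbours in the original graph, so a
    BFS from a sampled target w over `adj_rev` yields d(v, w) for every v.
    Averaging 1/d(v, w) over k sampled targets estimates the harmonic closeness
    of v up to the (n-1)/n normalisation."""
    rng = random.Random(seed)
    score = {v: 0.0 for v in nodes}
    for w in rng.sample(nodes, k):
        dist = {w: 0}
        queue = deque([w])
        while queue:                                  # reverse BFS from w
            u = queue.popleft()
            for v in adj_rev.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            if d > 0:
                score[v] += 1.0 / d
    return {v: s / k for v, s in score.items()}

# Toy usage on a directed triangle a -> b -> c -> a.
adj_rev = {"b": ["a"], "c": ["b"], "a": ["c"]}
print(estimate_harmonic_closeness(adj_rev, ["a", "b", "c"], k=2))
```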


2020 ◽  
Author(s):  
Zhang Jianmin ◽  
Li Tiejun ◽  
Ma Kefan

Explaining the causes of infeasibility of Boolean formulas has practical applications in various fields. A small unsatisfiable subset can provide a succinct explanation of infeasibility and is valuable for applications such as FPGA routing. The Boolean-based FPGA detailed routing formulation expresses the routing constraints as a Boolean function that is satisfiable if and only if the layout is routable. Unsatisfiable subformulas can help the FPGA routing tool diagnose and eliminate the causes of unroutability. For this typical application, a resolution-based local search algorithm for extracting unsatisfiable subformulas is integrated into a Boolean-based FPGA routing method. The fastest algorithm for deriving minimum unsatisfiable subformulas, a branch-and-bound algorithm, is used as the baseline for comparison with the local search algorithm. On the standard FPGA routing benchmarks, the results show that the local search algorithm outperforms the branch-and-bound algorithm in runtime. It is also concluded that unsatisfiable subformulas play a very important role in real FPGA routing applications.
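To make the "satisfiable iff routable" formulation concrete, here is a deliberately tiny Boolean encoding sketched with the z3 Python bindings; the variable scheme and function name are hypothetical and far simpler than a real FPGA detailed-routing encoding.

```python
from z3 import Solver, Bool, Or, Not, sat, is_true  # pip install z3-solver

def route_nets(nets, tracks, conflicts):
    """Toy 'satisfiable iff routable' encoding: x[n, t] is true iff net n is
    assigned to track t.  Every net must get a track, and nets listed as
    conflicting must never share one.  Returns an assignment or None."""
    x = {(n, t): Bool(f"x_{n}_{t}") for n in nets for t in tracks}
    s = Solver()
    for n in nets:                                    # each net is routed
        s.add(Or([x[n, t] for t in tracks]))
    for a, b in conflicts:                            # conflicting nets kept apart
        for t in tracks:
            s.add(Or(Not(x[a, t]), Not(x[b, t])))
    if s.check() == sat:
        model = s.model()
        return {n: next(t for t in tracks if is_true(model[x[n, t]])) for n in nets}
    return None  # unroutable: an unsatisfiable subformula would explain why

# Toy usage: three nets, two tracks, nets 0 and 1 conflict, as do 1 and 2.
print(route_nets([0, 1, 2], ["T0", "T1"], [(0, 1), (1, 2)]))
```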


Author(s):  
E. de Langre ◽  
J. L. Riverin ◽  
M. J. Pettigrew

The time-dependent forces resulting from a two-phase air-water mixture flowing in an elbow and a tee are measured. Their magnitudes as well as their spectral contents are analyzed. Comparison is made with previous experimental results on similar systems. For practical applications, a dimensionless form is proposed to relate the characteristics of these forces to the parameters defining the flow and the geometry of the piping.


2020 ◽  
Author(s):  
Shiying Li

Although Zernike and pseudo-Zernike moments have some advantageous properties, their computation is generally very time-consuming, which has limited their practical applications. To improve the computational efficiency of Zernike and pseudo-Zernike moments, in this research we have explored the use of the GPU to accelerate moment computation and proposed a GPU-accelerated algorithm. The newly developed algorithm is implemented in Python and CUDA C++ with optimizations based on symmetry properties and a k × k sub-region scheme. The experimental results are encouraging and show that our GPU-accelerated algorithm is able to compute Zernike moments up to order 700 for an image sized 512 × 512 in 1.7 seconds, and pseudo-Zernike moments in 3.1 seconds. We have also verified the accuracy of our GPU algorithm by performing image reconstructions from higher orders of Zernike and pseudo-Zernike moments. For an image sized 512 × 512, with a maximum order of 700 and k = 11, the PSNR (Peak Signal to Noise Ratio) values of its reconstructed versions from Zernike and pseudo-Zernike moments are 44.52 and 46.29, respectively. We have performed image reconstructions from partial sets of Zernike and pseudo-Zernike moments with various orders n and repetitions m. Experimental results for both Zernike and pseudo-Zernike moments show that images reconstructed from lower-order moments preserve the principal contents of the original image, while those reconstructed from higher-order moments preserve its details, and moments with positive and negative m result in identical images. Lastly, we have proposed a set of feature vectors based on pseudo-Zernike moments for Chinese character recognition. Three different feature vectors are composed of different parts of four selected lower-order pseudo-Zernike moments. Experiments on a set of 6,762 Chinese characters show that this method performs well in recognizing similar-shaped Chinese characters.
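As a point of reference for what is being accelerated, the following plain-NumPy sketch computes a single Zernike moment on the unit disk; it omits the paper's symmetry and k × k sub-region optimizations and its GPU kernels, and the function name is hypothetical.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Single Zernike moment Z_{n,m} of a square grayscale image over the unit
    disk (requires |m| <= n and n - |m| even).  Plain NumPy reference only."""
    img = np.asarray(img, dtype=float)
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    x = (2 * xs - N + 1) / (N - 1)           # map pixel centers to [-1, 1]
    y = (2 * ys - N + 1) / (N - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0                      # keep only pixels on the unit disk

    R = np.zeros_like(rho)                   # radial polynomial R_{n,|m|}(rho)
    for s in range((n - abs(m)) // 2 + 1):
        coeff = ((-1) ** s * factorial(n - s) /
                 (factorial(s)
                  * factorial((n + abs(m)) // 2 - s)
                  * factorial((n - abs(m)) // 2 - s)))
        R += coeff * rho ** (n - 2 * s)

    V_conj = R * np.exp(-1j * m * theta)     # conjugate basis function V*_{n,m}
    pixel_area = 4.0 / (N * N)               # area of one pixel in [-1, 1]^2
    return (n + 1) / np.pi * np.sum(img[inside] * V_conj[inside]) * pixel_area
```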


Recent applications of conventional iterative coordinate descent (ICD) algorithms to multislice helical CT reconstruction have shown that conventional ICD can greatly improve image quality by increasing resolution as well as reducing noise and some artifacts. However, high computational cost and long reconstruction times remain a barrier to the use of the conventional algorithm in practical applications. Among the various iterative methods that have been studied, ICD has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a fast model-based iterative reconstruction algorithm using spatially nonhomogeneous ICD (NH-ICD) optimization. The NH-ICD algorithm speeds up convergence by focusing computation where it is most needed, using a mechanism that adaptively selects voxels for update. First, a voxel selection criterion (VSC) determines the voxels in greatest need of update. Then a voxel selection algorithm (VSA) selects the order of successive voxel updates based upon the need for repeated updates of some locations, while retaining the characteristics required for global convergence. To speed up each voxel update, we also propose a fast 3-D optimization algorithm that uses a quadratic substitute function to upper bound the local 3-D objective function, so that a closed-form solution can be obtained rather than using a computationally expensive line search algorithm. The experimental results show that the proposed method accelerates the reconstructions by roughly a factor of three on average for typical 3-D multislice geometries.
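The closed-form surrogate update at the heart of such schemes can be sketched in a few lines. The toy coordinate-descent loop below (with hypothetical callables `grad`, `curv_bound`, and `select`) minimizes a quadratic upper bound of the local 1-D objective at each step instead of running a line search; it illustrates the general idea, not the paper's CT-specific implementation.

```python
import numpy as np

def surrogate_coordinate_descent(x0, grad, curv_bound, select, n_iters):
    """Toy coordinate-descent loop in the spirit of (NH-)ICD: a selection rule
    picks the coordinate most in need of an update, and the update minimizes a
    quadratic surrogate q(t) = g*t + (c/2)*t**2 that upper bounds the local 1-D
    objective, so the step has the closed form t = -g / c (no line search)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iters):
        j = select(x)                 # e.g. index with the largest |gradient|
        g = grad(x, j)                # 1-D derivative of the objective at x[j]
        c = curv_bound(x, j)          # any c >= local curvature preserves descent
        x[j] -= g / c                 # exact minimizer of the surrogate
    return x

# Toy usage on the quadratic objective f(x) = 0.5 * ||A x - b||^2 (made-up data).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
grad = lambda x, j: A[:, j] @ (A @ x - b)
curv = lambda x, j: A[:, j] @ A[:, j]
select = lambda x: int(np.argmax([abs(grad(x, j)) for j in range(len(x))]))
print(surrogate_coordinate_descent(np.zeros(2), grad, curv, select, 50))
```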


Author(s):  
Md. Sabir Hossain ◽  
Ahsan Sadee Tanim ◽  
Nabila Nawal ◽  
Sharmin Akter

Background: Tour recommendation and path planning are challenging tasks for tourists as they decide on Points of Interest (POIs). Objective: The main objective of this paper is to reduce the physical effort of tourists and recommend them a personalized tour. Most of the time, people have to find the places they want to visit in a laborious way, which wastes a lot of time. Methods: To cope with this situation, we combine several methods. First, a greedy algorithm filters the POIs, and a BFS (Breadth-First Search) algorithm finds POIs matching the user's interests, considering the maximum number of POIs that can be visited within a limited time. Then, Dijkstra's algorithm finds the shortest path from the point of departure to the end of the tour. Results: This work shows its users a list of places according to the user's interests in a particular city. It also suggests places to visit within a range of the user's location, where the user can dynamically change this range, as well as nearby places they may want to visit. Conclusion: This tour recommendation system provides its users with better trip planning and thus makes their holidays enjoyable.
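As a reminder of the routing step described above, here is a standard Dijkstra shortest-path sketch over a weighted POI graph; the graph representation and place names are generic placeholders, not the paper's implementation.

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path in a weighted digraph given as {node: [(neighbor, weight), ...]}.
    Returns (total_distance, path) or (inf, []) if the target is unreachable."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:                   # reconstruct the path backwards
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

# Toy usage with made-up POIs and travel times.
graph = {"hotel": [("museum", 2.0), ("park", 5.0)],
         "museum": [("park", 1.0)], "park": []}
print(dijkstra(graph, "hotel", "park"))   # (3.0, ['hotel', 'museum', 'park'])
```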


2021 ◽  
Author(s):  
Shaoxia Zhang ◽  
Deyu Li ◽  
Yanhui Zhai

Decision implication is an elementary representation of decision knowledge in formal concept analysis. The decision implication canonical basis (DICB), a complete and non-redundant set of decision implications, is the most compact representation of decision implications. The method based on true premises (MBTP) is currently the most efficient method for DICB generation. In practical applications, however, data changes dynamically, and MBTP has to inefficiently re-generate the whole DICB. This paper proposes an incremental algorithm for DICB generation, which obtains a new DICB by modifying and updating the existing one. Experimental results verify that when the data contains many more samples than condition attributes, which is the general case in practical applications, the incremental algorithm is significantly superior to MBTP. Furthermore, we conclude that even for data with fewer samples than condition attributes, when new samples are continually added, the incremental algorithm is also more efficient than MBTP, because it only needs to modify the existing DICB, which is only part of the work done by MBTP.
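To make the underlying notion concrete: a decision implication `premise ⇒ conclusion` holds in a data set when every sample whose condition attributes include the premise also carries all of the conclusion's decision attributes. The small check below illustrates just that definition, with a hypothetical data representation; it is not the paper's DICB algorithm.

```python
def implication_holds(samples, premise, conclusion):
    """`samples` is a list of (condition_attrs, decision_attrs) sets.
    The decision implication premise => conclusion holds iff every sample whose
    condition attributes contain the premise also contains the whole conclusion
    among its decision attributes."""
    return all(conclusion <= decisions
               for conditions, decisions in samples
               if premise <= conditions)

# Toy usage with made-up attribute names.
data = [({"a", "b"}, {"d1"}), ({"a"}, {"d2"}), ({"a", "b", "c"}, {"d1", "d2"})]
print(implication_holds(data, {"a", "b"}, {"d1"}))   # True
```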


Author(s):  
Giovanni Amendola ◽  
Carmine Dodaro ◽  
Marco Maratea

The problem of formally describing solving algorithms in various fields such as Propositional Satisfiability (SAT), Quantified SAT, Satisfiability Modulo Theories, Answer Set Programming (ASP), and Constraint ASP has relatively recently been addressed by employing abstract solvers. In this paper we deal with cautious reasoning tasks in ASP, and we design, implement and test novel abstract solutions borrowed from backbone computation in SAT. By employing abstract solvers, we also formally show that the algorithms for solving cautious reasoning tasks in ASP are strongly related to those for computing backbones of Boolean formulas. Some of the new solutions have been implemented and tested in the ASP solver WASP.
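For readers unfamiliar with backbones, the naive sketch below (using the z3 Python bindings; the helper name is hypothetical) checks, one literal at a time, whether a literal is forced to be true in every model of a satisfiable formula; this is the SAT notion from which the paper's abstract solutions are borrowed, not the WASP implementation.

```python
from z3 import Solver, Bool, And, Or, Not, unsat  # pip install z3-solver

def backbone(formula, literals):
    """A literal l belongs to the backbone of a satisfiable formula F
    iff F /\ not(l) is unsatisfiable (naive one-solver-call-per-literal check)."""
    result = []
    for lit in literals:
        s = Solver()
        s.add(formula, Not(lit))
        if s.check() == unsat:
            result.append(lit)
    return result

# Toy usage: in (x or y) and not(y), x is forced true, y is not.
x, y = Bool("x"), Bool("y")
print(backbone(And(Or(x, y), Not(y)), [x, y]))   # [x]
```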


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zhu Hongbiao ◽  
Yueming Liu ◽  
Weidong Wang ◽  
Zhijiang Du

Purpose: This paper presents a new method to analyze a robot's obstacle negotiation based on terramechanics, taking into account the terrain's physical parameters and the robot's sinkage and slippage, in order to enhance the robot's trafficability. Design/methodology/approach: In this paper, terramechanics is used in motion planning for all-terrain obstacle negotiation. First, wheel/track-terrain interaction models are established and used to analyze traction performance in the different locomotion modes of the reconfigurable robot. Next, several key steps of obstacle climbing are reanalyzed, and the sinkage, the slippage and the drawbar pull in these steps are obtained from the models. In addition, an obstacle negotiation analysis method on loose soil is proposed. Finally, experiments in different locomotion modes are conducted, and the results demonstrate that the model is more suitable for practical applications than the center of gravity (CoG) kinematic model. Findings: Using the traction performance experimental platform, the relationships between the drawbar pull and the slippage in different locomotion modes are obtained, and from them the traction performances. The experimental results show that the relationships obtained by the models are in good agreement with the measured ones. Obstacle-climbing experiments are carried out to confirm the applicability of the method, and their results demonstrate that the model is more suitable for practical applications than the CoG kinematic model. Originality/value: Compared with an analysis that ignores terramechanics, obstacle-negotiation analysis based on the proposed track-terrain interaction model, which takes terramechanics into account, is much more accurate.
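For orientation, the classical Janosi-Hanamoto shear relation commonly used in track-terrain models links tractive force to slip; the sketch below computes drawbar pull from it under textbook assumptions (uniform contact pressure, slip > 0) and with generic symbols. It is offered only as background on the kind of relation involved, not as the paper's specific models.

```python
import math

def drawbar_pull(W, A, c, phi, K, L, slip, resistance):
    """Classical Janosi-Hanamoto estimate of the drawbar pull of a track:
    W: normal load [N]        A: contact area [m^2]    c: soil cohesion [Pa]
    phi: internal friction angle [rad]   K: shear deformation modulus [m]
    L: contact length [m]     slip: slip ratio (> 0)
    resistance: total motion resistance [N]
    Drawbar pull = tractive force - motion resistance."""
    H = A * c + W * math.tan(phi)                          # maximum tractive effort
    F = H * (1.0 - (K / (slip * L)) * (1.0 - math.exp(-slip * L / K)))
    return F - resistance
```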

