Conflict Generalisation in ASP: Learning Correct and Effective Non-Ground Constraints

2020 ◽  
Vol 20 (5) ◽  
pp. 799-814
Author(s):  
RICHARD TAUPE ◽  
ANTONIUS WEINZIERL ◽  
GERHARD FRIEDRICH

Abstract Generalising and re-using knowledge learned while solving one problem instance has been neglected by state-of-the-art answer set solvers. We suggest a new approach that generalises learned nogoods for re-use, speeding up the solving of future problem instances. Our solution combines well-known ASP solving techniques with deductive logic-based machine learning. Solving performance can be improved by adding learned non-ground constraints to the original program. We demonstrate the effects of our method by means of realistic examples, showing that our approach requires low computational cost to learn constraints that yield significant performance benefits in our test cases. These benefits can be seen with ground-and-solve systems as well as lazy-grounding systems; in some cases, however, ground-and-solve systems suffer from additional grounding overhead induced by the added constraints. By means of conflict minimisation, non-minimal learned constraints can be reduced. As our experiments show, this can significantly reduce both grounding and solving effort.
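The generalisation step can be illustrated with a toy sketch: consistently replace the constants of a learned ground nogood with fresh variables to obtain a non-ground constraint. This shows only the lifting idea; the paper's method additionally has to guarantee that the learned constraint is correct for all instances. All names and the input format below are assumptions.

```python
# Toy sketch of lifting a ground nogood to a non-ground ASP constraint
# by mapping each distinct constant to a fresh variable. Illustrative
# only; not the paper's implementation.

def lift_nogood(ground_atoms):
    """ground_atoms: list of (predicate, [constants]) pairs."""
    mapping = {}          # constant -> variable, shared across atoms
    lifted = []
    for name, args in ground_atoms:
        new_args = []
        for a in args:
            if a not in mapping:
                mapping[a] = f"V{len(mapping)}"
            new_args.append(mapping[a])
        lifted.append(f"{name}({','.join(new_args)})")
    return ":- " + ", ".join(lifted) + "."

# Two tasks on the same machine caused a conflict; the shared constant
# m1 becomes a shared variable, generalising the nogood.
constraint = lift_nogood([("assign", ["t1", "m1"]), ("assign", ["t2", "m1"])])
print(constraint)  # :- assign(V0,V1), assign(V2,V1).
```

The shared variable `V1` is what makes the constraint generalise: it forbids any two distinct tasks on the same machine, not just `t1` and `t2` on `m1`.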

2018 ◽  
Vol 2018 ◽  
pp. 1-12
Author(s):  
Yun-Hua Wu ◽  
Lin-Lin Ge ◽  
Feng Wang ◽  
Bing Hua ◽  
Zhi-Ming Chen ◽  
...  

In order to satisfy the real-time requirement of spacecraft autonomous navigation using natural landmarks, a novel algorithm called CSA-SURF (chessboard segmentation algorithm and speeded-up robust features) is proposed to improve the speed of the image registration process without loss of repeatability. It combines the chessboard segmentation algorithm (CSA) with SURF. Here, SURF is used to extract features from satellite images because of its scale- and rotation-invariant properties and low computational cost. CSA is based on image segmentation technology and aims to find representative blocks, which are allocated to different tasks to speed up the image registration process. To illustrate the advantages of the proposed algorithm, PCA-SURF, the combination of principal component analysis and SURF, is also analysed for comparison. Furthermore, the random sample consensus (RANSAC) algorithm is applied to eliminate false matches for further accuracy improvement. The simulation results show that the proposed strategy performs well, especially under scaling and rotation variation. Moreover, compared with the SURF algorithm, CSA-SURF reduces extraction time by 50% and matching time by 90% without losing repeatability. The proposed method has been demonstrated as an alternative for image registration in spacecraft autonomous navigation using natural landmarks.
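The chessboard-segmentation idea (split the image into a regular grid of blocks and rank them so feature extraction can focus on representative blocks) can be sketched as follows; scoring blocks by intensity variance is an illustrative assumption, not necessarily the paper's criterion.

```python
import numpy as np

# Illustrative sketch of chessboard segmentation: tile the image with a
# regular grid of blocks and rank blocks by a simple texture score
# (variance), so SURF extraction can be parallelised over the most
# representative blocks. The variance score is an assumption.

def chessboard_blocks(image, block):
    h, w = image.shape
    scores = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i + block, j:j + block]
            scores.append(((i, j), float(patch.var())))
    # Most textured blocks first: likelier to yield stable keypoints.
    return sorted(scores, key=lambda s: -s[1])

rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[:4, :4] = rng.normal(size=(4, 4))   # only the top-left block has texture
best = chessboard_blocks(img, 4)
print(best[0][0])  # (0, 0) -- the textured block ranks first
```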


2014 ◽  
Vol 24 (4) ◽  
pp. 901-916
Author(s):  
Zoltán Ádám Mann ◽  
Tamás Szép

Abstract Backtrack-style exhaustive search algorithms for NP-hard problems tend to have large variance in their runtime. This is because “fortunate” branching decisions can lead to finding a solution quickly, whereas “unfortunate” decisions in another run can lead the algorithm to a region of the search space with no solutions. In the literature, frequent restarting has been suggested as a means to overcome this problem. In this paper, we propose a more sophisticated approach: a best-first search heuristic to quickly move between parts of the search space, always concentrating on the most promising region. We describe how this idea can be efficiently incorporated into a backtrack search algorithm, without sacrificing optimality. Moreover, we demonstrate empirically that, for hard solvable problem instances, the new approach provides significantly higher speed-up than frequent restarting.
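The best-first idea can be sketched on a toy graph-colouring instance: keep open partial assignments in a priority queue and always expand the most promising one instead of strictly backtracking. The promise measure here (prefer deeper partial assignments) is a deliberate simplification of the paper's heuristic.

```python
import heapq

# Minimal best-first search over partial assignments for graph colouring.
# Completeness is kept: the queue is exhausted before reporting "no
# solution". The depth-based priority is illustrative only.

def best_first_coloring(edges, n_vars, n_colors):
    counter = 0                  # unique tie-breaker so tuples never compare assignments
    heap = [(n_vars, 0, ())]     # (unassigned count, tie-breaker, partial assignment)
    while heap:
        _, _, assign = heapq.heappop(heap)   # most promising open node
        v = len(assign)                      # next variable to colour
        if v == n_vars:
            return list(assign)
        for c in range(n_colors):
            if all(assign[u] != c for u, w in edges if w == v and u < v) and \
               all(assign[w] != c for u, w in edges if u == v and w < v):
                counter += 1
                heapq.heappush(heap, (n_vars - v - 1, counter, assign + (c,)))
    return None                  # search space exhausted: unsolvable

tri = [(0, 1), (1, 2), (0, 2)]   # a triangle needs 3 colours
sol3 = best_first_coloring(tri, 3, 3)
sol2 = best_first_coloring(tri, 3, 2)
print(sol3, sol2)                # a valid 3-colouring, then None
```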


2016 ◽  
Vol 21 (3) ◽  
pp. 69-79 ◽  
Author(s):  
Abdelkhalek Bakkari ◽  
Anna Fabijańska

Abstract In this paper, the problem of segmenting 3D Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) brain images is considered. A supervoxel-based segmentation is applied. In particular, a new approach called Relative Linear Interactive Clustering (RLIC) is introduced. The method, dedicated to dividing an image into supervoxels, is an extension of the Simple Linear Iterative Clustering (SLIC) superpixels algorithm. RLIC first initializes the cluster centres and the regular grid size; the centres are then clustered by the Fuzzy C-Means algorithm, after which statistical features of the supervoxels are extracted. The method operates on 3D images and performs fully volumetric image segmentation. Five test cases demonstrate that RLIC can handle very large images with significant accuracy and low computational cost. The results of applying the suggested method to brain tumour segmentation are presented and discussed.
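The SLIC-style initialization that RLIC extends to 3D can be sketched as seeding cluster centres on a regular grid and assigning each voxel to the nearest centre in a combined intensity-plus-space distance. The single assignment pass and the weighting factor are illustrative simplifications, and the Fuzzy C-Means stage is omitted.

```python
import numpy as np

# Simplified 3D supervoxel sketch: regular-grid seeds plus one
# nearest-centre assignment pass. The weight m and the single pass are
# assumptions for illustration.

def init_centres(shape, step):
    zs, ys, xs = (np.arange(step // 2, s, step) for s in shape)
    return np.array([(z, y, x) for z in zs for y in ys for x in xs])

def assign_supervoxels(volume, centres, m=0.5):
    coords = np.indices(volume.shape).reshape(3, -1).T   # (n_voxels, 3)
    centre_vals = volume[tuple(centres.T)]               # intensity at seeds
    labels = np.empty(len(coords), dtype=int)
    for i, (z, y, x) in enumerate(coords):
        d_space = np.linalg.norm(centres - (z, y, x), axis=1)
        d_int = np.abs(centre_vals - volume[z, y, x])
        labels[i] = int(np.argmin(d_int + m * d_space))  # combined distance
    return labels.reshape(volume.shape)

vol = np.zeros((4, 4, 4)); vol[2:] = 1.0   # two homogeneous halves
centres = init_centres(vol.shape, 2)       # 2x2x2 grid of seeds
labels = assign_supervoxels(vol, centres)
print(centres.shape)  # (8, 3)
```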


Axioms ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 105
Author(s):  
Pavel Rajmic ◽  
Pavel Záviška ◽  
Vítězslav Veselý ◽  
Ondřej Mokrý

In convex optimization, it is often necessary to work with projectors onto convex sets composed with a linear operator. Such a need arises from both theory and applications, with signal processing being a prominent and broad field where convex optimization has recently been used. In this article, a novel projector is presented that generalizes previous results: it admits a broader family of linear transforms than the state of the art but, on the other hand, is limited to box-type convex sets in the transformed domain. The new projector is described by an explicit formula, which makes it simple to implement and cheap to compute. The projector is interpreted within the framework of so-called proximal splitting theory. The convenience of the new projector is demonstrated on an example from signal processing, where it speeds up the convergence of a signal declipping algorithm by a factor of more than two.
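For the classical special case of a Parseval operator (A A^T = I), the projector onto {x : l <= Ax <= u} has the well-known explicit form x + A^T(clip(Ax, l, u) - Ax); the sketch below illustrates that case only, with an illustrative operator and bounds, not the article's more general construction.

```python
import numpy as np

# Projection onto a box constraint in a transformed domain, for an
# operator with A @ A.T = I (here simply an orthogonal matrix). The
# operator, dimensions and bounds are illustrative.

def project_box_transform(x, A, lo, hi):
    Ax = A @ x
    return x + A.T @ (np.clip(Ax, lo, hi) - Ax)

rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # orthogonal => A A^T = I
x = 3.0 * rng.normal(size=5)                   # starts outside the set
p = project_box_transform(x, A, -1.0, 1.0)
# Feasibility: A p = clip(A x), so it lies in the box exactly.
print(np.all(np.abs(A @ p) <= 1.0 + 1e-12))   # True
```

Applying the projector twice returns the same point, as expected of a projection.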


2016 ◽  
Author(s):  
Qianqian Zhu ◽  
Annibale Panichella ◽  
Andy Zaidman

Mutation testing is widely considered as a high-end test criterion due to the vast number of mutants it generates. Although many efforts have been made to reduce the computational cost of mutation testing, its scalability issue remains in practice. In this paper, we introduce a novel method to speed up mutation testing based on state infection information. In addition to filtering out uninfected test executions, we further select a subset of mutants and a subset of test cases to run leveraging data-compression techniques. In particular, we adopt Formal Concept Analysis (FCA) to group similar mutants together and then select test cases to cover these mutants. To evaluate our method, we conducted an experimental study on six open source Java projects. We used EvoSuite to automatically generate test cases and to collect mutation data. The initial results show that our method can reduce the execution time by 83.93% with only 0.257% loss in precision.
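The mutant/test selection idea can be sketched as grouping mutants with identical detection signatures and then greedily choosing tests to cover one representative per group. This simple grouping plus greedy set cover stands in for the paper's Formal Concept Analysis; the signatures below are made up.

```python
from collections import defaultdict

# Sketch: mutants sharing a detection signature form one group; a small
# test set is chosen greedily to cover every group. Illustrative only.

def select(signatures):
    # signatures: mutant -> frozenset of tests that detect it
    groups = defaultdict(list)
    for mutant, tests in signatures.items():
        groups[tests].append(mutant)      # identical signature => one group
    uncovered = set(groups)               # cover one representative per group
    chosen = []
    while uncovered:
        # Greedy set cover: pick the test detecting the most groups.
        best = max({t for s in uncovered for t in s},
                   key=lambda t: sum(t in s for s in uncovered))
        chosen.append(best)
        uncovered = {s for s in uncovered if best not in s}
    return chosen

sigs = {"m1": frozenset({"t1", "t2"}),
        "m2": frozenset({"t1", "t2"}),    # redundant with m1
        "m3": frozenset({"t3"})}
chosen = select(sigs)
print(len(chosen))  # 2 -- one test per distinct signature group
```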


2018 ◽  
Vol 28 (5) ◽  
pp. 1218-1236 ◽  
Author(s):  
Cédric Decrocq ◽  
Bastien Martinez ◽  
Marie Albisser ◽  
Simona Dobre ◽  
Patrick Gnemmi ◽  
...  

Purpose
The present paper deals with weapon aerodynamics and aims to describe preliminary studies conducted for developing the next generation of long-range guided ammunition. Throughout history, ballistic research scientists have constantly investigated new artillery systems capable of overcoming limitations of range, accuracy and manoeuvrability. While futuristic technologies are increasingly under development, numerous issues concerning current powder-based systems still need to be addressed. In this context, the present work deals with the design and optimization of a new concept of long-range projectile with regard to multidisciplinary fields, including flight scenario, steering strategy, mechanical actuators and size of payload.
Design/methodology/approach
Investigations are conducted for configurations that combine an existing full-calibre 155 mm guided artillery shell with a set of lifting surfaces. As the capability of the ammunition highly depends on the lifting surfaces in terms of number, shape and position, a parametric study has to be conducted to determine the best aerodynamic architecture. To speed up this process, initial estimations are conducted with low-computational-cost methods suitable for preliminary design requirements in terms of time, accuracy and flexibility. The WASP code (Wing-Aerodynamic-eStimation-for-Projectiles) has been developed to rapidly predict the static and dynamic aerodynamic coefficients of a set of lifting surfaces fitted on a projectile fuselage, as a function of geometry and flight conditions, up to transonic velocities.
Findings
In the present study, WASP predictions at Mach 0.7 of both normal force and pitching moment coefficients are assessed for two configurations.
Originality/value
The analysis gathers results from WASP, computational fluid dynamics (CFD) simulations, wind-tunnel experiments and free-flight tests. The obtained results demonstrate the ability of the WASP code to be used in preliminary design steps.
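WASP itself is not described in detail here; as an illustration of the kind of low-cost preliminary-design estimate involved, the classical Helmbold equation approximates the incompressible lift-curve slope of a lifting surface from its aspect ratio alone. This is a generic textbook formula, not the WASP method.

```python
import math

# Helmbold equation: lift-curve slope (per radian) of a finite wing of
# aspect ratio AR, assuming a 2*pi thin-airfoil section slope. A generic
# preliminary-design estimate, shown only as an example of a
# low-computational-cost aerodynamic method.

def helmbold_lift_slope(aspect_ratio):
    ar = aspect_ratio
    return 2.0 * math.pi * ar / (2.0 + math.sqrt(ar * ar + 4.0))

# The slope grows toward the 2*pi two-dimensional limit as AR increases.
for ar in (1.0, 4.0, 10.0):
    print(f"AR={ar:>4}: CL_alpha = {helmbold_lift_slope(ar):.3f} /rad")
```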


2021 ◽  
Vol 2 ◽  
Author(s):  
Abel Sancarlos ◽  
Morgan Cameron ◽  
Jean-Marc Le Peuvedic ◽  
Juliette Groulier ◽  
Jean-Louis Duval ◽  
...  

Abstract The concept of the “hybrid twin” (HT) has recently received growing interest thanks to the availability of powerful machine learning techniques. The twin concept combines physics-based models within a model order reduction framework (to obtain real-time feedback rates) with data science. Thus, the main idea of the HT is to develop on-the-fly data-driven models that correct possible deviations between measurements and physics-based model predictions. This paper focuses on the computation of stable, fast and accurate corrections in the HT framework. Furthermore, regarding the delicate and important problem of stability, a new approach with several subvariants is proposed, guaranteeing low computational cost as well as stable time integration.
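The basic correction loop can be sketched as: a physics-based model gives a cheap prediction, a data-driven term is fitted on the fly to the measured residuals, and the corrected model is their sum. The linear-in-features residual model below is an illustrative choice, not the paper's construction.

```python
import numpy as np

# Minimal hybrid-twin sketch: fit a linear correction to the deviation
# between measurements and a physics-based model, then predict with
# physics + correction. Feature choice (affine in t) is an assumption.

def hybrid_twin(inputs, measured, physics):
    X = np.column_stack([inputs, np.ones_like(inputs)])
    # Least-squares fit of the residual: measured - physics(inputs).
    coef, *_ = np.linalg.lstsq(X, measured - physics(inputs), rcond=None)
    def corrected(t):
        return physics(t) + np.column_stack([t, np.ones_like(t)]) @ coef
    return corrected

physics = lambda t: t ** 2                 # idealized physics-based model
t = np.linspace(0.0, 1.0, 20)
measured = t ** 2 + 0.5 * t + 0.1          # reality deviates linearly
model = hybrid_twin(t, measured, physics)
err = float(np.max(np.abs(model(t) - measured)))
print(err < 1e-8)  # True: the linear deviation is captured exactly
```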


2013 ◽  
Vol 10 (04) ◽  
pp. 1350022 ◽  
Author(s):  
HAMED SAFFARI ◽  
ABDOLHOSSEIN BAGHLANI ◽  
NADIA M. MIRZAI ◽  
IMAN MANSOURI

In this paper, a new approach is presented to accelerate the nonlinear analysis of structures at low computational cost. The method is essentially based on the Newton–Raphson method, improved in each iteration to achieve faster convergence. The normal flow algorithm is employed to pass successfully through limit points and traverse the entire equilibrium path. Numerical examples are then presented to demonstrate the efficiency of the formulation. The results show the improved performance, accuracy and convergence rate of the present method in the nonlinear analysis of structures.
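For reference, the baseline Newton–Raphson iteration being accelerated can be sketched on a single-degree-of-freedom nonlinear spring; the paper's per-iteration improvement and the normal flow algorithm for limit points are not reproduced here, and the spring constants are illustrative.

```python
# Baseline Newton-Raphson for a single-DOF nonlinear spring with
# residual R(u) = k1*u + k3*u**3 - F. Illustrates only the scheme the
# paper accelerates; constants are made up.

def newton_raphson(force, k1=100.0, k3=50.0, u0=0.0, tol=1e-10):
    u = u0
    for it in range(1, 100):
        residual = k1 * u + k3 * u**3 - force
        if abs(residual) < tol:
            return u, it                    # converged displacement
        tangent = k1 + 3.0 * k3 * u**2      # tangent stiffness dR/du
        u -= residual / tangent             # Newton update
    raise RuntimeError("no convergence")

u, iters = newton_raphson(force=120.0)
print(round(100.0 * u + 50.0 * u**3, 6))    # 120.0 -- equilibrium satisfied
```

Quadratic convergence makes this cheap for smooth responses; it is near limit points, where the tangent stiffness becomes singular, that path-following techniques such as the normal flow algorithm are needed.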


Author(s):  
Gudavalli Sai Abhilash ◽  
Kantheti Rajesh ◽  
Jangam Dileep Shaleem ◽  
Grandi Sai Sarath ◽  
Palli R Krishna Prasad

The creation and deployment of face recognition models need to identify low-resolution faces at extremely low computational cost. A feasible solution to this problem is to compress a complex face model, achieving higher speed and lower memory use at the cost of a minimal performance drop. Inspired by this, this paper proposes a learning approach that recognizes low-resolution faces in live video via selective knowledge distillation. In this approach, a two-stream convolutional neural network (CNN) is first initialized to recognize high-resolution faces and resolution-degraded faces with a teacher stream and a student stream, respectively. The teacher stream is represented by a complex CNN for high-accuracy recognition, and the student stream by a much simpler CNN for low-complexity recognition. To avoid a significant performance drop at the student stream, the most informative facial features are then selectively distilled from the teacher stream by solving a sparse graph optimization problem; these features are used to regularize the fine-tuning process of the student stream. In this way, the student stream is trained to simultaneously handle two tasks with limited computational resources: approximating the most informative facial cues via feature regression, and recovering the missing facial cues via low-resolution face classification.
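The distillation signal that pulls the student stream toward the teacher can be sketched with the standard softened-softmax KL loss (Hinton-style distillation). The paper's selective, sparse-graph feature selection is not reproduced; the temperature and logits below are illustrative.

```python
import numpy as np

# Standard knowledge-distillation loss: KL divergence between the
# teacher's and student's temperature-softened class probabilities.
# Temperature T and the example logits are assumptions.

def softmax(z):
    e = np.exp(z - z.max())     # shift for numerical stability
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=4.0):
    p = softmax(teacher_logits / T)   # softened teacher targets
    q = softmax(student_logits / T)   # softened student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))   # KL(p || q)

t_logits = np.array([8.0, 2.0, 1.0])
print(distillation_loss(t_logits, t_logits) == 0.0)            # True: identical streams
print(distillation_loss(t_logits, np.zeros(3)) > 0.0)          # True: mismatch penalized
```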



