Using linear algebra in decomposition of Farkas interpolants

Author(s):  
Martin Blicha
Antti E. J. Hyvärinen
Jan Kofroň
Natasha Sharygina

Abstract The use of propositional logic and systems of linear inequalities over the reals is a common means to model software for formal verification. Craig interpolants constitute a central building block in this setting for over-approximating reachable states, e.g. as candidates for inductive loop invariants. Interpolants for a linear system can be efficiently computed from a Simplex refutation by applying Farkas' lemma. However, these interpolants do not always suit the verification task; in the worst case, they can even prevent the verification algorithm from converging. This work introduces decomposed interpolants, a fundamental extension of Farkas interpolants, obtained by identifying and separating independent components from the interpolant structure using methods from linear algebra. We also present an efficient polynomial algorithm to compute decomposed interpolants and analyse its properties. We experimentally show that the use of decomposed interpolants in model checking results in immediate convergence on instances where state-of-the-art approaches diverge. Moreover, since it is based on the efficient Simplex method, the approach is very competitive in general.
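To make the decomposition idea concrete, here is a toy sketch that is not taken from the paper: the inequality system, the Farkas coefficients, and the split into parts are hand-constructed, and a real implementation would find such a split via null-space computations over the columns of the A-local variables. It only illustrates how a Farkas combination of A-part inequalities can be broken into partial sums that each eliminate the A-local variable, yielding a conjunction of conjuncts instead of a single summed inequality.

```python
import numpy as np

# Toy A-part system  A_rows @ [x1, x2, x3] <= b, where x1 is local to A and x2, x3 are shared with B.
A_rows = np.array([[ 1.0, 1.0, 0.0],   #  x1 + x2 <= 1
                   [-1.0, 0.0, 1.0],   # -x1 + x3 <= 1
                   [ 2.0, 1.0, 0.0],   # 2x1 + x2 <= 0
                   [-2.0, 0.0, 0.0]])  # -2x1     <= 0
b = np.array([1.0, 1.0, 0.0, 0.0])
farkas = np.array([1.0, 1.0, 1.0, 1.0])   # nonnegative Farkas coefficients from the refutation
local_cols = [0]                           # columns of A-local variables (here only x1)

def partial_sum(coeffs):
    """Inequality obtained by summing the A-part rows with the given weights."""
    return coeffs @ A_rows, coeffs @ b

# The plain Farkas interpolant sums everything; the A-local coefficients cancel by construction.
lhs, rhs = partial_sum(farkas)
print("Farkas interpolant:", lhs, "<=", rhs)          # [0. 2. 1.] <= 2.0, i.e. 2*x2 + x3 <= 2

# A decomposition splits the coefficient vector into parts that each still eliminate x1,
# giving a conjunction of (generally stronger) conjuncts instead of one summed inequality.
parts = [np.array([1.0, 1.0, 0.0, 0.0]),   # rows 1 and 2 cancel x1 on their own
         np.array([0.0, 0.0, 1.0, 1.0])]   # rows 3 and 4 cancel x1 as well
for p in parts:
    lhs, rhs = partial_sum(p)
    assert np.allclose(lhs[local_cols], 0.0)           # the conjunct mentions shared variables only
    print("conjunct:", lhs, "<=", rhs)                 # x2 + x3 <= 2  and  x2 <= 0
```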

Author(s):  
Gregor Behnke
Daniel Höller
Susanne Biundo

HTN planning provides an expressive formalism to model complex application domains and has been widely used in real-world applications. However, the development of domain-independent planning techniques for such models is still lagging behind. The need to be informed about both state transitions and the task hierarchy makes the realisation of search-based approaches difficult, especially with unrestricted partial ordering of tasks in HTN domains. Recently, a translation of HTN planning problems into propositional logic has shown promising empirical results. Such planners benefit from a unified representation of state and hierarchy, but until now have required very large formulae to represent partial order. In this paper, we introduce a novel encoding of HTN planning as SAT. In contrast to related work, most of the reasoning on ordering relations is not left to the SAT solver but done beforehand. This results in much smaller formulae and, as shown in our evaluation, in a planner that outperforms previous SAT-based approaches as well as the state of the art in search-based HTN planning.
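As a loose illustration of doing ordering reasoning before the formula is built (this is not the paper's encoding; the task names and ordering pairs below are made up), one can precompute the transitive closure of the partial order on tasks so that derived ordering facts are fixed up front rather than rediscovered by the SAT solver through chains of transitivity clauses:

```python
from itertools import product

def transitive_closure(tasks, before):
    """Warshall-style closure of a strict partial order given as (earlier, later) pairs."""
    order = set(before)
    for k, i, j in product(tasks, repeat=3):   # k varies slowest, as Warshall's algorithm requires
        if (i, k) in order and (k, j) in order:
            order.add((i, j))
    return order

tasks = ["unpack", "assemble", "inspect", "ship"]          # made-up task names
explicit = {("unpack", "assemble"), ("assemble", "inspect"), ("inspect", "ship")}
closure = transitive_closure(tasks, explicit)

# Each derived pair can become a fixed fact of the encoding (or prune ordering variables entirely),
# instead of being re-derived by the solver.
print(sorted(closure))
```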


Algorithms
2020
Vol 13 (8)
pp. 183
Author(s):  
Canh V. Pham
Dung K. T. Ha
Quang C. Vu
Anh N. Su
Huan X. Hoang

The Influence Maximization (IM) problem, which finds a set of k nodes (called a seed set) in a social network to initiate the influence spread so that the number of influenced nodes after the propagation process is maximized, is an important problem in information propagation and social network analysis. However, previous studies ignored the priority constraint, which leads to inefficient seed sets. In some real situations, companies or organizations often prioritize influencing potential users during their influence diffusion campaigns. Taking a new approach to these existing works, in this paper we propose a new problem called Influence Maximization with Priority (IMP), which finds a seed set of k nodes in a social network that influences the largest number of nodes, subject to the constraint that the influence spread to a specific set of nodes U (called the priority set) is at least a given threshold T. We show that the problem is NP-hard under the well-known Independent Cascade (IC) model. To find a solution, we propose two efficient algorithms with provable theoretical guarantees, called Integrated Greedy (IG) and Integrated Greedy Sampling (IGS). IG provides a (1 − (1 − 1/k)^t)-approximation, where t ≥ 1 is an outcome of the algorithm; the worst-case approximation ratio is obtained when t = 1 and equals 1/k. In addition, IGS is an efficient randomized approximation algorithm based on sampling that provides a (1 − (1 − 1/k)^t − ϵ)-approximation with probability at least 1 − δ, where ϵ > 0 and δ ∈ (0, 1) are input parameters of the problem. We conduct extensive experiments on various real networks to compare our IGS algorithm to the state-of-the-art algorithms for the IM problem. The results indicate that our algorithm provides better solutions in terms of influence on the priority sets, roughly two to ten times higher than the threshold T, while its running time, memory usage, and overall influence spread remain comparable to the other algorithms.
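The following Python sketch shows one plausible reading of the integrated-greedy idea. It is not the authors' IG or IGS code: the graph format, the Monte Carlo trial count, and the rule "first close the gap to the priority threshold, then maximise total spread" are assumptions made for illustration.

```python
import random

def ic_spread(graph, seeds, priority, trials=200):
    """Monte Carlo estimate of (expected total spread, expected spread inside the priority set)
    under the Independent Cascade model; graph maps a node to a list of (neighbour, probability)."""
    total, total_u = 0.0, 0.0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            new = []
            for u in frontier:
                for v, p in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        new.append(v)
            frontier = new
        total += len(active)
        total_u += len(active & priority)
    return total / trials, total_u / trials

def integrated_greedy(graph, k, priority, threshold):
    """Greedy sketch: add the node with the best gain on the priority set until the estimated
    spread inside U reaches T, then switch to maximising the total spread."""
    seeds = set()
    while len(seeds) < k:
        _, spread_u = ic_spread(graph, seeds, priority) if seeds else (0.0, 0.0)
        chasing_threshold = spread_u < threshold
        best, best_value = None, float("-inf")
        for v in graph:
            if v in seeds:
                continue
            tot, in_u = ic_spread(graph, seeds | {v}, priority)
            value = in_u if chasing_threshold else tot
            if value > best_value:
                best, best_value = v, value
        if best is None:
            break
        seeds.add(best)
    return seeds
```

Here priority is expected to be a Python set of node identifiers, and graph should contain an entry (possibly with an empty adjacency list) for every candidate node.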


Author(s):  
Manar Abduljabbar Mizher
Mei Choo Ang
Ahmad Abdel Jabbar Mazhar

Key frame extraction is an essential technique in the computer vision field. The extracted key frames should summarize the salient events with excellent feasibility, great efficiency, and a high level of robustness. It is therefore not an easy problem to solve, because it depends on many visual features. This paper intends to solve this problem by investigating the relationship between the detection of these features and the accuracy of key frame extraction techniques using TRIZ. An improved algorithm for key frame extraction is then proposed, based on accumulative optical flow with a self-adaptive threshold (AOF_ST), as recommended by the TRIZ inventive principles. Several video shots, including original and forged videos with complex conditions, are used to verify the experimental results. Comparing our results with those of state-of-the-art algorithms shows that the proposed extraction algorithm can accurately summarize the videos and generates a meaningful, compact set of key frames. On top of that, our proposed algorithm achieves compression rates of 124.4 and 31.4 in the best and worst cases on key frames extracted from the KTH dataset, while the state-of-the-art algorithms achieved 8.90 in the best case.
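A minimal OpenCV sketch of the general accumulative-optical-flow idea is shown below. It is not the AOF_ST algorithm itself: the Farneback parameters, the window size, and the particular "mean plus standard deviation of recent motion" form of the self-adaptive threshold are assumptions made for illustration.

```python
import cv2
import numpy as np

def key_frames_by_accumulated_flow(video_path, window=30):
    """Pick a key frame whenever the accumulated dense optical-flow magnitude exceeds
    a self-adaptive threshold derived from recent per-frame motion (assumed form)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return []
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    key_frames, accumulated, history, idx = [0], 0.0, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        motion = float(np.mean(np.linalg.norm(flow, axis=2)))   # mean per-pixel flow magnitude
        history = (history + [motion])[-window:]
        accumulated += motion
        threshold = np.mean(history) + np.std(history)          # self-adaptive part (assumed)
        if accumulated > window * threshold:
            key_frames.append(idx)
            accumulated = 0.0
        prev = gray
    cap.release()
    return key_frames
```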


Electronics
2020
Vol 9 (9)
pp. 1420
Author(s):  
Barlian Henryranu Prasetio
Hiroki Tamura
Koichi Tanno

Emotional conditions cause changes in the speech production system, producing acoustic characteristics that differ from those of neutral conditions. The presence of emotion degrades the performance of a speaker verification system. In this paper, we propose a speaker modeling approach that accommodates the presence of emotion in speech segments by extracting a compact speaker representation. The speaker model is estimated by following a procedure similar to the i-vector technique, but it treats the emotional effect as the channel variability component. We name this method emotional variability analysis (EVA). EVA represents the emotion subspace separately from the speaker subspace, as in the joint factor analysis (JFA) model. The effectiveness of the proposed system is evaluated by comparing it with the standard i-vector system on the speaker verification task of the Speech Under Simulated and Actual Stress (SUSAS) dataset with three different scoring methods. The evaluation focuses on the equal error rate (EER). In addition, we also conducted an ablation study for a more comprehensive analysis of the EVA-based i-vector. Based on the experimental results, the proposed system outperformed the standard i-vector system and achieved state-of-the-art results in the verification task for speakers under stress.
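The sketch below shows a simplified related idea rather than EVA's JFA-style factor model: removing an assumed emotion-variability subspace from i-vectors before standard cosine scoring. Both function names and the emotion_basis input are hypothetical.

```python
import numpy as np

def remove_emotion_subspace(ivectors, emotion_basis):
    """Project i-vectors (rows) onto the orthogonal complement of an assumed emotion subspace.
    emotion_basis: D x m matrix whose columns span the emotion-variability directions
    (e.g. obtained from PCA over emotion-induced shifts of the same speakers)."""
    Q, _ = np.linalg.qr(emotion_basis)           # orthonormal basis of the emotion subspace
    projector = np.eye(Q.shape[0]) - Q @ Q.T     # projection onto its orthogonal complement
    return ivectors @ projector

def cosine_score(enroll, test):
    """Standard cosine similarity between an enrolment and a test representation."""
    return float(enroll @ test / (np.linalg.norm(enroll) * np.linalg.norm(test) + 1e-12))
```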


Algorithms
2019
Vol 13 (1)
pp. 12
Author(s):  
Jérémy Barbay

We describe an algorithm computing an optimal prefix-free code for n unsorted positive weights in time within O(n(1 + lg α)) ⊆ O(n lg n), where the alternation α ∈ [1..n − 1] approximates the minimal amount of sorting required by the computation. This asymptotic complexity is within a constant factor of optimal in the algebraic decision tree computational model, in the worst case over all instances of size n and alternation α. Such results refine the state-of-the-art complexity of Θ(n lg n) in the worst case over instances of size n in the same computational model, a landmark in compression and coding since 1952. Besides the new analysis technique, the improvement is obtained by combining a new algorithm, inspired by van Leeuwen's algorithm to compute optimal prefix-free codes from sorted weights (known since 1976), with a relatively minor extension of Karp et al.'s deferred data structure to partially sort a multiset according to the queries performed on it (known since 1988). Preliminary experimental results on text compression by words show α to be polynomially smaller than n, which suggests improvements by at most a constant multiplicative factor in the running time for such applications.
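For reference, the classical two-queue construction that the paper builds on (van Leeuwen, 1976) can be sketched as follows. The paper's contribution is to avoid sorting the weights up front via a deferred data structure, which this sketch does not attempt; the helper below assumes already-sorted weights, returns code lengths, and is written for clarity rather than for the paper's asymptotic bounds.

```python
from collections import deque

def huffman_code_lengths(sorted_weights):
    """Two-queue construction of an optimal prefix-free code from weights sorted in
    non-decreasing order; returns the code length (tree depth) of each input weight."""
    n = len(sorted_weights)
    leaves = deque((w, [i]) for i, w in enumerate(sorted_weights))   # (weight, leaf indices below)
    internal = deque()
    depths = [0] * n

    def pop_min():
        # Both queues are kept in non-decreasing order, so the minimum is at one of the fronts.
        if not internal or (leaves and leaves[0][0] <= internal[0][0]):
            return leaves.popleft()
        return internal.popleft()

    while len(leaves) + len(internal) > 1:
        w1, ids1 = pop_min()
        w2, ids2 = pop_min()
        for i in ids1 + ids2:          # every leaf under the new node moves one level deeper
            depths[i] += 1
        internal.append((w1 + w2, ids1 + ids2))
    # (The index lists make this bookkeeping quadratic in the worst case; a real implementation
    #  would build the tree and compute depths in a second linear pass.)
    return depths

print(huffman_code_lengths([1, 1, 2, 3, 5]))   # [4, 4, 3, 2, 1]
```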


Author(s):  
Johan S. Carlson
Rikard Söderberg
Robert Bohlin
Lars Lindkvist
Tomas Hermansson

One important aspect of assembly process design is to assure that there exists a collision-free assembly path for each part and subassembly. In order to reduce the need for physical verification, the automotive industry uses digital mock-up tools with collision checking for this kind of geometrical assembly analysis. Manually verifying assembly feasibility in a digital mock-up tool can be hard and time consuming. Therefore, the recent development of efficient and effective automatic path-planning algorithms and tools is highly motivated. However, in real production, all equipment, parts and subassemblies are affected by geometrical variation, often resulting in conflicts and on-line adjustments of off-line generated assembly paths. To avoid problems with on-line adjustments, state-of-the-art tools for path planning can handle tolerances by a general clearance for all geometry. This is a worst-case strategy that does not take into account how part and assembly variation propagates through the positioning systems of the assembly, resulting in geometry areas of both high and low degrees of variation. Since this latter approach results in unnecessary design changes or in too tight tolerances, we have developed a new algorithm and working procedure enabling and supporting a more cost-effective non-nominal path-planning process for assembly operations. The basic idea of the paper is to combine state-of-the-art technology within variation simulation and automatic path planning. By integrating variation and tolerance simulation results into the path-planning algorithm, we can allow the assembly path to go closer to areas of low variation while avoiding areas of high variation. The benefits of the proposed approach are illustrated on an industrial case from the automotive industry.
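One simple way to picture the integration (this is our illustration, not the authors' algorithm; the function names, the linear base-plus-scaled-sigma form, and the millimetre parameters are assumptions) is a state-validity predicate for a sampling-based planner in which the required clearance grows with the locally simulated variation:

```python
def required_clearance(point, variation_field, base_clearance=2.0, scale=3.0):
    """Clearance demanded at a configuration grows with the simulated geometric variation there.
    variation_field: callable point -> one-sigma positional variation (e.g. interpolated from a
    variation-simulation run); base_clearance and scale are assumed tuning parameters (mm)."""
    return base_clearance + scale * variation_field(point)

def is_state_valid(point, distance_to_obstacles, variation_field):
    """Collision/clearance predicate a sampling-based planner (RRT, PRM, ...) could call:
    the path is pushed away from high-variation regions because they demand more clearance."""
    return distance_to_obstacles(point) >= required_clearance(point, variation_field)

# Example with made-up fields: constant 0.5 mm variation everywhere, nearest obstacle 3 mm away.
print(is_state_valid((0.0, 0.0), lambda p: 3.0, lambda p: 0.5))   # False: 3.0 < 2.0 + 3.0 * 0.5
```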


Author(s):  
Camillo Fiorentini

Abstract We present an efficient proof search procedure for Intuitionistic Propositional Logic which involves the use of an incremental SAT solver. Basically, it is obtained by adding a restart operation to the system by Claessen and Rosén; thus we call our implementation . We gain some remarkable advantages: derivations have a simple structure; countermodels are in general small; and, using a standard benchmark suite, we outperform and other state-of-the-art provers.


2021
Author(s):
Nathan Lassance
Victor DeMiguel
Frédéric Vrins

A natural approach to enhance portfolio diversification is to rely on factor-risk parity, which yields the portfolio whose risk is equally spread among a set of uncorrelated factors. The standard choice is to take the variance as the risk measure and the principal components (PCs) of asset returns as the factors. Although PCs are unique and useful for dimension reduction, they are an arbitrary choice: any rotation of the PCs results in uncorrelated factors. This is problematic because, as we demonstrate, any portfolio is a factor-variance-parity portfolio for some rotation of the PCs. More importantly, choosing the PCs does not account for the higher moments of asset returns. To overcome these issues, we propose using the independent components (ICs) as factors, which are the rotation of the PCs that is maximally independent and accounts for higher moments of asset returns. We demonstrate that using the IC-variance-parity portfolio helps to reduce the return kurtosis. We also show how to exploit the near independence of the ICs to parsimoniously estimate the factor-risk-parity portfolio based on value at risk. Finally, we empirically demonstrate that portfolios based on ICs outperform those based on PCs, and several state-of-the-art benchmarks.
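A minimal sketch of the variance case is given below, assuming scikit-learn's FastICA and a full set of components; the sign choice for the factor exposures and the value-at-risk variant discussed in the paper are not covered.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ic_variance_parity_weights(returns):
    """Sketch of an IC-variance-parity portfolio for a T x N matrix of asset returns:
    extract independent components, then choose weights whose variance contribution is
    equal across components. Signs of the exposures are a free choice ignored here."""
    T, N = returns.shape
    ica = FastICA(n_components=N, random_state=0)
    S = ica.fit_transform(returns)     # T x N estimated independent components
    A = ica.mixing_                    # returns ~= S @ A.T + mean
    sigma = S.std(axis=0)
    b = 1.0 / sigma                    # exposures b = A.T @ w with b_i^2 * sigma_i^2 equal for all i
    w = np.linalg.solve(A.T, b)        # recover asset weights from the target factor exposures
    return w / w.sum()                 # normalise to a fully invested portfolio
```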


Author(s):  
Dirk Beyer

Abstract This report describes the 2020 Competition on Software Verification (SV-COMP), the 9th edition of a series of comparative evaluations of fully automatic software verifiers for C and Java programs. The competition provides a snapshot of the current state of the art in the area, and has a strong focus on replicability of its results. The competition was based on 11,052 verification tasks for C programs and 416 verification tasks for Java programs. Each verification task consisted of a program and a property (reachability, memory safety, overflows, termination). SV-COMP 2020 had 28 participating verification systems from 11 countries.

