enumeration algorithms
Recently Published Documents

TOTAL DOCUMENTS: 58 (five years: 15)
H-INDEX: 8 (five years: 2)

2021 · Vol. 55(5) · pp. 1136-1150
Author(s): Giovanni Righini

The single source Weber problem with limited distances (SSWPLD) is a continuous optimization problem in location theory. The SSWPLD algorithms proposed so far are based on the enumeration of all regions of the plane ℝ² defined by a given set of n intersecting circumferences. Early algorithms require [Formula: see text] time for the enumeration, but they were recently shown to be incorrect in the case of degenerate intersections, that is, when three or more circumferences pass through the same intersection point. This problem was fixed by a modified enumeration algorithm with complexity [Formula: see text], based on the construction of neighborhoods of degenerate intersection points. In this paper, it is shown that the complexity of correctly dealing with degenerate intersections can be reduced to [Formula: see text], so that existing enumeration algorithms can be fixed without increasing their [Formula: see text] time complexity, which is due to preliminary computations unrelated to intersection degeneracy. Furthermore, a new algorithm for enumerating all regions to solve the SSWPLD is described; its worst-case time complexity is [Formula: see text]. The new algorithm also guarantees that each region is enumerated only once.
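The degeneracy issue described above is easy to reproduce: the regions are cells of an arrangement of circles, and trouble arises exactly when three or more circumferences meet in a single point. Below is a minimal Python sketch (not the paper's algorithm; the function names and the epsilon-grid snapping are our own simplifications) that computes pairwise circle intersections and flags the degenerate points.

```python
import math
from collections import defaultdict

def circle_intersections(c1, c2):
    """Intersection points of circles given as (x, y, r); empty list if none."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    dx, dy = x2 - x1, y2 - y1
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)  # center-to-chord distance
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    mx, my = x1 + a * dx / d, y1 + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

def degenerate_points(circles, eps=1e-9):
    """Intersection points lying on three or more circumferences."""
    hits = defaultdict(set)
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            for px, py in circle_intersections(circles[i], circles[j]):
                key = (round(px / eps), round(py / eps))  # snap to eps-grid
                hits[key].update((i, j))
    return [key for key, circs in hits.items() if len(circs) >= 3]
```

An exact implementation would replace the grid snapping with symbolic comparison; the sketch only illustrates where degeneracy detection fits in the enumeration.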


Mathematics · 2021 · Vol. 9(14) · pp. 1618
Author(s): Hami Satılmış, Sedat Akleylek, Cheng-Chi Lee

The security of lattice-based cryptosystems rests on the hardness of lattice problems such as the shortest vector problem (SVP) and the closest vector problem (CVP). Various cryptanalysis algorithms, such as (Pro)GaussSieve, HashSieve, ENUM, and BKZ, have been proposed to solve these hard problems, and several implementations of them have been developed. Such implementations are expected to be efficient in terms of both run time and memory usage. In this paper, a modular software package/library containing efficient implementations of the GaussSieve, ProGaussSieve, HashSieve, and BKZ algorithms is developed. While constructing this library, some modifications are made to the algorithms to increase performance. The run times of the resulting implementations are then compared with those of existing ones. According to the experimental results, the proposed GaussSieve, ProGaussSieve, and HashSieve implementations are at least 70%, 75%, and 49% more efficient than previous ones, respectively.
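At the core of the GaussSieve-family algorithms is a pairwise reduction step: a vector is shortened by subtracting an integer multiple of another list vector whenever that decreases its norm. A rough Python/NumPy sketch of this step follows; it is a simplification of the actual algorithms, which also re-queue database vectors that become reducible.

```python
import numpy as np

def reduce_pair(v, u):
    """Shorten v by an integer multiple of u if that lowers its norm."""
    uu = np.dot(u, u)
    if uu == 0:
        return v, False
    m = round(np.dot(v, u) / uu)      # nearest-integer Gauss coefficient
    if m == 0:
        return v, False
    w = v - m * u
    if np.dot(w, w) < np.dot(v, v):
        return w, True
    return v, False

def sieve_insert(new_v, db):
    """Reduce new_v against the database until stable, then reduce the
    database against new_v (a full GaussSieve would re-queue any entry
    that shrinks; here it simply stays in place)."""
    changed = True
    while changed:
        changed = False
        for u in db:
            new_v, c = reduce_pair(new_v, u)
            changed = changed or c
    db[:] = [reduce_pair(u, new_v)[0] for u in db]
    db.append(new_v)
    return new_v
```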


Author(s): Yaoling Ding, Liehuang Zhu, An Wang, Yuan Li, Yongjuan Wang, ...

Side-channel analysis achieves key recovery by analyzing physical signals generated during the operation of cryptographic devices; power consumption is one such signal and can be regarded as a multimedia form. In recent years, many artificial intelligence techniques have been combined with classical side-channel analysis methods to improve their efficiency and accuracy. A simple genetic algorithm was employed in Correlation Power Analysis (CPA) when applied to cryptographic algorithms implemented in parallel. However, premature convergence caused failures in recovering the whole key, especially when the target primitive employs many large S-boxes, as in AES. In this article, we investigate the cause of premature convergence and propose a Multiple Sieve Method (MS-CPA) that overcomes this problem and reduces the number of traces required in correlation power analysis. Our method can also be combined with key enumeration algorithms to further improve efficiency. Simulation results show that our method reduces the required number of traces by [Formula: see text] and [Formula: see text] compared to classic CPA and Simple-Genetic-Algorithm-based CPA (SGA-CPA), respectively, when the success rate is fixed at [Formula: see text]. Real experiments performed on SAKURA-G confirm that the number of traces our method requires to recover the correct key is almost equal to the minimum number that makes the correlation coefficients of the correct keys stand out from the wrong ones, and is much smaller than the numbers required by CPA and SGA-CPA. When combined with key enumeration algorithms, our method performs even better: with 200 traces (noise standard deviation [Formula: see text]), its success rate is [Formula: see text], much higher than that of classic CPA with key enumeration ([Formula: see text] success rate). Moreover, we adapt our method to the DPA contest v1 dataset and achieve a better result (40.04 traces) than the winning proposal (42.42 traces).
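For context, classic CPA scores each key guess by correlating a hypothetical leakage model (typically the Hamming weight of an S-box output) with the measured traces; the genetic and multiple-sieve variants discussed above build on this scoring. A minimal NumPy sketch for ranking one key byte follows; the SBOX table here is a placeholder and would be the real AES S-box in practice.

```python
import numpy as np

# Placeholder permutation; a real attack would use the 256-entry AES S-box.
SBOX = np.arange(256, dtype=np.uint8)
HW = np.array([bin(x).count("1") for x in range(256)])  # Hamming weights

def cpa_rank_key_byte(plaintexts, traces):
    """Rank the 256 guesses for one key byte by max |Pearson correlation|.

    plaintexts: (N,) uint8 array, one plaintext byte per trace
    traces:     (N, T) float array of power measurements
    """
    traces = traces - traces.mean(axis=0)            # center each sample point
    denom_t = np.sqrt((traces ** 2).sum(axis=0))
    scores = np.empty(256)
    for guess in range(256):
        model = HW[SBOX[plaintexts ^ guess]].astype(float)   # leakage model
        model -= model.mean()
        denom_m = np.sqrt((model ** 2).sum())
        corr = model @ traces / (denom_m * denom_t + 1e-12)  # Pearson, per point
        scores[guess] = np.abs(corr).max()
    return np.argsort(scores)[::-1]                  # most likely guess first
```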


Author(s): Alessio Conte, Donatella Firmani, Maurizio Patrignani, Riccardo Torlone

Abstract: We focus on the automatic detection of communities in large networks, a challenging problem in many disciplines (such as sociology, biology, and computer science). Humans tend to associate to form families, villages, and nations; similarly, the elements of real-world networks naturally tend to form highly connected groups. A popular model to represent such structures is the clique, that is, a set of fully interconnected nodes. However, it has been observed that cliques are too strict to represent communities in practice. The k-plex relaxes the notion of clique by allowing each node to miss up to k connections. Although k-plexes are more flexible than cliques, finding them is more challenging because there are many more of them; in addition, most of them are small and not significant. In this paper we tackle the problem of finding only large k-plexes (i.e., comparable in size to the largest clique) and design a meta-algorithm that can be used on top of known enumeration algorithms to return only significant k-plexes in a fraction of the time. Our approach relies on (1) methods for strongly reducing the search space and (2) decomposition techniques based on the efficient computation of maximal cliques. We demonstrate experimentally that known enumeration algorithms equipped with our approach can run orders of magnitude faster than full enumeration.
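The k-plex condition is easy to check directly: a set S is a k-plex when every member is adjacent to at least |S| - k other members (so a clique is a 1-plex). A small Python sketch of this check (our own helper, not the paper's meta-algorithm):

```python
def is_k_plex(nodes, adj, k):
    """True iff `nodes` is a k-plex of the graph `adj` (node -> neighbor set):
    every member has at least |S| - k neighbors inside the set."""
    s = set(nodes)
    need = len(s) - k
    return all(len(adj[v] & s) >= need for v in s)

adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}  # triangle plus pendant
assert is_k_plex({1, 2, 3}, adj, 1)         # a clique is a 1-plex
assert not is_k_plex({1, 2, 3, 4}, adj, 2)  # node 4 has 1 < 4 - 2 neighbors
assert is_k_plex({1, 2, 3, 4}, adj, 3)      # allowed once k = 3
```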


2021 · Vol. 46(1) · pp. 1-30
Author(s): Antoine Amarilli, Pierre Bourhis, Stefan Mengel, Matthias Niewerth

We consider the information extraction framework known as document spanners and study the problem of efficiently computing the results of the extraction from an input document, where the extraction task is described as a sequential variable-set automaton (VA). We pose this problem in the setting of enumeration algorithms, where we can first run a preprocessing phase and must then produce the results with a small delay between any two consecutive results. Our goal is to have an algorithm that is tractable in combined complexity, i.e., in the sizes of the input document and the VA, while ensuring the best possible data complexity bounds in the input document size, i.e., constant delay in the document size. Several recent works at PODS'18 proposed such algorithms, but with linear delay in the document size or with an exponential dependency on the size of the (generally nondeterministic) input VA. In particular, Florenzano et al. suggest that our desired runtime guarantees cannot be met for general sequential VAs. We refute this and show that, given a nondeterministic sequential VA and an input document, we can enumerate the mappings of the VA on the document with the following bounds: the preprocessing is linear in the document size and polynomial in the size of the VA, and the delay is independent of the document and polynomial in the size of the VA. The resulting algorithm thus achieves tractability in combined complexity and the best possible data complexity bounds. Moreover, it is rather easy to describe, particularly for the restricted case of so-called extended VAs. Finally, we evaluate our algorithm empirically using a prototype implementation.
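To make the enumeration setting concrete: an algorithm in this model first builds an index in one pass and then emits answers with bounded work between consecutive outputs. The toy Python sketch below illustrates only this preprocessing/delay split on a trivial single-position task, not the paper's VA construction.

```python
def preprocess(document, predicate):
    """Linear pass: nxt[i] is the first position >= i where predicate holds."""
    n = len(document)
    nxt = [n] * (n + 1)
    for i in range(n - 1, -1, -1):
        nxt[i] = i if predicate(document[i]) else nxt[i + 1]
    return nxt

def enumerate_matches(nxt):
    """After preprocessing, each result costs O(1): follow one pointer."""
    i = nxt[0]
    while i < len(nxt) - 1:
        yield i
        i = nxt[i + 1]

nxt = preprocess("a1b2c3", str.isdigit)
print(list(enumerate_matches(nxt)))  # [1, 3, 5]
```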


Author(s): Chenglin Chang, Zuwei Liao, André Costa, Miguel Bagajewicz

In this work, the enumeration algorithms presented in Parts I and II for the globally optimal synthesis of heat exchanger networks are extended to consider non-isothermal mixing. The previous models are modified by adding non-isothermal mixing constraints, and new models are constructed to bound the energy consumption and the binding exchanger minimum approach temperature. These new models are solved using algorithms based on solving systems of equations rather than mathematical programming. We also present two alternatives for optimizing each enumerated structure: the use of a global solver, or a golden-section search with a simple resolution of the non-isothermal mixing model for fixed energy consumption. The non-isothermal mixing model is reformulated as a convex model, solved either by nonlinear programming or by a programming-free methodology, i.e., solving the Karush-Kuhn-Tucker (KKT) equations. A global optimum search algorithm is developed, and examples are tested comparing the proposed strategies.
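The golden-section search mentioned above is a standard derivative-free method for one-dimensional unimodal minimization, which is what makes it suitable for optimizing each enumerated structure over a single degree of freedom such as the energy consumption. A generic Python sketch (not the authors' code; the cost function name in the usage comment is hypothetical):

```python
import math

def golden_section_search(f, lo, hi, tol=1e-6):
    """Minimize a unimodal function f on [lo, hi]."""
    invphi = (math.sqrt(5) - 1) / 2          # 1 / golden ratio
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                          # minimum is in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # minimum is in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# e.g. the cost of one enumerated structure as a function of energy use:
# q_opt = golden_section_search(cost_at_fixed_topology, q_min, q_max)
```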


2020 · Vol. 15(1) · pp. 60-71
Author(s): Thijs Laarhoven

Abstract: We revisit the approximate Voronoi cells approach for solving the closest vector problem with preprocessing (CVPP) on high-dimensional lattices, and settle the open problem of Doulgerakis–Laarhoven–De Weger [PQCrypto, 2019] of determining exact asymptotics on the volume of these Voronoi cells under the Gaussian heuristic. As a result, we obtain improved upper bounds on the time complexity of the randomized iterative slicer when using less than $2^{0.076d + o(d)}$ memory, and we show how to obtain time–memory trade-offs even when using less than $2^{0.048d + o(d)}$ memory. We also settle the open problem of obtaining a continuous trade-off between the size of the advice and the query time complexity: the time complexity with subexponential advice in our approach scales as $d^{d/2 + o(d)}$, matching worst-case enumeration bounds and achieving the same asymptotic scaling as average-case enumeration algorithms for the closest vector problem.
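The iterative slicer referenced here reduces a target vector modulo the lattice using a preprocessed list of short lattice vectors; once no list vector shortens the target, the remainder lies in an approximation of the Voronoi cell. A simplified, deterministic Python sketch (the randomized algorithm rerandomizes the target and repeats):

```python
import numpy as np

def iterative_slicer(target, short_vectors, max_rounds=1000):
    """Reduce `target` using a preprocessed list of short lattice vectors.

    Returns (remainder, lattice_part) with target = lattice_part + remainder;
    when the loop ends, no list vector shortens the remainder any further.
    """
    t = np.asarray(target, dtype=float)
    lattice_part = np.zeros_like(t)
    for _ in range(max_rounds):
        improved = False
        for v in short_vectors:
            if np.dot(t - v, t - v) < np.dot(t, t):
                t, lattice_part = t - v, lattice_part + v
                improved = True
        if not improved:
            break
    return t, lattice_part
```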


2020 · pp. 1-23
Author(s): Hendrik Molter, Rolf Niedermeier, Malte Renken

Abstract: Isolation is a concept originally conceived in the context of clique enumeration in static networks, mostly used to model communities that do not have much contact with the outside world. Herein, a clique is considered isolated if it has few edges connecting it to the rest of the graph. Motivated by recent work on enumerating cliques in temporal networks, we carry the isolation concept over to the temporal setting. We discover that the addition of the time dimension leads to six distinct natural isolation concepts. Our main contribution is the development of parameterized enumeration algorithms for five of these six isolation types, employing the parameter "degree of isolation." In a nutshell, this means that the more isolated these cliques are, the faster we can find them. On the empirical side, we implemented and tested these algorithms on (temporal) social network data, obtaining encouraging results.
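In the static setting, a common formalization is that a clique S is c-isolated if it has fewer than c·|S| edges leaving it, with c playing the role of the "degree of isolation" parameter. A small Python sketch of this check (the temporal variants studied in the paper refine it along the time dimension):

```python
def outgoing_edges(clique, adj):
    """Number of edges connecting `clique` to the rest of the graph."""
    s = set(clique)
    return sum(len(adj[v] - s) for v in s)

def is_isolated(clique, adj, c):
    """A clique S is c-isolated if it has fewer than c * |S| outgoing edges."""
    return outgoing_edges(clique, adj) < c * len(clique)

# Example: a triangle with one edge to the outside is 1-isolated.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
assert is_isolated({1, 2, 3}, adj, 1)   # 1 outgoing edge < 1 * 3
```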


10.29007/8btb · 2020
Author(s): Jaroslav Bendík, Ivana Cerna

Given an unsatisfiable Boolean formula F in CNF, i.e., a set of clauses, one is often interested in identifying Maximal Satisfiable Subsets (MSSes) of F or, equivalently, the complements of MSSes, called Minimal Correction Subsets (MCSes). Since MSSes (MCSes) find applications in many domains, e.g., diagnosis, ontology debugging, or axiom pinpointing, several MSS enumeration algorithms have been proposed. Unfortunately, finding even a single MSS is often very hard, since it naturally subsumes repeatedly solving the satisfiability problem. Moreover, there can be up to exponentially many MSSes, so their complete enumeration is often practically intractable. Therefore, the algorithms tend to identify as many MSSes as possible within a given time limit. In this work, we present a novel MSS enumeration algorithm called RIME. Compared to existing algorithms, RIME is much more frugal in the number of satisfiability checks it performs, as we witness via an experimental comparison. Moreover, RIME is several times faster than existing tools.
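The basic building block behind MSS enumeration is a "grow" step: scan the clauses, keeping each one that stays satisfiable together with everything kept so far. The sketch below uses the PySAT library for the satisfiability checks and performs one check per clause, precisely the kind of cost RIME is designed to reduce; it computes a single MSS/MCS pair, not the full enumeration.

```python
from pysat.solvers import Glucose3  # pip install python-sat

def grow_mss(clauses):
    """Greedily grow one maximal satisfiable subset (MSS) of `clauses`.

    Each clause is a list of non-zero ints (DIMACS style). Returns the MSS
    and its complement, a minimal correction subset (MCS).
    """
    mss, mcs = [], []
    for c in clauses:
        with Glucose3(bootstrap_with=mss + [c]) as solver:
            if solver.solve():          # c is consistent with the kept clauses
                mss.append(c)
            else:
                mcs.append(c)
    return mss, mcs

# Example: x and not-x cannot both be kept.
mss, mcs = grow_mss([[1], [-1], [2]])   # -> mss = [[1], [2]], mcs = [[-1]]
```

Maximality holds because a rejected clause stays inconsistent with any superset of the clauses kept at rejection time.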


2020 · Vol. 45(1) · pp. 1-42
Author(s): Fernando Florenzano, Cristian Riveros, Martín Ugarte, Stijn Vansummeren, Domagoj Vrgoč
