Approximate Voronoi cells for lattices, revisited

2020 ◽  
Vol 15 (1) ◽  
pp. 60-71
Author(s):  
Thijs Laarhoven

Abstract We revisit the approximate Voronoi cells approach for solving the closest vector problem with preprocessing (CVPP) on high-dimensional lattices, and settle the open problem of Doulgerakis–Laarhoven–De Weger [PQCrypto, 2019] of determining exact asymptotics on the volume of these Voronoi cells under the Gaussian heuristic. As a result, we obtain improved upper bounds on the time complexity of the randomized iterative slicer when using less than $2^{0.076d + o(d)}$ memory, and we show how to obtain time–memory trade-offs even when using less than $2^{0.048d + o(d)}$ memory. We also settle the open problem of obtaining a continuous trade-off between the size of the advice and the query time complexity: the time complexity with subexponential advice in our approach scales as $d^{d/2 + o(d)}$, matching worst-case enumeration bounds and achieving the same asymptotic scaling as average-case enumeration algorithms for the closest vector problem.
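To make the algorithm being bounded concrete, below is a minimal sketch of the randomized iterative slicer in its textbook form, assuming the preprocessing advice is simply a list of short lattice vectors; the function names, parameter choices, and the naive rerandomization are illustrative, not the paper's exact procedure.

```python
# A minimal sketch of the randomized iterative slicer for CVPP, assuming a
# preprocessed list of short lattice vectors as advice; names, parameters,
# and the naive rerandomization are illustrative assumptions.
import numpy as np

def iterative_slice(t, short_vectors):
    """Reduce t by short lattice vectors until no vector shortens it further."""
    t = t.copy()
    improved = True
    while improved:
        improved = False
        for v in short_vectors:
            if np.linalg.norm(t - v) < np.linalg.norm(t):
                t = t - v
                improved = True
    return t  # heuristically a short representative of t modulo the lattice

def randomized_slicer(target, basis, short_vectors, retries=100, rng=None):
    """Rerandomize the target by lattice shifts and keep the shortest residue;
    `basis` has the lattice basis vectors as its columns."""
    rng = np.random.default_rng() if rng is None else rng
    best = iterative_slice(target, short_vectors)
    for _ in range(retries):
        shift = basis @ rng.integers(-3, 4, size=basis.shape[1])
        # target + shift is congruent to target modulo the lattice,
        # so the reduced residue is too.
        residue = iterative_slice(target + shift, short_vectors)
        if np.linalg.norm(residue) < np.linalg.norm(best):
            best = residue
    return target - best  # candidate closest lattice vector to target
```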

2011 ◽  
Vol 03 (04) ◽  
pp. 457-471 ◽  
Author(s):  
B. BALAMOHAN ◽  
P. FLOCCHINI ◽  
A. MIRI ◽  
N. SANTORO

In a network environment supporting mobile entities (called robots or agents), a black hole is a harmful site that destroys any incoming entity without leaving any visible trace. The black-hole search problem is the task of a team of k > 1 mobile entities, starting from the same safe location and executing the same algorithm, to determine within finite time the location of the black hole. In this paper, we consider the black-hole search problem in asynchronous ring networks of n nodes, and focus on time complexity. It is known that any algorithm for black-hole search in a ring requires at least 2(n - 2) time in the worst case. The best known algorithm achieves this bound with a team of n - 1 agents with an average time cost of 2(n - 2), equal to the worst case. In this paper, we first show how the same number of agents, using 2 extra time units in the worst case, can solve the problem in only [Formula: see text] time on average. We then prove that the optimal average-case complexity of [Formula: see text] can be achieved without increasing the worst case using 2(n - 1) agents. Finally, we design an algorithm that achieves asymptotically optimal worst- and average-case time complexities employing an optimal team of k = 2 agents, thus improving on the earlier results that required O(n) agents.


2020 ◽  
Author(s):  
Ahsan Sanaullah ◽  
Degui Zhi ◽  
Shaojie Zhang

Abstract Durbin's PBWT, a scalable data structure for haplotype matching, has been successfully applied to identical by descent (IBD) segment identification and genotype imputation. Once the PBWT of a haplotype panel is constructed, it supports efficient retrieval of all shared long segments among all individuals (long matches) and efficient query between an external haplotype and the panel. However, the standard PBWT is an array-based static data structure and does not support dynamic updates of the panel. Here, we generalize the static PBWT to a dynamic data structure, d-PBWT, where the reverse prefix sorting at each position is represented by linked lists. We developed efficient algorithms for insertion and deletion of individual haplotypes. In addition, we verified that d-PBWT can support all algorithms of PBWT. In doing so, we systematically investigated variations of the set maximal match and long match query algorithms: while they all have average-case time complexity independent of database size, they differ in their worst-case complexities, in whether their running time is linear in the size of the genome, and in their dependence on additional data structures.
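For context, here is a minimal sketch of the static, array-based PBWT construction (the positional prefix arrays) that d-PBWT generalizes by replacing each array with a linked list; variable names are illustrative, and the divergence arrays and match-query algorithms are omitted.

```python
# A minimal sketch of the static PBWT positional prefix arrays (Durbin's
# construction). d-PBWT represents this reverse prefix sorting at each
# position with linked lists so haplotypes can be inserted and deleted.
def build_prefix_arrays(haplotypes):
    """haplotypes: list of equal-length 0/1 sequences (the panel).
    Returns prefix arrays a[0..N], where a[k] sorts haplotype indices by
    their reversed length-k prefixes (a[0] is the original order)."""
    M = len(haplotypes)
    N = len(haplotypes[0]) if M else 0
    order = list(range(M))
    prefix_arrays = [order]
    for k in range(N):
        zeros, ones = [], []
        for idx in order:                 # stable partition by the allele at k
            (zeros if haplotypes[idx][k] == 0 else ones).append(idx)
        order = zeros + ones
        prefix_arrays.append(order)
    return prefix_arrays

# Adjacent rows of each prefix array share the longest reverse prefixes,
# which is what makes long-match and set-maximal-match queries efficient.
panel = [[0, 1, 0, 1], [0, 1, 1, 1], [1, 0, 0, 1]]
print(build_prefix_arrays(panel)[-1])
```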


Author(s):  
Adrijan Božinovski ◽  
George Tanev ◽  
Biljana Stojčevska ◽  
Veno Pačovski ◽  
Nevena Ackovska

This paper presents the time complexity analysis of the Binary Tree Roll algorithm. The time complexity is analyzed theoretically and the results are then confirmed empirically. The theoretical analysis consists of finding recurrence relations for the time complexity and solving them using various methods. The empirical analysis consists of exhaustively testing all trees with given numbers of nodes and counting the minimum and maximum steps necessary to complete the roll algorithm. The time complexity is shown, both theoretically and empirically, to be linear in the best case and quadratic in the worst case, whereas its average case is shown to be dominantly linear for trees with a relatively small number of nodes and dominantly quadratic otherwise.
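The empirical part of such an analysis rests on enumerating every binary tree shape with a given number of nodes. A minimal sketch of that enumeration is shown below; the roll operation itself and its step counter are not reproduced here and are only referenced as a hypothetical stub in the comments.

```python
# A minimal sketch of exhaustive tree-shape enumeration, assuming each shape
# would then be passed to an instrumented roll implementation (not shown)
# that counts the steps it performs.
def all_shapes(n):
    """Yield every binary tree shape with n nodes as nested (left, right) tuples."""
    if n == 0:
        yield None
        return
    for left_size in range(n):
        for left in all_shapes(left_size):
            for right in all_shapes(n - 1 - left_size):
                yield (left, right)

# The number of shapes grows as the Catalan numbers: 1, 1, 2, 5, 14, 42, ...
for n in range(6):
    print(n, sum(1 for _ in all_shapes(n)))
```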


2021 ◽  
Author(s):  
Preeti Sharma

Evacuation problems fall under the vast area of search theory and operations research. The evacuation of two robots from a unit disc has been studied with the goal of minimizing the evacuation time. Work done so far has focused on improving the "worst-case" evacuation time with deterministic algorithms. We study the "average-case" evacuation time (randomized algorithms) while considering the efficiency trade-off between worst-case and average-case costs. Our other contribution is an analysis of the average-case and worst-case costs of the cowpath problem (another search problem), which helped us develop a parallel method for the evacuation problem.
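As a concrete baseline for the cowpath discussion, below is a minimal sketch of the classic deterministic doubling strategy, whose worst-case competitive ratio is 9; it is not one of the randomized algorithms analyzed in this work.

```python
# A minimal sketch of the doubling strategy for the two-ray cowpath problem,
# included to illustrate the worst-case vs average-case trade-off; the
# randomized variants studied in the work are not reproduced here.
def doubling_search_cost(target):
    """Distance travelled by the doubling strategy until a target at signed
    position `target` (|target| >= 1) is found."""
    travelled = 0.0
    reach, direction = 1.0, 1          # explore 1, 2, 4, ... alternating sides
    while True:
        if direction * target > 0 and abs(target) <= reach:
            return travelled + abs(target)   # found on this excursion
        travelled += 2 * reach               # walk out and back to the origin
        reach *= 2
        direction = -direction

# The worst-case ratio of cost to distance approaches 9 for adversarial targets;
# randomizing the turn points lowers the expected ratio.
d = 7.5
print(doubling_search_cost(d) / d)
```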


2021 ◽  
Vol 55 (5) ◽  
pp. 1136-1150
Author(s):  
Giovanni Righini

The single source Weber problem with limited distances (SSWPLD) is a continuous optimization problem in location theory. The SSWPLD algorithms proposed so far are based on the enumeration of all regions of [Formula: see text] defined by a given set of n intersecting circumferences. Early algorithms require [Formula: see text] time for the enumeration, but they were recently shown to be incorrect in case of degenerate intersections, that is, when three or more circumferences pass through the same intersection point. This problem was fixed by a modified enumeration algorithm with complexity [Formula: see text], based on the construction of neighborhoods of degenerate intersection points. In this paper, it is shown that the complexity for correctly dealing with degenerate intersections can be reduced to [Formula: see text] so that existing enumeration algorithms can be fixed without increasing their [Formula: see text] time complexity, which is due to some preliminary computations unrelated to intersection degeneracy. Furthermore, a new algorithm for enumerating all regions to solve the SSWPLD is described: its worst-case time complexity is [Formula: see text]. The new algorithm also guarantees that the regions are enumerated only once.
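For intuition about where degeneracy arises, here is a naive sketch (not the paper's algorithm) that enumerates pairwise circumference intersection points and flags points shared by three or more circumferences; the tolerance-based grouping of coincident points is a simplifying assumption.

```python
# A minimal sketch of detecting degenerate intersections among n circumferences:
# compute all pairwise intersection points and group coincident ones.
import math
from collections import defaultdict

def circle_intersections(c1, r1, c2, r2):
    """Return the 0, 1 or 2 intersection points of two circumferences."""
    (x1, y1), (x2, y2) = c1, c2
    dx, dy = x2 - x1, y2 - y1
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (d * d + r1 * r1 - r2 * r2) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    mx, my = x1 + a * dx / d, y1 + a * dy / d
    if h == 0:
        return [(mx, my)]                                  # tangency
    return [(mx - h * dy / d, my + h * dx / d),
            (mx + h * dy / d, my - h * dx / d)]

def degenerate_points(circles, eps=1e-9):
    """circles: list of ((x, y), r). Report intersection points lying on
    three or more circumferences, grouping coordinates up to eps."""
    buckets = defaultdict(set)
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            for p in circle_intersections(*circles[i], *circles[j]):
                key = (round(p[0] / eps), round(p[1] / eps))
                buckets[key].update({i, j})
    return {key: idxs for key, idxs in buckets.items() if len(idxs) >= 3}

# Example: three circumferences through the origin form a degenerate intersection.
circles = [((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), math.sqrt(2.0))]
print(degenerate_points(circles))   # one bucket shared by circles {0, 1, 2}
```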


2019 ◽  
Vol 2019 (2) ◽  
pp. 166-186
Author(s):  
Hans Hanley ◽  
Yixin Sun ◽  
Sameer Wagh ◽  
Prateek Mittal

Abstract Recent work has shown that Tor is vulnerable to attacks that manipulate inter-domain routing to compromise user privacy. Proposed solutions such as Counter-RAPTOR [29] attempt to ameliorate this issue by favoring Tor entry relays that have high resilience to these attacks. However, because these defenses bias Tor path selection on the identity of the client, they invariably leak probabilistic information about client identities. In this work, we make the following contributions. First, we identify a novel means to quantify privacy leakage in guard selection algorithms using the metric of Max-Divergence. Max-Divergence ensures that probabilistic privacy loss is within strict bounds while also providing composability over time. Second, we utilize Max-Divergence and multiple notions of entropy to understand privacy loss in the worst case for Counter-RAPTOR. Our worst-case analysis provides a fresh perspective for the field, as prior work such as Counter-RAPTOR only analyzed average-case privacy loss. Third, we propose modifications to Counter-RAPTOR that incorporate worst-case Max-Divergence in its design. Specifically, we utilize the exponential mechanism (a mechanism for differential privacy) to guarantee a worst-case bound on Max-Divergence/privacy loss. For the quality function used in the exponential mechanism, we show that a Monte-Carlo sampling-based method for stochastic optimization can be used to improve multi-dimensional trade-offs between security, privacy, and performance. Finally, we demonstrate that compared to Counter-RAPTOR, our approach achieves an 83% decrease in Max-Divergence after one guard selection and a 245% increase in worst-case Shannon entropy after 5 guard selections. Notably, experimental evaluations using the Shadow emulator show that our approach provides these privacy benefits with minimal impact on system performance.
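For reference, the exponential mechanism itself can be sketched as below; the guard quality function used here is a hypothetical placeholder, not the Counter-RAPTOR resilience metric or the paper's Monte-Carlo-optimized quality function.

```python
# A minimal sketch of the exponential mechanism for guard selection; the
# quality function and guard attributes are illustrative placeholders.
import numpy as np

def exponential_mechanism(candidates, quality, epsilon, sensitivity, rng=None):
    """Sample one candidate with probability proportional to
    exp(epsilon * quality / (2 * sensitivity)), bounding the privacy loss."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.array([quality(c) for c in candidates], dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Hypothetical usage: guards scored by a resilience/bandwidth mix in [0, 1].
guards = [{"id": "relayA", "resilience": 0.9, "bw": 0.4},
          {"id": "relayB", "resilience": 0.5, "bw": 0.9}]
score = lambda g: 0.5 * g["resilience"] + 0.5 * g["bw"]
print(exponential_mechanism(guards, score, epsilon=1.0, sensitivity=1.0)["id"])
```

Smaller epsilon flattens the distribution (stronger worst-case privacy, weaker bias toward resilient relays), which is exactly the security/privacy/performance trade-off the quality function is tuned for.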


Author(s):  
Sanjay Ram ◽  
Somnath Pal

There are two approaches for the classification of chemical reactions: Model-Driven and Data-Driven. In this paper, the authors develop an efficient algorithm based on a model-driven approach developed by Ugi and co-workers for the classification of chemical reactions. The authors' algorithm takes the reaction matrix of a chemical reaction as input and generates its appropriate class as output. Since reaction matrices are symmetric, a matrix implementation of Ugi's scheme using an upper/lower triangular matrix has O(n^2) space complexity. The time complexity of a similar matrix implementation is O(n^4), in both the worst and average cases. The proposed algorithm uses two fixed-size look-up tables in a novel way and requires only constant space. Its time complexity is linear in both the worst and average cases.


2018 ◽  
Vol 25 (1) ◽  
pp. 123-134 ◽  
Author(s):  
Nodari Vakhania

Abstract The computational complexity of an algorithm is traditionally measured for the worst and the average case. The worst-case estimation guarantees a certain worst-case behavior of a given algorithm, although it might be rough, since in “most instances” the algorithm may perform significantly better. The probabilistic average-case analysis claims to derive the average performance of an algorithm, say, for an “average instance” of the problem in question. That instance may be far away from the average of the problem instances arising in a given real-life application, and so the average-case analysis would also provide an unrealistic estimation. We suggest that, in general, probabilistic models could be used more widely for a more accurate estimation of algorithm efficiency. For instance, the quality of the solutions delivered by an approximation algorithm may also be estimated in the “average” probabilistic case. Such an approach would deal with the estimation of the quality of the solutions delivered by the algorithm for the most common (for a given application) problem instances. As we illustrate, probabilistic modeling can also be used to derive an accurate time complexity performance measure, distinct from the traditional probabilistic average-case time complexity measure. Such an approach could, in particular, be useful when the traditional average-case estimation is still rough or is not possible at all.


Algorithms ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 362
Author(s):  
Priyanka Mukhopadhyay

In this work, we give provable sieving algorithms for the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP) on lattices in the ℓp norm (1≤p≤∞). The running time we obtain is better than that of existing provable sieving algorithms. We give a new linear sieving procedure that works for all ℓp norms (1≤p≤∞). The main idea is to divide the space into hypercubes such that each vector can be mapped efficiently to a sub-region. We achieve a time complexity of 2^(2.751n+o(n)), which is much less than the 2^(3.849n+o(n)) complexity of the previous best algorithm. We also introduce a mixed sieving procedure, where a point is mapped to a hypercube within a ball and then a quadratic sieve is performed within each hypercube. This improves the running time, especially in the ℓ2 norm, where we achieve a time complexity of 2^(2.25n+o(n)), while the List Sieve Birthday algorithm has a running time of 2^(2.465n+o(n)). We adapt our sieving techniques to approximation algorithms for SVP and CVP in the ℓp norm (1≤p≤∞) and show that our algorithm has a running time of 2^(2.001n+o(n)), while previous algorithms have a time complexity of 2^(3.169n+o(n)).
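The hypercube bucketing idea behind the linear sieve can be sketched as follows; the cell width, the single reduction pass, and the absence of any list-size bookkeeping are illustrative simplifications rather than the provable procedure.

```python
# A minimal sketch of hypercube bucketing for sieving: each vector is mapped
# to the cell containing it, and reductions are only attempted between
# vectors sharing a cell. Parameters are illustrative.
import numpy as np
from collections import defaultdict

def bucket_by_hypercube(vectors, cell_width):
    """Map each vector to the integer index of the hypercube cell containing it."""
    cells = defaultdict(list)
    for v in vectors:
        cells[tuple(np.floor(v / cell_width).astype(int))].append(v)
    return cells

def sieve_pass(vectors, cell_width, p=2):
    """One sieving pass: within each cell, replace a vector by its difference
    with another vector of the cell if that difference is shorter in l_p norm."""
    shorter = []
    for cell in bucket_by_hypercube(vectors, cell_width).values():
        for i, v in enumerate(cell):
            best = v
            for j, w in enumerate(cell):
                diff = v - w
                if j != i and np.any(diff) and \
                        np.linalg.norm(diff, ord=p) < np.linalg.norm(best, ord=p):
                    best = diff
            if np.any(best):
                shorter.append(best)
    return shorter
```

In an actual sieve such passes are iterated with suitable parameters, and the provable analysis bounds the list sizes and the number of cells; none of that bookkeeping is shown here.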


1996 ◽  
Vol 05 (01n02) ◽  
pp. 127-141 ◽  
Author(s):  
MONINDER SINGH

One of the main factors limiting the use of path consistency algorithms in real life applications is their high space complexity. Han and Lee proposed a path consistency algorithm, PC-4, with O(n^3 a^3) space complexity, which makes it practicable only for small problems. I present a new path consistency algorithm, PC-5, which has an O(n^3 a^2) space complexity while retaining the worst-case time complexity of PC-4. Moreover, the new algorithm exhibits a much better average-case time complexity. The new algorithm is based on the idea (due to Bessiere) that, at any time, only a minimal amount of support has to be found and recorded for a labeling to establish its viability; one has to look for a new support only if the current support is eliminated. I also show that PC-5 can be improved further to yield an algorithm, PC5++, with even better average-case performance and the same space complexity.
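The single-support idea is easiest to see in the simpler arc-consistency setting; the sketch below records one support per (arc, value) and searches for a new one only when that support is deleted. It illustrates the principle PC-5 applies to path-consistency triples, not PC-5 itself, and the example constraint is hypothetical.

```python
# An illustrative sketch of the single-support principle (AC-6 style) in an
# arc-consistency setting: one support is recorded per (arc, value), and a new
# support is sought only when the recorded one is deleted.
from collections import defaultdict, deque

def single_support_ac(domains, allowed):
    """domains: {var: iterable of values}; allowed: {(i, j): set of (a, b)}.
    Prunes values of i with no support on j, re-searching lazily."""
    domains = {x: set(vals) for x, vals in domains.items()}
    supported_by = defaultdict(list)   # (j, b) -> [(i, a)] pairs relying on b
    removals = deque()                 # (var, value) deletions to propagate

    def find_support(i, a, j):
        for b in domains[j]:
            if (a, b) in allowed[(i, j)]:
                supported_by[(j, b)].append((i, a))
                return True
        return False

    for (i, j) in allowed:                         # one support per (arc, value)
        for a in list(domains[i]):
            if a in domains[i] and not find_support(i, a, j):
                domains[i].discard(a)
                removals.append((i, a))

    while removals:                                # re-search only on deletion
        j, b = removals.popleft()
        for (i, a) in supported_by.pop((j, b), []):
            if a in domains[i] and not find_support(i, a, j):
                domains[i].discard(a)
                removals.append((i, a))
    return domains

# Hypothetical example: a "less than" constraint along the arcs X->Y and Y->Z.
doms = {x: {1, 2, 3} for x in "XYZ"}
lt = {(i, j): {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
      for (i, j) in [("X", "Y"), ("Y", "Z")]}
print(single_support_ac(doms, lt))   # {'X': {1}, 'Y': {1, 2}, 'Z': {1, 2, 3}}
```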

