Optimal Resilient Dynamic Dictionaries

2015 ◽  
Vol 9 (12) ◽  
Author(s):  
Gerth Stølting Brodal ◽  
Rolf Fagerberg ◽  
Allan Grønlund Jørgensen ◽  
Gabriel Moruz ◽  
Thomas Mølhave

We investigate the problem of computing in the presence of faults that may arbitrarily (i.e., adversarially) corrupt memory locations. In the faulty memory model, any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted ones. An upper bound delta on the number of corruptions and O(1) reliable memory cells are provided. In this model, we focus on the design of resilient dictionaries, i.e., dictionaries which are able to operate correctly (at least) on the set of uncorrupted keys. We first present a simple resilient dynamic search tree, based on random sampling, with O(log n + delta) expected amortized cost per operation and O(n) space complexity. We then propose an optimal deterministic static dictionary supporting searches in Theta(log n + delta) time in the worst case, and we show how to use it in a dynamic setting in order to support updates in O(log n + delta) amortized time. Our dynamic dictionary also supports range queries in O(log n + delta + t) worst-case time, where t is the size of the output. Finally, we show that every resilient search tree (with some reasonable properties) must take Omega(log n + delta) worst-case time per search.

Full text: http://dx.doi.org/10.1007/978-3-540-75520-3_32
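As a concrete illustration of the model (a naive sketch, not the paper's optimal structure), the following searches a sorted array in which at most delta cells may be corrupted. Among any 2*delta + 1 consecutive cells a majority are uncorrupted, so a vote over such a window gives a comparison that cannot be fooled; this yields O(delta log n) time per search, versus the O(log n + delta) achieved above. All names and the window arithmetic are our own.

```python
def resilient_search(a, key, delta):
    """Membership test on a sorted array `a`, at most `delta` of whose cells
    may have been adversarially overwritten. Returns True whenever `key`
    occurs as an uncorrupted element. Runs in O(delta * log n) time."""
    lo, hi = 0, len(a)
    # Invariant: every uncorrupted occurrence of `key` lies in a[lo:hi].
    while hi - lo > 4 * delta + 4:
        mid = (lo + hi) // 2
        window = a[mid - delta: mid + delta + 1]  # 2*delta+1 cells, >= delta+1 intact
        if key in window:
            return True
        if sum(x < key for x in window) >= delta + 1:
            # At least one *uncorrupted* cell at position >= mid - delta is
            # < key, so no uncorrupted key sits at or before mid - delta.
            lo = mid - delta + 1
        else:
            # Symmetrically, at least delta+1 cells are > key, and one of
            # them is uncorrupted at position <= mid + delta.
            hi = mid + delta
    return key in a[lo:hi]  # final block has O(delta) cells: scan it
```

Each round shrinks the candidate interval by a constant factor while reading 2*delta + 1 cells, which is where the O(delta log n) cost comes from; the paper's structures avoid paying delta at every level.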

2007 ◽  
Vol 36 (585) ◽  
Author(s):  
Gerth Stølting Brodal ◽  
Rolf Fagerberg ◽  
Allan Grønlund Jørgensen ◽  
Gabriel Moruz ◽  
Thomas Mølhave

Abstract. In the resilient memory model any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted cells. An upper bound, delta, on the number of corruptions and O(1) reliable memory cells are provided. In this model, a data structure is called resilient if it gives the correct output on the set of uncorrupted elements. We propose two optimal resilient static dictionaries, a randomized one and a deterministic one. The randomized dictionary supports searches in O(log n + delta) expected time using O(log delta) random bits in the worst case, under the assumption that corruptions are not performed by an adaptive adversary. The deterministic static dictionary supports searches in O(log n + delta) time in the worst case. We also introduce a deterministic dynamic resilient dictionary supporting searches in O(log n + delta) time in the worst case, which is optimal, and updates in O(log n + delta) amortized time. Our dynamic dictionary supports range queries in O(log n + delta + k) worst-case time, where k is the size of the output.
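The O(1) reliable cells are complemented by a standard trick of this literature: a value can be made resilient in unreliable memory by replicating it 2*delta + 1 times and reading by majority vote, since at most delta copies can ever be corrupted. A minimal sketch (our own naming, not an API from the paper):

```python
from collections import Counter

class ResilientVar:
    """A value stored as 2*delta + 1 replicas in unreliable memory.
    With at most delta corruptions in total, the true value always
    occurs at least delta + 1 times, so majority vote recovers it."""

    def __init__(self, value, delta):
        self.copies = [value] * (2 * delta + 1)

    def write(self, value):
        for i in range(len(self.copies)):  # refresh every replica
            self.copies[i] = value

    def read(self):
        return Counter(self.copies).most_common(1)[0][0]  # majority wins
```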


2020 ◽  
Vol 26 (1) ◽  
pp. 1-16
Author(s):  
Kevin Vanslette ◽  
Abdullatif Al Alsheikh ◽  
Kamal Youcef-Toumi

Abstract. We motivate and calculate Newton–Cotes quadrature integration variance and compare it directly with Monte Carlo (MC) integration variance. We find an equivalence between deterministic quadrature sampling and random MC sampling by noting that MC random sampling is statistically indistinguishable from a method that uses deterministic sampling on a randomly shuffled (permuted) function. We use this statistical equivalence to regularize the form of permissible Bayesian quadrature integration priors such that they are guaranteed to be objectively comparable with MC. This leads to the proof that simple quadrature methods have expected variances that are less than or equal to their corresponding theoretical MC integration variances. Separately, using Bayesian probability theory, we find that the theoretical standard deviations of the unbiased errors of simple Newton–Cotes composite quadrature integrations improve over their worst-case errors by an extra dimension-independent factor $\propto N^{-1/2}$. This dimension-independent factor is validated in our simulations.
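A toy simulation (ours, not the paper's analysis) of the headline comparison: for a smooth one-dimensional integrand, plain MC estimates of the integral spread like $N^{-1/2}$, while the equal-weight composite midpoint rule with the same budget of N evaluations is deterministic and far more accurate. The integrand and N are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-x**2)            # arbitrary smooth test integrand on [0, 1]
N, trials = 256, 2000
exact = 0.7468241328124271             # int_0^1 exp(-x^2) dx, to double precision

mc = [f(rng.random(N)).mean() for _ in range(trials)]   # N-sample MC estimates
mid = f((np.arange(N) + 0.5) / N).mean()                # composite midpoint rule

print("MC std dev     :", np.std(mc))         # ~ N^{-1/2} spread
print("midpoint error :", abs(mid - exact))   # deterministic, much smaller
```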


1994 ◽  
Vol 1 (2) ◽  
Author(s):  
Alexander E. Andreev

The complexity of a nondeterministic function is the minimum possible complexity of its determinisation. The entropy of a nondeterministic function, F, is minus the logarithm of the ratio between the number of determinisations of F and the number of all deterministic functions.

We obtain an upper bound on the worst-case complexity of a nondeterministic function with restricted entropy.

These bounds have strong applications to the problem of algorithm derandomization. Many randomized algorithms can be converted to deterministic ones if we have an effective hitting set with certain parameters (a set is hitting for a set system if it has a nonempty intersection with every set from the system).

Linial, Luby, Saks and Zuckerman (1993) constructed the best effective hitting set for the system of k-value, n-dimensional rectangles. The set size is polynomial in k log n / epsilon.

Our bounds on the complexity of nondeterministic functions make it possible to construct an effective hitting set for this system with size almost linear in k log n / epsilon.
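To pin down the definition used above, here is a minimal brute-force check (our toy, with a made-up example; the paper's contribution is constructing *small* hitting sets, which this does not attempt) that a candidate set H hits every set of a system, i.e., intersects each of its members:

```python
def is_hitting(H, system):
    """True iff H has a nonempty intersection with every set in `system`."""
    return all(any(x in s for x in H) for s in system)

# Toy example: four 2-dimensional "rectangles" over the values {0, 1, 2}.
rects = [{(a, b) for a in A for b in B}
         for A in ({0, 1}, {1, 2}) for B in ({0}, {1, 2})]
print(is_hitting({(1, 0), (1, 1)}, rects))  # True: the points (1, *) meet all four
```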


2010 ◽  
Vol DMTCS Proceedings vol. AM,... (Proceedings) ◽  
Author(s):  
Thomas Fernique ◽  
Damien Regnault

This paper introduces a Markov process inspired by the problem of quasicrystal growth. It acts over dimer tilings of the triangular grid by randomly performing local transformations, called $\textit{flips}$, which do not increase the number of identical adjacent tiles (this number can be thought of as the tiling energy). Fixed points of such a process play the role of quasicrystals. We are interested here in the worst-case expected number of flips needed to converge to a fixed point. Numerical experiments suggest a $\Theta (n^2)$ bound, where $n$ is the number of tiles of the tiling. We prove a $O(n^{2.5})$ upper bound and discuss the gap between this bound and the previous one. We also briefly discuss the average case.


2007 ◽  
Vol 7 (1) ◽  
pp. 151-167 ◽  
Author(s):  
Dmitri B. Strukov ◽  
Konstantin K. Likharev

We have calculated the maximum useful bit density that may be achieved by the synergy of bad-bit exclusion and advanced (BCH) error-correcting codes in prospective crossbar nanoelectronic memories, as a function of the defective memory cell fraction. While our calculations are based on a particular ("CMOL") memory topology, with naturally segmented nanowires and an area-distributed nano/CMOS interface, for realistic parameters our results are also applicable to "global" crossbar memories with peripheral interfaces. The results indicate that crossbar memories with a nano/CMOS pitch ratio close to 1/3 (which is typical for the current, initial stage of nanoelectronics development) may surpass purely semiconductor memories in useful bit density if the fraction of nanodevice defects (stuck-on faults) is below ∼15%, even under a rather tough 30 ns upper bound on the total access time. Moreover, as the technology matures and the pitch ratio approaches an order of magnitude, crossbar memories may become far superior to the densest semiconductor memories, providing, e.g., a 1 Tbit/cm² density even for a plausible defect fraction of 2%. These highly encouraging results are much better than those reported in the literature earlier, including our own early work, mostly due to more advanced error-correcting codes.
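The paper's density figures depend on its specific CMOL model, but the basic ECC trade-off can be illustrated with a generic back-of-the-envelope (ours, treating defects as i.i.d. rather than via the paper's bad-bit-exclusion analysis): a t-error-correcting block code loses a block once more than t of its cells are defective, so the failure probability is a binomial tail, and usable density scales with the code rate k/n times the surviving-block fraction.

```python
from math import comb

def block_failure_prob(n, t, q):
    """P(more than t of n cells defective), i.i.d. defect fraction q."""
    return 1.0 - sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(t + 1))

# e.g. the standard BCH(255, 171) code corrects t = 11 errors per block
print(block_failure_prob(255, 11, 0.02))   # fraction of blocks lost at 2% defects
```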


2014 ◽  
Vol 25 (06) ◽  
pp. 667-678 ◽  
Author(s):  
JUNPING ZHOU ◽  
WEIHUA SU ◽  
JIANAN WANG

The counting exact satisfiability problem (#XSAT) asks for the number of truth assignments that satisfy exactly one literal in each clause. This paper presents an algorithm that solves the #XSAT problem in O(1.1995^n) time, improving on the best previous algorithm, which runs in O(1.2190^n) time, where n denotes the number of variables. To increase the efficiency of the algorithm, a new principle, called the common literals principle, is introduced to simplify formulae; this allows us to eliminate further literals. In addition, we apply resolution principles to the #XSAT problem for the first time, which improves the efficiency of the algorithm further.
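To make the problem concrete, here is a brute-force #XSAT counter for tiny formulas (O(2^n), purely a baseline for what the paper's O(1.1995^n) algorithm speeds up; the clause encoding is our own convention):

```python
from itertools import product

def count_xsat(clauses, n):
    """Count assignments satisfying exactly one literal per clause.
    Literals are nonzero ints: +i means variable i, -i its negation."""
    return sum(
        all(sum(bits[abs(l) - 1] == (l > 0) for l in c) == 1 for c in clauses)
        for bits in product((False, True), repeat=n)
    )

# (x1, x2) and (not x1, x3), each needing exactly one true literal
print(count_xsat([[1, 2], [-1, 3]], 3))  # -> 2
```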

