An Ω(n log n) lower bound on the cost of mutual exclusion

Author(s):  
Rui Fan ◽  
Nancy Lynch
2021 ◽  
Vol 12 (3) ◽  
pp. 150-156
Author(s):  
A. V. Galatenko ◽
V. A. Kuzovikhina

We propose an automata model of computer system security. A system is represented by a finite automaton whose states are partitioned into two subsets, "secure" and "insecure". System functioning is secure if the number of consecutive insecure states is not greater than some nonnegative integer k. This definition allows one to formally reflect responsiveness to security breaches. The set of all input sequences that preserve security for the given value of k is referred to as a k-secure language. We prove that if a language is k-secure for some natural k and automaton V, then it is also k′-secure for any 0 < k′ < k and some automaton V′ = V′(k′). Reducing the value of k comes at the cost of an increase in the number of states. On the other hand, for any non-negative integer k there exists a k-secure language that is not k″-secure for any natural k″ > k. The problem of reconstructing a k-secure language using a conditional experiment splits into two subcases. If the cardinality of the input alphabet is bounded by some constant, then the order of the Shannon function of experiment complexity is the same for all k; otherwise there emerges a lower bound of order nk.
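As an illustration of the security condition, the following minimal Python sketch checks whether an input word keeps a toy automaton k-secure, i.e., never drives it through more than k consecutive insecure states. The automaton, the state names, and the choice to count the initial state are illustrative assumptions, not taken from the paper.

```python
# Toy check of the k-security condition on a finite automaton whose states
# are split into "secure" and "insecure" subsets.
def is_k_secure(delta, start, insecure, word, k):
    """delta: dict mapping (state, symbol) -> next state."""
    state = start
    streak = 1 if start in insecure else 0   # consecutive insecure states so far
    if streak > k:
        return False
    for sym in word:
        state = delta[(state, sym)]
        streak = streak + 1 if state in insecure else 0
        if streak > k:
            return False
    return True

# Hypothetical two-state automaton: s1 is the only insecure state.
delta = {("s0", "a"): "s1", ("s0", "b"): "s0",
         ("s1", "a"): "s1", ("s1", "b"): "s0"}
print(is_k_secure(delta, "s0", {"s1"}, "aab", k=1))   # False: two insecure states in a row
```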


2020 ◽  
Vol 34 (03) ◽  
pp. 2327-2334
Author(s):  
Vidal Alcázar ◽  
Pat Riddle ◽  
Mike Barley

In the past few years, new and very successful bidirectional heuristic search algorithms have been proposed. Their key novelty is a lower bound on the cost of a solution that includes information from the g values in both directions. Kaindl and Kainz (1997) proposed measuring how inaccurate a heuristic is while expanding nodes in the opposite direction, and using this information to raise the f value of the evaluated nodes. However, this approach has several disadvantages and has yet to be exploited to its full potential. Additionally, Sadhukhan (2013) presented BAE∗, a bidirectional best-first search algorithm based on the accumulated heuristic inaccuracy along a path. However, no complete comparison with other bidirectional algorithms, either theoretical or empirical, has yet been carried out. In this paper we define individual bounds within the lower-bound framework and show how both Kaindl and Kainz's and Sadhukhan's methods can be generalized, creating new bounds. This overcomes previous shortcomings and allows newer algorithms to benefit from these techniques as well. Experimental results show a substantial improvement, up to an order of magnitude in the number of necessarily expanded nodes, compared to state-of-the-art near-optimal algorithms on common benchmarks.
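For context, here is a minimal sketch of one common pairwise lower bound used in front-to-end bidirectional search: the maximum of the forward f value, the backward f value, and the sum of both g values plus the minimum edge cost. The individual bounds defined in the paper generalize terms of this kind with heuristic-inaccuracy information; the function and parameter names below are illustrative assumptions.

```python
def pair_lower_bound(g_f, h_f, g_b, h_b, epsilon=1.0):
    """Lower bound on any solution passing through forward node u and backward node v.

    g_f, h_f: g and h values of u in the forward search.
    g_b, h_b: g and h values of v in the backward search.
    epsilon:  minimum edge cost in the graph.
    """
    return max(g_f + h_f,            # forward f-value
               g_b + h_b,            # backward f-value
               g_f + g_b + epsilon)  # both path prefixes plus at least one edge

print(pair_lower_bound(g_f=4, h_f=6, g_b=3, h_b=5, epsilon=1))   # 10
```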


2019 ◽  
Vol 8 (2) ◽  
Author(s):  
Mikhail Goubko ◽  
Alexander Kuznetsov

Abstract The optimal connecting network problem generalizes many structure optimization models known from the literature, including communication and transport network topology design, graph cut and graph clustering, etc. For the case of connecting trees with a given sequence of vertex degrees, the cost of the optimal tree is shown to be bounded from below by the solution of a semidefinite optimization program with bilinear matrix inequality constraints, which is reduced to solving a series of convex programs with linear matrix inequality constraints. The proposed lower-bound estimate is used to construct several heuristic algorithms and to evaluate their quality on a variety of generated and real-life datasets.
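As a hedged illustration of the kind of convex program with a linear matrix inequality constraint referred to above, the sketch below solves a generic small semidefinite program with cvxpy. The matrices are random stand-ins, not the paper's degree-sequence formulation.

```python
import cvxpy as cp
import numpy as np

# Toy SDP with a linear matrix inequality (LMI) constraint.  Minimizing
# trace(C @ X) over PSD matrices X with trace(X) == 1 returns the smallest
# eigenvalue of C, so the result is easy to verify.
n = 4
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                      # symmetric "cost" matrix (stand-in data)

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,                 # LMI: X is positive semidefinite
               cp.trace(X) == 1]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()

print("SDP lower bound:", prob.value)
print("lambda_min(C):  ", np.linalg.eigvalsh(C).min())
```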


Entropy ◽  
2020 ◽  
Vol 22 (2) ◽  
pp. 213 ◽  
Author(s):  
Yiğit Uğur ◽  
George Arvanitakis ◽  
Abdellatif Zaidi

In this paper, we develop an unsupervised generative clustering framework that combines the variational information bottleneck and the Gaussian mixture model. Specifically, in our approach we use the variational information bottleneck method and model the latent space as a mixture of Gaussians. We derive a bound on the cost function of our model that generalizes the Evidence Lower Bound (ELBO) and provide a variational-inference-type algorithm for computing it. In the algorithm, the coders' mappings are parametrized using neural networks, and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Numerical results on real datasets are provided to support the efficiency of our method.
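A minimal PyTorch sketch of a bound of this flavor: a Gaussian encoder, a decoder, and a learnable Gaussian-mixture prior over the latent space, with the bound estimated by Monte Carlo sampling and optimized by one SGD step. The dimensions, architectures, and mean-squared-error reconstruction term are illustrative assumptions rather than the authors' exact model.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

x_dim, z_dim, n_comp, batch = 20, 2, 3, 32   # hypothetical sizes

encoder = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))

# Learnable GMM prior over the latent space: mixture logits, means, log-variances.
logits = nn.Parameter(torch.zeros(n_comp))
means = nn.Parameter(torch.randn(n_comp, z_dim))
logvars = nn.Parameter(torch.zeros(n_comp, z_dim))

params = list(encoder.parameters()) + list(decoder.parameters()) + [logits, means, logvars]
opt = torch.optim.SGD(params, lr=1e-2)

def gmm_log_prob(z):
    # log p(z) = logsumexp_c [ log pi_c + log N(z; mu_c, diag(sigma_c^2)) ]
    z = z.unsqueeze(1)                                   # (batch, 1, z_dim)
    log_pi = F.log_softmax(logits, dim=0)                # (n_comp,)
    log_norm = -0.5 * (((z - means) ** 2) / logvars.exp()
                       + logvars + math.log(2 * math.pi)).sum(-1)
    return torch.logsumexp(log_pi + log_norm, dim=1)     # (batch,)

x = torch.randn(batch, x_dim)                            # stand-in for a real batch
mu, logvar = encoder(x).chunk(2, dim=-1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterized sample

# Monte Carlo estimate of the bound: reconstruction term plus a KL-like
# regularizer between q(z|x) and the GMM prior, both evaluated at the sample z.
log_q = (-0.5 * (((z - mu) ** 2) / logvar.exp() + logvar
                 + math.log(2 * math.pi))).sum(-1)
recon = F.mse_loss(decoder(z), x, reduction="none").sum(-1)
loss = (recon + log_q - gmm_log_prob(z)).mean()

opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```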


2018 ◽  
Vol 10 (3) ◽  
pp. 1
Author(s):  
Louisa Kammerer ◽  
Miguel Ramirez

This paper examines the challenges firms (and policymakers) encounter when confronted by a recession at the zero lower bound, where traditional monetary policy is ineffective in the face of deteriorated balance sheets and a high cost of credit. Within the larger body of literature, this paper focuses on the cost of credit during a recession, which constrains smaller firms from borrowing and investing and thus magnifies the contraction. Extending and revising a model originally developed by Walker (2010) and estimated by Pandey and Ramirez (2012), this study uses a Vector Error Correction Model with structural breaks to analyze the effects of relevant economic and financial factors on the cost of credit intermediation for small and large firms. Specifically, it tests whether large firms have advantageous access to credit, especially during recessions. The findings suggest that during the Great Recession of 2007-09 the cost of credit rose for small firms while it decreased for large firms, ceteris paribus. Based on these results, the paper assesses alternative ways in which the central bank can respond to a recession at the zero lower bound.
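A minimal sketch of estimating a VECM with a structural-break dummy using statsmodels; the series, break date, lag order, and cointegration rank below are stand-in assumptions, not the study's actual specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Hypothetical stand-in series; the study uses actual cost-of-credit and
# macro/financial series for small and large firms.
rng = np.random.default_rng(1)
n = 200
data = pd.DataFrame(rng.standard_normal((n, 3)).cumsum(axis=0),
                    columns=["credit_cost", "output", "policy_rate"])

# One way to represent a structural break: a level-shift dummy entering as an
# exogenous regressor (an assumption, not necessarily the authors' exact setup).
break_dummy = (np.arange(n) >= 120).astype(float).reshape(-1, 1)

model = VECM(data, exog=break_dummy, k_ar_diff=2, coint_rank=1,
             deterministic="ci")   # constant inside the cointegration relation
res = model.fit()
print(res.summary())
```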


2015 ◽  
Vol 41 (3) ◽  
pp. 355-383 ◽  
Author(s):  
Nelly Barbot ◽  
Olivier Boëffard ◽  
Jonathan Chevelu ◽  
Arnaud Delhay

Linguistic corpus design is a critical concern for building rich annotated corpora useful in different application domains. For example, speech technologies such as ASR (Automatic Speech Recognition) or TTS (Text-to-Speech) need a huge amount of speech data to train data-driven models or to produce synthetic speech. Collecting data always entails costs (recording speech, verifying annotations, etc.), and as a rule of thumb, the more data you gather, the more costly your application will be. Within this context, we present in this article solutions to reduce the amount of linguistic text content while maintaining the level of linguistic richness required by a model or an application. This problem can be formalized as a Set Covering Problem (SCP), and we evaluate two algorithmic heuristics applied to the design of large text corpora in English and French covering phonological information or POS labels. The first algorithm considered is a standard greedy solution with an agglomerative/splitting strategy, and we propose a second algorithm based on Lagrangian relaxation. The latter approach provides a lower bound on the cost of each covering solution. This lower bound can be used as a metric to evaluate the quality of a reduced corpus whatever algorithm is applied. Experiments show that a suboptimal algorithm such as the greedy one achieves good results; the cost of its solutions is not far from the lower bound (about 4.35% for 3-phoneme coverings). Usually, constraints in SCP are binary; here we propose a generalization where the constraints on each covering feature can be multi-valued.
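A minimal sketch of the textbook greedy set-cover heuristic in this setting, where candidate sentences cover phonological or POS features; it is the standard greedy baseline, not the authors' agglomerative/splitting variant or the Lagrangian-relaxation bound, and all names and data are illustrative.

```python
def greedy_cover(universe, candidates):
    """Greedy set cover: candidates maps item id -> set of covered features."""
    uncovered, chosen = set(universe), []
    while uncovered:
        # Pick the item covering the most still-uncovered features.
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gained = candidates[best] & uncovered
        if not gained:
            raise ValueError("universe cannot be covered by the candidates")
        chosen.append(best)
        uncovered -= gained
    return chosen

# Hypothetical features (e.g., phoneme n-grams) and candidate sentences.
features = {"a", "b", "c", "d", "e"}
sentences = {"s1": {"a", "b"}, "s2": {"b", "c", "d"},
             "s3": {"d", "e"}, "s4": {"a", "e"}}
print(greedy_cover(features, sentences))   # ['s2', 's4']
```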


2017 ◽  
Vol 17 (1&2) ◽  
pp. 106-116
Author(s):  
Jop Briet ◽  
Jeroen Zuiddam

After Bob sends Alice a bit, she responds with a lengthy reply. At the cost of a factor of two in the total communication, Alice could just as well have given Bob her two possible replies at once without listening to him at all, and have him select which one applies. Motivated by a conjecture stating that this form of “round elimination” is impossible in exact quantum communication complexity, we study the orthogonal rank and a symmetric variant thereof for a certain family of Cayley graphs. The orthogonal rank of a graph is the smallest number d for which one can label each vertex with a nonzero d-dimensional complex vector such that adjacent vertices receive orthogonal vectors. We show an exp(n) lower bound on the orthogonal rank of the graph on {0,1}^n in which two strings are adjacent if they have Hamming distance at least n/2. In combination with previous work, this implies an affirmative answer to the above conjecture.


1993 ◽  
Vol 7 (1) ◽  
pp. 121-124 ◽  
Author(s):  
Julia Abrahams

The minimum expected number of binomial group tests is lower bounded by the cost of a particular Huffman coding problem whose solution is known. Thus, the information lower bound in binomial group testing is improved when the probability that each item is defective is small.
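A minimal sketch of the Huffman-cost quantity underlying such a bound: the expected codeword length of a Huffman code over a hypothetical probability vector, computed with a heap. The specific probabilities arising in the paper's group-testing construction differ.

```python
import heapq

def huffman_expected_length(probs):
    """Expected codeword length of a binary Huffman code for the given probabilities."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b            # each merge contributes its mass once per tree level
        heapq.heappush(heap, a + b)
    return total

print(huffman_expected_length([0.5, 0.25, 0.125, 0.125]))   # 1.75
```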

