On the role of shared entanglement

2008 ◽  
Vol 8 (1&2) ◽  
pp. 82-95
Author(s):  
D. Gavinsky

Despite the apparent similarity between shared randomness and shared entanglement in the context of communication complexity, our understanding of the latter is not as good as of the former. In particular, there is no known "entanglement analogue" of the famous theorem by Newman, which says that the number of shared random bits required for solving any communication problem is at most logarithmic in the input length (i.e., using more than $O(\log n)$ shared random bits would not reduce the complexity of an optimal solution). In this paper we prove that the same is not true for entanglement. We establish a wide range of tight (up to a polylogarithmic factor) entanglement vs. communication trade-offs for relational problems. The low end is: for any $t>2$, reducing shared entanglement from $\log^t n$ to $o(\log^{t-2} n)$ qubits can increase the communication required for solving a problem almost exponentially, from $O(\log^t n)$ to $\Omega(\sqrt n)$. The high end is: for any $\varepsilon>0$, reducing shared entanglement from $n^{1-\varepsilon}\log n$ to $o(n^{1-\varepsilon}/\log n)$ qubits can increase the required communication from $O(n^{1-\varepsilon}\log n)$ to $\Omega(n^{1-\varepsilon/2}/\log n)$. The upper bounds are demonstrated via protocols which are exact and work in the simultaneous message passing model, while the lower bounds hold for bounded-error protocols, even in the more powerful model of one-way communication. Our protocols use shared EPR pairs, while the lower bounds apply to any sort of prior entanglement. We base the lower bounds on a strong direct product theorem for communication complexity of a certain class of relational problems. We believe that the theorem might have applications outside the scope of this work.

1998 ◽  
Vol 5 (11) ◽  
Author(s):  
Gudmund Skovbjerg Frandsen ◽  
Johan P. Hansen ◽  
Peter Bro Miltersen

We consider dynamic evaluation of algebraic functions (matrix multiplication, determinant, convolution, Fourier transform, etc.) in the model of Reif and Tate; i.e., if f(x1, ..., xn) = (y1, ..., ym) is an algebraic problem, we consider serving on-line requests of the form "change input xi to value v" or "what is the value of output yj?". We present techniques for showing lower bounds on the worst-case time complexity per operation for such problems. The first gives lower bounds in a wide range of rather powerful models (for instance, history-dependent algebraic computation trees over any infinite subset of a field, the integer RAM, and the generalized real RAM model of Ben-Amram and Galil). Using this technique, we show optimal Omega(n) bounds for dynamic matrix-vector product, dynamic matrix multiplication, and dynamic discriminant, and an Omega(sqrt(n)) lower bound for dynamic polynomial multiplication (convolution), providing a good match with Reif and Tate's O(sqrt(n log n)) upper bound. We also show linear lower bounds for dynamic determinant, matrix adjoint, and matrix inverse, and an Omega(sqrt(n)) lower bound for the elementary symmetric functions. The second technique is the communication complexity technique of Miltersen, Nisan, Safra, and Wigderson, which we apply to the setting of dynamic algebraic problems, obtaining similar lower bounds in the word RAM model. The third technique gives lower bounds in the weaker straight-line program model. Using this technique, we show an Omega((log n)^2 / log log n) lower bound for dynamic discrete Fourier transform. Technical ingredients of our techniques are the incompressibility technique of Ben-Amram and Galil and the lower bound for depth-two superconcentrators of Radhakrishnan and Ta-Shma. The incompressibility technique is extended to arithmetic computation in arbitrary fields.
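The on-line request model above can be made concrete with a small sketch (class and method names are our own, not from the paper) of dynamic matrix-vector product: caching y = Ax and updating one column's contribution per change gives O(n) time per update and O(1) per query, matching the Omega(n) per-operation lower bound up to constants.

```python
class DynamicMatVec:
    """Maintain y = A x under Reif-Tate style on-line requests."""

    def __init__(self, A):
        self.A = [row[:] for row in A]   # m x n matrix
        self.m, self.n = len(A), len(A[0])
        self.x = [0.0] * self.n          # current input vector
        self.y = [0.0] * self.m          # cached output A x

    def change(self, i, v):
        """Serve "change input x_i to value v" in O(m) arithmetic steps."""
        delta = v - self.x[i]
        for r in range(self.m):          # only column i's contribution changes
            self.y[r] += self.A[r][i] * delta
        self.x[i] = v

    def query(self, j):
        """Serve "what is the value of output y_j?" in O(1)."""
        return self.y[j]
```

For a square n-by-n matrix this is O(n) per change, which the paper's first technique shows is optimal in rather powerful models.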


2021 ◽  
Vol 8 (2) ◽  
pp. 1-28
Author(s):  
Gopal Pandurangan ◽  
Peter Robinson ◽  
Michele Scquizzato

Motivated by the increasing need to understand the distributed algorithmic foundations of large-scale graph computations, we study some fundamental graph problems in a message-passing model for distributed computing where k ≥ 2 machines jointly perform computations on graphs with n nodes (typically, n >> k). The input graph is assumed to be initially randomly partitioned among the k machines, a common implementation in many real-world systems. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. Our main contribution is the General Lower Bound Theorem, a theorem that can be used to show non-trivial lower bounds on the round complexity of distributed large-scale data computations. This result is established via an information-theoretic approach that relates the round complexity to the minimal amount of information required by machines to solve the problem. Our approach is generic, and this theorem can be used in a "cookbook" fashion to show distributed lower bounds for several problems, including non-graph problems. We present two applications by showing (almost) tight lower bounds on the round complexity of two fundamental graph problems, namely, PageRank computation and triangle enumeration. These applications show that our approach can yield lower bounds for problems where the application of communication complexity techniques seems not obvious or gives weak bounds, including and especially under a stochastic partition of the input. We then present distributed algorithms for PageRank and triangle enumeration with a round complexity that (almost) matches the respective lower bounds; these algorithms exhibit a round complexity that scales superlinearly in k, improving significantly over previous results [Klauck et al., SODA 2015].
Specifically, we show the following results. PageRank: we show a lower bound of Ω(n/k^2) rounds and present a distributed algorithm that computes an approximation of the PageRank of all the nodes of a graph in Õ(n/k^2) rounds. Triangle enumeration: we show that there exist graphs with m edges where any distributed algorithm requires Ω(m/k^{5/3}) rounds. This result also implies the first non-trivial lower bound of Ω(n^{1/3}) rounds for the congested clique model, which is tight up to logarithmic factors. We then present a distributed algorithm that enumerates all the triangles of a graph in Õ(m/k^{5/3} + n/k^{4/3}) rounds.
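The input assumption of the k-machine model — each vertex assigned to a machine uniformly at random, and each machine knowing its home vertices together with their incident edges — can be sketched as follows (function and field names are illustrative, not from the paper):

```python
import random
from collections import defaultdict

def random_vertex_partition(n, edges, k, seed=0):
    """Randomly partition an n-node graph among k >= 2 machines.

    Returns, per machine, its home vertices and every edge incident to
    at least one of them -- the random-partition input assumption of
    the k-machine model.
    """
    rng = random.Random(seed)
    home = [rng.randrange(k) for _ in range(n)]   # vertex -> machine
    local = defaultdict(lambda: {"vertices": set(), "edges": set()})
    for v in range(n):
        local[home[v]]["vertices"].add(v)
    for u, v in edges:                            # each endpoint's machine sees the edge
        local[home[u]]["edges"].add((u, v))
        local[home[v]]["edges"].add((u, v))
    return dict(local)
```

With this distribution, each machine holds roughly n/k vertices and a proportional share of the edges in expectation, the starting point for the round-complexity bounds above.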


2010 ◽  
Vol 21 (04) ◽  
pp. 479-493
Author(s):  
ANIL ADA

In this paper we study the non-deterministic communication complexity of regular languages. We show that a regular language has either constant or at least logarithmic non-deterministic communication complexity. We prove several linear lower bounds, which cover a wide range of regular languages of linear complexity. Furthermore, we find evidence that previous techniques for proving linear lower bounds (Tesson and Thérien 2005), which work for instance in the deterministic and probabilistic models, do not work in the non-deterministic setting.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
George Gillard ◽  
Ian M. Griffiths ◽  
Gautham Ragunathan ◽  
Ata Ulhaq ◽  
Callum McEwan ◽  
...  

Combining external control with long spin lifetime and coherence is a key challenge for solid-state spin qubits. Tunnel coupling with an electron Fermi reservoir provides robust charge-state control in semiconductor quantum dots, but results in undesired relaxation of electron and nuclear spins through mechanisms that are not completely understood. Here, we unravel the contributions of tunnelling-assisted and phonon-assisted spin relaxation mechanisms by systematically adjusting the tunnel coupling over a wide range, including the limit of an isolated quantum dot. These experiments reveal fundamental limits and trade-offs of quantum dot spin dynamics: while reduced tunnelling can be used to achieve electron spin qubit lifetimes exceeding 1 s, the optical spin initialisation fidelity is reduced below 80%, limited by Auger recombination. The comprehensive understanding of electron-nuclear spin relaxation attained here provides a roadmap for the design of optimal operating conditions in quantum dot spin qubits.


2021 ◽  
Vol 11 (13) ◽  
pp. 5859
Author(s):  
Fernando N. Santos-Navarro ◽  
Yadira Boada ◽  
Alejandro Vignoni ◽  
Jesús Picó

Optimal gene expression is central for the development of both bacterial expression systems for heterologous protein production and microbial cell factories for industrial metabolite production. Our goal is to fulfill industry-level overproduction demands optimally, as measured by the following key performance metrics: titer, productivity rate, and yield (TRY). Here we use a multiscale model incorporating the dynamics of (i) the cell population in the bioreactor, (ii) the substrate uptake, and (iii) the interaction between the cell host and expression of the protein of interest. Our model predicts cell growth rate and cell mass distribution between enzymes of interest and host enzymes as a function of substrate uptake and the following main lab-accessible gene expression-related characteristics: promoter strength, gene copy number, and ribosome binding site strength. We evaluated the differential roles of gene transcription and translation in shaping TRY trade-offs for a wide range of expression levels, and the sensitivity of the TRY space to variations in substrate availability. Our results show that, at low expression levels, TRY is mainly defined by gene transcription, with gene translation having a limited effect, whereas at high expression levels TRY depends on the product of both, in agreement with experiments in the literature.
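The three TRY metrics can be made concrete with a small helper (names and units are our own illustrative assumptions, not code from the paper): for a batch run, titer is the final product concentration, productivity rate is product formed per unit time, and yield is product formed per unit substrate consumed.

```python
def try_metrics(product_g_per_L, batch_time_h, substrate_consumed_g_per_L):
    """Compute the TRY key performance metrics for a batch culture.

    titer  : final product concentration (g/L)
    rate   : volumetric productivity (g/L/h)
    yield_ : product per substrate consumed (g/g)
    """
    titer = product_g_per_L
    rate = product_g_per_L / batch_time_h
    yield_ = product_g_per_L / substrate_consumed_g_per_L
    return titer, rate, yield_
```

The trade-offs studied in the paper arise because gene-expression settings that raise one of these three quantities (e.g. titer) typically lower another (e.g. yield), so the interesting object is the reachable TRY space rather than any single metric.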


2021 ◽  
Vol 11 (13) ◽  
pp. 5934
Author(s):  
Georgios Papaioannou ◽  
Jenny Jerrelind ◽  
Lars Drugge

Effective emission control technologies and novel propulsion systems have been developed for road vehicles, decreasing exhaust particle emissions. However, work remains to be done on non-exhaust traffic-related sources such as tyre–road interaction and tyre wear. Given that both are inevitable in road vehicles, efforts to assess and minimise tyre wear should be considered. The amount of tyre wear depends on internal factors (tyre structure, manufacturing, etc.) and external factors (suspension configuration, speed, road surface, etc.). In this work, the emphasis is on optimising such parameters to minimise tyre wear while also enhancing occupant comfort and improving vehicle handling. In addition to the search for the optimum parameters, the optimisation is also used as a tool to identify and highlight potential trade-offs between the objectives and the various design parameters. Hence, the tyre design (based on some chosen tyre parameters) is first optimised with regard to the above-mentioned objectives, for a vehicle cornering over both Class A and Class B road roughness profiles. Afterwards, an optimal solution is sought among the Pareto alternatives provided by the two road cases, so that the tyre wear levels are less affected by different road profiles. This requires that the tyre parameters be as close as possible and provide similar tyre wear in both road cases. The identified tyre design is then adopted, and the optimum suspension design is sought for the two road cases for both passive and semi-active suspension types.
From the results, significant conclusions are extracted regarding how tyre wear behaves with respect to passenger comfort and vehicle handling, while the results illustrate where the optimum suspension and tyre parameters have converged in trying to compromise among the above objectives under different road types, and how passive and semi-active suspension types could better balance all of them.
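The step of selecting a compromise between the Pareto fronts of the two road cases can be sketched as below; the distance measure and weights are illustrative assumptions, not the paper's exact criterion. The idea is to pick one design from each front so that the tyre parameters are closest and the predicted wear differs least.

```python
def compromise_design(pareto_a, pareto_b, w_param=1.0, w_wear=1.0):
    """Pick one design from each road case's Pareto front such that the
    tyre parameters are close and the tyre wear levels are similar.

    Each front is a list of (parameter_vector, wear) pairs; the cost is
    a weighted sum of Euclidean parameter distance and wear difference.
    """
    best_pair, best_cost = None, float("inf")
    for params_a, wear_a in pareto_a:
        for params_b, wear_b in pareto_b:
            d_param = sum((x - y) ** 2
                          for x, y in zip(params_a, params_b)) ** 0.5
            cost = w_param * d_param + w_wear * abs(wear_a - wear_b)
            if cost < best_cost:
                best_pair, best_cost = (params_a, params_b), cost
    return best_pair
```

Varying the weights w_param and w_wear traces out different compromises, which is one way to expose the trade-offs between objectives that the optimisation is meant to highlight.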


Author(s):  
Ruiyang Song ◽  
Kuang Xu

We propose and analyze a temporal concatenation heuristic for solving large-scale finite-horizon Markov decision processes (MDP), which divides the MDP into smaller sub-problems along the time horizon and generates an overall solution by simply concatenating the optimal solutions from these sub-problems. As a “black box” architecture, temporal concatenation works with a wide range of existing MDP algorithms. Our main results characterize the regret of temporal concatenation compared to the optimal solution. We provide upper bounds for general MDP instances, as well as a family of MDP instances in which the upper bounds are shown to be tight. Together, our results demonstrate temporal concatenation's potential of substantial speed-up at the expense of some performance degradation.
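A minimal sketch of the heuristic (our own illustrative tabular implementation, not the paper's code): solve each sub-horizon exactly by backward induction, here with a zero terminal value for every sub-problem, then concatenate the per-block policies over the full horizon.

```python
def backward_induction(P, R, horizon, terminal):
    """Solve a finite-horizon tabular MDP exactly.

    P[a][s][t] : probability of moving s -> t under action a
    R[a][s]    : immediate reward for action a in state s
    Returns (policy, value): policy[t][s] is the action taken at time t.
    """
    n_states = len(terminal)
    V = list(terminal)
    policy = []
    for _ in range(horizon):
        new_V, pi_t = [], []
        for s in range(n_states):
            q = [R[a][s] + sum(P[a][s][t] * V[t] for t in range(n_states))
                 for a in range(len(P))]
            best_a = max(range(len(P)), key=q.__getitem__)
            pi_t.append(best_a)
            new_V.append(q[best_a])
        V = new_V
        policy.append(pi_t)
    policy.reverse()          # policy[0] is now the time-0 decision rule
    return policy, V

def temporal_concatenation(P, R, horizon, n_blocks):
    """Split the horizon into n_blocks sub-MDPs (zero terminal value each),
    solve them independently, and concatenate the optimal sub-policies."""
    n_states = len(P[0])
    block = horizon // n_blocks
    policy = []
    for _ in range(n_blocks):
        sub_policy, _ = backward_induction(P, R, block, [0.0] * n_states)
        policy.extend(sub_policy)
    return policy
```

Because each sub-problem ignores everything beyond its own block, the concatenated policy can be suboptimal near block boundaries; the paper's regret bounds quantify exactly this degradation, and the sub-problems can be solved in parallel, which is the source of the speed-up.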

