Maintaining an EDCS in General Graphs: Simpler, Density-Sensitive and with Worst-Case Time Bounds

2022 · pp. 12-23
Author(s): Fabrizio Grandoni, Chris Schwiegelshohn, Shay Solomon, Amitai Uzrad
2002 · Vol 45 (2) · pp. 192-201
Author(s): Tomás Feder, Rajeev Motwani

Author(s): Tsvi Kopelowitz, Robert Krauthgamer, Ely Porat, Shay Solomon

2021
Author(s): Jinghao Sun, Nan Guan, Zhishan Guo, Yekai Xue, Jing He, ...

2018 · Vol 28 (03) · pp. 289-307
Author(s): Sándor P. Fekete, Phillip Keldenich

A conflict-free k-coloring of a graph G assigns one of k different colors to some of the vertices such that, for every vertex v, there is a color that is assigned to exactly one vertex among v and v's neighbors. Such colorings have applications in wireless networking, robotics, and geometry, and are well studied in graph theory. Here we study the conflict-free coloring of geometric intersection graphs. We demonstrate that the intersection graph of n geometric objects without fatness properties and size restrictions may have conflict-free chromatic number in Ω(log n / log log n), and in Ω(√(log n)) for disks or squares of different sizes; it is known for general graphs that the worst case is in Θ(log² n). For unit-disk intersection graphs, we prove that it is NP-complete to decide the existence of a conflict-free coloring with one color; we also show that six colors always suffice, using an algorithm that colors unit disk graphs of restricted height with two colors. We conjecture that four colors are sufficient, which we prove for unit squares instead of unit disks. For interval graphs, we establish a tight worst-case bound of two.
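To make the definition concrete, here is a minimal Python sketch (our illustration, not from the paper) that checks whether a partial coloring of a graph is conflict-free; the adjacency-map representation and the function name are assumptions made for the example.

```python
def is_conflict_free(adj, color):
    """Check the conflict-free property: every vertex v must see some color
    exactly once in its closed neighborhood, i.e. {v} plus v's neighbors.

    adj:   dict mapping each vertex to the set of its neighbors
    color: dict mapping colored vertices to colors (uncolored vertices absent)
    """
    for v in adj:
        closed = {v} | adj[v]
        counts = {}
        for u in closed:
            if u in color:
                counts[color[u]] = counts.get(color[u], 0) + 1
        # v needs at least one color occurring exactly once around it
        if 1 not in counts.values():
            return False
    return True

# Example: on a path a-b-c, coloring only b with one color is conflict-free.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_conflict_free(adj, {"b": 1}))  # True
```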


2011 · Vol 22 (04) · pp. 945-969
Author(s): Gonzalo Navarro, Rodrigo Paredes, Patricio V. Poblete, Peter Sanders

The Quickheap (QH) is a recent data structure for implementing priority queues which has proved to be simple and efficient in practice. It has also been shown to offer logarithmic expected amortized complexity for all of its operations. Yet, this complexity holds only when the keys inserted and deleted are uniformly distributed over the current set of keys. This assumption is in many cases difficult to verify, and it does not hold in some important applications, such as implementing certain minimum spanning tree algorithms with priority queues. In this paper we introduce an elegant model called a Leftmost Skeleton Tree (LST) that reveals the connection between QHs and randomized binary search trees, and allows us to define Randomized QHs. We prove that these offer logarithmic expected amortized complexity for all operations regardless of the input distribution. We also use LSTs in connection with α-balanced trees to achieve a practical α-Balanced QH that offers worst-case amortized logarithmic time bounds for all operations. Both variants are much more robust than the original QHs. We show experimentally that randomized QHs behave almost as efficiently as QHs on random inputs, and that they retain their good performance on inputs where that of QHs degrades.
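Quickheaps are built on incremental quicksort: a stack of pivot positions marks already-partitioned parts of an array, and partitioning work is done lazily, only near the minimum. The Python sketch below illustrates that core mechanism on a static array; it is a deliberately reduced illustration, not the full dynamic quickheap (no Insert, and none of the randomized or α-balanced machinery discussed in the paper).

```python
import random

def incremental_sort(a):
    """Yield the elements of list a in increasing order, partitioning lazily.
    A stack of pivot positions (top = smallest) records already-placed pivots,
    which is the mechanism underlying quickheap operations."""
    stack = [len(a)]                       # fictitious pivot past the end
    for i in range(len(a)):
        while stack[-1] != i:              # partition until a pivot lands at i
            lo, hi = i, stack[-1]
            p = random.randrange(lo, hi)   # random pivot guards expected bounds
            a[lo], a[p] = a[p], a[lo]
            pivot, j = a[lo], lo
            for k in range(lo + 1, hi):    # in-place partition of a[lo:hi]
                if a[k] < pivot:
                    j += 1
                    a[j], a[k] = a[k], a[j]
            a[lo], a[j] = a[j], a[lo]      # pivot now in its final position j
            stack.append(j)
        stack.pop()                        # a[i] is the next minimum
        yield a[i]

print(list(incremental_sort([5, 1, 4, 2, 3])))  # [1, 2, 3, 4, 5]
```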


1995 · Vol 2 (12)
Author(s): Gerth Stølting Brodal

We present priority queues that support the operations MakeQueue, FindMin, Insert and Meld in worst case time O(1) and Delete and DeleteMin in worst case time O(log n). They can be implemented on the pointer machine and require linear space. The time bounds are optimal for all implementations where Meld takes worst case time o(n). To our knowledge this is the first priority queue implementation that supports Meld in worst case constant time and DeleteMin in logarithmic time.
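Brodal's construction itself is intricate; as a simpler, self-contained illustration of a pointer-based meldable priority queue, here is a textbook leftist heap in Python. Note that this swapped-in structure melds in O(log n), not the worst-case O(1) achieved in the paper.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None
        self.npl = 1  # null-path length, kept larger on the left

def meld(a, b):
    """Merge two leftist heaps in O(log n) by walking the right spines."""
    if a is None: return b
    if b is None: return a
    if b.key < a.key:
        a, b = b, a                      # keep the smaller root on top
    a.right = meld(a.right, b)
    # restore the leftist invariant: left npl >= right npl
    if a.left is None or a.left.npl < a.right.npl:
        a.left, a.right = a.right, a.left
    a.npl = (a.right.npl + 1) if a.right else 1
    return a

def insert(h, key): return meld(h, Node(key))
def find_min(h): return h.key
def delete_min(h): return h.key, meld(h.left, h.right)

h = None
for x in [3, 1, 2]:
    h = insert(h, x)
m, h = delete_min(h)
print(m, find_min(h))  # 1 2
```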


2018 · Vol 7 (3.3) · pp. 252
Author(s): Mood Venkanna, Rameshwar Rao, P Chandra Sekhar

Industries such as automotive, aeronautics, rail, and manufacturing control rely on hard real-time systems for safety-critical applications. The embedded controllers in such systems are expected to complete their tasks, including task scheduling, reliably within fixed time bounds. Estimating upper bounds on execution times, commonly called Worst-Case Execution Times (WCETs), is an essential step in developing and validating hard real-time systems: the derived upper bounds must satisfy the timing constraints. However, obtaining tight upper bounds on program execution times is often infeasible. In the present work, the problem of selecting reconfigurable Custom Instructions (CIs) is addressed by optimizing the WCET of an application. The selection is formulated as a path-analysis problem and solved with Particle Swarm Optimization (PSO). The work demonstrates the effectiveness of WCET optimization on a reconfigurable processor, evaluating a composite multimedia application with a set of reconfigurable CIs across a range of hardware parameters.
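To illustrate the optimization step, the sketch below runs a generic binary particle swarm over a CI-selection bit vector against a stand-in cost function; the encoding, the PSO parameters, and the estimate_wcet stub are all assumptions made for illustration, not the paper's actual WCET model.

```python
import math
import random

def estimate_wcet(selection):
    # Stand-in for real WCET path analysis: illustrative numbers only.
    gains = [7, 3, 9, 2, 5]           # hypothetical cycles saved per CI
    areas = [4, 2, 5, 1, 3]           # hypothetical hardware cost per CI
    base, budget = 100, 8
    used = sum(a for a, s in zip(areas, selection) if s)
    if used > budget:
        return float("inf")           # infeasible: exceeds the area budget
    return base - sum(g for g, s in zip(gains, selection) if s)

def pso_select(n_cis=5, particles=20, iters=100):
    """Binary PSO: each particle is a 0/1 vector of selected CIs;
    velocities are sigmoid-thresholded per the standard binary PSO rule."""
    pos = [[random.randint(0, 1) for _ in range(n_cis)] for _ in range(particles)]
    vel = [[0.0] * n_cis for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [estimate_wcet(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(n_cis):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # sigmoid threshold turns the velocity into a bit probability
                pos[i][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            c = estimate_wcet(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

print(pso_select())  # e.g. ([0, 0, 1, 0, 1], 86) under the stand-in costs
```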


1996 · Vol 06 (03) · pp. 309-332
Author(s): Joseph S.B. Mitchell

We give a subquadratic (O(n^(3/2+ε)) time and O(n) space) algorithm for computing Euclidean shortest paths in the plane in the presence of polygonal obstacles; previous time bounds were at least quadratic in n in the worst case. The method avoids the use of visibility graphs, relying instead on the continuous Dijkstra paradigm. The output is a shortest path map (of size O(n)) with respect to a given source point, which allows shortest path length queries to be answered in time O(log n). The algorithm extends to the case of multiple source points, yielding a method to compute a Voronoi diagram with respect to the shortest path metric.
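For contrast, the quadratic baseline the paper avoids runs Dijkstra's algorithm over a precomputed visibility graph, whose Θ(n²) edges dominate the cost. Below is a minimal Python sketch of that baseline, assuming the visibility graph is already given as an adjacency map with Euclidean edge lengths.

```python
import heapq, math

def dijkstra(adj, src):
    """Shortest-path lengths from src over a visibility graph.
    adj: vertex -> list of (neighbor, euclidean_edge_length)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue                  # stale queue entry, skip it
        for v, w in adj.get(u, ()):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Tiny example: s -> a -> t around a hypothetical obstacle corner a.
adj = {"s": [("a", 5.0), ("t", 13.0)], "a": [("t", 5.0)], "t": []}
print(dijkstra(adj, "s")["t"])  # 10.0
```

The continuous Dijkstra paradigm instead propagates a wavefront directly across the plane, which is what enables the O(n^(3/2+ε)) time bound and the O(n)-size shortest path map.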


Author(s): Topi Talvitie, Teppo Niinimäki, Mikko Koivisto

We investigate almost uniform sampling from the set of linear extensions of a given partial order. The most efficient schemes stem from Markov chains whose mixing time bounds are polynomial, yet impractically large. We show that, on instances one encounters in practice, the actual mixing times can be much smaller than the worst-case bounds, and particularly so for a novel Markov chain we put forward. We circumvent the inherent hardness of estimating standard mixing times by introducing a refined notion, which admits estimation for moderate-size partial orders. Our empirical results suggest that the Markov chain approach to sample linear extensions can be made to scale well in practice, provided that the actual mixing times can be realized by instance-sensitive upper bounds or termination rules. Examples of the latter include existing perfect simulation algorithms, whose running times in our experiments follow the actual mixing times of certain chains, albeit with significant overhead.
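For reference, here is a minimal Python sketch of the classic adjacent-transposition chain (Karzanov–Khachiyan) on linear extensions, assuming the partial order is supplied as a transitively closed set of (before, after) pairs; this is the textbook chain, not the novel chain proposed in the paper, and the fixed step count stands in for a proper mixing-time bound or termination rule.

```python
import random

def initial_extension(items, prec):
    """Kahn-style topological sort: any linear extension works as a start."""
    remaining, order = set(items), []
    while remaining:
        x = next(v for v in remaining
                 if not any((u, v) in prec for u in remaining))
        order.append(x)
        remaining.remove(x)
    return order

def sample_extension(items, prec, steps=100_000):
    """Lazy adjacent-transposition walk over the set of linear extensions."""
    order = initial_extension(items, prec)
    n = len(order)
    if n < 2:
        return order
    for _ in range(steps):
        if random.random() < 0.5:      # laziness ensures aperiodicity
            continue
        i = random.randrange(n - 1)
        a, b = order[i], order[i + 1]
        if (a, b) not in prec:         # adjacent and incomparable: swap
            order[i], order[i + 1] = b, a
    return order

# Partial order: 1 before 3, 2 before 4 (already transitively closed).
print(sample_extension([1, 2, 3, 4], {(1, 3), (2, 4)}, steps=10_000))
```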

