Optimal Computer Crash Performance Precaution

2012
Vol. 14 no. 1 (Distributed Computing and...)
Author(s):
Efraim Laksman
Håkan Lennerstad
Lars Lundberg

For a parallel computer system with m identical computers, we study optimal performance precaution for one possible computer crash. We want to calculate the cost of crash precaution in the case of no crash. We thus define a tolerance level r, meaning that we only tolerate that the completion time of a parallel program after a crash is at most a factor r + 1 larger than if we use optimal allocation on m - 1 computers. This is an r-dependent restriction of the set of allocations of a program. Then, what is the worst-case ratio between the optimal r-dependent completion time in the case of no crash and the unrestricted optimal completion time of the same parallel program? We denote this maximal ratio of completion times by f(r, m), i.e., the ratio for worst-case programs. In the paper we establish upper and lower bounds on the worst-case cost function f(r, m) and characterize worst-case programs.
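To make the quantities concrete, the definitions can be written out as follows; the notation (T_m, A_r, OPT) is introduced here for this summary and is not taken from the paper.

```latex
% Notation ours: T_m(A,P) is the no-crash completion time of program P
% under allocation A on m computers, T^{(i)}(A,P) its completion time
% after computer i crashes, and OPT_{m-1}(P) the optimal completion time
% of P on m-1 computers. The r-tolerant allocations are
%   A_r(P) = { A : T^{(i)}(A,P) <= (r+1) * OPT_{m-1}(P) for all i },
% and the worst-case cost of crash precaution is
\[
  f(r,m) \;=\; \max_{P}\;
  \frac{\min_{A \in \mathcal{A}_r(P)} T_m(A,P)}{\min_{A} T_m(A,P)} .
\]
```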

2010
DMTCS Proceedings vol. AL,... (Proceedings)
Author(s):
Martin Kutrib
Jonas Lefèvre
Andreas Malcher

We investigate the descriptional complexity of basic operations on real-time one-way cellular automata with an unbounded as well as a fixed number of cells. The size of the automata is measured by their number of states. Most of the bounds shown are tight in the order of magnitude, that is, the sizes resulting from the effective constructions given are optimal with respect to worst-case complexity. Conversely, these bounds also show the maximal savings in size that can be achieved when a given minimal real-time OCA is decomposed into smaller ones with respect to a given operation. From this point of view, the natural question of whether a decomposition can be found algorithmically is studied. It turns out that all decomposition problems considered are algorithmically unsolvable. Therefore, a very restricted cellular model is studied in the second part of the paper, namely, real-time one-way cellular automata with a fixed number of cells. These devices are known to capture the regular languages and, thus, all the problems that are undecidable for general one-way cellular automata become decidable. It is shown that these decision problems are $\textsf{NLOGSPACE}$-complete and thus share the attractive computational complexity of deterministic finite automata. Furthermore, the state complexity of basic operations for these devices is studied and upper and lower bounds are given.
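For readers unfamiliar with the model, here is a minimal sketch (ours, not the paper's) of one synchronous step of a one-way cellular automaton, in which each cell sees only its own state and that of its right neighbor; the toy transition function is illustrative only.

```python
def oca_step(cells, delta, boundary="#"):
    """One synchronous step of a one-way cellular automaton: each cell's
    next state depends on its own state and its right neighbor's; the
    rightmost cell sees the boundary symbol instead."""
    return [
        delta(cells[i], cells[i + 1] if i + 1 < len(cells) else boundary)
        for i in range(len(cells))
    ]

# Toy transition: a cell turns 'x' once it or its right neighbor is 'x',
# so an 'x' signal travels leftward one cell per step.
delta = lambda own, right: "x" if "x" in (own, right) else own
config = list("aaaaax")
for _ in range(5):
    config = oca_step(config, delta)
print("".join(config))  # -> "xxxxxx"
```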


2019
Vol 29 (01)
pp. 49-72
Author(s):  
Mark de Berg
Tim Leijsen
Aleksandar Markovic
André van Renssen
Marcel Roeloffzen
...

We introduce the fully-dynamic conflict-free coloring problem for a set S of intervals in ℝ¹ with respect to points, where the goal is to maintain a conflict-free coloring for S under insertions and deletions. A coloring is conflict-free if each point p contained in some interval is contained in an interval whose color is not shared with any other interval containing p. We investigate trade-offs between the number of colors used and the number of intervals that are recolored upon insertion or deletion of an interval. Our results include: a lower bound on the number of recolorings as a function of the number of colors, which implies that with O(1) recolorings per update the worst-case number of colors is Ω(log n/log log n), and that any strategy using O(1/ε) colors needs Ω(εn^ε) recolorings; a coloring strategy that uses O(log n) colors at the cost of O(log n) recolorings, and another strategy that uses O(1/ε) colors at the cost of O(n^ε/ε) recolorings; stronger upper and lower bounds for special cases. We also consider the kinetic setting where the intervals move continuously (but there are no insertions or deletions); here we show how to maintain a coloring with only four colors at the cost of three recolorings per event and show this is tight.
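The conflict-free condition is easy to check by brute force on small instances. The sketch below (our own naming) probes all interval endpoints plus a midpoint between each pair of consecutive endpoints, which suffices in one dimension because the set of covering intervals is constant strictly between consecutive endpoints.

```python
from collections import Counter

def is_conflict_free(intervals, color):
    """intervals: list of (lo, hi); color: dict from interval index to color.

    A coloring is conflict-free if every point covered by some interval is
    covered by an interval whose color is unique among the intervals
    containing that point.
    """
    xs = sorted({x for lo, hi in intervals for x in (lo, hi)})
    probes = xs + [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    for p in probes:
        stabbed = [color[i] for i, (lo, hi) in enumerate(intervals)
                   if lo <= p <= hi]
        if stabbed and min(Counter(stabbed).values()) > 1:
            return False  # some covered point has no uniquely colored interval
    return True

print(is_conflict_free([(0, 2), (1, 3)], {0: "red", 1: "red"}))   # False
print(is_conflict_free([(0, 2), (1, 3)], {0: "red", 1: "blue"}))  # True
```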


2020
Vol 34 (02)
pp. 1766-1773
Author(s):  
Alessandro Aloisio
Michele Flammini
Cosimo Vinci

We consider a class of coalition formation games that can be succinctly represented by means of hypergraphs and that properly generalizes symmetric additively separable hedonic games. More precisely, an instance of a hypergraph hedonic game consists of a weighted hypergraph in which each agent is associated with a distinct node, and her utility for being in a given coalition is equal to the sum of the weights of all the hyperedges included in the coalition. We study the performance of stable outcomes in such games, investigating the degradation of their social welfare under two different metrics, the k-Nash price of anarchy and the k-core price of anarchy, where k is the maximum size of a deviating coalition. These prices are defined as the worst-case ratio between the optimal social welfare and the social welfare obtained when the agents reach an outcome satisfying the respective stability criterion. We provide asymptotically tight upper and lower bounds on the values of these metrics for several classes of hypergraph hedonic games, parametrized by the integer k, the hypergraph arity r and the number of agents n. Furthermore, we show that the problem of computing the exact value of such prices for a given instance is computationally hard, even in the case of non-negative hyperedge weights.
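The utility definition translates directly into code. A small sketch under our own representation (hyperedges as weighted frozensets) follows; taking social welfare as the sum of the agents' utilities is an assumption of this sketch, not a statement from the paper.

```python
def utility(coalition, hyperedges):
    """Per the abstract, an agent's utility for being in `coalition` is the
    sum of the weights of all hyperedges included in the coalition.
    hyperedges: dict mapping frozenset-of-agents -> weight."""
    return sum(w for e, w in hyperedges.items() if e <= coalition)

def social_welfare(partition, hyperedges):
    """Assumed welfare notion for this sketch: sum of all agents' utilities;
    every member of a coalition gets the same utility here."""
    return sum(len(c) * utility(c, hyperedges) for c in partition)

edges = {frozenset({1, 2}): 3, frozenset({1, 2, 3}): -1, frozenset({2, 3}): 2}
print(utility({1, 2, 3}, edges))             # 3 - 1 + 2 = 4
print(social_welfare([{1, 2}, {3}], edges))  # agents 1,2 get 3 each; agent 3 gets 0
```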


2001
Vol. 4 no. 2
Author(s):  
Nir Naaman
Raphaël Rom

We investigate a scheduling problem in which packets, or datagrams, may be fragmented. While there are a few applications of scheduling with datagram fragmentation, our model of the problem is derived from a scheduling problem present in data over CATV networks. In the scheduling problem, datagrams of variable lengths must be assigned (packed) into fixed-length time slots. One of the capabilities of the system is the ability to break a datagram into several fragments. When a datagram is fragmented, extra bits are added to the original datagram to enable the reassembly of all the fragments. We convert the scheduling problem into the problem of bin packing with item fragmentation, which we define in the following way: we are asked to pack a list of items into a minimum number of unit-capacity bins. Each item may be fragmented, in which case overhead units are added to the size of every fragment. The cost associated with fragmentation renders the problem NP-hard; therefore, an approximation algorithm is needed. We define a version of the well-known Next-Fit algorithm capable of fragmenting items and investigate its performance. We present both worst-case and average-case results and compare them to the case where fragmentation is not allowed.
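A minimal sketch of a Next-Fit variant with item fragmentation, under assumed unit bins and a fixed per-fragment overhead; this illustrates the model, not the paper's exact algorithm.

```python
def next_fit_fragment(items, capacity=1.0, overhead=0.05):
    """Next-Fit with item fragmentation (a sketch). An item that fits whole
    is packed without overhead; an item that does not fit is split, and
    every fragment is enlarged by `overhead` units, modeling the extra
    bits needed for reassembly."""
    bins, free = [[]], capacity
    for size in items:
        if size <= free:                      # fits whole: no fragmentation
            bins[-1].append(size)
            free -= size
            continue
        remaining = size
        while remaining > 0:
            if free <= overhead:              # no room for a useful fragment
                bins.append([])
                free = capacity
            frag = min(remaining, free - overhead)
            bins[-1].append(frag + overhead)  # each fragment carries overhead
            free -= frag + overhead
            remaining -= frag
    return bins

for b in next_fit_fragment([0.6, 0.7, 0.4, 0.5]):
    print(b, sum(b))
```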


2020
Vol 34 (02)
pp. 2079-2086
Author(s):  
David Kempe

Distortion-based analysis has established itself as a fruitful framework for comparing voting mechanisms. In this framework, m voters and n candidates are jointly embedded in an (unknown) metric space, and the voters submit rankings of candidates by non-decreasing distance from themselves. Based on the submitted rankings, the social choice rule chooses a winning candidate; the quality of the winner is the sum of the (unknown) distances to the voters. The rule's choice will in general be suboptimal, and the worst-case ratio between the cost of its chosen candidate and that of the optimal candidate is called the rule's distortion. It was shown in prior work that every deterministic rule has distortion at least 3, while the Copeland rule and related rules guarantee distortion at most 5; a very recent result gave a rule with distortion 2 + √5 ≈ 4.236. We provide a framework based on LP-duality and flow interpretations of the dual which gives a simpler and more unified way of proving upper bounds on the distortion of social choice rules. We illustrate the utility of this approach with three examples. First, we show that the Ranked Pairs and Schulze rules have distortion Θ(√n). Second, we give a fairly simple proof of a strong generalization of the upper bound of 5 on the distortion of Copeland, to social choice rules with short paths from the winning candidate to the optimal candidate in generalized weak preference graphs. A special case of this result recovers the recent 2 + √5 guarantee. Finally, our framework naturally suggests a combinatorial rule that is a strong candidate for achieving distortion 3, which had also been proposed in recent work. We prove that the distortion bound of 3 would follow from any of three combinatorial conjectures we formulate.
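The distortion definition can be evaluated directly on a toy instance where the metric is known. The sketch below (ours, unrelated to the paper's LP machinery) does so for a plurality-style rule on the real line.

```python
from collections import Counter

def social_cost(candidate, voters, dist):
    """Sum of metric distances from all voters to `candidate`."""
    return sum(dist(v, candidate) for v in voters)

def distortion(winner, voters, candidates, dist):
    """Cost of the chosen candidate relative to the optimal candidate."""
    best = min(social_cost(c, voters, dist) for c in candidates)
    return social_cost(winner, voters, dist) / best

# Toy instance: everything lives on the real line with |.| as the metric.
voters, candidates = [0.0, 0.1, 0.9], [0.0, 1.0]
dist = lambda a, b: abs(a - b)

# A plurality-style rule: pick the candidate ranked first by most voters.
first_choices = Counter(min(candidates, key=lambda c: dist(v, c))
                        for v in voters)
winner = first_choices.most_common(1)[0][0]
print(distortion(winner, voters, candidates, dist))  # 1.0 on this instance
```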


2016
Vol 27 (07)
pp. 809-827
Author(s):  
David Caissy
Andrzej Pelc

We consider the problem of exploration of networks, some of whose edges are faulty. A mobile agent, situated at a starting node and unaware of which edges are faulty, has to explore the connected fault-free component of this node by visiting all of its nodes. The cost of the exploration is the number of edge traversals. For a given network and given starting node, the overhead of an exploration algorithm is the worst-case ratio (taken over all fault configurations) of its cost to the cost of an optimal algorithm which knows where the faults are situated. An exploration algorithm, for a given network and given starting node, is called perfectly competitive if its overhead is the smallest among all exploration algorithms not knowing the location of the faults. We design a perfectly competitive exploration algorithm for any ring, and show that, for networks modeled by Hamiltonian graphs, the overhead of any DFS exploration is at most 10/9 times larger than that of a perfectly competitive algorithm. Moreover, for Hamiltonian graphs of size at least 24, this overhead is less than 6% larger than that of a perfectly competitive algorithm.
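In symbols, and in our notation rather than the paper's:

```latex
% For a network G, starting node v and fault configuration F, let
% cost_ALG(G,v,F) be the number of edge traversals of algorithm ALG, and
% cost_OPT(G,v,F) that of an optimal algorithm that knows F in advance.
\[
  \mathrm{ovh}(\mathrm{ALG}) \;=\;
  \max_{F}\,
  \frac{\mathrm{cost}_{\mathrm{ALG}}(G,v,F)}{\mathrm{cost}_{\mathrm{OPT}}(G,v,F)},
\]
% and ALG is perfectly competitive if ovh(ALG) is minimal among all
% exploration algorithms that do not know the location of the faults.
```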


2013
Vol. 15 no. 2 (Graphs and Algorithms)
Author(s):  
Piotr Borowiecki
Dariusz Dereniowski

A vertex ranking of a graph G is an assignment of positive integers (colors) to the vertices of G such that each path connecting two vertices of the same color contains a vertex of a higher color. Our main goal is to find a vertex ranking using as few colors as possible. Considering on-line algorithms for vertex ranking of split graphs, we prove that the worst-case ratio between the number of colors used by any on-line ranking algorithm and the number of colors used in an optimal off-line solution may be arbitrarily large. This negative result motivates us to investigate semi on-line algorithms, where a split graph is presented on-line but its clique number is given in advance. We prove that there does not exist a (2-ε)-competitive semi on-line algorithm of this type. Finally, a 2-competitive semi on-line algorithm is given.
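A brute-force validity check makes the ranking condition concrete. The sketch below uses the standard equivalent characterization: for each color c, every connected component of the subgraph induced by the vertices of color at most c contains at most one vertex of color c. Names and representation are ours.

```python
def is_valid_ranking(adj, rank):
    """Check a vertex ranking: every path between two vertices of the same
    color must contain a vertex of a higher color."""
    for c in set(rank.values()):
        allowed = {v for v in adj if rank[v] <= c}
        seen = set()
        for s in allowed:
            if s in seen:
                continue
            stack, component = [s], []
            seen.add(s)
            while stack:                    # DFS over the induced subgraph
                u = stack.pop()
                component.append(u)
                for w in adj[u]:
                    if w in allowed and w not in seen:
                        seen.add(w)
                        stack.append(w)
            if sum(1 for v in component if rank[v] == c) > 1:
                return False
    return True

path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}            # the path 1-2-3-4
print(is_valid_ranking(path, {1: 1, 2: 2, 3: 1, 4: 3}))  # True
print(is_valid_ranking(path, {1: 1, 2: 1, 3: 2, 4: 1}))  # False: path 1-2
```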


2020
Vol 34 (02)
pp. 1894-1901
Author(s):  
Xujin Chen
Minming Li
Chenhao Wang

We study single-candidate voting embedded in a metric space, where both voters and candidates are points in the space, and the distances between voters and candidates specify the voters' preferences over candidates. In the voting, each voter is asked to submit her favorite candidate. Given the collection of favorite candidates, a mechanism for eliminating the least popular candidate finds a committee containing all candidates but the one to be eliminated. Each committee is associated with a social value that is the sum of the costs (utilities) it imposes on (provides to) the voters. We design mechanisms for finding a committee that optimizes the social value. We measure the quality of a mechanism by its distortion, defined as the worst-case ratio between the social value of the committee found by the mechanism and that of the optimal one. We establish new upper and lower bounds on the distortion of mechanisms in this single-candidate voting, for both general metrics and well-motivated special cases.
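As a toy illustration of the setting (ours, not one of the paper's mechanisms): eliminate the candidate with the fewest first-place votes and compare the resulting committee's social value with the optimum. Taking each voter's cost for a committee as her distance to the nearest member is an assumption of this sketch.

```python
from collections import Counter

def committee_cost(committee, voters, dist):
    """Assumed cost model: each voter pays her distance to the nearest
    committee member; the social value is the total cost."""
    return sum(min(dist(v, c) for c in committee) for v in voters)

def eliminate_least_popular(voters, candidates, dist):
    """Drop the candidate with the fewest first-place votes."""
    votes = Counter(min(candidates, key=lambda c: dist(v, c)) for v in voters)
    loser = min(candidates, key=lambda c: votes[c])
    return [c for c in candidates if c != loser]

voters, candidates = [0.0, 0.2, 1.0], [0.0, 0.5, 1.0]
dist = lambda a, b: abs(a - b)
chosen = eliminate_least_popular(voters, candidates, dist)
best = min(committee_cost([c for c in candidates if c != x], voters, dist)
           for x in candidates)
print(committee_cost(chosen, voters, dist) / best)  # distortion: 1.0 here
```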


Biometrika
2021
Author(s):  
Lorenzo Masoero
Federico Camerlenghi
Stefano Favaro
Tamara Broderick

While the cost of sequencing genomes has decreased dramatically in recent years, this expense often remains non-trivial. Under a fixed budget, scientists face a natural trade-off between quantity and quality: spending resources to sequence a greater number of genomes or spending resources to sequence genomes with increased accuracy. Our goal is to find the optimal allocation of resources between quantity and quality. Optimizing resource allocation promises to reveal as many new variations in the genome as possible. In this paper, we introduce a Bayesian nonparametric methodology to predict the number of new variants in a follow-up study based on a pilot study. When experimental conditions are kept constant between the pilot and follow-up, we find that our prediction is competitive with the best existing methods. Unlike current methods, though, our new method allows practitioners to change experimental conditions between the pilot and the follow-up. We demonstrate how this distinction allows our method to be used for more realistic predictions and for optimal allocation of a fixed budget between quality and quantity.


2014
Vol 2014
pp. 1-11
Author(s):  
Wei Zhou
Zilong Tan
Shaowen Yao
Shipu Wang

Resource location in structured P2P systems has a critical influence on system performance. Existing analytical studies of the Chord protocol have shown some potential for performance improvements. In this paper, a new splay-tree-based Chord structure called SChord is proposed to improve the efficiency of locating resources. We consider a novel implementation of the Chord finger table (routing table) based on the splay tree; this approach extends the Chord finger table with additional routing entries. An adaptive routing algorithm is proposed for the implementation, and it can be shown that the hop count is significantly reduced without introducing any other protocol overhead. We analyze the hop count of the adaptive routing algorithm, as compared to Chord variants, and demonstrate sharp upper and lower bounds for both worst-case and average-case settings. In addition, we theoretically analyze the hop reduction in SChord and show that SChord can significantly reduce the number of routing hops as compared to Chord. Several simulations are presented to evaluate the performance of the algorithm and to support our analytical findings. The simulation results show the efficiency of SChord.
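For context, the sketch below counts hops of standard greedy Chord routing, which forwards to the closest finger preceding the key; it illustrates the baseline that SChord's splay-tree fingers improve on, not SChord itself.

```python
def chord_hops(nodes, start, key, m=6):
    """Greedy lookup on a plain Chord ring with 2**m identifiers: hop to the
    closest preceding finger of `key` until the key's successor is reached.
    Returns the number of routing hops."""
    size = 1 << m
    nodes = sorted(set(nodes))

    def successor(x):
        x %= size
        return next((n for n in nodes if n >= x), nodes[0])

    def strictly_between(a, b, x):      # ring interval (a, b), exclusive
        return (a < x < b) if a < b else (x > a or x < b)

    target, cur, hops = successor(key), start, 0
    while cur != target:
        fingers = [successor(cur + (1 << i)) for i in range(m)]
        preceding = [f for f in fingers if strictly_between(cur, key, f)]
        # closest preceding finger, or fall back to the immediate successor
        cur = (max(preceding, key=lambda f: (f - cur) % size)
               if preceding else successor(cur + 1))
        hops += 1
    return hops

ring = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
print(chord_hops(ring, start=8, key=54))  # 3 hops on this toy ring
```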

