Benchmark of Bitrate Adaptation in Video Streaming

2021 ◽  
Vol 13 (4) ◽  
pp. 1-24
Author(s):  
Jessica Chen ◽  
Henry Milner ◽  
Ion Stoica ◽  
Jibin Zhan

The HTTP adaptive streaming technique copes with fluctuating network conditions during the streaming process by dynamically adjusting the bitrate, and hence the size, of the future chunks to be downloaded. This bitrate selection inevitably involves predicting the future throughput of a video session, and various heuristic solutions to this prediction task have been explored. The goal of the present work is to establish theoretical upper bounds on the QoE that any ABR (adaptive bitrate) algorithm can possibly reach, an essential step toward benchmarking the performance of ABR algorithms. In our setting, the QoE is defined as a linear combination of the average perceptual quality and the buffering ratio. The optimization problem is proven to be NP-hard when the perceptual quality is defined by chunk size, and conditions are given under which the problem becomes polynomially solvable. A pseudo-polynomial time algorithm based on dynamic programming, strengthened by a global lower bound, is presented. When minimum buffering is given priority over higher perceptual quality, the problem is shown to remain NP-hard; for this case the above algorithm is simplified and enhanced by a sequence of lower bounds on the completion time of chunk downloads, which, in our experiments, reduces computation time by 36.0%. To handle large inputs more efficiently, a polynomial-time algorithm is also introduced to approximate the optimal values when minimum buffering is prioritized. Beyond its worst-case guarantee, this algorithm reaches 99.938% of the optimal values while taking only 0.024% of the computation time of the exact dynamic-programming algorithm.
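
As a concrete illustration of the objective above, the sketch below computes a QoE score as a linear combination of average perceptual quality and buffering ratio. The weight lam and the sample values are illustrative assumptions, not the paper's calibrated parameters.

```python
# Minimal sketch of the QoE objective: a linear combination of
# average perceptual quality and buffering ratio. The weight `lam`
# and the per-chunk quality values are illustrative assumptions.

def qoe(chunk_qualities, rebuffer_time, session_time, lam=1.0):
    """QoE = average perceptual quality - lam * buffering ratio."""
    avg_quality = sum(chunk_qualities) / len(chunk_qualities)
    buffering_ratio = rebuffer_time / session_time
    return avg_quality - lam * buffering_ratio

# Example: 4 chunks with qualities on an arbitrary scale, 2 s of
# rebuffering in a 60 s session.
print(qoe([3.0, 4.0, 4.0, 5.0], rebuffer_time=2.0, session_time=60.0))
```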

10.29007/v68w ◽  
2018 ◽  
Author(s):  
Ying Zhu ◽  
Mirek Truszczynski

We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of preferences under which the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that learning a ranking that maximizes the number of correctly decided examples (again under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.
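
For readers unfamiliar with positional scoring rules, here is a minimal sketch of one such rule (Borda), assuming a toy profile; the paper's learning task additionally assigns importance weights to the individual preferences.

```python
# A hedged illustration of a positional scoring rule (Borda here).
# The profile and score vector are toy data.

from collections import defaultdict

def positional_scores(profile, score_vector):
    """Sum position scores for each alternative across all rankings."""
    totals = defaultdict(float)
    for ranking in profile:
        for position, alternative in enumerate(ranking):
            totals[alternative] += score_vector[position]
    return dict(totals)

# Three voters ranking alternatives a, b, c; Borda scores (2, 1, 0).
profile = [("a", "b", "c"), ("b", "a", "c"), ("a", "c", "b")]
print(positional_scores(profile, score_vector=(2, 1, 0)))
```

In the weighted-profile variant studied above, each voter's contribution would additionally be multiplied by a learned weight.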


2012 ◽  
Vol 2012 ◽  
pp. 1-10
Author(s):  
Romeo Rizzi ◽  
Luca Nardin

The Interactive Knapsacks Heuristic Optimization (IKHO) problem is a knapsack model in which, given an array of knapsacks, every insertion into a knapsack also affects the other knapsacks in terms of weight and profit. The IKHO model was introduced by Isto Aho to model instances of the load clipping problem. The IKHO problem is known to be APX-hard and, motivated by this negative result, Aho exhibited a few classes of polynomial instances for the IKHO problem. These instances were obtained by limiting the ranges of two structural parameters, c and u, which describe the extent to which an insertion into a knapsack influences the nearby knapsacks. We identify a new and broad class of instances allowing for a polynomial-time algorithm. More precisely, we show that the restriction of IKHO to instances where is bounded by a constant can be solved in polynomial time, using dynamic programming.
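
For contrast with the interactive model, the following is a minimal sketch of the classical 0/1 knapsack dynamic program; in IKHO, each insertion would additionally perturb the weights and profits of nearby knapsacks, which is what the parameters c and u control.

```python
# Classical 0/1 knapsack DP, shown as the non-interactive baseline.

def knapsack(items, capacity):
    """items: list of (weight, profit). Returns the maximum profit."""
    best = [0] * (capacity + 1)
    for weight, profit in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + profit)
    return best[capacity]

print(knapsack([(3, 4), (2, 3), (4, 5)], capacity=5))  # -> 7
```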


Author(s):  
Yangjun Chen ◽  
Dunren Che

In this paper, we present a polynomial-time algorithm for TPQ (tree pattern query) minimization without XML constraints involved. The main idea of the algorithm is a dynamic programming strategy to find all the matching subtrees within a TPQ. A matching subtree implies a redundancy and should be removed in a way that does not damage the semantics of the original TPQ. Our algorithm consists of two parts: one for subtree recognition and the other for subtree deletion. Both need only O(n²) time, where n is the number of nodes in the TPQ.
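
A hedged sketch of the matching-subtree idea: a branch of a tree pattern is redundant if a sibling branch with the same label asks for at least as much, recursively. The Node class and the child-axis-only simplification are illustrative assumptions; the paper's algorithm also handles the bookkeeping needed for safe deletion.

```python
# A child subtree s is redundant if a sibling t has the same label
# and every child of s is, recursively, covered by some child of t.
# Axes (child vs. descendant) are simplified away in this sketch.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def subsumes(t, s):
    """True if pattern subtree t covers everything s asks for."""
    if t.label != s.label:
        return False
    return all(any(subsumes(tc, sc) for tc in t.children)
               for sc in s.children)

# //a[b][b/c] : the plain b-branch is covered by the b/c branch.
q = Node("a", [Node("b"), Node("b", [Node("c")])])
print(subsumes(q.children[1], q.children[0]))  # -> True
```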


2014 ◽  
Vol 24 (03) ◽  
pp. 225-236 ◽  
Author(s):  
DAVID KIRKPATRICK ◽  
BOTING YANG ◽  
SANDRA ZILLES

Given an arrangement A of n sensors and two points s and t in the plane, the barrier resilience of A with respect to s and t is the minimum number of sensors whose removal permits a path from s to t such that the path does not intersect the coverage region of any sensor in A. When the surveillance domain is the entire plane and sensor coverage regions are unit line segments, even with restricted orientations, the problem of determining the barrier resilience is known to be NP-hard. On the other hand, if sensor coverage regions are arbitrary lines, the problem has a trivial linear time solution. In this paper, we study the case where each sensor coverage region is an arbitrary ray, and give an O(n²m) time algorithm for computing the barrier resilience when there are m ≥ 1 sensor intersections.
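
As a rough illustration of the quantity being computed, the sketch below counts the sensor regions hit by the straight s-t segment, a trivial upper bound on barrier resilience. It assumes disk-shaped sensors for simplicity, whereas the paper's coverage regions are rays.

```python
# Counting the sensor regions crossed by the straight s-t segment
# gives a trivial upper bound on barrier resilience.

import math

def seg_point_dist(a, b, p):
    """Euclidean distance from point p to segment ab."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def resilience_upper_bound(s, t, disks):
    """disks: list of ((cx, cy), r). Count disks the segment crosses."""
    return sum(seg_point_dist(s, t, c) <= r for c, r in disks)

print(resilience_upper_bound((0, 0), (10, 0),
                             [((5, 0.5), 1.0), ((5, 5), 1.0)]))  # -> 1
```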


2020 ◽  
Vol 34 (02) ◽  
pp. 2070-2078
Author(s):  
Yasushi Kawase ◽  
Hanna Sumita

We study the problem of fairly allocating a set of indivisible goods to risk-neutral agents in a stochastic setting. We propose an (approximation) algorithm to find a stochastic allocation that maximizes the minimum utility among the agents. The algorithm runs by repeatedly finding an (approximate) allocation that maximizes the total virtual utility of the agents. This implies that the problem is solvable in polynomial time when the utilities are gross-substitutes (a subclass of submodular). When the utilities are submodular, we can find a (1 − 1/e)-approximate solution, and this is best possible unless P=NP. We also extend the problem to the case where the stochastic allocation must satisfy (ex ante) envy-freeness. Under this condition, we demonstrate that the problem is NP-hard even when every agent has an additive utility with a matroid constraint (a subclass of gross-substitutes). Furthermore, we propose a polynomial-time algorithm for the setting in which the matroid constraint is common to all agents.
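
To make the maximin objective concrete, here is a brute-force sketch for a tiny instance with additive utilities and deterministic allocations; the paper's algorithm instead optimizes over stochastic allocations by repeatedly maximizing virtual utilities.

```python
# Brute-force maximin allocation for additive utilities on a tiny
# instance; exponential in the number of goods, for illustration only.

from itertools import product

def maximin_allocation(utilities, n_goods):
    """utilities[i][g] = agent i's value for good g (additive)."""
    n_agents = len(utilities)
    best_value, best_assignment = -1.0, None
    # Assign each good to one agent; enumerate all n_agents**n_goods ways.
    for assignment in product(range(n_agents), repeat=n_goods):
        totals = [0.0] * n_agents
        for good, agent in enumerate(assignment):
            totals[agent] += utilities[agent][good]
        if min(totals) > best_value:
            best_value, best_assignment = min(totals), assignment
    return best_value, best_assignment

print(maximin_allocation([[3, 1, 1], [1, 2, 2]], n_goods=3))
```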


2007 ◽  
Vol. 9 no. 1 (Graph and Algorithms) ◽  
Author(s):  
Jan Kára ◽  
Jan Kratochvil ◽  
David R. Wood

We consider the problem of finding a balanced ordering of the vertices of a graph. More precisely, we want to minimise the sum, taken over all vertices v, of the difference between the number of neighbours to the left and to the right of v. This problem, which has applications in graph drawing, was recently introduced by Biedl et al. [Discrete Applied Math. 148:27–48, 2005]. They proved that the problem is solvable in polynomial time for graphs with maximum degree three, but NP-hard for graphs with maximum degree six. One of our main results closes the gap between these results by proving NP-hardness for graphs with maximum degree four. Furthermore, we prove that the problem remains NP-hard for planar graphs with maximum degree four and for 5-regular graphs. On the other hand, we introduce a polynomial-time algorithm that determines whether there is a vertex ordering with total imbalance smaller than a fixed constant, and a polynomial-time algorithm that determines whether a given multigraph with even degrees has an 'almost balanced' ordering.
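
The objective is easy to state in code: the sketch below computes the imbalance of a given vertex ordering and brute-forces the optimum for a small graph, purely as an illustration of the definition.

```python
# Imbalance of an ordering: sum over vertices of
# |#neighbours to the left - #neighbours to the right|.

from itertools import permutations

def imbalance(order, edges):
    pos = {v: i for i, v in enumerate(order)}
    total = 0
    for v in order:
        left = sum(1 for u, w in edges if (u == v and pos[w] < pos[v])
                   or (w == v and pos[u] < pos[v]))
        right = sum(1 for u, w in edges if (u == v and pos[w] > pos[v])
                    or (w == v and pos[u] > pos[v]))
        total += abs(left - right)
    return total

def min_imbalance(vertices, edges):
    return min(imbalance(p, edges) for p in permutations(vertices))

# For a 4-cycle the optimum is 4: the first and last vertices of any
# ordering see all their neighbours on one side.
print(min_imbalance("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))
```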


1996 ◽  
Vol 07 (01) ◽  
pp. 23-41
Author(s):  
MARTIN FÜRER ◽  
WEBB MILLER

An alignment of k given sequences is a k-rowed matrix frequently used by molecular biologists to display correspondences between entries from each sequence. Under one approach, an alignment is represented by a matrix of 'x' and '-' characters, where each x in row r indicates the position of an entry of sequence r. It is sometimes efficient to store only the run-length encoding of each row of this bit-matrix. A natural class of commands for editing one such row into another consists of operations of the form: "Move the d dashes that begin at position i of row r to position j of that row," for relevant values of r, d, i and j. We show that the problem of determining a shortest sequence of such operations that converts one given alignment to another is NP-hard, and we give a polynomial-time algorithm that always comes within a factor of 5/4 of optimality. An application of these ideas to alignments of long DNA sequences is discussed.
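
A minimal sketch of the row representation involved: an alignment row over 'x' and '-' stored as its run-length encoding, the form on which the dash-moving edit operations act.

```python
# Run-length encoding of an alignment row.

from itertools import groupby

def run_length_encode(row):
    """'xx--x' -> [('x', 2), ('-', 2), ('x', 1)]"""
    return [(ch, len(list(run))) for ch, run in groupby(row)]

def run_length_decode(runs):
    return "".join(ch * n for ch, n in runs)

row = "xx---xx-x"
runs = run_length_encode(row)
print(runs)  # [('x', 2), ('-', 3), ('x', 2), ('-', 1), ('x', 1)]
assert run_length_decode(runs) == row
```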


2014 ◽  
Vol 575 ◽  
pp. 926-930
Author(s):  
Shu Xia Zhang ◽  
Yu Zhong Zhang

In this paper, we address the scheduling model with discretely compressible processing times, where processing any job at a compressed processing time incurs a corresponding compression cost. We consider the following problem: scheduling with discretely compressible processing times to minimize makespan on identical parallel machines, subject to a constraint on the total compression cost. Jobs may have simultaneous release times. We design a pseudo-polynomial time algorithm via dynamic programming, as well as an FPTAS.
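
A hedged sketch of the dynamic-programming idea for the single-machine special case, where the makespan equals the sum of processing times: each job offers a few (processing time, compression cost) options, and we minimize total time within a cost budget. The instance data are illustrative; the paper's algorithm handles identical parallel machines.

```python
# DP over jobs and compression budget for the single-machine case.

def min_total_time(jobs, budget):
    """jobs: list of option lists [(proc_time, cost), ...]."""
    INF = float("inf")
    # best[b] = least total processing time using compression cost <= b.
    best = [0.0] * (budget + 1)  # no jobs scheduled yet
    for options in jobs:
        new = [INF] * (budget + 1)
        for b in range(budget + 1):
            for p, c in options:
                if c <= b and best[b - c] + p < new[b]:
                    new[b] = best[b - c] + p
        best = new
    return best[budget]

# Two jobs, each compressible from 5 to 3 time units at cost 2;
# budget 2 allows compressing only one of them.
print(min_total_time([[(5, 0), (3, 2)], [(5, 0), (3, 2)]], budget=2))  # -> 8.0
```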


2019 ◽  
Vol 28 (1) ◽  
pp. 1-13
Author(s):  
Abra Brisbin ◽  
Manda Riehl ◽  
Noah Williams

Permutations are frequently used in solving the genome rearrangement problem, whose goal is to find the shortest sequence of mutations transforming one genome into another. We introduce the Deletion-Insertion (DI) model to capture small-scale mutations in species with linear chromosomes, such as humans. Applying one restriction to this model, we obtain the transposition model for genome rearrangement, which was shown to be NP-hard in [4]. We use combinatorial reasoning and permutation statistics to develop a polynomial-time algorithm that approximates the minimum number of transpositions required in the transposition model, and to analyze the sharpness of several bounds on transpositions between genomes.
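
As an example of a permutation statistic used in such bounds, the sketch below counts breakpoints; since a single transposition can remove at most three breakpoints, ceil(b(pi)/3) is a standard lower bound on the transposition distance.

```python
# Breakpoints of a permutation and the resulting lower bound on
# transposition distance.

from math import ceil

def breakpoints(perm):
    """Count positions where consecutive values are not adjacent."""
    extended = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(extended, extended[1:]) if b != a + 1)

def transposition_lower_bound(perm):
    return ceil(breakpoints(perm) / 3)

perm = [3, 1, 2, 4]
print(breakpoints(perm), transposition_lower_bound(perm))  # -> 3 1
```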

