Online Clique Clustering

Algorithmica ◽  
2019 ◽  
Vol 82 (4) ◽  
pp. 938-965
Author(s):  
Marek Chrobak ◽  
Christoph Dürr ◽  
Aleksander Fabijan ◽  
Bengt J. Nilsson

Abstract Clique clustering is the problem of partitioning the vertices of a graph into disjoint clusters, where each cluster forms a clique in the graph, while optimizing some objective function. In online clustering, the input graph is given one vertex at a time, and any vertices that have previously been clustered together are not allowed to be separated. The goal is to maintain a clustering with an objective value close to that of the optimal solution. For the variant where we want to maximize the number of edges in the clusters, we propose an online algorithm based on the doubling technique. It has an asymptotic competitive ratio of at most 15.646 and a strict competitive ratio of at most 22.641. We also show that no deterministic algorithm can have an asymptotic competitive ratio better than 6. For the variant where we want to minimize the number of edges between clusters, we show that the deterministic competitive ratio of the problem is n - ω(1), where n is the number of vertices in the graph.
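
As a concrete reading of the maximization objective (the number of edges inside clusters), here is a minimal sketch that only evaluates a given clique clustering; the graph representation and the function name intra_cluster_edges are illustrative choices, and this is not the doubling-based algorithm from the paper.

```python
from itertools import combinations

def intra_cluster_edges(edges, clusters):
    """Count edges whose endpoints lie in the same cluster.

    edges    -- iterable of 2-tuples (u, v), the graph's edge set
    clusters -- iterable of vertex sets; each should form a clique in the graph
    """
    edge_set = {frozenset(e) for e in edges}
    total = 0
    for cluster in clusters:
        for u, v in combinations(cluster, 2):
            if frozenset((u, v)) in edge_set:
                total += 1
    return total

# Toy example: a 4-cycle with one chord; the clustering {0,1,2} + {3}
# keeps 3 of the 5 edges inside clusters.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(intra_cluster_edges(edges, [{0, 1, 2}, {3}]))  # -> 3
```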

2018 ◽  
Vol 18 (04) ◽  
pp. 1850012
Author(s):  
YUPENG LI

In this paper, we study the problem of job dispatching and scheduling, where each job consists of a set of tasks and each task is processed by a set of machines simultaneously. We consider two important performance metrics: the average job completion time (JCT), and the number of deadline-aware jobs that meet their deadlines. The goal is to minimize the former and maximize the latter. We first propose OneJ to minimize the JCT when there is exactly one job in the system. Then, we propose an online algorithm called MultiJ, taking OneJ as a subroutine, to minimize the average JCT, and prove that it has a good competitive ratio. We then derive another online algorithm, QuickJ, to maximize the number of jobs that meet their deadlines. We show that QuickJ is competitive via a worst-case analysis. We also conjecture that the competitive ratio of QuickJ is the best that any deterministic algorithm can achieve. Finally, we shed light on several important merits of MultiJ and QuickJ, such as low coordination overhead, scalability, work conservation, and freedom from job starvation.
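
A minimal sketch of the two metrics discussed, under the assumption that a job completes when its last task finishes; the function job_metrics and its data layout are hypothetical and unrelated to OneJ, MultiJ, or QuickJ.

```python
def job_metrics(task_finish_times, deadlines=None):
    """task_finish_times: dict job_id -> list of task completion times.
    deadlines: optional dict job_id -> deadline.
    Returns (average JCT, number of jobs meeting their deadline)."""
    jct = {j: max(times) for j, times in task_finish_times.items()}
    avg_jct = sum(jct.values()) / len(jct)
    met = 0
    if deadlines:
        met = sum(1 for j, t in jct.items() if t <= deadlines.get(j, float("inf")))
    return avg_jct, met

# Two jobs: job A finishes at time 7, job B at time 4 -> average JCT 5.5;
# only B meets its deadline.
print(job_metrics({"A": [3, 7, 5], "B": [4, 2]}, {"A": 6, "B": 5}))  # (5.5, 1)
```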


Author(s):  
Susanne Albers ◽  
Jens Quedenfeld

Abstract Power consumption is the major cost factor in data centers. It can be reduced by dynamically right-sizing the data center according to the currently arriving jobs. If there is a long period with low load, servers can be powered down to save energy. For identical machines, the problem has already been solved optimally by [25] and [1]. In this paper, we study how a data center with heterogeneous servers can be dynamically right-sized to minimize the energy consumption. There are d different server types with various operating and switching costs. We present a deterministic online algorithm that achieves a competitive ratio of 2d, as well as a randomized version that is 1.58d-competitive. Furthermore, we show that there is no deterministic online algorithm that attains a competitive ratio smaller than 2d; hence our deterministic algorithm is optimal. In contrast to related problems like convex body chasing and convex function chasing [17, 30], we investigate the discrete setting where the number of active servers must be integral, so we obtain truly feasible solutions.
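
A minimal sketch of the cost model suggested by the abstract, assuming each time slot incurs per-type operating costs plus a switching cost for every server powered up (powering down is free, as is common in right-sizing formulations); the function schedule_cost and its parameters are illustrative, not the 2d-competitive algorithm itself.

```python
def schedule_cost(schedule, operating_cost, switching_cost):
    """schedule: list of tuples, schedule[t][i] = number of active servers of
    type i in time slot t (integers). operating_cost[i] is the cost of running
    one type-i server for one slot; switching_cost[i] is the cost of powering
    one type-i server up."""
    d = len(operating_cost)
    prev = (0,) * d                      # all servers start powered down
    total = 0.0
    for x in schedule:
        for i in range(d):
            total += operating_cost[i] * x[i]
            total += switching_cost[i] * max(0, x[i] - prev[i])
        prev = x
    return total

# Two server types over three slots.
print(schedule_cost([(1, 0), (2, 1), (0, 1)],
                    operating_cost=[1.0, 3.0],
                    switching_cost=[5.0, 8.0]))
```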


2014 ◽  
Vol 25 (05) ◽  
pp. 525-536 ◽  
Author(s):  
NING DING ◽  
YAN LAN ◽  
XIN CHEN ◽  
GYÖRGY DÓSA ◽  
HE GUO ◽  
...  

In this paper we study an online minimum makespan scheduling problem with a reordering buffer. We obtain the following results: (i) for m > 51 identical machines, we give a 1.5-competitive online algorithm with a buffer of size ⌈1.5m⌉; (ii) for three identical machines, we give an optimal online algorithm with a buffer of size six, improving on the previously known size of nine; (iii) for m uniform machines, using a buffer of size m, we improve the competitive ratio from 2 + ε to 2 − 1/m + ε, where ε > 0 is sufficiently small and m is a constant.
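
To illustrate how a reordering buffer interacts with online makespan scheduling, here is a generic sketch on identical machines: hold up to b jobs and, whenever the buffer overflows, release the largest buffered job to the least-loaded machine. This is only a plausible baseline, not the 1.5-competitive algorithm of the paper.

```python
import heapq

def schedule_with_buffer(jobs, m, b):
    """Online list scheduling on m identical machines with a reordering
    buffer of size b. Returns the resulting makespan."""
    loads = [0.0] * m
    buffer = []

    def release(size):
        i = loads.index(min(loads))      # least-loaded machine
        loads[i] += size

    for job in jobs:                     # jobs arrive one by one
        heapq.heappush(buffer, -job)     # keep the largest buffered job on top
        if len(buffer) > b:
            release(-heapq.heappop(buffer))
    while buffer:                        # flush remaining jobs, largest first
        release(-heapq.heappop(buffer))
    return max(loads)

print(schedule_with_buffer([2, 3, 7, 1, 5, 4], m=3, b=2))  # -> 8.0
```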


2021 ◽  
Vol 104 (2) ◽  
pp. 003685042110162
Author(s):  
Qiang Sun ◽  
Shupei Liu

Emergency management is conceptualized as a complex, multi-objective optimization problem related to facility location. However, little research has been performed on the horizontal transportation of emergency logistics centres. This study contributes a multi-objective model for locating abrupt-disaster emergency logistics centres that minimizes total cost and maximizes customer satisfaction. An improved artificial bee colony (IABC) algorithm is proposed to solve this multi-objective location problem. The IABC algorithm can effectively determine the optimal locations of abrupt-disaster emergency logistics centres and the demand for relief materials, and it can evaluate the rescue-time satisfaction of different rescue sites. (1) IABC has better global search capability, avoiding premature convergence and converging faster, and it offers good solution accuracy, solution diversity, and robustness. (2) The optimal objective function values obtained by the IABC algorithm are clearly better than those of the ABC and GABC algorithms. (3) The convergence curves of the three objective functions show that the global search ability and stability of the IABC algorithm are better than those of the ABC and GABC algorithms. The improved ABC algorithm has thus proven effective and feasible. However, emergency relief logistics systems are complex and involve many factors, so the proposed model needs further refinement in the future.
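
For context, a minimal sketch of the employed-bee phase of a standard artificial bee colony (ABC) search, which the IABC algorithm modifies; the function name and the toy objective are made up for illustration.

```python
import random

def abc_employed_phase(population, fitness_fn):
    """One employed-bee phase of standard ABC: each food source is perturbed
    along one random dimension towards or away from a randomly chosen
    neighbour, and the move is kept only if it improves the (minimized)
    objective."""
    n, dim = len(population), len(population[0])
    for i in range(n):
        k = random.choice([idx for idx in range(n) if idx != i])
        j = random.randrange(dim)
        phi = random.uniform(-1.0, 1.0)
        candidate = list(population[i])
        candidate[j] += phi * (population[i][j] - population[k][j])
        if fitness_fn(candidate) < fitness_fn(population[i]):
            population[i] = candidate
    return population

# Toy objective: squared distance from the origin (to be minimized).
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
pop = abc_employed_phase(pop, lambda x: sum(v * v for v in x))
```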


2018 ◽  
Vol 29 (04) ◽  
pp. 505-527
Author(s):  
Maria Paola Bianchi ◽  
Hans-Joachim Böckenhauer ◽  
Tatjana Brülisauer ◽  
Dennis Komm ◽  
Beatrice Palano

In the online minimum spanning tree problem, a graph is revealed vertex by vertex; together with every vertex, all edges to vertices that are already known are given, and an online algorithm must irrevocably choose a subset of them as part of its solution. The advice complexity of an online problem is a means to quantify the information that needs to be extracted from the input to achieve good results. For a graph of size [Formula: see text], we show an asymptotically tight bound of [Formula: see text] on the number of advice bits needed to produce an optimal solution for any given graph. For particular graph classes, e.g., those with bounded degree or a restricted edge weight function, we prove that the upper bound can be drastically reduced; e.g., [Formula: see text] advice bits allow the computation of an optimal result if the weight function equals the Euclidean distance; if the graph is complete and has two different edge weights, even a logarithmic number suffices. Some of these results make use of the optimality of Kruskal’s algorithm for the offline setting. We also study the trade-off between the number of advice bits and the achievable competitive ratio. To this end, we perform a reduction from another online problem to obtain a linear lower bound on the advice complexity for any near-optimal solution. Using our results finally allows us to give a lower bound on the expected competitive ratio of any randomized online algorithm for the problem, even on graphs with three different edge weights.
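
Since the upper bounds rely on the optimality of Kruskal's algorithm in the offline setting, a minimal reference sketch of Kruskal with a simple union-find is included below; the naming is illustrative.

```python
def kruskal_mst(n, edges):
    """Offline Kruskal: sort edges by weight and add an edge whenever it joins
    two different components (union-find). Returns the MST edges.
    edges: list of (weight, u, v) with vertices 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

print(kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
```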


Author(s):  
Tiancheng Qin ◽  
S. Rasoul Etesami

We consider a generalization of the standard cache problem called file-bundle caching, where different queries (tasks), each containing l ≥ 1 files, arrive sequentially. An online algorithm that does not know the sequence of queries ahead of time must adaptively decide what files to keep in the cache to incur the minimum number of cache misses. Here a cache miss refers to the case where at least one file in a query is missing among the cached files. In the special case where l = 1, this problem reduces to the standard cache problem. We first analyze the performance of the classic least recently used (LRU) algorithm in this setting and show that LRU is a near-optimal online deterministic algorithm for file-bundle caching with regard to competitive ratio. We then extend our results to a generalized (h, k)-paging problem in this file-bundle setting, where the performance of the online algorithm with a cache size k is compared to an optimal offline benchmark with a smaller cache size h < k. In this latter case, we provide a randomized O(l ln(k/(k−h)))-competitive algorithm for our generalized (h, k)-paging problem, which can be viewed as an extension of the classic marking algorithm. We complete this result by providing a matching lower bound for the competitive ratio, indicating that the performance of this modified marking algorithm is within a factor of 2 of any randomized online algorithm. Finally, we look at the distributed version of the file-bundle caching problem where there are m ≥ 1 identical caches in the system. In this case, we show that for m = l + 1 caches, there is a deterministic distributed caching algorithm that is (l² + l)-competitive and a randomized distributed caching algorithm that is O(l ln(2l + 1))-competitive when l ≥ 2. We also provide a general framework to devise other efficient algorithms for the distributed file-bundle caching problem and evaluate the performance of our results through simulations.
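
A minimal sketch of LRU adapted to file bundles, under one natural reading of the model: a query misses if any of its files is absent, and serving it makes all of its files the most recently used. The function file_bundle_lru is a hypothetical illustration, not the paper's analysis.

```python
from collections import OrderedDict

def file_bundle_lru(queries, k):
    """Count cache misses of LRU on a sequence of file-bundle queries.
    queries: iterable of sets of files; k: cache capacity in files."""
    cache = OrderedDict()                # file -> None, ordered by recency
    misses = 0
    for query in queries:
        if any(f not in cache for f in query):
            misses += 1                  # at least one requested file is absent
        for f in query:                  # touch (or load) every file in the bundle
            if f in cache:
                cache.move_to_end(f)
            else:
                cache[f] = None
        while len(cache) > k:            # evict least recently used files
            cache.popitem(last=False)
    return misses

print(file_bundle_lru([{1, 2}, {2, 3}, {1, 2}, {3, 4}], k=3))  # -> 3
```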


Algorithmica ◽  
2021 ◽  
Author(s):  
Matthias Englert ◽  
David Mezlaf ◽  
Matthias Westermann

Abstract In the classic minimum makespan scheduling problem, we are given an input sequence of n jobs with sizes. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, we consider online scheduling algorithms without preemption. However, we allow the online algorithm to change the assignment of up to k jobs at the end, for some limited number k. For m identical machines, Albers and Hellwig (Algorithmica 79(2):598–623, 2017) give tight bounds on the competitive ratio in this model. The precise ratio depends on, and increases with, m. It lies between 4/3 and ≈ 1.4659. They show that k = O(m) is sufficient to achieve this bound and that no k = o(n) can result in a better bound. We study m uniform machines, i.e., machines with different speeds, and show that this setting is strictly harder. For sufficiently large m, there is a δ = Θ(1) such that, for m machines with only two different machine speeds, no online algorithm can achieve a competitive ratio of less than 1.4659 + δ with k = o(n). We present a new algorithm for the uniform machine setting. Depending on the speeds of the machines, our scheduling algorithm achieves a competitive ratio that lies between 4/3 and ≈ 1.7992 with k = O(m). We also show that k = Ω(m) is necessary to achieve a competitive ratio below 2. Our algorithm is based on maintaining a specific imbalance with respect to the completion times of the machines, complemented by a bicriteria approximation algorithm that minimizes the makespan and maximizes the average completion time for certain sets of machines.
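
For intuition about the uniform-machine setting, a generic greedy baseline (assign each arriving job to the machine whose completion time would become smallest) is sketched below; it is not the reassignment-based algorithm of the paper, and the names are illustrative.

```python
def greedy_uniform_makespan(jobs, speeds):
    """Greedy online baseline on uniform (related) machines: a job of the given
    size takes size/speed time on a machine; each arriving job goes to the
    machine whose finishing time would become smallest. Returns the makespan."""
    finish = [0.0] * len(speeds)
    for size in jobs:
        i = min(range(len(speeds)), key=lambda j: finish[j] + size / speeds[j])
        finish[i] += size / speeds[i]
    return max(finish)

# Three machines with speeds 1, 1 and 2 (the last one is twice as fast).
print(greedy_uniform_makespan([4, 4, 2, 6, 2], speeds=[1, 1, 2]))  # -> 5.0
```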


Author(s):  
Chenxi Li ◽  
Zhendong Guo ◽  
Liming Song ◽  
Jun Li ◽  
Zhenping Feng

The design of turbomachinery cascades is a typical high-dimensional and computationally expensive problem; a metamodel-based global optimization and data mining method is proposed to solve it. A modified Efficient Global Optimization (EGO) algorithm named Multi-Point Search based Efficient Global Optimization (MSEGO) is proposed, characterized by adding multiple samples per iteration. In tests on typical mathematical functions, MSEGO outperforms EGO in accuracy and convergence rate. MSEGO is used for the optimization of a turbine vane with non-axisymmetric endwall contouring (NEC); the total pressure coefficient of the optimal vane is increased by 0.499%. Under the same settings, two further optimization runs are conducted using EGO and an Adaptive Range Differential Evolution algorithm (ARDE), respectively. The optimal solution of MSEGO is far better than that of EGO, and while achieving a similar optimal solution, the cost of MSEGO is only 3% of that of ARDE. Furthermore, data mining techniques are used to extract information about the design space and analyze the influence of the variables on design performance. Through analysis of variance (ANOVA), the section-profile variables are found to have the most significant effects on cascade loss performance. The NEC, however, appears less important in the ANOVA, because the performance differences between different NEC designs are very small in the prescribed design space. Nevertheless, the parallel-axis plots show that the designs with NEC are always much better than the reference design, i.e., the NEC does significantly influence cascade performance. This indicates that ensemble learning, combining the results of ANOVA and parallel-axis plots, is very useful for gaining full knowledge of the design space.
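
EGO-style methods choose new samples by maximizing an acquisition function such as expected improvement over a surrogate model; below is a sketch of the textbook expected-improvement criterion for minimization, assuming the surrogate (e.g. a Kriging model) supplies a predictive mean and standard deviation. It is not the multi-point sampling scheme of MSEGO.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Standard expected-improvement acquisition for minimization.
    mu, sigma -- surrogate's predicted mean and standard deviation at a
                 candidate point (e.g. from a Kriging model)
    f_best    -- best objective value observed so far."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf

# A candidate predicted slightly below the incumbent, with some uncertainty.
print(expected_improvement(mu=0.9, sigma=0.3, f_best=1.0))
```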


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3729 ◽  
Author(s):  
Shuai Wang ◽  
Hua-Yan Sun ◽  
Hui-Chao Guo ◽  
Lin Du ◽  
Tian-Jian Liu

Global registration is an important step in the three-dimensional reconstruction of multi-view laser point clouds for moving objects, but the severe noise, density variation, and limited overlap between multi-view laser point clouds present significant challenges to global registration. In this paper, a multi-view laser point cloud global registration method based on low-rank sparse decomposition is proposed. Firstly, the spatial distribution features of the point clouds were extracted by spatial rasterization to realize loop-closure detection, and the corresponding weight matrix was established according to the similarities of the spatial distribution features. The accuracy of the pairwise registration transformations was evaluated, and the robustness of the low-rank sparse matrix decomposition was enhanced. Then, an objective function that satisfies the global optimization condition was constructed, which prevents the compression of the solution space caused by the column-orthogonality assumption on the matrix. The objective function was solved by the Augmented Lagrange method, and the iterative termination condition was designed according to the prior conditions of single-object global registration. The simulation analysis shows that the proposed method is robust over a wide range of parameters, and the accuracy of loop-closure detection was over 90%. When the pairwise registration error was below 0.1 rad, the proposed method performed better than the three comparison methods, and the global registration accuracy was better than 0.05 rad. Finally, the global registration results of experiments on real point clouds further prove the validity and stability of the proposed method.
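
As background for the low-rank sparse decomposition solved with the Augmented Lagrange method, here is a minimal, generic inexact-ALM sketch for robust PCA (D ≈ L + S); it omits the paper's weight matrix and registration-specific objective, and all names and parameter defaults are illustrative.

```python
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Generic inexact-ALM routine for low-rank + sparse decomposition D ~ L + S."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, "fro")
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # dual variable
    mu = 1.25 / np.linalg.norm(D, 2)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Singular-value thresholding step for the low-rank part L.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft-thresholding step for the sparse part S.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (D - L - S)
        mu *= 1.5
        if np.linalg.norm(D - L - S, "fro") < tol * norm_D:
            break
    return L, S

# Toy usage: a random low-rank matrix corrupted by a few large sparse outliers.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
D[rng.random(D.shape) < 0.05] += 10.0
L, S = rpca_ialm(D)
```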


2015 ◽  
Vol 6 (3) ◽  
pp. 55-60
Author(s):  
Pritibhushan Sinha

Abstract We consider the median solution of the Newsvendor Problem. Some properties of such a solution are shown through a theoretical analysis and a numerical experiment. Sometimes, though not often, the median solution may be better, according to some criteria, than the solution that maximizes expected profit, or the one that maximizes the minimum possible expected profit over all distributions with the same mean and standard deviation. We discuss the practical suitability of the objective function and of the derived solution for the Newsvendor Problem and other such random optimization problems.
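
A small numerical sketch of the comparison, assuming normally distributed demand: the expected-profit-maximizing order is the critical-fractile quantile, while the median solution orders the 50th percentile. The prices and demand parameters below are invented for illustration.

```python
from statistics import NormalDist

# Hypothetical numbers: unit cost 3, selling price 10, salvage value 1.
price, cost, salvage = 10.0, 3.0, 1.0
demand = NormalDist(mu=100.0, sigma=20.0)

underage = price - cost            # profit lost per unit of unmet demand
overage = cost - salvage           # loss per unsold unit
critical_fractile = underage / (underage + overage)

q_expected_profit = demand.inv_cdf(critical_fractile)  # maximizes expected profit
q_median = demand.inv_cdf(0.5)                         # the median solution

# Here the profit-maximizing order exceeds the median order (about 115 vs 100).
print(round(q_expected_profit, 1), round(q_median, 1))
```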

