Scheduling High Multiplicity Coupled Tasks

2020 ◽  
Vol 45 (1) ◽  
pp. 47-61
Author(s):  
Wojciech Wojciechowicz ◽  
Michaël Gabay

Abstract: The coupled tasks scheduling problem is a class of scheduling problems in which each task consists of two operations separated by a fixed gap. High multiplicity is a compact encoding in which identical tasks are grouped together and each group is specified instead of each individual task; consequently, the size of a problem instance encoding decreases significantly. In this article we derive a lower bound for this problem variant and propose an asymptotically optimal algorithm. The theoretical results are complemented with a computational experiment in which the new algorithm is compared with three other implemented algorithms.
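As an illustration of the compact encoding only (the class and field names below are hypothetical, not taken from the paper), a high-multiplicity instance can be stored as a list of task groups, each given by its two operation lengths, the separation gap, and a multiplicity; expanding the groups recovers the standard per-task encoding and shows the size blow-up.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskGroup:
    """One group of identical coupled tasks (hypothetical encoding):
    first operation of length a, fixed gap, second operation of length b,
    repeated n times."""
    a: int
    gap: int
    b: int
    n: int

# Compact (high-multiplicity) instance: three groups encode 101,500 tasks.
instance = [
    TaskGroup(a=2, gap=5, b=3, n=100_000),
    TaskGroup(a=1, gap=4, b=1, n=1_000),
    TaskGroup(a=3, gap=2, b=2, n=500),
]

def expand(groups):
    """Standard encoding: one (a, gap, b) triple per individual task."""
    return [(g.a, g.gap, g.b) for g in groups for _ in range(g.n)]

print(len(instance), "groups encode", len(expand(instance)), "tasks")
```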

Constraints ◽  
2021 ◽  
Author(s):  
Jana Koehler ◽  
Josef Bürgler ◽  
Urs Fontana ◽  
Etienne Fux ◽  
Florian Herzog ◽  
...  

Abstract: Cable trees are used in industrial products to transmit energy and information between different product parts. To date, they are mostly assembled by humans, and only a few automated manufacturing solutions exist, using complex robotic machines. For these machines, the wiring plan has to be translated into a wiring sequence of cable plugging operations to be followed by the machine. In this paper, we study and formalize the problem of deriving the optimal wiring sequence for a given layout of a cable tree. We summarize our investigations to model this cable tree wiring problem (CTW) as a traveling salesman problem with atomic, soft atomic, and disjunctive precedence constraints as well as tour-dependent edge costs, such that it can be solved by state-of-the-art constraint programming (CP), Optimization Modulo Theories (OMT), and mixed-integer programming (MIP) solvers. It is further shown how the CTW problem can be viewed as a soft version of the coupled tasks scheduling problem. We discuss various modeling variants for the problem, prove its NP-hardness, and empirically compare CP, OMT, and MIP solvers on a benchmark set of 278 instances. The complete benchmark set with all models and instance data is available on GitHub and was included in the MiniZinc Challenge 2020.
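As a toy illustration of the problem shape only (not the authors' CP/OMT/MIP models), the sketch below enumerates tours over a handful of plugging operations and keeps those that respect a hard precedence and an "atomic" pair, read here as "must appear consecutively and in order"; the instance data are made up.

```python
from itertools import permutations

# Toy wiring-sequence instance: 5 plugging operations with symmetric edge
# costs, one ordinary precedence and one atomic precedence constraint.
ops = range(5)
cost = [[0, 3, 4, 2, 7],
        [3, 0, 4, 6, 3],
        [4, 4, 0, 5, 8],
        [2, 6, 5, 0, 6],
        [7, 3, 8, 6, 0]]
precedences = [(0, 3)]   # op 0 must come somewhere before op 3
atomic = [(1, 2)]        # op 2 must directly follow op 1

def feasible(tour):
    pos = {op: i for i, op in enumerate(tour)}
    ok_prec = all(pos[a] < pos[b] for a, b in precedences)
    ok_atomic = all(pos[b] == pos[a] + 1 for a, b in atomic)
    return ok_prec and ok_atomic

def tour_cost(tour):
    return sum(cost[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

best = min((t for t in permutations(ops) if feasible(t)), key=tour_cost)
print("best wiring sequence:", best, "cost:", tour_cost(best))
```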


2015 ◽  
Vol 3 (1) ◽  
pp. 68-76
Author(s):  
Guiqing Liu ◽  
Kai Li ◽  
Bayi Cheng

Abstract: This paper considers several parallel machine scheduling problems with controllable processing times, where the goal is to minimize the makespan. Preemption is allowed, and the processing times of the jobs can be compressed by spending extra resources. Three resource-use models are considered. If the jobs are released at the same time, the problems under all three models can be solved in polynomial time, and the authors give the polynomial algorithms. When the jobs are not released at the same time, if all resources are given at time zero, or the resources remaining from earlier stages can be used in later stages, the offline problems can be solved in polynomial time, but the online problems admit no optimal algorithm. If the jobs have different release dates and the resources remaining from earlier stages cannot be used in later stages, both the offline and online problems can be solved in polynomial time.
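The abstract does not give the algorithms, so the sketch below only illustrates the kind of feasibility test such results typically rest on for a common release date: with preemption, a makespan C is achievable iff every compressed processing time is at most C and their sum is at most m·C, so the minimum resource for a target C can be computed greedily and the best C found by binary search. The modelling choices here (one resource pool available at time zero, unit compression rate, lower bounds l_j) are simplifying assumptions for illustration, not the paper's models.

```python
def resource_needed(p, lo, m, C):
    """Minimum resource to reach makespan C on m machines with preemption,
    assuming p[j] can be compressed down to lo[j] at a cost of one
    resource unit per unit of compression (illustrative model)."""
    if any(l > C for l in lo):
        return float("inf")                        # some job cannot fit in C
    mandatory = sum(max(0.0, pj - C) for pj in p)  # force every p'_j <= C
    compressed = [min(pj, C) for pj in p]
    extra = max(0.0, sum(compressed) - m * C)      # force sum p'_j <= m*C
    slack = sum(cj - lj for cj, lj in zip(compressed, lo))
    if extra > slack + 1e-9:
        return float("inf")
    return mandatory + extra

def min_makespan(p, lo, m, budget, iters=60):
    """Binary search for the smallest C whose resource need fits the budget."""
    lo_C, hi_C = 0.0, max(max(p), sum(p) / m)
    for _ in range(iters):
        mid = (lo_C + hi_C) / 2
        if resource_needed(p, lo, m, mid) <= budget:
            hi_C = mid
        else:
            lo_C = mid
    return hi_C

# Example: 4 jobs, 2 machines, 5 units of compression resource -> makespan 6.5.
print(min_makespan(p=[6, 4, 5, 3], lo=[3, 2, 2, 1], m=2, budget=5.0))
```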


Author(s):  
Louis-Claude Canon ◽  
Aurélie Kong Win Chang ◽  
Yves Robert ◽  
Frédéric Vivien

This article discusses scheduling strategies for the problem of maximizing the expected number of tasks that can be executed on a cloud platform within a given budget and under a deadline constraint. The execution times of the tasks follow independent and identically distributed probability laws. The main questions are how many processors to enroll and whether and when to interrupt tasks that have been executing for some time. We provide complexity results and an asymptotically optimal strategy for the variant of the problem with discrete probability distributions and without a deadline. We extend the latter strategy to the general case with continuous distributions and a deadline, and we design an efficient heuristic that is shown to outperform standard approaches in simulations with a variety of useful distribution laws.
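To make the "whether and when to interrupt" question concrete, the toy simulation below (not the strategy from the article) spreads a budget of processor-seconds over m processors, kills any task that has run longer than a cutoff, and counts how many tasks finish; sweeping the cutoff shows why interruption pays off for heavy-tailed execution times. All parameters and the lognormal law are chosen only for illustration.

```python
import random

def simulate(m, budget, cutoff, sample_time, trials=200, seed=0):
    """Average number of tasks completed on m processors within a total
    budget of processor-time, interrupting any task after `cutoff` seconds.
    Toy model: tasks are drawn i.i.d. from an infinite bag; no deadline."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        done = 0
        per_proc = budget / m                  # budget split evenly per processor
        for _ in range(m):
            spent = 0.0
            while spent < per_proc:
                t = sample_time(rng)
                run = min(t, cutoff, per_proc - spent)
                spent += run
                if run == t:                   # task finished before cutoff/budget
                    done += 1
        total += done
    return total / trials

# Heavy-tailed (lognormal) execution times: a finite cutoff beats no cutoff.
heavy = lambda rng: rng.lognormvariate(0.0, 2.0)
for tau in (float("inf"), 8.0, 2.0, 1.0):
    tasks = simulate(m=4, budget=400.0, cutoff=tau, sample_time=heavy)
    print(f"cutoff={tau:>4}: ~{tasks:.0f} tasks completed")
```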


2020 ◽  
Author(s):  
Vadim Milyukov ◽  
Mikhail Vinogradov ◽  
Alexey Mironov ◽  
Andrey Myasnikov

Traditionally, searches for the Slichter mode (the longest-period mode of the Earth's free oscillations, ₁S₁) are based on data from the superconducting gravimeters of the international GGP network. Currently this network is part of the International Geodynamics and Earth Tide Service (IGETS).

The sensitivity limit of the best superconducting gravimeters is about 1 nGal, which is not sufficient for direct observation of the Slichter mode even after significant earthquakes. To reduce the detection threshold, researchers have used the "stacking" procedure, a joint processing of data from several instruments, but the differing sensitivity levels of the gravimeters prevent maximum efficiency from being achieved.

We have developed an asymptotically optimal algorithm based on the maximum likelihood method that takes into account the features of the Slichter mode and of seismic noise. An important feature of the algorithm is its ability to estimate the splitting parameter b, which determines the distance between the side singlets of the triplet, simultaneously with the mode period T. The use of a non-linear inertial converter makes it possible to account for the non-Gaussian noise of real data. The use of the Neyman-Pearson criterion also makes it possible to quantify the confidence of detection: the false alarm probability and the correct detection probability as functions of the signal-to-noise ratio.

The algorithm was tested on synthetic data. A computer experiment has shown that the algorithm can detect the Slichter mode at a signal-to-noise ratio of 10⁻⁴. The algorithm was used to search for the Slichter mode after the largest earthquakes, based on the data of the IGETS network.

The results of the analysis are reported.

This work is supported by the Russian Foundation for Basic Research under Grant No. 19-05-00341.
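For intuition only, the sketch below shows a textbook Neyman-Pearson style detection of a weak harmonic signal in white Gaussian noise via a periodogram peak and a per-bin false-alarm threshold. It is not the published algorithm (it ignores the non-Gaussian noise model, the splitting parameter b, and stacking of several instruments), and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 86_400                           # number of samples (hypothetical record)
k0, amp, sigma = 20, 0.05, 1.0       # test signal placed exactly on Fourier bin k0
t = np.arange(n)
x = amp * np.sin(2 * np.pi * k0 * t / n) + rng.normal(0.0, sigma, n)

# Periodogram: under white Gaussian noise each bin is ~ exponential(mean sigma^2).
spec = np.abs(np.fft.rfft(x)) ** 2 / n

# Neyman-Pearson style threshold for a per-bin false-alarm probability p_fa.
p_fa = 1e-6
threshold = -sigma ** 2 * np.log(p_fa)

band = slice(15, 26)                 # search window around the expected bin
peak = spec[band].max()
print(f"peak power {peak:.1f} vs threshold {threshold:.1f} ->",
      "detected" if peak > threshold else "not detected")
```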


2019 ◽  
Vol 22 (64) ◽  
pp. 123-134
Author(s):  
Mohamed Amine Nemmich ◽  
Fatima Debbat ◽  
Mohamed Slimane

In this paper, we propose a novel efficient model based on the Bees Algorithm (BA) for the Resource-Constrained Project Scheduling Problem (RCPSP). The studied RCPSP is an NP-hard combinatorial optimization problem that involves resource, precedence, and temporal constraints, and it arises in many applications. The main objective is to minimize the expected makespan of the project. The proposed model, named Enhanced Discrete Bees Algorithm (EDBA), iteratively solves the RCPSP by exploiting the intelligent foraging behaviors of honey bees. A potential solution is represented by a multidimensional bee, using the activity list (AL) representation. This projection uses the Serial Schedule Generation Scheme (SSGS) as the decoding procedure to construct active schedules. In addition, the conventional local search of the basic BA is replaced by a neighboring technique based on the swap operator, which takes into account the specific structure of the solution space of project scheduling problems and reduces the number of parameters to be tuned. The proposed EDBA is tested on well-known benchmark instance sets from the Project Scheduling Problem Library (PSPLIB) and compared with other approaches from the literature. The promising computational results reveal the effectiveness of the proposed approach for solving RCPSP instances of various scales.
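To make the decoding step concrete, below is a minimal serial schedule generation scheme for a single renewable resource (the general RCPSP has several); it illustrates the standard SSGS, not the EDBA implementation, and the small instance is made up.

```python
def ssgs(activity_list, dur, dem, capacity, preds):
    """Serial schedule generation scheme (single renewable resource):
    place each activity, in activity-list order, at the earliest start
    that respects its predecessors and the remaining capacity."""
    horizon = sum(dur.values()) + 1
    avail = [capacity] * horizon          # free capacity in each period
    start, finish = {}, {}
    for j in activity_list:               # list must be precedence-feasible
        est = max((finish[p] for p in preds.get(j, [])), default=0)
        d, r = dur[j], dem[j]
        t = est
        while any(avail[tau] < r for tau in range(t, t + d)):
            t += 1                        # shift right until resource-feasible
        for tau in range(t, t + d):
            avail[tau] -= r
        start[j], finish[j] = t, t + d
    return start, max(finish.values(), default=0)

# Toy instance: 4 activities, capacity 4, precedences 1->3 and 2->4.
dur = {1: 3, 2: 2, 3: 2, 4: 4}
dem = {1: 2, 2: 3, 3: 2, 4: 2}
preds = {3: [1], 4: [2]}
starts, makespan = ssgs([1, 2, 3, 4], dur, dem, 4, preds)
print(starts, "makespan:", makespan)
```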


Author(s):  
Jing Yuan ◽  
Christian Geissler ◽  
Weijia Shao ◽  
Andreas Lommatzsch ◽  
Brijnesh Jain

Abstract: Algorithm selection (AS) tasks are dedicated to finding the optimal algorithm for an unseen problem instance. With knowledge of problem instances' meta-features and algorithms' landmark performances, machine learning (ML) approaches are applied to solve AS problems. However, the standard training process of benchmark ML approaches in AS either needs to train a model specifically for every algorithm or relies on a sparse one-hot encoding as the algorithms' representation. To avoid these intermediate steps and form the mapping function directly, we borrow the learning-to-rank framework from recommender systems (RS) and embed bi-linear factorization to model the algorithms' performances in AS. This Bi-linear Learning to Rank (BLR) has proven competitive in some AS scenarios and is therefore also proposed as a benchmark approach. From the evaluation perspective of modern AS challenges, precisely predicting performance is usually the measured goal. Although an approach's inference time should also be counted towards the running-time cost, it is often overlooked in the evaluation process. The multi-objective evaluation metric Adjusted Ratio of Root Ratios (A3R) is therefore advocated in this paper to balance the trade-off between accuracy and inference time in AS. With respect to A3R, BLR outperforms the other benchmarks when the candidate range is expanded to TOP3. The benefit of this candidate expansion results from the cumulative optimum performance during the AS process. We take a further step in the experiments to demonstrate the advantage of such TOPK expansion and illustrate that it can be considered a supplement to the conventional TOP1 selection during the evaluation process.
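One common formulation of A3R (assumed here; the paper may use a variant) compares each candidate to a reference by the ratio of success rates divided by the N-th root of the runtime ratio, so slowdowns are penalized but heavily damped. The sketch below computes it for made-up numbers to show the accuracy/time trade-off it encodes.

```python
def a3r(success, runtime, success_ref, runtime_ref, n=64):
    """Adjusted Ratio of Root Ratios in one common formulation (assumed):
    accuracy ratio divided by the n-th root of the runtime ratio.
    Higher is better; the root damps the runtime penalty."""
    return (success / success_ref) / (runtime / runtime_ref) ** (1.0 / n)

# Hypothetical candidates vs. a reference selector with success 0.70 and 1.0 s.
candidates = {
    "fast_but_weaker": (0.68, 0.2),
    "slow_but_stronger": (0.80, 30.0),
    "balanced": (0.75, 2.0),
}
for name, (acc, sec) in candidates.items():
    print(f"{name:>18}: A3R = {a3r(acc, sec, 0.70, 1.0):.3f}")
```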

