On Verifying and Maintaining Connectivity of Interval Temporal Networks

2019 ◽  
Vol 29 (02) ◽  
pp. 1950009 ◽  
Author(s):  
Eleni C. Akrida ◽  
Paul G. Spirakis

An interval temporal network is, informally speaking, a network whose links change with time. The term interval means that a link may exist for one or more time intervals, called availability intervals of the link, after which it does not exist (until, possibly, a later moment in time when it becomes available again). In this model, we consider continuous time and high-speed (instantaneous) information dissemination. An interval temporal network is connected during a period of time [x, y] if it is connected at every time instance t ∈ [x, y] (instantaneous connectivity). In this work, we study instantaneous connectivity issues of interval temporal networks. We provide a polynomial-time algorithm that decides whether a given interval temporal network is connected during a given time period. If the network is not connected throughout that period, we also give a polynomial-time algorithm that returns the large components of the network that remain connected and large during [x, y]; the algorithm also considers the components that start out large at time x but disconnect into small components within [x, y], and determines how long after time x these components stay connected and large. Finally, we examine a case of interval temporal networks on tree graphs where the lifetimes of the links, and thus the failures in the connectivity of the network, are not under our control; however, we can “feed” the network with extra edges that may reconnect it into a tree when a failure happens, so that its connectivity is maintained during a time period. Using a randomized approach to making these extra edges available for reconnection, we show that, with high probability, we can maintain the connectivity of the network for a long time period. Our approach also saves cost in the design of the edge availabilities; here, the cost is the sum, over all extra edges, of the lengths of their availability-to-reconnect intervals.
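
For intuition, the instantaneous-connectivity test admits a simple event-sweep check: between consecutive interval endpoints the set of available links does not change, so it suffices to examine a snapshot of the network at each endpoint and at one instant in between. The Python sketch below illustrates this idea; the data layout, the assumption of closed availability intervals, and the BFS-based snapshot check are our own illustrative choices, not the authors' exact algorithm.

```python
def connected_throughout(nodes, availability, x, y):
    """Check whether the interval temporal network is connected at every
    instant t in [x, y].

    nodes        : iterable of vertices (assumed non-empty)
    availability : dict mapping an undirected edge (u, v) to a list of
                   closed availability intervals [(a, b), ...]
    """
    nodes = list(nodes)
    # Collect interval endpoints inside the query period plus the period's
    # own endpoints; the edge set is constant between consecutive events.
    events = {x, y}
    for ivals in availability.values():
        for a, b in ivals:
            if x <= a <= y:
                events.add(a)
            if x <= b <= y:
                events.add(b)
    events = sorted(events)
    samples = list(events)
    samples += [(s + t) / 2 for s, t in zip(events, events[1:])]

    def connected_at(t):
        # Build the snapshot graph of links available at instant t and BFS it.
        adj = {v: [] for v in nodes}
        for (u, v), ivals in availability.items():
            if any(a <= t <= b for a, b in ivals):
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == len(nodes)

    return all(connected_at(t) for t in samples)
```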

2007 ◽  
Vol 18 (02) ◽  
pp. 341-359 ◽  
Author(s):  
JOSEPH Y.-T. LEUNG ◽  
HAIBING LI ◽  
HAIRONG ZHAO

We consider two-machine flow shop problems with exact delays. In this model, there are two machines, an upstream machine and a downstream machine. Each job j has two operations: the first operation has to be processed on the upstream machine and the second on the downstream machine, subject to the constraint that the time interval between the completion time of the first operation and the start time of the second operation is exactly l_j. We concentrate on the objectives of makespan and total completion time. For the makespan objective, we first show that the problem is strongly NP-hard even if there are only two possible delay values. We then show that some special cases of the problem are solvable in polynomial time. Finally, we design efficient approximation algorithms for the general case and some special cases. For the total completion time objective, we give an optimal polynomial-time algorithm for one special case and an efficient approximation algorithm for another.
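
To make the exact-delay constraint concrete, the sketch below evaluates a candidate schedule: each second operation must start exactly l_j time units after the first operation finishes, and no two operations may overlap on the same machine. The job encoding (a_j, l_j, b_j) and the checker itself are illustrative assumptions, not the paper's algorithms.

```python
def evaluate_schedule(jobs, starts):
    """Verify a two-machine exact-delay schedule and return
    (makespan, total completion time).

    jobs   : dict job -> (a_j, l_j, b_j): upstream processing time,
             exact delay, downstream processing time
    starts : dict job -> start time of the first operation on the upstream machine
    """
    ops_up, ops_down = [], []
    completions = {}
    for j, (a, l, b) in jobs.items():
        s1 = starts[j]
        c1 = s1 + a
        s2 = c1 + l          # exact delay: the gap between c1 and s2 is exactly l
        c2 = s2 + b
        ops_up.append((s1, c1))
        ops_down.append((s2, c2))
        completions[j] = c2
    # No two operations may overlap on the same machine.
    for ops in (ops_up, ops_down):
        ops.sort()
        for (s, c), (s_next, _) in zip(ops, ops[1:]):
            if s_next < c:
                raise ValueError("infeasible: operations overlap on a machine")
    return max(completions.values()), sum(completions.values())
```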


Author(s):  
Ashwin Arulselvan ◽  
Kerem Akartunalı ◽  
Wilco van den Heuvel

In a single-item dynamic lot-sizing problem, we are given a time horizon and a demand for a single item in every time period. The problem seeks a solution that determines how much to produce and how much to carry in each time period so that the total production and inventory cost is minimized. When the remanufacturing option is included, the input also comprises the number of returned products in each time period that can potentially be remanufactured to satisfy the demands, with remanufacturing and inventory costs applying. For this problem, we first show that it cannot have a fully polynomial-time approximation scheme. We then provide a polynomial-time algorithm under certain realistic assumptions on the cost structure.
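
For background, the classical single-item problem without the remanufacturing option can be solved by the well-known O(T^2) Wagner-Whitin dynamic program over the periods in which production runs start. The sketch below illustrates that baseline only; the cost names (demand, setup, unit, hold) are our assumptions, and the remanufacturing extension studied in the paper is not modeled.

```python
def lot_size(demand, setup, unit, hold):
    """Classical O(T^2) dynamic program for uncapacitated single-item lot sizing.

    demand[t] : demand in period t
    setup[t]  : fixed setup cost if production takes place in period t
    unit[t]   : per-unit production cost in period t
    hold[t]   : cost of carrying one unit of inventory from period t to t+1
    """
    T = len(demand)
    best = [0.0] + [float("inf")] * T   # best[t] = min cost to cover periods 0..t-1
    for t in range(1, T + 1):
        for s in range(t):              # last production run is in period s, covering s..t-1
            cost = setup[s]
            for k in range(s, t):
                # demand of period k is produced in s and carried through periods s..k-1
                cost += demand[k] * (unit[s] + sum(hold[s:k]))
            best[t] = min(best[t], best[s] + cost)
    return best[T]
```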


2004 ◽  
Vol 15 (01) ◽  
pp. 107-125 ◽  
Author(s):  
YVO DESMEDT ◽  
YONGGE WANG

AND/OR graphs and minimum-cost solution graphs have been studied extensively in artificial intelligence (see, e.g., Nilsson [14]). Generally, AND/OR graphs are used to model problem-solving processes. The minimum-cost solution graph can be used to attack a problem with the least resources. In many cases, however, we want to solve the problem within the shortest time period, assuming we have as many concurrent resources as we need to run all concurrent processes. In this paper, we study this problem and present an algorithm for finding the minimum-time-cost solution graph in an AND/OR graph. We also study the following problems, which often appear in industry when AND/OR graphs are used to model manufacturing or problem-solving processes: finding maximum (additive and non-additive) flows and critical vertices in an AND/OR graph. A detailed study of these problems provides insight into the vulnerability of complex systems such as cyber-infrastructures and energy infrastructures (these infrastructures could be modeled with AND/OR graphs). For an infrastructure modeled by an AND/OR graph, the protection of critical vertices should have the highest priority, since terrorists could defeat the whole infrastructure with the least effort by destroying these critical points. Though there are well-known polynomial-time algorithms for the corresponding problems in traditional graph theory, we show that it is, in general, NP-hard to find a non-additive maximum flow in an AND/OR graph, and that it is both NP-hard and coNP-hard to find a set of critical vertices in an AND/OR graph. We also present a polynomial-time algorithm for finding a maximum additive flow in an AND/OR graph, and we discuss the relative complexity of these problems.
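
With unlimited concurrency, the time cost of a solution graph in an acyclic AND/OR graph follows a simple recursion: an OR node waits only for its fastest solvable child, while an AND node must wait for all of its children to finish. The sketch below illustrates that recursion; the graph encoding and per-node durations are our own assumptions and may differ from the paper's exact formulation.

```python
from functools import lru_cache

def min_time_cost(graph, durations, root):
    """Minimum completion time of a solution in an acyclic AND/OR graph
    when all concurrent subproblems run in parallel.

    graph     : dict node -> ("AND" | "OR", [children]); leaves map to (_, [])
    durations : dict node -> time needed to process the node itself
    """
    @lru_cache(maxsize=None)
    def t(v):
        kind, children = graph[v]
        if not children:
            return durations[v]
        child_times = [t(c) for c in children]
        # An OR node needs only its fastest child; an AND node must wait for all.
        combine = min if kind == "OR" else max
        return durations[v] + combine(child_times)

    return t(root)
```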


10.29007/v68w ◽  
2018 ◽  
Author(s):  
Ying Zhu ◽  
Mirek Truszczynski

We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of the preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples (again under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.
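
To illustrate the weighted positional-scoring side of the problem, the sketch below checks whether a weighted profile, aggregated with a scoring vector (Borda in the example), decides a single example "a is preferred to b". The data layout is an illustrative assumption, and the ranked Pareto rule itself is not implemented here.

```python
def decides_example(profile, weights, scores, a, b):
    """Check whether a weighted profile aggregated by a positional scoring
    rule decides the example "a is preferred to b".

    profile : list of rankings, each a list of alternatives from best to worst
    weights : list of non-negative weights, one per ranking
    scores  : positional scoring vector, scores[i] = points for position i
    """
    def total(x):
        return sum(w * scores[r.index(x)] for r, w in zip(profile, weights))
    return total(a) > total(b)

# Example: Borda scores over three alternatives.
profile = [["a", "b", "c"], ["b", "a", "c"], ["c", "a", "b"]]
weights = [2.0, 1.0, 1.0]
borda = [2, 1, 0]
print(decides_example(profile, weights, borda, "a", "b"))  # True: a outscores b
```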

