Recorp: Receiver-oriented Policies for Industrial Wireless Networks

2021, Vol 17(4), pp. 1-32
Author(s):  
Ryan Brummet ◽  
Md Kowsar Hossain ◽  
Octav Chipara ◽  
Ted Herman ◽  
David E. Stewart

Future Industrial Internet-of-Things (IIoT) systems will require wireless solutions to connect sensors, actuators, and controllers as part of high data rate feedback-control loops over real-time flows. A key challenge in such networks is to provide predictable performance and adaptability in response to link quality variations. We address this challenge by developing RECeiver ORiented Policies (Recorp), which leverages the stability of IIoT workloads by combining offline policy synthesis and run-time adaptation. Compared to schedules that service a single flow in a slot, Recorp policies share slots among multiple flows by assigning a coordinator and a list of flows that may be serviced in the same slot. At run-time, the coordinator will execute one of the flows depending on which flows the coordinator has already received. A salient feature of Recorp is that it provides predictable performance: a policy meets the end-to-end reliability and deadline of flows when the link quality exceeds a user-specified threshold. Experiments show that across IIoT workloads, policies provided a median increase of 50% to 142% in real-time capacity and a median decrease of 27% to 70% in worst-case latency when schedules and policies are configured to meet an end-to-end reliability of 99%.
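
To make the slot-sharing idea concrete, here is a minimal Python sketch of the run-time decision the abstract describes: each slot carries a coordinator and a priority-ordered list of candidate flows, and the coordinator services the first candidate it has not yet received. The names SlotPolicy and choose_flow, and the priority-order rule itself, are illustrative assumptions, not the paper's actual data structures or selection logic.

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class SlotPolicy:
    """One slot of a Recorp-style policy: a coordinator node and a
    priority-ordered list of flows that may be serviced in that slot.
    (Illustrative structure; not the paper's actual data model.)"""
    coordinator: str
    candidate_flows: List[str]

def choose_flow(policy: SlotPolicy, received: Set[str]) -> Optional[str]:
    """Pick the flow to service in this slot: the highest-priority candidate
    whose packet the coordinator has not yet received. Returns None if every
    candidate has already been delivered."""
    for flow in policy.candidate_flows:
        if flow not in received:
            return flow
    return None

# Example: flows F1 and F2 share a slot coordinated by node B.
slot = SlotPolicy(coordinator="B", candidate_flows=["F1", "F2"])
print(choose_flow(slot, received={"F1"}))  # -> "F2": F1 already arrived, so the slot adapts
```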

Author(s):  
Jia Xu

In most embedded, real-time applications, processes must satisfy various important constraints and dependencies, such as release times, offsets, precedence relations, and exclusion relations. Embedded, real-time systems with high assurance requirements often must execute many different types of processes with such constraints and dependencies. Some of the processes may be periodic and some may be asynchronous. Some may have hard deadlines and some may have soft deadlines. For some processes, especially the hard real-time processes, complete knowledge of their characteristics can and must be acquired before run-time. For other processes, prior knowledge of their worst-case computation times and their data requirements may not be available. It is important for many embedded real-time systems to be able to simultaneously satisfy as many important constraints and dependencies as possible for as many different types of processes as possible. In this paper, we discuss which types of important constraints and dependencies can be satisfied among which types of processes. We also present a method which guarantees that every process, whether periodic or asynchronous and whether it has a hard or a soft deadline, will be completed before predetermined time limits, provided that its characteristics are known before run-time, while simultaneously satisfying many important constraints and dependencies with other processes.
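
As a rough illustration of the kinds of constraints and dependencies discussed above, the following Python sketch checks a candidate pre-run-time schedule against release times, deadlines, precedence relations, and exclusion relations (modelled here as non-overlapping execution intervals). It is a simplified feasibility check under assumed structures (Job, schedule_is_feasible), not the scheduling method presented in the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Job:
    release: int    # earliest start time
    deadline: int   # latest completion time
    wcet: int       # worst-case computation time (assumed known before run-time)

def schedule_is_feasible(jobs: Dict[str, Job],
                         start: Dict[str, int],
                         precedes: List[Tuple[str, str]],
                         excludes: List[Tuple[str, str]]) -> bool:
    """Check a candidate pre-run-time schedule (job -> start time) against
    release times, deadlines, precedence relations, and exclusion relations."""
    end = {name: start[name] + job.wcet for name, job in jobs.items()}
    for name, job in jobs.items():
        if start[name] < job.release or end[name] > job.deadline:
            return False
    for a, b in precedes:                 # a must complete before b starts
        if end[a] > start[b]:
            return False
    for a, b in excludes:                 # a and b must not overlap in time
        if start[a] < end[b] and start[b] < end[a]:
            return False
    return True

jobs = {"P1": Job(0, 10, 4), "P2": Job(0, 12, 3)}
print(schedule_is_feasible(jobs, {"P1": 0, "P2": 4},
                           precedes=[("P1", "P2")], excludes=[("P1", "P2")]))  # True
```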


Author(s):  
Jia Xu

Many embedded systems applications have hard timing requirements in which real-time processes with precedence and exclusion relations must be completed before specified deadlines. This requires that the worst-case computation times of the real-time processes be estimated with sufficient precision during system design, which can sometimes be difficult in practice. If the actual computation time of a real-time process during run-time exceeds the estimated worst-case computation time, an overrun occurs, which may cause the real-time process not only to miss its own deadline but also to trigger a cascade of other real-time processes missing their deadlines, possibly resulting in total system failure. Conversely, if the actual computation time of a real-time process during run-time is less than the estimated worst-case computation time, an underrun occurs, which may result in under-utilization of system resources. This paper describes a method for handling underruns and overruns when scheduling a set of real-time processes with precedence and exclusion relations using a pre-run-time schedule. The technique tracks and utilizes unused processor time to reduce the chances of missing real-time process deadlines, thereby providing the capability to significantly increase both system utilization and system robustness in the presence of inaccurate estimates of the worst-case computation times of real-time processes.
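
A toy sketch of the underrun/overrun bookkeeping idea: unused processor time from early completions is banked and later drawn on to absorb overruns. The SpareTimeLedger class and its accounting rule are illustrative assumptions only, not the paper's technique for adjusting a pre-run-time schedule.

```python
class SpareTimeLedger:
    """Toy model of tracking unused processor time from underruns so it can
    absorb later overruns (a simplified illustration, not the paper's algorithm)."""
    def __init__(self):
        self.spare = 0  # accumulated unused processor time units

    def on_completion(self, estimated_wcet: int, actual: int) -> int:
        """Record a completion. An underrun credits spare time; an overrun
        draws on it. Returns the overrun left uncovered by spare time."""
        diff = estimated_wcet - actual
        if diff >= 0:              # underrun: bank the unused time
            self.spare += diff
            return 0
        overrun = -diff            # overrun: cover as much as possible
        covered = min(self.spare, overrun)
        self.spare -= covered
        return overrun - covered

ledger = SpareTimeLedger()
ledger.on_completion(estimated_wcet=10, actual=6)        # underrun banks 4 units
print(ledger.on_completion(estimated_wcet=5, actual=8))  # overrun of 3 fully absorbed -> 0
```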


Author(s):  
Jia Xu

In real-world hard real-time and embedded multiprocessor applications, it is important to minimize the run-time overhead of the scheduler, especially when processor and system resources are limited. In this paper, we present a method that reduces to O(n) the worst-case time complexity of the run-time scheduler for re-computing latest start times and for selecting processes for execution on a multiprocessor at run-time, where n is the number of processes.
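
The flavor of a linear-time recomputation can be illustrated as follows, under the strong simplifying assumptions that each process has at most one successor and that processes are already indexed in topological order, so a single backward pass over the n processes suffices. This is only a sketch of the idea of an O(n) pass, not the paper's multiprocessor scheduler.

```python
from typing import List, Tuple

def recompute_latest_start_times(procs: List[Tuple[str, int, int]],
                                 successor: List[int]) -> List[int]:
    """Recompute latest start times in one backward pass.
    procs[i] = (name, wcet, deadline); successor[i] = index of the successor or -1.
    Under the stated assumptions the pass touches each process once, i.e. O(n).
    (A simplified illustration, not the paper's algorithm.)"""
    n = len(procs)
    latest_start = [0] * n
    for i in range(n - 1, -1, -1):
        _, wcet, deadline = procs[i]
        limit = deadline
        if successor[i] != -1:   # must finish before its successor may start at the latest
            limit = min(limit, latest_start[successor[i]])
        latest_start[i] = limit - wcet
    return latest_start

procs = [("A", 3, 20), ("B", 4, 15), ("C", 2, 12)]
print(recompute_latest_start_times(procs, successor=[1, 2, -1]))  # [3, 6, 10]
```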


Author(s):  
Jia Xu

Methods for handling process underruns and overruns when scheduling a set of real-time processes increase both system utilization and robustness in the presence of inaccurate estimates of the worst-case computation times of real-time processes. In this paper, we present a method that efficiently re-computes latest start times for real-time processes at run-time when a real-time process is preempted or has completed (or overrun). The method identifies which process latest start times are affected by the preemption or completion of a process. Hence, it reduces real-time system overhead by selectively re-computing latest start times only for the specific processes whose latest start times change as a result of a process preemption or completion, rather than indiscriminately re-computing latest start times for all processes.
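
A small sketch of selective recomputation: when one process's latest start time changes (for example, after a preemption or completion), the change is propagated only to the processes it can actually affect, and propagation stops as soon as a value is unchanged. The graph representation and update rule here are illustrative assumptions, not the paper's exact method.

```python
from typing import Dict, List

def selective_update(latest_start: Dict[str, int],
                     wcet: Dict[str, int],
                     predecessors: Dict[str, List[str]],
                     changed: str,
                     new_value: int) -> None:
    """Propagate a change in one process's latest start time only to its
    transitive predecessors, stopping as soon as a value is unchanged.
    (An illustrative sketch of selective recomputation.)"""
    latest_start[changed] = new_value
    stack = [changed]
    while stack:
        node = stack.pop()
        for pred in predecessors.get(node, []):
            # A predecessor must start early enough to finish before `node` starts.
            candidate = latest_start[node] - wcet[pred]
            if candidate < latest_start[pred]:   # only recompute if the bound tightens
                latest_start[pred] = candidate
                stack.append(pred)               # keep propagating upstream

latest = {"A": 3, "B": 6, "C": 10}
selective_update(latest, wcet={"A": 3, "B": 4, "C": 2},
                 predecessors={"C": ["B"], "B": ["A"]},
                 changed="C", new_value=8)
print(latest)  # {'A': 1, 'B': 4, 'C': 8} -- only the affected entries were touched
```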


Author(s):  
Sang-Hun Lee ◽  
Hyun-Wook Jin ◽  
Kanghee Kim ◽  
Sangil Lee

In designing a distributed hard real-time system, it is important to reduce the end-to-end delay of each real-time message in order to ensure quick responses to external inputs and a high degree of synchronization among cooperating actuators. To provide a real-time guarantee for each message, the related literature has focused on the analysis of end-to-end delays based on worst-case task phasing. However, such analyses are too pessimistic because they do not assume a global clock. Assuming that task phases can be managed using a global clock provided by emerging real-time fieldbuses, such as EtherCAT, we can try to calculate the optimal task phasing that yields the minimal worst-case end-to-end delay. In this study, we propose a heuristic to manage the phase offsets of the distributed tasks to reduce the theoretical end-to-end delay bound. The proposed heuristic reduces the search time for a solution by identifying the time intervals where actual communication occurs among inter-dependent tasks. Furthermore, to analyze the distribution of end-to-end delays under different phases, we implemented a simulation tool. The simulation results show that the proposed heuristic can reduce the worst-case end-to-end delay as well as the jitter in end-to-end delays.
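
To illustrate why phase offsets matter, the following toy model computes the end-to-end delay of one input through a chain of periodic tasks with given phases, assuming a shared global clock, fixed response times, and data hand-off at the next release of the downstream task. It is a hypothetical model for exploring phase choices, not the analysis or heuristic proposed in the study.

```python
import math
from typing import List, Tuple

def end_to_end_delay(chain: List[Tuple[int, int, int]], arrival: float) -> float:
    """End-to-end delay of one input through a chain of periodic tasks.
    chain[i] = (period, phase, response_time); each task samples its input at a
    release and publishes at release + response_time, so data is picked up at the
    next release of the downstream task. (A toy model under a global clock.)"""
    t = arrival
    for period, phase, response in chain:
        k = max(0, math.ceil((t - phase) / period))   # first release at or after t
        release = phase + k * period
        t = release + response                        # data becomes available downstream
    return t - arrival

# Two candidate phase assignments for a 3-task chain with period 10:
staggered = [(10, 0, 2), (10, 3, 2), (10, 6, 2)]   # phases pass data on quickly
unlucky   = [(10, 0, 2), (10, 1, 2), (10, 2, 2)]   # releases just miss the upstream output
print(end_to_end_delay(staggered, 0.0), end_to_end_delay(unlucky, 0.0))  # 8.0 vs 24.0
```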


2020, Vol 124(1279), pp. 1399-1435
Author(s):  
Q. Xu ◽  
X. Yang

Distributed real-time avionics networks have undergone significant evolution in functionality and complexity. A direct consequence of this evolution is continual growth in data exchange. AFDX, standardised as ARINC 664, is chosen as the backbone network for these distributed real-time avionics networks because it offers high throughput and does not require global clock synchronisation. For certification reasons and engineering research, a deterministic upper bound on the end-to-end transmission delay for packets of each flow must be guaranteed. The Forward Approach (FA) has been proposed for computing the worst-case end-to-end transmission delay. It iteratively estimates the maximum backlog (the amount of pending packets) at each visited switch along the transmission path, from which the worst-case end-to-end transmission delay can be computed. Although it is pessimistic (overestimated), the Forward Approach can provide a tighter upper bound on the end-to-end transmission delay when the serialisation effect is considered. Recently, our research found that the computation of the serialisation effect might induce an optimistic (underestimated) upper bound. In this paper, we analyse the pessimism in the Forward Approach and the optimism induced by the computation of the serialisation effect, and we then provide a new computation of the serialisation effect. We compare this new computation with the original one; experiments show that the new computation reduces the optimism and allows the upper bound on the end-to-end transmission delay to be computed more accurately.
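
A crude sketch of a forward-style, per-switch accumulation of delay terms, in the spirit of iterating over the visited switches and adding a backlog-dependent waiting time at each hop. The model and the numbers are purely illustrative assumptions; the actual Forward Approach analysis (including the serialisation effect discussed above) is far more detailed.

```python
from typing import List

def forward_delay_bound(path_backlogs: List[int],
                        frame_tx_time_us: float,
                        switch_latency_us: float) -> float:
    """Crude forward-style bound on end-to-end delay along a path:
    at each switch the frame may wait behind the worst-case backlog of pending
    frames, then pays its own transmission time and the switch's technological
    latency; the per-hop terms are summed along the path. (Illustrative only.)"""
    delay = 0.0
    for backlog in path_backlogs:
        delay += backlog * frame_tx_time_us    # wait for frames queued ahead
        delay += frame_tx_time_us              # own transmission on the output link
        delay += switch_latency_us             # switching/technological latency
    return delay

# A flow crossing three switches with assumed worst-case backlogs of 4, 6, and 3 frames:
print(forward_delay_bound([4, 6, 3], frame_tx_time_us=40.0, switch_latency_us=16.0))
```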


2001
Author(s):  
Raj Rajkumar ◽  
K. Juvva ◽  
A. Molano ◽  
S. Oikawa ◽  
C. Lee

Author(s):  
Neetika Jain ◽  
Sangeeta Mittal

Background: Real-Time Wireless Sensor Networks (RT-WSNs) have hard real-time packet delivery requirements. Due to the resource constraints of sensors, these networks need to trade off energy and latency. Objective: In this paper, a routing protocol for RT-WSNs named "SPREAD" is proposed. The underlying idea is to reserve laxity by assuming a tighter packet deadline than the actual one. This reserved laxity is used when no deadline-meeting next hop is available. As a result, if the energy of nodes on the shortest path is drained by repeated transmissions, time is still left to route the packet dynamically through another path without missing the deadline. Method: Congestion scenarios are addressed by dynamically assessing 1-hop delays and avoiding traffic on congested paths. Conclusion: Through extensive simulations in the NS2 network simulator, it has been observed that the SPREAD algorithm not only significantly reduces the miss ratio compared to other similar protocols but also keeps energy consumption under control. It also shows more resilience to high data rates and tight deadlines than existing popular protocols.
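
A hedged sketch of the laxity-reserving forwarding idea: a node first tries to meet a tightened deadline (the actual deadline minus a reserve) and, only if no neighbour qualifies, spends the reserve by falling back to the actual deadline. The function name, the per-hop budget parameter, and the neighbour estimates are hypothetical; this is not the SPREAD protocol itself.

```python
from typing import Dict, Optional

def pick_next_hop(hop_delay_ms: Dict[str, float],
                  time_elapsed_ms: float,
                  actual_deadline_ms: float,
                  reserve_ms: float,
                  remaining_hops: Dict[str, int],
                  per_hop_budget_ms: float) -> Optional[str]:
    """Laxity-reserving forwarding decision (illustrative sketch, not SPREAD itself):
    first try to meet a tightened deadline; if no neighbour qualifies, spend the
    reserved laxity by falling back to the actual deadline, which allows routing
    around a congested or energy-drained shortest path."""
    def best_feasible(deadline: float) -> Optional[str]:
        best, best_delay = None, float("inf")
        for nbr, delay in hop_delay_ms.items():
            # Estimated arrival = elapsed time + this hop + rough cost of remaining hops.
            eta = time_elapsed_ms + delay + remaining_hops[nbr] * per_hop_budget_ms
            if eta <= deadline and delay < best_delay:
                best, best_delay = nbr, delay
        return best

    return best_feasible(actual_deadline_ms - reserve_ms) or best_feasible(actual_deadline_ms)

# Neighbour B is on the short path but congested; C is longer but lightly loaded.
print(pick_next_hop({"B": 60.0, "C": 25.0}, time_elapsed_ms=40.0,
                    actual_deadline_ms=120.0, reserve_ms=20.0,
                    remaining_hops={"B": 1, "C": 2}, per_hop_budget_ms=20.0))  # -> "C"
```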


2021, Vol 17(3), pp. 1-38
Author(s):  
Ali Bibak ◽  
Charles Carlson ◽  
Karthekeyan Chandrasekaran

Finding locally optimal solutions for MAX-CUT and MAX-k-CUT is a well-known PLS-complete problem. An instinctive approach to finding such a locally optimal solution is the FLIP method. Even though FLIP requires exponential time on worst-case instances, it tends to terminate quickly on practical instances. To explain this discrepancy, the run-time of FLIP has been studied in the smoothed complexity framework. Etscheid and Röglin (ACM Transactions on Algorithms, 2017) showed that the smoothed complexity of FLIP for MAX-CUT in arbitrary graphs is quasi-polynomial. Angel, Bubeck, Peres, and Wei (STOC, 2017) showed that the smoothed complexity of FLIP for MAX-CUT in complete graphs is O(Φ^5 n^15.1), where Φ is an upper bound on the random edge-weight density and n is the number of vertices in the input graph. While Angel, Bubeck, Peres, and Wei's result established the first polynomial smoothed complexity, they also conjectured that their run-time bound is far from optimal. In this work, we make substantial progress toward improving the run-time bound. We prove that the smoothed complexity of FLIP for MAX-CUT in complete graphs is O(Φ n^7.83). Our results are based on a carefully chosen matrix whose rank captures the run-time of the method, along with improved rank bounds for this matrix and an improved union bound based on it. In addition, our techniques provide a general framework for analyzing FLIP in the smoothed setting. We illustrate this general framework by showing that the smoothed complexity of FLIP for MAX-3-CUT in complete graphs is polynomial and for MAX-k-CUT in arbitrary graphs is quasi-polynomial. We believe that our techniques should also be of interest toward showing smoothed polynomial complexity of FLIP for MAX-k-CUT in complete graphs for larger constants k.
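
For reference, the FLIP method analysed above is the plain local search shown below: start from an arbitrary 2-partition and repeatedly flip any single vertex whose move strictly increases the cut weight, stopping at a local optimum. This is a direct implementation of the local search being analysed (in Python, with an assumed edge-weight dictionary), not of the paper's smoothed-analysis machinery.

```python
import random
from typing import Dict, List, Tuple

def flip_local_search(n: int, w: Dict[Tuple[int, int], float]) -> List[int]:
    """FLIP for MAX-CUT: start from an arbitrary 2-partition and repeatedly move
    any single vertex to the other side while doing so strictly increases the cut
    weight; stop at a local optimum. w[(u, v)] with u < v is the edge weight."""
    side = [random.randint(0, 1) for _ in range(n)]

    def gain(v: int) -> float:
        # Moving v changes the cut by (same-side incident weight) - (cross incident weight).
        g = 0.0
        for (a, b), weight in w.items():
            if v in (a, b):
                u = b if a == v else a
                g += weight if side[u] == side[v] else -weight
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:        # flip v: its same-side incident edges become cut edges
                side[v] = 1 - side[v]
                improved = True
    return side

edges = {(0, 1): 3.0, (1, 2): 2.0, (0, 2): 1.0, (2, 3): 4.0}
print(flip_local_search(4, edges))  # a locally optimal partition (labels depend on the random start)
```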

