Correction to: A compiler framework for the reduction of worst-case execution times

2019 ◽  
Vol 55 (4) ◽  
pp. 925-925
Author(s):  
Heiko Falk ◽  
Paul Lokuciejewski

Author(s):  
Berkay Saydam ◽  
Cem Orhan ◽  
Niyazi Toker ◽  
Mansur Turasan

For functional safety, the scheduler in an embedded system must execute all time-critical tasks in the required order and within predefined deadlines. The scheduling of time-critical tasks is determined by estimating their worst-case execution times. To validate a task-scheduling design, the task execution and scheduling maps must be simulated and visualised; this helps to uncover possible problems before the schedule model is deployed to real hardware. The simulation tools used in industry perform scheduling simulation and visualisation of all time-critical tasks to design and verify the model, but all of them lack the capability to compare simulated results against real results in order to reach an optimised scheduling design. This sometimes leads to overestimated worst-case execution times and increased system cost. The aim of our study is to decrease system cost by optimising scheduled tasks using static analysis.

Keywords: schedule visualisation, scheduler optimisation, functional safety, real-time systems, scheduler.
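The abstract does not spell out the static analysis the authors use. Purely as a hedged illustration of how worst-case execution-time estimates feed a static schedulability check, the sketch below applies the classical Liu and Layland utilization bound for rate-monotonic scheduling; the task parameters (WCET, period) are invented for the example:

```python
# Hedged sketch: the paper's static analysis method is not given in the
# abstract; this only illustrates how WCET estimates feed a classical
# schedulability test (Liu & Layland bound for rate-monotonic
# scheduling). Task parameters (wcet, period) are invented.

def rm_schedulable(tasks):
    """tasks: list of (wcet, period) pairs in the same time unit.
    Returns True if total utilization stays below n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# Tight WCET estimates pass the test; pessimistic overestimates of the
# same task set fail it, which is one way WCET overestimation turns
# into extra hardware cost.
tight = [(2, 10), (3, 15), (5, 30)]        # utilization ~ 0.57, bound ~ 0.78
pessimistic = [(4, 10), (6, 15), (9, 30)]  # utilization = 1.10
```

The utilization bound is sufficient but not necessary, so it is deliberately conservative, mirroring the pessimism the abstract describes.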


2010 ◽  
Vol 46 (2) ◽  
pp. 251-300 ◽  
Author(s):  
Heiko Falk ◽  
Paul Lokuciejewski

Abstract The current practice of designing software for real-time systems is tedious. There is almost no tool support that assists the designer in automatically deriving safe bounds on the worst-case execution time (WCET) of a system during code generation, or in systematically optimizing code to reduce its WCET. This article presents concepts and infrastructures for WCET-aware code generation, together with optimization techniques for WCET reduction. Altogether, they help to obtain code explicitly optimized for its worst-case timing, to automate large parts of the real-time software design flow, and to reduce the cost of a real-time system by enabling the use of tailored hardware.
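The article's framework performs full static WCET analysis; as an illustrative sketch only (not the framework's actual algorithm), the core idea of a safe static bound can be reduced to a longest-path computation over an acyclic control-flow graph, weighted by per-basic-block worst-case cycle costs. The graph and costs below are invented:

```python
# Illustrative sketch only: a real WCET analyzer models hardware timing
# and loop bounds; this toy reduces the idea to a longest-path search
# over an acyclic control-flow graph with invented per-basic-block
# worst-case cycle costs.
from functools import lru_cache

def wcet_bound(cfg, cost, entry):
    """cfg: node -> list of successor nodes (must be acyclic);
    cost: node -> worst-case cycles of that basic block.
    Returns the cost of the most expensive path from entry."""
    @lru_cache(maxsize=None)
    def longest(node):
        succs = cfg[node]
        if not succs:
            return cost[node]
        return cost[node] + max(longest(s) for s in succs)
    return longest(entry)

# Diamond-shaped CFG: A branches to B or C, both rejoin at D.
cfg = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
cost = {'A': 5, 'B': 20, 'C': 8, 'D': 3}
bound = wcet_bound(cfg, cost, 'A')  # worst path A -> B -> D: 5 + 20 + 3
```

A WCET-aware optimizer, in this simplified view, would try to shorten the most expensive path rather than the average one.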


2021 ◽  
Author(s):  
Stefan Draskovic ◽  
Rehan Ahmed ◽  
Pengcheng Huang ◽  
Lothar Thiele

Abstract Mixed-criticality systems often need to fulfill safety standards that dictate different requirements for each criticality level, for example given in the 'probability of failure per hour' format. A recent trend suggests designing such systems by jointly scheduling tasks of different criticality levels on a shared platform. When this is done, the usual assumption is that tasks of lower criticality are degraded when a higher-criticality task needs more resources, for example when it overruns a bound on its execution time. However, how to quantify the impact of this degradation on the overall system is not well understood. Meanwhile, to improve schedulability and to avoid over-provisioning of resources due to overly pessimistic worst-case execution time estimates of higher-criticality tasks, a new paradigm has emerged in which tasks' execution times are modeled with random variables. In this paper, we analyze a system with probabilistic execution times and propose metrics that are inspired by safety standards. Among these metrics are the probability of deadline miss per hour, the expected time before degradation happens, and the duration of the degradation. We argue that these quantities provide a holistic view of the system's operation and schedulability.
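The paper derives its metrics analytically from probabilistic execution-time models; as a hedged sketch of the underlying notion, one job's deadline-miss probability can also be estimated by Monte Carlo sampling. The execution-time distribution and deadline below are invented for the example:

```python
# Illustrative sketch: the paper's metrics (e.g. probability of deadline
# miss per hour) are derived analytically; here we merely estimate a
# single job's deadline-miss probability by sampling an invented
# execution-time distribution.
import random

def miss_probability(sample_exec_time, deadline, trials=100_000, seed=0):
    """Fraction of sampled execution times that exceed the deadline."""
    rng = random.Random(seed)
    misses = sum(sample_exec_time(rng) > deadline for _ in range(trials))
    return misses / trials

# Execution time modeled as N(8 ms, 1 ms) clipped at zero; deadline 10 ms.
# Analytically, P(N(8, 1) > 10) = P(Z > 2), roughly 2.3%.
p = miss_probability(lambda rng: max(0.0, rng.gauss(8.0, 1.0)), 10.0)
```

Scaling such a per-job probability to a per-hour failure rate is exactly the kind of step the paper's safety-standard-inspired metrics formalize.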

