Worst-Case Execution Time
Recently Published Documents

TOTAL DOCUMENTS: 174 (five years: 21)
H-INDEX: 16 (five years: 2)

Author(s):  
Federico Reghenzani

Abstract: The difficulties in estimating the Worst-Case Execution Time (WCET) of applications limit the use of modern computing architectures in real-time systems. Critical embedded systems require the tasks of hard real-time applications to meet their deadlines, and certification authorities usually require formal proofs of the validity of this condition. In the last decade, researchers proposed probabilistic measurement-based methods to estimate the WCET instead of traditional static methods. In this chapter, we summarize recent theoretical and quantitative results on the use of probabilistic approaches to estimate the WCET presented in the PhD thesis of the author, including possible exploitation scenarios, open challenges, and future directions.
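The measurement-based probabilistic approach described above can be illustrated with a minimal sketch: fit a Gumbel distribution (a common Extreme Value Theory choice) to measured execution times by the method of moments, then read off the time that is exceeded only with a chosen probability. The synthetic measurements and the 10⁻⁶ exceedance level below are hypothetical, not taken from the thesis.

```python
import math
import random

EULER_GAMMA = 0.5772156649  # Euler–Mascheroni constant

def fit_gumbel(samples):
    """Method-of-moments fit of a Gumbel distribution to execution times."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - EULER_GAMMA * beta          # location parameter
    return mu, beta

def pwcet(samples, exceedance_prob):
    """pWCET bound: a time exceeded with probability at most `exceedance_prob`."""
    mu, beta = fit_gumbel(samples)
    # Gumbel quantile at probability 1 - exceedance_prob
    return mu - beta * math.log(-math.log(1.0 - exceedance_prob))

random.seed(0)
# Hypothetical measurements (microseconds) of one task's execution time.
times = [100 + random.expovariate(1 / 5.0) for _ in range(1000)]
bound = pwcet(times, 1e-6)
```

A static method would have to bound every architectural effect pessimistically; here the tail of the measured distribution does the work, at the cost of assumptions (e.g. independent, identically distributed measurements) that the surveyed research examines.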


2021 ◽  
Vol 20 (6) ◽  
pp. 1-36
Author(s):  
Márton Búr ◽  
Kristóf Marussy ◽  
Brett H. Meyer ◽  
Dániel Varró

Runtime monitoring plays a key role in the assurance of modern intelligent cyber-physical systems, which are frequently data-intensive and safety-critical. While graph queries can serve as an expressive yet formally precise specification language to capture the safety properties of interest, there are no timeliness guarantees for such auto-generated runtime monitoring programs, which prevents their use in a real-time setting. While worst-case execution time (WCET) bounds derived by existing static WCET estimation techniques are safe, they may not be tight, as they are unable to exploit domain-specific (semantic) information about the input models. This article presents a semantic-aware WCET analysis method for data-driven monitoring programs derived from graph queries. The method incorporates results obtained from low-level timing analysis into the objective function of a modern graph solver. This allows the systematic generation of input graph models up to a specified size (referred to as witness models) for which the monitor is expected to take the most time to complete. Hence, the estimated execution time of the monitors on these graphs can be considered a safe and tight WCET. Additionally, we perform a set of experiments with query-based programs running on a real-time platform over a set of generated models to investigate the relationship between execution times and their estimates, and we compare WCET estimates produced by our approach with results from two well-known timing analyzers, aiT and OTAWA.
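The witness-model idea can be sketched in miniature. The real method feeds low-level timing results into a graph solver's objective function under semantic constraints; the brute-force search and per-operation cycle costs below are invented for illustration only.

```python
from itertools import combinations

# Hypothetical per-operation cycle costs from a low-level timing analysis.
COST_VISIT_NODE = 12
COST_TRAVERSE_EDGE = 30

def monitor_cost(num_nodes, edges):
    """Cycle estimate for running a query monitor over one graph model."""
    return num_nodes * COST_VISIT_NODE + len(edges) * COST_TRAVERSE_EDGE

def witness_model(max_nodes):
    """Exhaustively search graphs up to `max_nodes` for the costliest one."""
    best = (0, None)
    for n in range(1, max_nodes + 1):
        # For this monotone cost model the complete graph is worst;
        # a real solver explores non-monotone, semantically constrained costs.
        possible_edges = list(combinations(range(n), 2))
        cost = monitor_cost(n, possible_edges)
        if cost > best[0]:
            best = (cost, (n, possible_edges))
    return best

cost, (n, edges) = witness_model(4)
```

Measuring the monitor on the returned witness model then yields an execution time that is both achievable (the model exists) and maximal within the size bound, which is what makes the estimate safe and tight.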


2021 ◽  
Author(s):  
Jessica Junia Santillo Costa ◽  
Romulo Silva de Oliveira ◽  
Luis Fernando Arcaro

2021 ◽  
Author(s):  
Bruno Dourado Miranda ◽  
Rômulo Silva De Oliveira ◽  
Andreu Carminati

Real-Time Operating Systems (RTOS) have their own modules that must be executed to manage system resources, and such modules add overhead to task response times. FreeRTOS is used for experimental purposes since it is a widely used open-source RTOS. This work investigates two important sources of overhead: the Tick function, a FreeRTOS time marker, and the context switch between tasks. We also describe a model for reducing the pessimism of the Tick analysis due to its temporal variation. Experiments measuring the execution time of the Tick and the context switch on an ARM Cortex-M4 microprocessor were conducted to obtain the Best-Case Execution Time and the Worst-Case Execution Time within a periodic task scenario. The measurements are used to validate the analytic models.
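One common way to fold such RTOS overheads into schedulability analysis is a response-time fixed-point iteration: every tick interrupt that fires within the response window adds its cost, plus context switches into and out of the task. The specific model and all cycle counts below are hypothetical, not the paper's exact formulation.

```python
import math

def response_time(wcet, tick_period, tick_cost, switch_cost):
    """Fixed-point iteration for a task's response time with tick overhead.

    Hypothetical model: each tick falling inside the response window adds
    `tick_cost`, and the task pays one context switch in and one out.
    """
    r = wcet + 2 * switch_cost
    while True:
        r_next = wcet + 2 * switch_cost + math.ceil(r / tick_period) * tick_cost
        if r_next == r:
            return r
        r = r_next

# All values in CPU cycles for an illustrative Cortex-M4 configuration.
r = response_time(wcet=50_000, tick_period=10_000, tick_cost=400, switch_cost=900)
```

The iteration converges because each step can only grow the window, and the window is bounded if utilization (including overhead) stays below one; reducing the pessimism of `tick_cost` directly tightens the resulting response time.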


Energies ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 1747
Author(s):  
Simona Ramanauskaite ◽  
Asta Slotkiene ◽  
Kornelija Tunaityte ◽  
Ivan Suzdalev ◽  
Andrius Stankevicius ◽  
...  

Worst-case execution time (WCET) is an important metric in real-time systems that helps in energy-usage modeling and in evaluating predefined execution-time requirements. While basic timing analysis relies on identifying the execution path and evaluating its length, multi-threaded code that uses critical sections brings additional complications and requires estimating resource-waiting time. In this paper, we address the problem of reducing worst-case execution time overestimation in situations where multiple threads execute loops that use the same critical section in each iteration. Our experiment showed that the theoretical worst-case execution time does not take into account the proportion between the computational and critical sections; we therefore propose a new worst-case execution time calculation model that reduces the overestimation. The proposed model reduces the overestimation on average by half in comparison with the theoretical model, leading to more accurate execution-time and energy-consumption estimates.
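The gap the abstract describes can be made concrete with a toy comparison: the classic bound charges every critical-section entry with waiting for all other threads, while a bound that accounts for the computation/critical-section proportion recognizes that threads pipeline when computation is long enough to cover the others' lock holding. Both formulas below are illustrative stand-ins, not the paper's model.

```python
def theoretical_wcet(n_iter, comp, crit, threads):
    """Classic bound: every critical-section entry waits for all other threads."""
    return n_iter * (comp + threads * crit)

def proportional_wcet(n_iter, comp, crit, threads):
    """Hypothetical tighter bound using the computation/critical-section ratio."""
    if comp >= (threads - 1) * crit:
        # Computation covers the others' lock holding: threads pipeline,
        # and only the initial offsets serialize.
        return n_iter * (comp + crit) + (threads - 1) * crit
    # Critical section dominates: the lock itself is the bottleneck.
    return n_iter * threads * crit + comp

theory = theoretical_wcet(n_iter=100, comp=50, crit=10, threads=4)
tighter = proportional_wcet(n_iter=100, comp=50, crit=10, threads=4)
```

With these illustrative numbers the proportional bound is roughly two thirds of the classic one, showing how ignoring the section proportions inflates the estimate.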


Author(s):  
Guilherme Isaias Debom Machado ◽  
Fabian Luis Vargas ◽  
Celso Maciel da Costa

Execution time is as important a requirement as the computed result when designing real-time systems for critical applications. It is imperative to know the possible execution times, especially when a system delay may result in equipment damage or even crew injuries. With that in mind, the current work analyzes different techniques to define the Probabilistic Worst-Case Execution Time (pWCET) using Extreme Value Theory (EVT). Since probabilistic methodologies have been widely explored, this study aims to assess how accurate pWCET estimations are when EVT is applied. The analysis compares pWCET estimations to the real behavior of the system, predicting the upper-bound execution limits of two algorithms on a MIPS processor. Further, this work employs the Block Maxima technique, which selects the highest measured values to define a probabilistic distribution that represents the analyzed system. The outcomes point to some limitations of the Block Maxima technique, such as requiring a large number of samples to obtain a reliable analysis. The results show that EVT is a useful and trustworthy technique for defining pWCET estimations.
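The Block Maxima step itself is simple to sketch: partition the measurements into fixed-size blocks and keep only each block's maximum, which is the data the extreme-value distribution is then fitted to. The sample sizes below are made up, but they illustrate the limitation noted above: 10,000 measurements shrink to only 100 maxima.

```python
import random

def block_maxima(samples, block_size):
    """Split measurements into consecutive blocks and keep each block's maximum."""
    return [max(samples[i:i + block_size])
            for i in range(0, len(samples) - block_size + 1, block_size)]

random.seed(1)
# Hypothetical execution-time measurements (microseconds).
times = [random.gauss(200, 8) for _ in range(10_000)]
maxima = block_maxima(times, block_size=100)
```

The block size trades bias against variance: larger blocks make the maxima better approximate an extreme-value distribution, but leave fewer points to fit it with, which is why reliable analyses need many raw samples.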


Author(s):  
Peter Marwedel

Abstract: During the design procedure, we have to check repeatedly whether or not the system under design is likely to perform its function and to satisfy all relevant design objectives. This is the purpose of the validations and evaluations that must be performed during the design process. This chapter starts with a presentation of techniques for the evaluation of (partial) designs with respect to objectives. In particular, we consider (worst-case) execution time, quality of results, thermal behavior, and dependability as objectives. We provide an introduction to fundamental techniques for computing the worst-case execution time. Examples of energy models are presented in order to demonstrate the need to adjust the level of model detail to the particular application at hand. Thermal modeling is reduced to the problem of equivalent electrical modeling. With respect to dependability, an introduction to statistical models of reliability as well as an introduction to fault trees are included. As a means for relating results for the different objectives to each other, we introduce the concept of Pareto optimality. The chapter closes with hints regarding validation techniques, including simulation, rapid prototyping, and formal verification.
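Pareto optimality, the concept the chapter uses to relate objectives, is easy to state in code: a design is Pareto-optimal if no other design is at least as good in every (minimized) objective and strictly better overall. The design points below are hypothetical WCET/energy pairs.

```python
def pareto_front(points):
    """Return the designs not dominated in all (minimized) objectives."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical design points: (WCET in ms, energy in mJ).
designs = [(10, 50), (12, 40), (15, 35), (11, 60), (20, 34)]
front = pareto_front(designs)
```

Here (11, 60) is dominated by (10, 50), which is both faster and more energy-efficient; the remaining points form the front among which the designer must trade objectives off explicitly.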


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1873
Author(s):  
Iván Gamino del Río ◽  
Agustín Martínez Hellín ◽  
Óscar R. Polo ◽  
Miguel Jiménez Arribas ◽  
Pablo Parra ◽  
...  

Code instrumentation enables the observability of an embedded software system during its execution. One usage example of code instrumentation is the estimation of worst-case execution time using hybrid analysis, which combines static code analysis with measurements of the execution time on the deployment platform. Static analysis of the source code determines where to insert the tracing instructions, so that the execution time can later be captured using a logic analyser. The main drawback of this technique is the overhead introduced by executing the trace instructions. This paper proposes a modification of the architecture of a pipelined RISC processor that eliminates the execution-time overhead introduced by code instrumentation. In this way, tracing becomes non-intrusive, since the sequence and execution times of the program under analysis are not modified by the introduction of traces. As a use case of the proposed solution, a processor based on the RISC-V architecture was implemented in VHDL. The processor, synthesized on an FPGA, was used to execute and evaluate a set of examples of instrumented code generated by a worst-case execution time estimation tool. The results validate that the proposed architecture executes the instrumented code without overhead.
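The post-processing side of such hybrid analysis can be sketched independently of the hardware: given a captured trace of (basic-block id, cycle timestamp) events, the worst observed time per block is the largest gap between an event and its successor. The trace format and values below are invented for illustration.

```python
def block_wcets(trace):
    """Worst observed time per basic block from (block_id, timestamp) events.

    Each event marks entry to a block; the block's time is the gap until
    the next event (a hypothetical trace format).
    """
    worst = {}
    for (block, t0), (_, t1) in zip(trace, trace[1:]):
        worst[block] = max(worst.get(block, 0), t1 - t0)
    return worst

# Hypothetical logic-analyser capture: (basic-block id, cycle count).
trace = [("A", 0), ("B", 120), ("A", 300), ("B", 415), ("END", 600)]
wcets = block_wcets(trace)
```

The hardware contribution above matters precisely because these timestamps are only trustworthy if emitting them costs zero cycles; otherwise every per-block figure includes the tracing overhead itself.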

