AN EXPERIMENTAL ANALYSIS OF SPATIAL INDEXING ALGORITHMS FOR REAL-TIME SAFETY-CRITICAL MAP APPLICATION

Author(s):  
F. Çetin ◽  
M. O. Kulekci

Abstract. This paper presents a study that compares three space-partitioning and spatial indexing techniques: the KD Tree, the Quad KD Tree, and the PR Tree. The KD Tree is a data structure proposed by Bentley (Bentley and Friedman, 1979) that aims to cluster objects according to their spatial location. The Quad KD Tree is a data structure proposed by Bereczky (Bereczky et al., 2014) that aims to partition objects using heuristic methods. In contrast to Bereczky’s partitioning technique, this study presents a new partitioning technique based on dividing objects in a space-driven manner. The PR Tree is a data structure proposed by Arge (Arge et al., 2008); it is an asymptotically optimal R-Tree variant that enables data-driven segmentation. The study is mainly aimed at searching and rendering big spatial data in a real-time, safety-critical avionics navigation map application. Such a real-time system needs to reach the required records inside a specific boundary efficiently, so performing range queries at runtime (such as finding the closest neighbors) is critical for performance. The most crucial purpose of these data structures is to reduce the number of comparisons needed to solve the range-searching problem. In this study, the data structure of each algorithm is built and indexed, and worst-case analyses covering the whole area are carried out to measure range-search performance. The techniques are also benchmarked in terms of elapsed time and memory usage. These experiments show that the Quad KD Tree outperformed the other techniques in range-search analysis, especially when the data set is massive and consists of different geometry types.
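As a rough illustration of the kind of range query these index structures accelerate, the following minimal 2-D KD-tree sketch prunes whole subtrees that cannot intersect an axis-aligned query box. The point set, class names, and median-splitting rule are illustrative assumptions, not the paper's implementation.

```python
# Minimal 2-D KD-tree range-search sketch (illustrative only).
# Points are (x, y) tuples; the query is an axis-aligned box.

class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point = point      # splitting point stored at this node
        self.axis = axis        # 0 = split on x, 1 = split on y
        self.left = left
        self.right = right

def build(points, depth=0):
    """Recursively build a KD-tree by splitting on the median along alternating axes."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))

def range_search(node, box, found):
    """Collect all points inside box = (xmin, ymin, xmax, ymax), pruning subtrees
    that cannot intersect the query rectangle."""
    if node is None:
        return
    x, y = node.point
    xmin, ymin, xmax, ymax = box
    if xmin <= x <= xmax and ymin <= y <= ymax:
        found.append(node.point)
    lo, hi = (xmin, xmax) if node.axis == 0 else (ymin, ymax)
    coord = node.point[node.axis]
    if lo <= coord:               # query box overlaps the left half-space
        range_search(node.left, box, found)
    if coord <= hi:               # query box overlaps the right half-space
        range_search(node.right, box, found)

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
hits = []
range_search(tree, (3, 1, 9, 5), hits)
print(hits)   # points falling inside the query rectangle
```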

Author(s):  
M. A. Ganter ◽  
B. P. Isarankura

Abstract A technique termed space partitioning is employed which dramatically reduces the computation time required to detect dynamic collisions during computer simulation. The simulated environment consists of two nonconvex polyhedra traversing two general six-degree-of-freedom trajectories. The space-partitioning technique reduces collision detection time by subdividing the space containing a given object into a set of linear partitions. Using these partitions, all testing can be confined to the local region of overlap between the two objects. Further, all entities contained in the partitions inside the region of overlap are ordered by their respective minima and maxima to reduce testing further. Experimental results indicate that the worst-case collision detection time for two one-thousand-faced objects is approximately three seconds per trajectory step.
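The sketch below illustrates the general idea described in the abstract: face extents along one axis are binned into linear partitions, candidate tests are confined to the slabs inside the two objects' region of overlap, and extents are ordered by their minima so comparisons can be cut off early. The names, cell size, and data layout are assumptions, not the paper's actual data structures.

```python
# Illustrative slab-partitioning sketch for broad-phase collision candidate tests.
from collections import defaultdict

def build_partitions(face_intervals, cell_size):
    """face_intervals: list of (face_id, lo, hi) extents along the chosen axis.
    Returns a dict mapping slab index -> faces whose extent touches that slab."""
    slabs = defaultdict(list)
    for fid, lo, hi in face_intervals:
        for slab in range(int(lo // cell_size), int(hi // cell_size) + 1):
            slabs[slab].append((fid, lo, hi))
    return slabs

def candidate_pairs(slabs_a, slabs_b, overlap_lo, overlap_hi, cell_size):
    """Only slabs inside the overlap region of the two objects are examined;
    within them, faces are ordered by their minima so non-overlapping extents
    are skipped early (a simple sweep)."""
    pairs = set()
    first = int(overlap_lo // cell_size)
    last = int(overlap_hi // cell_size)
    for slab in range(first, last + 1):
        faces_a = sorted(slabs_a.get(slab, []), key=lambda f: f[1])
        faces_b = sorted(slabs_b.get(slab, []), key=lambda f: f[1])
        for fa, lo_a, hi_a in faces_a:
            for fb, lo_b, hi_b in faces_b:
                if lo_b > hi_a:          # faces_b is sorted by minimum: stop early
                    break
                if lo_a <= hi_b:         # extents overlap -> candidate for exact test
                    pairs.add((fa, fb))
    return pairs

slabs_a = build_partitions([(0, 0.0, 1.5), (1, 1.0, 2.4)], cell_size=1.0)
slabs_b = build_partitions([(7, 1.2, 1.9), (8, 3.0, 4.0)], cell_size=1.0)
print(candidate_pairs(slabs_a, slabs_b, overlap_lo=1.0, overlap_hi=2.5, cell_size=1.0))
```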


2014 ◽  
Vol 543-547 ◽  
pp. 1972-1976
Author(s):  
Huai Lin Dong ◽  
Ming Yuan He ◽  
Qing Feng Wu ◽  
Sheng Hang Wu

When membership queries are evaluated on a set, performance can be improved by a Bloom filter, a space-efficient probabilistic data structure. Exploiting this space efficiency, a Bloom filter is presented to address the load-balancing problem for streaming media information in Storm, a free and open-source distributed real-time computation system. The method increases server cluster availability by balancing the workloads among the servers within a cluster. Additionally, it improves the efficiency of the Storm real-time system by saving data transmission time and reducing computational complexity.
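For reference, a minimal Bloom filter looks like the sketch below: k independent hashes set and test k bits in an m-bit array, so lookups may yield false positives but never false negatives. The bit-array size, hash count, and hashing scheme are illustrative choices; the Storm-specific load-balancing logic from the paper is not reproduced.

```python
# Minimal Bloom-filter sketch (illustrative only).
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item):
        # Derive k bit positions by salting a cryptographic hash (simple, not fast).
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("stream-segment-42")
print("stream-segment-42" in bf)   # True
print("stream-segment-99" in bf)   # almost certainly False
```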


2010 ◽  
Vol 46 (2) ◽  
pp. 251-300 ◽  
Author(s):  
Heiko Falk ◽  
Paul Lokuciejewski

Abstract The current practice of designing software for real-time systems is tedious. There is almost no tool support that assists the designer in automatically deriving safe bounds on the worst-case execution time (WCET) of a system during code generation and in systematically optimizing code to reduce the WCET. This article presents concepts and infrastructures for WCET-aware code generation and optimization techniques for WCET reduction. Altogether, they help to obtain code explicitly optimized for its worst-case timing, to automate large parts of the real-time software design flow, and to reduce the cost of a real-time system by allowing tailored hardware to be used.


2014 ◽  
Vol 577 ◽  
pp. 865-872
Author(s):  
Jun Yi Li ◽  
Yi Zhang ◽  
Ren Fa Li

A real-time system estimates the worst-case execution time (WCET) of a program to ensure that the system's real-time requirements are met. In this paper, a test method based on Associative Process Communication (APC) is put forward. It first measures the WCET of the basic blocks of the ICFG using the APC algorithm, and then estimates the overall WCET by analyzing the worst-case execution path over those basic blocks. The APC test method is applied to all of the Mälardalen benchmarks, and the test results show that the proposed method is precise and effective, with a test error within the theoretically analyzed bound.
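The second step, deriving a program-level estimate from per-block values, can be pictured as a longest-path computation over an acyclic control-flow graph, as in the hedged sketch below. The block names, cycle counts, and graph are invented for illustration, and the APC measurement step itself is not shown.

```python
# Longest-path WCET bound over an acyclic CFG of basic blocks (illustrative only).
from functools import lru_cache

block_wcet = {"entry": 5, "A": 12, "B": 7, "C": 20, "exit": 3}   # measured cycles (made up)
successors = {"entry": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["exit"], "exit": []}

@lru_cache(maxsize=None)
def longest_path_wcet(block):
    """WCET bound from `block` to program exit: block cost plus the worst successor path."""
    if not successors[block]:
        return block_wcet[block]
    return block_wcet[block] + max(longest_path_wcet(s) for s in successors[block])

print(longest_path_wcet("entry"))   # 5 + max(12, 7) + 20 + 3 = 40
```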


Author(s):  
Jia Xu

Methods for handling process underruns and overruns when scheduling a set of real-time processes increase both system utilization and robustness in the presence of inaccurate estimates of the worst-case computation times of real-time processes. In this paper, we present a method that efficiently re-computes the latest start times of real-time processes at run-time whenever a real-time process is preempted or has completed (or overrun). The method identifies which processes' latest start times are affected by the preemption or completion of a process. Hence the method is able to reduce real-time system overhead effectively by selectively re-computing latest start times only for the specific processes whose latest start times are changed by a process preemption or completion, as opposed to indiscriminately re-computing latest start times for all processes.
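A heavily simplified, single-processor sketch of the quantity involved is shown below: latest start times computed backwards over a schedule from deadlines and remaining worst-case computation times. This is not the paper's selective re-computation algorithm; it only illustrates why an early completion or preemption perturbs the latest start times of a limited set of processes, which is what a selective update can exploit.

```python
# Backward computation of latest start times for processes run in schedule order
# (simplified illustration; not the paper's method).

def latest_start_times(schedule):
    """schedule: list of (name, remaining_wcet, deadline) in execution order.
    Returns {name: latest start time} computed from the back of the schedule."""
    lst = {}
    next_lst = float("inf")
    for name, wcet, deadline in reversed(schedule):
        start = min(deadline, next_lst) - wcet   # must finish before both bounds
        lst[name] = start
        next_lst = start
    return lst

print(latest_start_times([("p1", 2.0, 10.0), ("p2", 3.0, 12.0), ("p3", 1.0, 12.0)]))
# {'p3': 11.0, 'p2': 8.0, 'p1': 6.0}
```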


2017 ◽  
Vol 28 (02) ◽  
pp. 141-169
Author(s):  
George Lagogiannis

In this paper we present the first data structure for partially persistent B-trees with constant worst-case update time, in which deletions are handled symmetrically to insertions. Our structure matches the query times of optimal partially persistent B-trees that do not support constant update time; thus, we have managed to reduce the worst-case update time to a constant without a penalty in query times. The new data structure is produced by combining two other data structures: (a) the partially persistent B-tree and (b) the balanced search tree with constant worst-case update time.


2016 ◽  
Vol 25 (06) ◽  
pp. 1650062 ◽  
Author(s):  
Gang Chen ◽  
Kai Huang ◽  
Long Cheng ◽  
Biao Hu ◽  
Alois Knoll

Shared cache interference in multi-core architectures has been recognized as one of the major factors that degrade the predictability of a mixed-critical real-time system. Due to unpredictable cache interference, the behavior of the shared cache is hard to predict and analyze statically in multi-core architectures executing mixed-critical tasks, which not only makes it difficult to estimate the worst-case execution time (WCET) but also introduces significant worst-case timing penalties for critical tasks. Therefore, cache management in mixed-critical multi-core systems has become a challenging task. In this paper, we present a dynamic partitioned cache memory for mixed-critical real-time multi-core systems. In this architecture, critical tasks can dynamically allocate and release cache resources during their execution interval according to the real-time workload. This dynamic partitioned cache can, on the one hand, provide predictable cache performance for critical tasks; on the other hand, the released cache can be dynamically used by non-critical tasks to improve their average performance. We demonstrate and prototype our system design on an embedded FPGA platform. Measurements from the prototype clearly demonstrate the benefits of the dynamic partitioned cache for mixed-critical real-time multi-core systems.
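Way-based partitioning is one common way to realize such dynamic allocation. The sketch below is an assumption-laden illustration (the task names, way count, and allocation policy are invented), not the paper's FPGA design: critical tasks lock cache ways for their execution interval and release them afterwards, while non-critical tasks share whatever ways remain free.

```python
# Illustrative way-based dynamic cache partitioning (not the paper's hardware design).

class WayPartitionedCache:
    def __init__(self, total_ways=16):
        self.total_ways = total_ways
        self.owner = [None] * total_ways   # None = shared pool for non-critical tasks

    def allocate(self, task, ways_needed):
        """Reserve `ways_needed` free ways for a critical task; returns the way indices."""
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < ways_needed:
            raise RuntimeError("not enough free ways for critical task " + task)
        granted = free[:ways_needed]
        for i in granted:
            self.owner[i] = task
        return granted

    def release(self, task):
        """Return a critical task's ways to the shared pool when it finishes."""
        for i, o in enumerate(self.owner):
            if o == task:
                self.owner[i] = None

    def shared_ways(self):
        """Ways currently usable by non-critical tasks."""
        return [i for i, o in enumerate(self.owner) if o is None]

cache = WayPartitionedCache()
cache.allocate("crit_task_A", 4)      # predictable ways for the critical task
print(len(cache.shared_ways()))       # 12 ways left for best-effort tasks
cache.release("crit_task_A")
```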


Author(s):  
Alan Grigg ◽  
Lin Guan

This chapter describes a real-time system performance analysis approach known as reservation-based analysis (RBA). The scalability of RBA is derived from an abstract (target-independent) representation of system software components, their timing and resource requirements, and run-time scheduling policies. The RBA timing analysis framework provides an evolvable modeling solution that can be instigated in the early stages of system design, long before the software and hardware components have been developed, and continually refined through successive stages of detailed design, implementation and testing. At each stage of refinement, the abstract model provides a set of best-case and worst-case timing ‘guarantees’ that will be delivered subject to a set of scheduling ‘obligations’ being met by the target system implementation. An abstract scheduling model, known as the rate-based execution model, then provides an implementation reference model; compliance with this model ensures that the imposed set of timing obligations will be met by the target system.


2018 ◽  
Vol 7 (2.7) ◽  
pp. 618
Author(s):  
Mary Swarna Latha Gade ◽  
K Sreenivasa Ravi

Most real-time systems have timing constraints, the most important of which is that application tasks meet their deadlines. Not only must the timing constraints of a real-time system be satisfied, the functional correctness of the application must also be guaranteed: meeting a deadline is of no use if the application deviates from its precise output. The timing constraints of the system can be satisfied by choosing proper task-scheduling algorithms, and the reliability of the system can be achieved by providing fault tolerance. In this paper, various fault-tolerant scheduling algorithms such as Fixed Priority, EDF (Earliest Deadline First), LLF (Least Laxity First) and Rate Monotonic are studied and compared with respect to parameters such as worst-case execution time, response time, missed task deadlines, number of preemptions, number of context switches, deadlock and processor utilization factor.
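As a simple illustration of how such comparisons can be made, the sketch below simulates preemptive EDF for a synthetic task set and collects a few of the metrics listed above (preemptions, missed deadlines, processor utilization). The task parameters and simulator are illustrative assumptions, not the tool used in the paper.

```python
# Tiny preemptive-EDF simulation collecting basic scheduling metrics (illustrative only).

def edf_simulate(tasks, horizon):
    """tasks: list of (name, period, wcet); relative deadlines equal periods.
    Advances time one unit per step and returns simple statistics."""
    jobs = []                        # active jobs: [name, absolute_deadline, remaining_wcet]
    stats = {"preemptions": 0, "missed_deadlines": 0, "busy_time": 0}
    running = None
    for t in range(horizon):
        # Release a new job for every task at its period boundary.
        for name, period, wcet in tasks:
            if t % period == 0:
                jobs.append([name, t + period, wcet])
        # Jobs whose absolute deadline has passed with work remaining are dropped.
        stats["missed_deadlines"] += sum(1 for j in jobs if j[1] <= t)
        jobs = [j for j in jobs if j[1] > t]
        if not jobs:
            running = None
            continue
        # EDF rule: execute the active job with the earliest absolute deadline.
        job = min(jobs, key=lambda j: j[1])
        if running is not None and running in jobs and running is not job:
            stats["preemptions"] += 1
        running = job
        job[2] -= 1
        stats["busy_time"] += 1
        if job[2] == 0:              # job finished: remove it from the active set
            jobs.remove(job)
            running = None
    stats["utilization"] = stats["busy_time"] / horizon
    return stats

print(edf_simulate([("t1", 5, 2), ("t2", 10, 3)], horizon=100))
```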

