Scholarly journals: Translation lookaside buffer management

Author(s):  
Y. I. Klimiankou

This paper focuses on Translation Lookaside Buffer (TLB) management as part of memory management. The TLB is an associative cache found in modern processors that reduces the overhead of virtual-to-physical address translation. We consider the challenges involved in designing the TLB management subsystem of an OS kernel, using the IA-32 platform as an example, and propose a simple model of a complete and consistent TLB management policy. This model can serve as a foundation for the design and verification of memory management subsystems.
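The kind of consistency policy the abstract refers to can be illustrated with a toy model (all names here are illustrative; the paper's actual model is not reproduced): a cached translation must be invalidated whenever its backing page-table entry changes, mirroring the effect of IA-32's `invlpg` instruction.

```python
# Toy model of a complete-and-consistent TLB management policy.
# A TLB entry is consistent iff it matches the current page table;
# the policy invalidates the cached entry on every PTE update
# (the analogue of IA-32's `invlpg` single-entry flush).

class ToyMMU:
    def __init__(self):
        self.page_table = {}   # virtual page -> physical frame
        self.tlb = {}          # cached subset of page_table

    def translate(self, vpage):
        """Translate via the TLB, walking the page table on a miss."""
        if vpage not in self.tlb:                      # TLB miss
            self.tlb[vpage] = self.page_table[vpage]   # walk + fill
        return self.tlb[vpage]

    def map(self, vpage, pframe):
        """Update a PTE and apply the invalidation policy."""
        self.page_table[vpage] = pframe
        self.tlb.pop(vpage, None)  # invlpg-style invalidation

    def consistent(self):
        """Invariant: every cached entry matches memory."""
        return all(self.page_table.get(v) == p
                   for v, p in self.tlb.items())
```

Omitting the `pop` in `map` is exactly the kind of incompleteness such a policy model is meant to catch: a later `translate` would return a stale frame while `consistent()` reports a violation.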

Author(s):  
Eunji Lee

This article explores performance optimizations of an embedded database memory management system to ensure high responsiveness in real-time healthcare data frameworks. SQLite is a popular embedded database engine used extensively in medical and healthcare data storage systems. However, SQLite is essentially built around lightweight applications on mobile devices, and its performance deteriorates significantly when a large transaction is issued, such as one storing high-resolution medical images or a massive health dataset; such transactions are unlikely in embedded systems but quite common elsewhere. These transactions do not fit in SQLite's in-memory buffer, so SQLite enforces memory reclamation while they are processed. The problem is that the current SQLite buffer management scheme does not handle these cases effectively, and the naïve reclamation scheme it uses significantly increases user-perceived latency. Motivated by this limitation, this paper identifies the causes of high latency during the processing of a large transaction and overcomes them via proactive and coarse-grained memory cleaning in SQLite. The proposed memory reclamation scheme was implemented in SQLite 3.29, and measurement studies with a prototype implementation demonstrated that SQLite operation latency decreases by 13% on average, and by up to 17.3%, compared to the original version.
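The contrast between naïve page-at-a-time reclamation and coarse-grained batch cleaning can be sketched with a toy buffer pool (the names and structure below are illustrative, not SQLite's internal API): batching evictions amortizes the per-pass overhead that a large transaction would otherwise pay on every page.

```python
# Toy buffer pool contrasting the two reclamation styles discussed
# in the abstract: on-demand one-page eviction vs. proactive,
# coarse-grained batch cleaning. Illustrative only; not SQLite code.

from collections import OrderedDict

class BufferPool:
    def __init__(self, capacity, batch=0):
        self.capacity = capacity
        self.batch = batch          # 0 -> naive one-page reclamation
        self.pages = OrderedDict()  # page id -> payload, LRU order
        self.evictions = 0          # number of reclamation passes

    def put(self, page_id, payload):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)
        elif len(self.pages) >= self.capacity:
            # Reclaim a single page (naive) or a whole batch
            # (coarse-grained), amortizing the per-pass cost.
            n = max(1, self.batch)
            for _ in range(min(n, len(self.pages))):
                self.pages.popitem(last=False)  # evict LRU page
            self.evictions += 1
        self.pages[page_id] = payload

naive = BufferPool(capacity=4)
coarse = BufferPool(capacity=4, batch=4)
for i in range(16):                 # one "large transaction"
    naive.put(i, b"page")
    coarse.put(i, b"page")
```

For the 16-page transaction above, the naïve pool runs a reclamation pass on almost every insertion, while the batched pool runs only a few, which is the latency effect the paper measures.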


2021 ◽  
Vol 9 (01) ◽  
pp. 1138-1156
Author(s):  
Gballou Yao Theophile ◽  
Toure Kidjegbo Augustin ◽  
Tiecoura Yves ◽  
...  

Vehicular Delay-Tolerant Networks (VDTNs) are vehicle networks in which there is no end-to-end connectivity between source and destination. As a result, VDTNs rely on cooperation between the different nodes to improve their performance. However, the presence of selfish nodes that refuse to participate in the routing protocol degrades the overall performance of these networks. To reduce the impact of these selfish nodes, previously proposed strategies, on the one hand, use a node transmission rate that does not take into account the priority class of service of messages and, on the other hand, are based on traditional buffer management systems (FIFO, Random). As a result, quality of service is not guaranteed in this type of network, where different applications produce messages with different priorities. In this paper, we propose a strategy for detecting selfish nodes and taking action against them in relation to priority classes in order to reduce their impact. The strategy rests on a partitioned memory management system that takes into account the priority and lifetime of messages, on the calculation of a node's transmission rate with respect to the priority class of the node with the highest delivery predictability, on a mechanism for calculating a node's degree of selfishness with respect to each priority class, and on a monitoring mechanism. The simulations carried out show that the proposed model can detect selfish nodes and improve network performance by increasing the delivery rate of high-priority messages, reducing their delivery delay, and reducing network overload.
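A per-class selfishness degree of the kind the abstract describes can be sketched as follows (the abstract does not give the exact formula; this sketch assumes selfishness is the shortfall of a node's forwarding rate relative to a reference rate, with all names hypothetical):

```python
# Illustrative per-priority-class selfishness metric: a node that
# forwards far fewer messages of a class than a reference node is
# assigned a degree close to 1; a cooperative node gets 0.

def selfishness_degree(forwarded, received, ref_rate):
    """Return a value in [0, 1]; 0 means fully cooperative."""
    if received == 0:
        return 0.0                      # no traffic, no evidence
    rate = forwarded / received         # observed transmission rate
    return max(0.0, min(1.0, (ref_rate - rate) / ref_rate))

# Per-class observations: (messages forwarded, messages received).
observations = {"high": (2, 10), "medium": (6, 10), "low": (9, 10)}
degrees = {cls: selfishness_degree(f, r, ref_rate=0.9)
           for cls, (f, r) in observations.items()}
```

Computing the degree per priority class is what lets the detection mechanism flag a node that cooperates on low-priority traffic while silently dropping high-priority messages.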


Author(s):  
Sanket Suresh Naik Dessai ◽  
Varuna Eswer

Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). In certain processors, the L1 cache and TLB are managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand their management under varied processor load. This paper presents an implementation of an embedded testing procedure to analyse the performance of the MIPS32 processor's L1 cache and TLB management by the operating system (OS). The implementation proposed for embedded testing counts the executions of the respective cache and TLB management instructions, an event that is measurable with dedicated counters. The lack of hardware counters in the MIPS32 processor leads to the use of software-based event counters defined in the kernel. This paper implements an embedded testbed with a subset of MIPS32 processor performance measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the testbed implementation procedure for the software-based processor performance counters, use-case analysis diagrams, flow charts, screenshots, and knowledge nuggets are supplied along with histograms of the cache and TLB event data generated by the proposed implementation. In this testbed, twenty-seven metrics have been identified and implemented to provide data on L1 cache and TLB events on the MIPS32 processor. The generated data can be used in compiler tuning, OS memory management design, system benchmarking, scalability studies, analysis of architectural issues, address space analysis, understanding bus communication, kernel profiling, and workload characterisation.
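The idea of software-based event counters can be sketched in a few lines (the event names below are hypothetical; the paper's twenty-seven MIPS32 metrics are not reproduced here): each cache or TLB management routine ticks its own counter, so the counts approximate what a hardware performance counter would report.

```python
# Sketch of kernel-style software event counters: instrument each
# management routine so every invocation increments a named counter.

from collections import Counter

EVENT_COUNTS = Counter()

def count_event(name):
    """Decorator that ticks a software counter on each call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            EVENT_COUNTS[name] += 1
            return fn(*args, **kwargs)
        return inner
    return wrap

@count_event("tlb_refill")
def tlb_refill(vpage):
    pass   # stand-in for the OS TLB refill handler

@count_event("dcache_flush")
def dcache_flush():
    pass   # stand-in for a cache maintenance routine

for v in range(5):
    tlb_refill(v)
dcache_flush()
```

In a real kernel the increment would live inside the exception handler or maintenance routine itself rather than a decorator, but the accounting principle is the same.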


Author(s):  
Varuna Eswer ◽  
Sanket S Naik Dessai

Processor efficiency is important in an embedded system, and it depends on the L1 cache and translation lookaside buffer (TLB). It is necessary to understand L1 cache and TLB performance under varied load during execution on the processor; hence this paper studies that performance under varying load, with the caches, on MIPS with an operating system (OS). The proposed implementation counts the instruction executions for the respective cache and TLB management, and the events are measured using dedicated counters in software. Software counters are used because of the limitations of hardware counters in MIPS32. Twenty-seven metrics are identified and implemented for the performance measurement of the L1 cache and TLB on the MIPS32 processor. The generated data supports future research in compiler tuning, memory management design for the OS, analysis of architectural issues, system benchmarking, scalability, address space analysis, studies of bus communication with the processor, workload characterisation, and kernel profiling.


2021 ◽  
Vol 9 (01) ◽  
pp. 1092-1104
Author(s):  
Gballou Yao Theophile ◽  
Toure Kidjegbo Augustin ◽  
Tiecoura Yves ◽  
...  

In vehicular delay-tolerant networks, buffer management systems are developed to improve overall performance. However, existing buffer management systems cannot simultaneously reduce network overload, reduce the delivery delay of high-priority messages, and improve the delivery rates of messages of all priority classes. As a result, quality of service is not guaranteed. In this paper, we propose a drop policy based on the constitution of two queues according to message weight, the position of the node in relation to the destination, and a comparison of age between the high-priority message and the messages in the low-priority queue. Simulation results show that, compared to the existing buffer management policy based on time-to-live and priority, our strategy simultaneously reduces network overload, reduces the delivery delay of high-priority messages, and increases the delivery rate of messages regardless of their priority.
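A minimal sketch of such a two-queue drop policy, under assumptions of ours (the paper's weight and position criteria are simplified to a priority flag, and field names are illustrative): when the buffer is full, an incoming high-priority message may displace a low-priority message, but only one older than itself.

```python
# Toy two-queue buffer with an age-comparing drop policy, in the
# spirit of the abstract. Illustrative only.

from dataclasses import dataclass

@dataclass
class Msg:
    id: str
    high_priority: bool
    age: float   # seconds since creation

class TwoQueueBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.high, self.low = [], []

    def _size(self):
        return len(self.high) + len(self.low)

    def offer(self, msg):
        """Store msg; if full, drop an older low-priority message."""
        if self._size() < self.capacity:
            (self.high if msg.high_priority else self.low).append(msg)
            return True
        if msg.high_priority and self.low:
            oldest = max(self.low, key=lambda m: m.age)
            if oldest.age > msg.age:    # displace only older traffic
                self.low.remove(oldest)
                self.high.append(msg)
                return True
        return False                    # incoming message dropped

buf = TwoQueueBuffer(capacity=2)
buf.offer(Msg("a", False, age=30.0))
buf.offer(Msg("b", False, age=5.0))
accepted = buf.offer(Msg("c", True, age=10.0))  # displaces "a"
```

The age comparison is what keeps the policy from starving low-priority traffic: a fresh low-priority message survives the arrival of an even fresher high-priority one.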


10.29007/c2f1 ◽  
2018 ◽  
Author(s):  
Hira Syeda ◽  
Gerwin Klein

The main security mechanism for enforcing memory isolation in operating systems is provided by page tables. The hardware-implemented Translation Lookaside Buffer (TLB) caches these, and therefore the TLB and its consistency with memory are security critical for OS kernels, including formally verified kernels such as seL4. If performance is paramount, this consistency can be subtle to achieve; yet, all major formally verified kernels currently leave the TLB as an assumption. In this paper, we present a formal model of the Memory Management Unit (MMU) for the ARM architecture which includes the TLB, its maintenance operations, and its derived properties. We integrate this specification into the Cambridge ARM model. We derive sufficient conditions for TLB consistency, and we abstract away the functional details of the MMU for simpler reasoning about executions in the presence of cached address translation, including complete and partial walks.
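The distinction between complete and partial walks can be made concrete with a tiny executable model (the structures below are illustrative and far simpler than the paper's formalisation): a consistency condition must cover not only cached translations but also cached first-level descriptors.

```python
# Tiny model of TLB consistency over a two-level page table,
# covering complete walks (vaddr -> frame) and partial walks
# (cached first-level descriptors). Illustrative only.

def walk(l1, vaddr):
    """Two-level walk: return the physical frame, or None if unmapped."""
    l2 = l1.get(vaddr >> 20)                   # first-level descriptor
    return None if l2 is None else l2.get((vaddr >> 12) & 0xFF)

def tlb_consistent(l1, tlb, walk_cache):
    """Sufficient condition: every cached translation (complete walk)
    and every cached first-level descriptor (partial walk) agrees
    with a fresh walk of the page tables in memory."""
    complete = all(walk(l1, v) == frame for v, frame in tlb.items())
    partial = all(l1.get(i) is l2 for i, l2 in walk_cache.items())
    return complete and partial
```

Mutating a second-level table without any TLB maintenance breaks the complete-walk half of the condition while the partial-walk half still holds, which is exactly why reasoning about partial walks is needed.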


2012 ◽  
Author(s):  
Alexander Medvinsky ◽  
Alexey Rusakov

2011 ◽  
Author(s):  
Riley E. Splittstoesser ◽  
Greg G. Knapik ◽  
William S. Marras

1976 ◽  
Vol 37 (2) ◽  
pp. 149-158 ◽  
Author(s):  
A.K. Bhattacharjee ◽  
B. Caroli ◽  
D. Saint-James
