Worst-case deadline failure probability in real-time applications distributed over controller area network

2000 ◽  
Vol 46 (7) ◽  
pp. 607-617 ◽  
Author(s):  
N. Navet ◽  
Y.-Q. Song ◽  
F. Simonot
2013 ◽  
Vol 29 (4) ◽  
pp. 521-535 ◽  
Author(s):  
Lars Schor ◽  
Iuliana Bacivarov ◽  
Hoeseok Yang ◽  
Lothar Thiele

Author(s):  
C Mohanapriya ◽  
J Govindarajan

<p>Video streaming is an important class of application that consumes more bandwidth than non-real-time traffic. Most existing video transmissions use either UDP or RTP over UDP. Since these protocols are not designed with congestion control, they degrade the performance of peer video transmissions as well as non-real-time applications. Like TFRC, Real-Time Media Congestion Avoidance (RMCAT) is a recently proposed framework providing congestion control for real-time applications. Since the demand for video transmission over wireless LANs is increasing, this paper studies the performance of the protocol over WLAN under different network conditions. From this detailed study, we observed that RMCAT treats packet losses caused by distance and channel conditions as congestion losses, and therefore reduces the sending rate, which degrades the video transmission.</p>
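The behaviour the abstract describes can be illustrated with a minimal sketch. This is not the RMCAT algorithm itself; it is a hypothetical loss-based rate controller (all names and constants are assumptions) showing how a controller that backs off on *any* reported loss will also reduce its rate for random wireless channel losses that have nothing to do with congestion.

```python
# Hypothetical sketch, NOT the RMCAT specification: a loss-based rate
# controller that cannot tell wireless channel losses from congestion
# losses, illustrating the behaviour observed in the study.

def adjust_rate(rate_kbps, loss_fraction,
                min_rate=150.0, max_rate=1500.0,
                backoff=0.85, growth=1.05):
    """One control step: back off on any reported loss, grow otherwise."""
    if loss_fraction > 0.0:
        # Any loss -- congestion OR channel error -- triggers a back-off.
        rate_kbps *= backoff
    else:
        rate_kbps *= growth
    # Clamp to the allowed media-rate range.
    return max(min_rate, min(max_rate, rate_kbps))

# A lossy wireless link (occasional 2% random channel loss, with no
# congestion at all) still drives the sending rate down on every loss
# report, which is exactly the misattribution the study points out.
rate = 1000.0
for loss in [0.0, 0.02, 0.0, 0.02, 0.02]:
    rate = adjust_rate(rate, loss)
print(round(rate, 1))  # rate has dropped well below the starting 1000 kbps
```

A loss-discriminating controller would instead need an extra signal (e.g. delay gradient or link-layer feedback, both outside this sketch) to avoid backing off on channel-error losses.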


2020 ◽  
Vol 34 (23) ◽  
pp. 2050242
Author(s):  
Yao Wang ◽  
Lijun Sun ◽  
Haibo Wang ◽  
Lavanya Gopalakrishnan ◽  
Ronald Eaton

Cache sharing is critical in multi-core and multi-threaded systems. However, it can delay the execution of real-time applications and makes predicting their worst-case execution time (WCET) more challenging. Prioritized caching has been demonstrated as a promising approach to address this challenge. Instead of the conventional prioritized cache schemes realized at the architecture level by cache controllers, this work presents two prioritized least recently used (LRU) cache replacement circuits that accomplish the prioritization directly inside the cache circuits, thereby significantly reducing cache access latency. The performance, hardware, and power overheads of the proposed prioritized LRU circuits are investigated based on a 65 nm CMOS technology. The results show that the proposed circuits have very low overhead compared to conventional cache circuits. The presented techniques enable more effective prioritized shared-cache implementations and benefit the development of high-performance real-time systems.
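The replacement policy itself can be sketched at the behavioural level. The paper presents circuit-level implementations; the following is only a hypothetical software model (class and method names are assumptions) of one plausible prioritized-LRU policy: on a miss in a full set, evict the least recently used line among those holding the lowest priority, so that lines belonging to high-priority real-time tasks are protected.

```python
from collections import OrderedDict

class PrioritizedLRUCache:
    """Behavioural sketch (not the paper's circuit) of a prioritized LRU
    replacement policy: the victim is the least recently used line among
    the lines with the LOWEST priority currently in the set."""

    def __init__(self, capacity):
        self.capacity = capacity
        # tag -> priority; insertion/move order doubles as recency order,
        # with the least recently used entry first.
        self.lines = OrderedDict()

    def access(self, tag, priority):
        """Access a line; returns True on hit, False on miss."""
        if tag in self.lines:
            self.lines.move_to_end(tag)  # hit: refresh recency
            return True
        if len(self.lines) >= self.capacity:
            # Miss in a full set: find the lowest priority present, then
            # evict the LRU line (first in order) holding that priority.
            lowest = min(self.lines.values())
            victim = next(t for t, p in self.lines.items() if p == lowest)
            del self.lines[victim]
        self.lines[tag] = priority
        return False
```

With a capacity-2 set holding a high-priority line `A` and a low-priority line `B`, a miss on another low-priority line `C` evicts `B` rather than `A`, even if `A` is older: priority is consulted before recency, which is the protection a conventional LRU cannot offer.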

