Buffer management schemes for supporting TCP in gigabit routers with per-flow queueing

1999 ◽  
Vol 17 (6) ◽  
pp. 1159-1169 ◽  
Author(s):  
B. Suter ◽  
T.V. Lakshman ◽  
D. Stiliadis ◽  
A.K. Choudhury

Symmetry ◽
2021 ◽  
Vol 13 (4) ◽  
pp. 573
Author(s):  
Xiaochang Li ◽  
Zhengjun Zhai ◽  
Xin Ye

Emerging scale-out, I/O-intensive applications are now widely deployed; they process large amounts of data in the buffer/cache for reorganization or analysis, and their performance is strongly affected by the speed of the I/O system. Efficient management of the limited kernel buffer plays a key role in improving I/O performance through so-called proactive mechanisms: caching hinted data for future reuse, prefetching hinted data, and evicting data that will not be accessed again. However, most existing buffer management schemes cannot identify the data reference regularities (i.e., sequential or looping patterns) that proactive mechanisms can exploit, nor can they operate at the application level to manage specific applications. In this paper, we present an Application Oriented I/O Optimization (AOIO) technique that automatically benefits the kernel buffer/cache by exploring the I/O regularities of applications using a program-counter-based technique. In our design, the input/output data and the looping pattern are in strict symmetry. With AOIO, each application can provide the operating system with predictions that achieve significantly better accuracy than other buffer management schemes. Trace-driven simulation results show that, for the workloads we used, hit ratios improve by an average of 25.9% and execution times are reduced by as much as 20.2% compared to other schemes.
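
The abstract above describes a program-counter-based scheme for detecting per-application reference regularities. The C fragment below is a minimal illustrative sketch of that general idea: per-call-site tracking that classifies a block access stream as sequential or looping. All identifiers (pc_entry_t, classify_access) and thresholds are hypothetical assumptions, not the AOIO implementation from the paper.

```c
/* Illustrative sketch: per-call-site (program counter) classification of
 * block accesses into sequential / looping / other, the kind of regularity
 * detection an AOIO-style scheme could feed to the kernel buffer manager
 * as caching or prefetching hints.  Names and thresholds are hypothetical. */
#include <stdint.h>
#include <stdio.h>

typedef enum { PAT_UNKNOWN, PAT_SEQUENTIAL, PAT_LOOPING, PAT_OTHER } pattern_t;

typedef struct {
    uint64_t pc;          /* program counter of the I/O call site        */
    uint64_t last_block;  /* last block number seen from this site       */
    uint64_t first_block; /* first block seen (loop-start candidate)     */
    unsigned seq_hits;    /* consecutive-block accesses observed         */
    unsigned loop_hits;   /* returns to the first block observed         */
    unsigned total;       /* total accesses from this call site          */
    pattern_t pattern;
} pc_entry_t;

/* Update the per-call-site statistics with one block access and
 * re-classify the pattern once enough evidence has accumulated. */
static pattern_t classify_access(pc_entry_t *e, uint64_t block)
{
    if (e->total == 0)
        e->first_block = block;
    else if (block == e->last_block + 1)
        e->seq_hits++;
    else if (block == e->first_block)
        e->loop_hits++;               /* stream wrapped back to the start */

    e->last_block = block;
    e->total++;

    if (e->total >= 8) {              /* arbitrary warm-up threshold */
        if (e->loop_hits > 0)
            e->pattern = PAT_LOOPING;     /* hint: keep blocks, they recur      */
        else if (e->seq_hits * 4 >= e->total * 3)
            e->pattern = PAT_SEQUENTIAL;  /* hint: prefetch ahead, evict behind */
        else
            e->pattern = PAT_OTHER;
    }
    return e->pattern;
}

int main(void)
{
    pc_entry_t site = { .pc = 0x4005d0 };  /* hypothetical call-site address */
    /* A short looping reference stream: 10,11,12 repeated three times. */
    uint64_t trace[] = { 10, 11, 12, 10, 11, 12, 10, 11, 12 };
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        classify_access(&site, trace[i]);
    printf("pattern = %d (2 = looping)\n", site.pattern);
    return 0;
}
```

In a scheme of this kind, a looping classification would suggest retaining the blocks in the buffer for reuse, while a sequential classification would suggest prefetching ahead and evicting behind the access point.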


2006 ◽  
Vol 24 (2) ◽  
pp. 33-39 ◽  
Author(s):  
S. F. Chien ◽  
A. L. Y. Low ◽  
K. N. Choong ◽  
Y. C. Yee

2014 ◽  
Vol 23 (01) ◽  
pp. 1450012 ◽  
Author(s):  
Heying Zhang ◽
Kefei Wang ◽
Jianmin Zhang ◽
Nan Wu ◽
Yi Dai

High-radix routers based on the tile structure require a large amount of buffer resources. To reduce the buffer space requirement without degrading router throughput, shared buffer management schemes such as dynamically allocated multi-queue (DAMQ) can be used to improve buffer utilization. Unfortunately, DAMQ is commonly regarded as slow for both writes and reads. To address this issue, we propose a fast and fair DAMQ structure, called F2DAMQ, for high-radix routers. It uses a fast FIFO structure to implement both the idle-address list and the data buffer, achieving critical performance improvements such as continuous, concurrent reads and writes with zero delay. In addition, F2DAMQ uses a novel credit management mechanism that prevents any single virtual channel (VC) from monopolizing the shared part of the buffer and achieves fairness among the competing VCs sharing it. Analyses and simulations show that F2DAMQ achieves low latency, high throughput, and good fairness under different traffic patterns.
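
As background to the abstract above, the sketch below models the basic DAMQ idea in software: a single shared slot pool, a free list, and per-VC linked lists, with a simple per-VC cap standing in for a credit mechanism that keeps one VC from monopolizing the shared space. The structure, names (damq_enqueue, VC_CAP), and cap value are assumed for illustration of the general DAMQ concept; this is not the F2DAMQ hardware design evaluated in the paper.

```c
/* Illustrative software model of a DAMQ-style shared buffer: one physical
 * slot pool, a free list, and per-virtual-channel linked lists.  A per-VC
 * cap crudely stands in for credit-based fairness.  All constants and
 * names are hypothetical. */
#include <stdio.h>

#define NUM_SLOTS 16
#define NUM_VCS   4
#define VC_CAP    8     /* hypothetical per-VC limit on shared slots */

static int next_slot[NUM_SLOTS];        /* linked-list "next" pointers */
static int free_head = 0;               /* head of the free list       */
static int vc_head[NUM_VCS], vc_tail[NUM_VCS], vc_count[NUM_VCS];

static void damq_init(void)
{
    for (int i = 0; i < NUM_SLOTS; i++)
        next_slot[i] = (i + 1 < NUM_SLOTS) ? i + 1 : -1;
    for (int v = 0; v < NUM_VCS; v++) {
        vc_head[v] = vc_tail[v] = -1;
        vc_count[v] = 0;
    }
}

/* Enqueue a flit for VC v: take a slot from the free list and append it
 * to that VC's list.  Refuse if the pool is empty or the VC hit its cap. */
static int damq_enqueue(int v)
{
    if (free_head < 0 || vc_count[v] >= VC_CAP)
        return -1;                       /* no slot granted */
    int slot = free_head;
    free_head = next_slot[slot];
    next_slot[slot] = -1;
    if (vc_tail[v] < 0) vc_head[v] = slot;
    else                next_slot[vc_tail[v]] = slot;
    vc_tail[v] = slot;
    vc_count[v]++;
    return slot;
}

/* Dequeue from VC v: unlink its head slot and return it to the free list. */
static int damq_dequeue(int v)
{
    int slot = vc_head[v];
    if (slot < 0)
        return -1;
    vc_head[v] = next_slot[slot];
    if (vc_head[v] < 0) vc_tail[v] = -1;
    next_slot[slot] = free_head;
    free_head = slot;
    vc_count[v]--;
    return slot;
}

int main(void)
{
    damq_init();
    for (int i = 0; i < 10; i++)         /* VC0 tries to grab 10 slots */
        damq_enqueue(0);
    printf("VC0 holds %d slots (capped at %d)\n", vc_count[0], VC_CAP);
    printf("VC1 still gets slot %d\n", damq_enqueue(1));
    damq_dequeue(0);                     /* freed slot returns to the pool */
    return 0;
}
```

The linked-list organization is what lets the physical buffer be shared dynamically among VCs, and the cap (or, in the paper's design, credit accounting) is what keeps that sharing fair.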


Author(s):  
Umar Toseef ◽  
Carmelita Goerg ◽  
Thushara Weerawardane ◽  
Andreas Timm-Giel
