Buffer Management
Recently Published Documents


TOTAL DOCUMENTS

846
(FIVE YEARS 99)

H-INDEX

25
(FIVE YEARS 4)

2022 ◽  
Vol 7 (2) ◽  
pp. 121-132 ◽  
Author(s):  
Shakib Zohrehvandi ◽  
Roya Soltani

In project management, buffers are used to handle the uncertainties that lead to changes in project schedules and, in turn, to delays in project delivery. The purpose of this survey is to discuss the state of the art in models and methods for project buffer management and time optimization in construction projects and manufacturing industries. To the best of our knowledge, there are virtually no surveys that review the literature on project buffer management and time optimization. This research builds on previous literature surveys and focuses mainly on papers published after 2014, with a brief review of earlier works. It investigates the literature from the perspectives of project buffer sizing, project buffer consumption monitoring, and project time/resource optimization.
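The buffer-sizing perspective mentioned above can be illustrated with two classic heuristics from the critical chain literature (the cut-and-paste 50% rule and the root-square-error method). This is a minimal sketch for illustration only; the numbers and function names are assumptions, not taken from the surveyed paper.

```python
# Two classic project-buffer sizing heuristics (illustrative sketch).

def cut_and_paste_buffer(task_durations):
    """Goldratt's 50% rule: size the buffer as half the length of the
    chain of aggressive task estimates."""
    return 0.5 * sum(task_durations)

def root_square_error_buffer(safe_durations, aggressive_durations):
    """Root-square-error method: aggregate per-task safety
    (safe minus aggressive estimate) as a Euclidean norm."""
    return sum((s - a) ** 2
               for s, a in zip(safe_durations, aggressive_durations)) ** 0.5

# Example: a five-task chain with safe and aggressive estimates (days).
safe = [10, 8, 12, 6, 9]
aggressive = [5, 4, 6, 3, 5]
print(cut_and_paste_buffer(aggressive))            # 11.5
print(round(root_square_error_buffer(safe, aggressive), 2))  # 10.1
```

The root-square-error method typically yields smaller buffers than the 50% rule for long chains, since independent uncertainties partially cancel.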


Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3143
Author(s):  
Xia Zhou ◽  
Jianqiang Lu ◽  
Xiangpeng Xie ◽  
Chengjie Bu ◽  
Lei Wan ◽  
...  

Accurate prediction of power business communication bandwidth is the premise of effective power communication planning and the fundamental guarantee for the regular operation of power businesses. To solve the problem of scientifically and reasonably allocating bandwidth resources in smart parks, this paper proposes a communication bandwidth prediction technology for intelligent power distribution services in smart parks. First, the arrival-rate characteristics of the mixed power distribution and communication services in smart parks were analyzed. A Poisson process and an interrupted Poisson process were used to model the periodic and bursty services of smart parks, achieving an accurate simulation of the service arrival process. Then, a service arrival rate model based on the Markov-modulated Poisson process was constructed. An active buffer management mechanism dynamically discards data packets according to a set threshold, accurately simulating the packet loss rate during the arrival of smart park services. At the same time, considering the communication service quality index and bandwidth resource utilization, a service communication bandwidth prediction model for smart parks was established to improve the accuracy of bandwidth prediction. Finally, a smart power distribution room in a smart park was used as an application scenario to quantitatively analyze the relationship between communication service quality and bandwidth configuration. Based on the predicted bandwidth, the reliability and effectiveness of the proposed method were verified by comparison with the elastic coefficient method.
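The threshold-based active buffer management described above can be sketched with a slotted queue simulation: packets arriving once the queue reaches a configured threshold are actively dropped. All parameter values and function names here are illustrative assumptions, not the paper's model.

```python
import random

def simulate_loss_rate(p_arrival, p_service, threshold, steps, seed=1):
    """Slotted sketch of active buffer management: each slot a packet
    arrives with prob p_arrival and one departs with prob p_service;
    arrivals finding `threshold` packets queued are dropped."""
    rng = random.Random(seed)
    queue = dropped = arrived = 0
    for _ in range(steps):
        if rng.random() < p_arrival:
            arrived += 1
            if queue >= threshold:
                dropped += 1          # active drop at the set threshold
            else:
                queue += 1
        if rng.random() < p_service and queue > 0:
            queue -= 1
    return dropped / arrived if arrived else 0.0

# An overloaded queue: a larger threshold absorbs more of the burst.
print(simulate_loss_rate(0.9, 0.5, 5, 20000))
print(simulate_loss_rate(0.9, 0.5, 50, 20000))
```

In a persistently overloaded queue the long-run loss rate is dominated by the arrival/service imbalance; the threshold mainly controls how much burstiness is absorbed before drops begin.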


2021 ◽  
Vol 21 (4) ◽  
pp. 1-15
Author(s):  
Ramesh Sekaran ◽  
Rizwan Patan ◽  
Fadi Al-Turjman

A mobile ad hoc network (MANET) can be summarized as a collection of devices that can move, synchronize, and communicate without any prior management. Extending the network's lifetime depends on the status of the channel concerned, and each node is capable of handling the control messages. Due to unplanned energy-conservation methods, node lifespan and the quality of packet flow are degraded in existing solutions. This results in a network-to-node energy trade-off and, ultimately, in network failure, which reduces the time-to-live and increases overhead. This paper discusses an effective buffer management mechanism and proposes a novel performance model for volunteered-computing MANETs and the tactile Internet. The best execution the nodes can accomplish under partial information is then fully characterized for general-purpose utilities. To bridge the gap between network efficiency and energy conservation with minimal overhead, this article proposes a switch-state-promoting mutual optimized MAC protocol that conserves a node's energy and makes optimal use of available nodes before their energy drains. Simulation results are provided as proof of the proposed solution and are compared with the existing system using the performance measures of delay, throughput, energy consumption, and node availability.


2021 ◽  
Vol 13 (12) ◽  
pp. 303
Author(s):  
Xiaoliang Wang ◽  
Peiquan Jin

The traditional page-grained buffer manager in database systems has a low hit ratio when only a few tuples within a page are frequently accessed. To handle this issue, this paper proposes a new buffering scheme called the AMG-Buffer (Adaptive Multi-Grained Buffer). The AMG-Buffer organizes the whole buffer into two page buffers and a tuple buffer, allowing it to hold more hot tuples than a single page-grained buffer. Further, we notice that the tuple buffer may cause additional read I/Os when writing dirty tuples to disk. Thus, we introduce a new metric, named the clustering rate, to quantify the hot-tuple rate in a page; the use of the tuple buffer is determined by the clustering rate, allowing the AMG-Buffer to adapt to different workloads. We conduct experiments on various workloads to compare the AMG-Buffer with several existing schemes, including LRU, LIRS, CFLRU, CFDC, and MG-Buffer. The results show that the AMG-Buffer significantly improves the hit ratio and reduces I/Os compared to its competitors. Moreover, the AMG-Buffer achieves the best performance on a dynamic workload as well as on a large data set, suggesting its adaptivity and scalability to changing workloads.
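The clustering-rate decision described above can be sketched as a simple rule: when most of a page's tuples are hot, cache the whole page; when only a few are, cache just those tuples. The threshold value and function names below are illustrative assumptions, not the paper's actual policy.

```python
# Sketch of the clustering-rate idea: the fraction of hot tuples in a
# page decides between page-grained and tuple-grained buffering.

def clustering_rate(hot_tuples, total_tuples):
    """Fraction of a page's tuples that are hot."""
    return hot_tuples / total_tuples

def choose_buffer(hot_tuples, total_tuples, threshold=0.5):
    """High clustering rate: keep the whole page in the page buffer.
    Low clustering rate: keep only the hot tuples in the tuple buffer."""
    if clustering_rate(hot_tuples, total_tuples) >= threshold:
        return "page-buffer"
    return "tuple-buffer"

print(choose_buffer(3, 100))   # tuple-buffer: few hot tuples, page wastes space
print(choose_buffer(80, 100))  # page-buffer: mostly hot, whole page pays off
```

The trade-off this rule captures is exactly the one the abstract names: tuple-grained caching packs more hot tuples per frame, but writing back a dirty tuple may require re-reading its page.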


Author(s):  
Lucas Martins Ikeziri ◽  
Fernando Bernardi de Souza ◽  
Andréia da Silva Meyer ◽  
Mahesh C. Gupta

2021 ◽  
Author(s):  
Yigui Yuan ◽  
Peiquan Jin
Keyword(s):  

Information ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 369
Author(s):  
Yan Yu ◽  
Xianliang Jiang ◽  
Guang Jin ◽  
Zihang Gao ◽  
Penghui Li

The data center has become the infrastructure of most Internet services, and its network carries different types of business flows, such as queries, data backups, and control information. At the same time, throughput-sensitive large flows occupy a lot of bandwidth, resulting in longer completion times for small flows and ultimately affecting application performance. Recent proposals consider only dynamically adjusting the ECN threshold or raising the priority of ECN-marked packets. This paper combines these two improvements and presents HDCQ, a method for coordinating data center queuing that separates large and small flows and schedules them to guarantee flow completion time. It uses the ECN mechanism to design a load-adaptive marking-threshold update algorithm for small flows to prevent micro-bursts. At the same time, packets marked with ECN, as well as ACK packets, are raised in priority, prompting them to be fed back to the sender as soon as possible and effectively reducing the TCP control-loop delay. Extensive experiments on the network simulator NS-2 show that HDCQ performs better under micro-burst traffic, reducing the average flow completion time by up to 24% compared with PIAS.
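The load-adaptive marking threshold described above can be sketched as a threshold that shrinks as measured load rises, so congestion is signaled earlier when bursts loom. The linear update rule and all constants below are illustrative assumptions, not HDCQ's actual algorithm.

```python
# Sketch of a load-adaptive ECN marking threshold.

def adaptive_ecn_threshold(base_threshold, load, k=0.5, floor=1.0):
    """Scale the marking threshold down linearly with load in [0, 1],
    never below a floor of one packet."""
    return max(floor, base_threshold * (1.0 - k * load))

def should_mark(queue_len, base_threshold, load):
    """Mark a packet with ECN when the queue exceeds the adapted threshold."""
    return queue_len > adaptive_ecn_threshold(base_threshold, load)

print(should_mark(20, 30, 0.2))  # light load: threshold 27, no mark
print(should_mark(20, 30, 0.9))  # heavy load: threshold 16.5, mark
```

Marking earlier under high load trades a little throughput for a faster congestion signal, which is what shortens the TCP control loop for small flows.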


2021 ◽  
Vol 2021 ◽  
pp. 1-21
Author(s):  
Yahui Ding ◽  
Jianli Guo ◽  
Xu Li ◽  
Xiujuan Shi ◽  
Peng Yu

Delay tolerant networks (DTNs), which have special features, differ from traditional networks and frequently encounter disruptions during transmission. Many routing algorithms have been proposed to transmit data in DTNs, such as Minimum Expected Delay, Earliest Delivery, and Epidemic, but none of them takes buffer management and memory usage into account. With the development of intelligent algorithms, Deep Reinforcement Learning (DRL) can better adapt to this kind of network transmission. In this paper, we first build optimization models for different scenarios that jointly consider the behaviors and the buffers of the communication nodes, aiming to improve the data-transmission process; then we apply the Deep Q-learning Network (DQN) and Asynchronous Advantage Actor-Critic (A3C) approaches in the different scenarios to obtain end-to-end optimal service paths and improve transmission performance. Finally, we compare the algorithms over different parameters and find that the models built for the different scenarios achieve a 30% reduction in end-to-end delay and an 80% improvement in throughput, showing that the applied algorithms are effective and the results reliable.
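The RL-based path selection described above can be illustrated with tabular Q-learning on a toy contact graph (the paper uses deep networks; a Q-table stands in here purely for illustration). The topology, link costs, and hyperparameters are all made-up assumptions.

```python
import random

# Toy DTN contact graph: states are nodes, actions are next hops,
# rewards are negative link costs, and D is the destination.
ADJ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
COST = {("A", "B"): 4, ("A", "C"): 1, ("B", "D"): 1, ("C", "D"): 1}
GOAL = "D"

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy next-hop selection."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ADJ for a in ADJ[s]}
    for _ in range(episodes):
        s = "A"
        while s != GOAL:
            acts = ADJ[s]
            a = (rng.choice(acts) if rng.random() < eps
                 else max(acts, key=lambda x: q[(s, x)]))
            reward = -COST[(s, a)]                         # pay the link cost
            future = max((q[(a, n)] for n in ADJ[a]), default=0.0)
            q[(s, a)] += alpha * (reward + gamma * future - q[(s, a)])
            s = a
    return q

q = train()
best_first_hop = max(ADJ["A"], key=lambda a: q[("A", a)])
print(best_first_hop)   # the cheaper route A -> C -> D should win
```

With enough episodes the Q-values converge to the discounted path costs (roughly -1.9 via C versus -4.9 via B from A), so the learned policy prefers the cheaper route; the paper's contribution is folding buffer occupancy into this state and reward design.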

