Resource allocation schemes for non-real-time bursty traffic in wireless ATM networks

Author(s):  
M. Inoue ◽  
H. Morikawa ◽  
M. Hatori ◽  
M. Mizumachi

2020 ◽  
Vol 16 (8) ◽  
pp. 155014772093275 ◽  
Author(s):  
Muhammad Shuaib Qureshi ◽  
Muhammad Bilal Qureshi ◽  
Muhammad Fayaz ◽  
Wali Khan Mashwani ◽  
Samir Brahim Belhaouari ◽  
...  

An efficient resource allocation scheme plays a vital role in scheduling applications on high-performance computing resources in order to achieve the desired level of service. Most of the existing literature on resource allocation addresses real-time services, for which timing constraints are the primary parameter. Resource allocation schemes for real-time services have been designed with various architectures (static, dynamic, centralized, or distributed) and quality-of-service criteria (cost efficiency, completion-time minimization, energy efficiency, and memory optimization). In this analysis, numerous resource allocation schemes for real-time services in various high-performance computing (distributed and non-distributed) domains have been studied and compared on the basis of common parameters such as application type, operational environment, optimization goal, architecture, system size, resource type, optimality, simulation tool, comparison technique, and input data. The basic aim of this study is to provide a consolidated platform for researchers working on scheduling and allocating high-performance computing resources to real-time services. This work comprehensively discusses, integrates, analyzes, and categorizes all resource allocation schemes for real-time services into five high-performance computing classes: grid, cloud, edge, fog, and multicore computing systems. The workflow representations of the studied schemes help readers understand the basic operation and architecture of these mechanisms and identify further research gaps.


2016 ◽  
Vol 36 (1) ◽  
pp. 163-171
Author(s):  
UN Nwawelu ◽  
CI Ani ◽  
MA Ahaneku

The growth in the number of real-time and non-real-time applications has sparked renewed interest in resource allocation schemes that remain efficient and fair to all applications in overloaded scenarios. In this paper, the performance of six scheduling algorithms for Long Term Evolution (LTE) downlink networks was analyzed and compared. These algorithms are Proportional Fair (PF), Exponential/Proportional Fair (EXP/PF), Maximum Largest Weighted Delay First (MLWDF), Frame Level Scheduler (FLS), Exponential (EXP) rule, and Logarithmic (LOG) rule. The performance of these algorithms was evaluated using an open-source LTE simulator and compared on network parameters including throughput, delay, Packet Loss Ratio (PLR), and fairness. This work aims to give insight into the gains made in radio resource scheduling for LTE networks and to highlight the issues that require improvement in order to provide better performance to users. The results of this work show that the FLS algorithm outperforms the other algorithms in terms of delay, PLR, throughput, and fairness for VoIP and video flows. It was also observed that for Best Effort (BE) flows, FLS outperforms the other algorithms in terms of delay and PLR but performs worst in terms of throughput and fairness. http://dx.doi.org/10.4314/njt.v36i1.21
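To make the comparison concrete, one of the six schedulers, Proportional Fair (PF), is commonly described by the per-user metric r_i / R_i (instantaneous achievable rate over exponentially averaged throughput). The sketch below is a minimal, hypothetical illustration of that metric and its averaging step, not the simulator's actual implementation; the function names and the smoothing factor `alpha` are assumptions.

```python
# Minimal sketch of the Proportional Fair (PF) downlink scheduling metric.
# Assumption: each TTI, the scheduler serves the user maximizing
# r_i / R_i, where r_i is the instantaneous achievable rate and R_i is
# an exponentially weighted moving average (EWMA) of served throughput.

def pf_schedule(inst_rates, avg_throughputs):
    """Return the index of the user with the highest PF metric r_i / R_i."""
    metrics = [r / max(t, 1e-9)  # guard against a zero average
               for r, t in zip(inst_rates, avg_throughputs)]
    return max(range(len(metrics)), key=metrics.__getitem__)

def update_avg(avg, served_rate, alpha=0.01):
    """EWMA update of the scheduled user's average throughput."""
    return (1 - alpha) * avg + alpha * served_rate
```

Fairness emerges because a user with a low average throughput R_i sees its metric grow until it is scheduled, while throughput is still favored through r_i; the delay-aware schemes in the paper (MLWDF, EXP/PF, EXP rule, LOG rule) extend this metric with head-of-line delay terms.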

