Dynamic Resource Allocation
Recently Published Documents


TOTAL DOCUMENTS: 881 (FIVE YEARS: 174)
H-INDEX: 35 (FIVE YEARS: 5)

2022 ◽  
pp. 108710
Author(s):  
Jie Lin ◽  
Lin Huang ◽  
Hanlin Zhang ◽  
Xinyu Yang ◽  
Peng Zhao

2021 ◽  
Vol 17 (4) ◽  
Author(s):  
Misfa Susanto ◽  
Sitronella Nurfitriani Hasim ◽  
Helmy Fitriawan

Ultra-Dense Network (UDN), formed by densely deployed femtocells, is known as one of the key technologies for 5th generation (5G) cellular networks. UDN promises increased capacity and quality for cellular networks. However, UDN faces more complex interference problems than sparsely deployed femtocells, especially for femtocells located in the cell-edge area of a macrocell. Therefore, mitigating or reducing the effects of interference is an important issue in UDN. This paper focuses on interference management using dynamic resource allocation for UDN. The types of interference considered in this study are cross-tier (macrocell-to-femtocell) and co-tier (femtocell-to-femtocell) interference for uplink transmission. We consider several scenarios to examine the dynamic resource allocation method for UDN, with femtocells deployed over the whole area of the macrocell and in the cell-edge area of the macrocell. Simulation experiments were carried out using a MATLAB program. The performance parameters collected from the simulations are Signal to Interference and Noise Ratio (SINR), throughput, and Bit Error Rate (BER). The obtained simulation results show that the system using the dynamic resource allocation method outperforms the conventional system, and the results are consistent across all collected performance parameters. Dynamic resource allocation thus promises to reduce the effects of interference in UDN.
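To make the reported metrics concrete, below is a minimal Python sketch of how uplink SINR under cross-tier and co-tier interference, and the corresponding Shannon throughput, can be computed; the log-distance path-loss model and every numeric value (transmit power, noise floor, bandwidth, distances) are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def path_loss_db(distance_m, exponent=3.5, pl0_db=38.0):
    """Simple log-distance path-loss model (illustrative constants)."""
    return pl0_db + 10.0 * exponent * np.log10(max(distance_m, 1.0))

def rx_power_mw(tx_power_dbm, distance_m):
    """Received power in mW after path loss."""
    return 10.0 ** ((tx_power_dbm - path_loss_db(distance_m)) / 10.0)

def uplink_sinr_db(signal_dist_m, cross_tier_dists_m, co_tier_dists_m,
                   ue_tx_dbm=23.0, noise_dbm=-104.0):
    """SINR = S / (I_cross + I_co + N), computed in linear mW."""
    signal = rx_power_mw(ue_tx_dbm, signal_dist_m)
    i_cross = sum(rx_power_mw(ue_tx_dbm, d) for d in cross_tier_dists_m)
    i_co = sum(rx_power_mw(ue_tx_dbm, d) for d in co_tier_dists_m)
    noise = 10.0 ** (noise_dbm / 10.0)
    return 10.0 * np.log10(signal / (i_cross + i_co + noise))

def shannon_throughput_bps(sinr_db, bandwidth_hz=180e3):
    """Per-resource-block throughput from the Shannon bound."""
    return bandwidth_hz * np.log2(1.0 + 10.0 ** (sinr_db / 10.0))

# Example: a femto user 10 m from its femto base station, interfered with by one
# macrocell user (cross-tier, 50 m away) and two neighbouring femto users (co-tier).
sinr = uplink_sinr_db(10.0, cross_tier_dists_m=[50.0], co_tier_dists_m=[30.0, 40.0])
print(f"SINR: {sinr:.1f} dB, throughput: {shannon_throughput_bps(sinr) / 1e3:.1f} kbps")
```

A dynamic resource allocation scheme would aim to raise the SINR term by steering interfering femtocells onto different resource blocks, which in turn raises throughput and lowers BER.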


Author(s):  
Sakshi Chhabra ◽  
Ashutosh Kumar Singh

The cloud datacenter has numerous hosts as well as application requests, and its resources are dynamic. The demands placed on resource allocation are diverse. These factors can lead to load imbalances, which affect scheduling efficiency and resource utilization. A scheduling method called Dynamic Resource Allocation for Load Balancing (DRALB) is proposed. The proposed solution consists of two steps: first, the load manager analyzes the resource requirements such as CPU, memory, energy and bandwidth usage and allocates an appropriate number of VMs for each application; second, the resource information is collected and updated, and resources are sorted into four queues according to their load type, i.e. CPU intensive, memory intensive, energy intensive and bandwidth intensive. We demonstrate that SLA-aware scheduling not only benefits cloud consumers through resource availability and improved throughput, response time, etc., but also maximizes the cloud provider's profit with lower resource utilization and fewer SLA (Service Level Agreement) violation penalties. The method is based on the diversity of clients' applications and on searching for the optimal resources for a particular deployment. Experiments were carried out on the following parameters: average response time, resource utilization, SLA violation rate and load balancing. The experimental results demonstrate that this method reduces the wastage of resources and reduces network traffic by up to 44.89% and 58.49%, respectively.
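The two-step structure described above can be pictured with a short Python sketch; the Request model, the dominant-demand classification, the proportional VM-count rule and the vm_capacity value are all assumptions for illustration, not the DRALB implementation.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical request model: demands are normalised to [0, 1] per dimension.
@dataclass
class Request:
    app_id: str
    cpu: float
    memory: float
    energy: float
    bandwidth: float

def classify(req: Request) -> str:
    """Step 2 (queueing): place a request in the queue of its dominant demand."""
    demands = {"cpu": req.cpu, "memory": req.memory,
               "energy": req.energy, "bandwidth": req.bandwidth}
    return max(demands, key=demands.get)

def allocate_vms(req: Request, vm_capacity: float = 0.25) -> int:
    """Step 1 (load manager): pick a VM count proportional to the dominant demand.
    The proportional rule and vm_capacity are illustrative assumptions."""
    dominant = max(req.cpu, req.memory, req.energy, req.bandwidth)
    return max(1, round(dominant / vm_capacity))

def schedule(requests):
    queues, plan = defaultdict(list), {}
    for req in requests:
        queues[classify(req)].append(req.app_id)
        plan[req.app_id] = allocate_vms(req)
    return queues, plan

reqs = [Request("web", cpu=0.7, memory=0.3, energy=0.2, bandwidth=0.4),
        Request("cache", cpu=0.2, memory=0.9, energy=0.1, bandwidth=0.3)]
queues, plan = schedule(reqs)
print(dict(queues))   # {'cpu': ['web'], 'memory': ['cache']}
print(plan)           # {'web': 3, 'cache': 4}
```

Keeping separate queues per dominant resource lets the scheduler spread CPU-heavy and memory-heavy applications across different hosts, which is the intuition behind the load-balancing gains reported above.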


2021 ◽  
Vol 18 (22) ◽  
pp. 413
Author(s):  
Ismail Zaharaddeen Yakubu ◽  
Lele Muhammed ◽  
Zainab Aliyu Musa ◽  
Zakari Idris Matinja ◽  
Ilya Musa Adamu

The cloud's high-latency limitation has necessitated the introduction of the fog computing paradigm, which extends the computing infrastructure of cloud data centers to the edge network. These extended cloud resources provide processing, storage and network services to time-sensitive requests associated with Internet of Things (IoT) services at the network edge. The rapid increase in the adoption of IoT devices, variations in user requirements, the limited processing and storage capacity of fog resources, and the problem of fog resource over-saturation have made provisioning and allotment of computing resources in the fog environment a formidable task. Satisfying application and request deadlines is the most substantial challenge compared to other dynamic variations in the parameters of client requirements. To curtail these issues, an integrated fog-cloud computing environment and an efficient resource selection method are highly required. This paper proposes an agent-based dynamic resource allocation that employs a host agent to analyze the QoS requirements of an application request and select a suitable execution layer. The host agent forwards the application request to a layer agent, which is responsible for allocating the best resource that satisfies the requirements of the application request. The host agent and layer agents maintain resource information tables for matching tasks to computing resources. CloudSim toolkit functionalities were extended to simulate a realistic fog environment in which the proposed method is evaluated. The experimental results show that the proposed method performs better in terms of processing time, latency and percentage QoS delivery. A sketch of the host/layer agent interaction is given after the highlights below.
HIGHLIGHTS
- The distance between the cloud infrastructure and the edge IoT devices makes the cloud not fully competent for some IoT applications, especially time-sensitive ones.
- To minimize latency in the cloud and ensure prompt responses to user requests, fog computing, which extends cloud services to the edge network, was introduced.
- The proliferation in the adoption of IoT devices and fog resource limitations have made resource scheduling in fog computing a tedious task.
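As referenced above, here is a minimal Python sketch of the host-agent/layer-agent interaction; the Resource and Task models, the execution-time estimate and the earliest-finish selection policy are illustrative assumptions rather than the paper's implementation (which is evaluated in an extended CloudSim environment).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Resource:
    name: str
    layer: str          # "fog" or "cloud"
    mips: float         # processing capacity
    latency_ms: float   # access latency from the edge

@dataclass
class Task:
    name: str
    length_mi: float    # task size in million instructions
    deadline_ms: float

class LayerAgent:
    """Keeps a resource information table for one layer and picks the best match."""
    def __init__(self, layer: str, table: List[Resource]):
        self.table = [r for r in table if r.layer == layer]

    def allocate(self, task: Task) -> Optional[Resource]:
        def finish_time(r):                     # latency + estimated execution time
            return r.latency_ms + 1000.0 * task.length_mi / r.mips
        feasible = [r for r in self.table if finish_time(r) <= task.deadline_ms]
        # Pick the feasible resource that finishes earliest (illustrative policy).
        return min(feasible, key=finish_time, default=None)

class HostAgent:
    """Analyses the task's deadline and tries the fog layer first,
    falling back to the cloud layer if no fog resource can meet the deadline."""
    def __init__(self, fog: LayerAgent, cloud: LayerAgent):
        self.fog, self.cloud = fog, cloud

    def dispatch(self, task: Task) -> Optional[Resource]:
        return self.fog.allocate(task) or self.cloud.allocate(task)

table = [Resource("fog-1", "fog", mips=2000, latency_ms=5),
         Resource("cloud-1", "cloud", mips=20000, latency_ms=80)]
host = HostAgent(LayerAgent("fog", table), LayerAgent("cloud", table))
print(host.dispatch(Task("sensor-read", length_mi=10, deadline_ms=30)))       # fog-1
print(host.dispatch(Task("batch-analytics", length_mi=500, deadline_ms=200))) # cloud-1
```

The small, low-latency fog resource absorbs the deadline-sensitive request, while the larger but more distant cloud resource handles the heavier task, which mirrors the layer-selection idea described in the abstract.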


2021 ◽  
Author(s):  
Sebastian Perez-Salazar ◽  
Ishai Menache ◽  
Mohit Singh ◽  
Alejandro Toriello

Motivated by maximizing the use of spot instances in shared cloud systems, in this work we consider the problem of taking advantage of unused resources in highly dynamic cloud environments while preserving users' performance. We introduce an online model for sharing resources that captures basic properties of cloud systems, such as unpredictable user demand patterns, very limited feedback from the system, and service level agreements (SLAs) between users and the cloud provider. We provide a simple and efficient algorithm for the single-resource case. For any demand pattern, our algorithm guarantees near-optimal resource utilization as well as high user performance compared with the SLA baseline. In addition, we empirically validate the performance of our algorithm using synthetic data and data obtained from Microsoft's systems.
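As a rough illustration of this online setting (a single shared resource, unpredictable demand, and only coarse feedback), the sketch below grows and shrinks the share handed to spot workloads with a generic additive-increase/multiplicative-decrease rule; this placeholder policy and all of its constants are assumptions and not the algorithm analyzed in the paper.

```python
def run_spot_allocation(capacity, sla_share, demand_trace, step=0.05, backoff=0.5):
    """Generic online loop: spot workloads receive the capacity left over after the
    SLA user's (unpredictable) demand is served, and the spot target is adjusted
    with an AIMD-style rule driven only by a binary 'SLA user was throttled' signal.
    This is a placeholder policy for illustration, not the paper's algorithm."""
    spot_target = 0.0
    utilisation = []
    for demand in demand_trace:                   # demand is not known in advance
        granted_user = min(demand, capacity - spot_target)
        granted_spot = min(spot_target, capacity - granted_user)
        utilisation.append((granted_user + granted_spot) / capacity)
        throttled = granted_user < min(demand, sla_share * capacity)
        if throttled:
            spot_target *= backoff                # multiplicative decrease on SLA violation
        else:
            spot_target = min(capacity, spot_target + step * capacity)  # additive increase
    return utilisation

trace = [0.2, 0.3, 0.9, 0.4, 0.1, 0.1, 0.8]       # SLA user's demand as a fraction of capacity
print([round(u, 2) for u in run_spot_allocation(1.0, sla_share=0.8, demand_trace=trace)])
```

The point of the illustration is the feedback constraint: the controller never sees the user's future demand, only whether the last allocation fell short of the SLA baseline, and still tries to keep total utilization high.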

