A Novel Call Admission Control Algorithm for Next Generation Wireless Mobile Communication

2017 ◽  
Vol 4 (3) ◽  
pp. 83-95
Author(s):  
T. A. Chavan ◽  
P. Saras

Wireless communication technology is advancing rapidly, and with it customer demand for multimedia and non-multimedia services grows day by day. Because wireless network resources are limited, an efficient call admission control (CAC) algorithm is needed to raise the quality of service (QoS) delivered to end users. QoS enhancement in a wireless network amounts to making efficient use of the available network resources while serving as many users as possible. Call acceptance is one of the challenges of CAC in mobile cellular networks: admitting a new call into a resource-limited wireless network must not violate the service level agreements (SLAs) of ongoing conversations. In next-generation wireless networks, CAC has a direct impact on per-call QoS and on overall system performance. To handle handoff calls and new calls in a cellular network, channel reservation schemes have already been proposed that reserve system bandwidth for higher-priority calls. These earlier schemes fall short of the required level of satisfaction because the reserved bandwidth is not allocated properly when the handoff rate is low. The authors therefore present a new channel borrowing scheme in which new non-real-time (NRT) calls may borrow reserved channels on a temporary basis. If a handoff call then enters the cell and no other channel is available, it preempts the channel from a borrowing NRT user, if one exists. The preempted NRT call is kept in a priority queue and served when a channel becomes free; the number of queued NRT calls should not be large, to avoid delayed service. The fundamental objective of the proposed scheme is to design a system for evaluating the results and comparing them with those of the existing system.
From the results of the current research work, it is observed that the proposed scheme decreases the call dropping probability at the cost of a slight increase in the call blocking rate under a high-density handoff call rate.
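The borrowing and preemption policy described above can be sketched as a small admission controller. This is a minimal illustration, not the authors' implementation; the class, method names, and fixed channel counts are assumptions.

```python
from collections import deque

class Cell:
    """Sketch of the channel-borrowing CAC policy: NRT calls may borrow
    reserved channels, and a handoff call preempts a borrower when no
    other channel is free. Names and counts are illustrative."""

    def __init__(self, total_channels=100, reserved=10):
        self.free = total_channels - reserved   # channels open to any call
        self.reserved_free = reserved           # reserved for handoff calls
        self.borrowed_nrt = []                  # NRT calls on borrowed reserved channels
        self.preempted_queue = deque()          # preempted NRT calls awaiting service

    def admit_new_call(self, call_id, real_time):
        if self.free > 0:
            self.free -= 1
            return True
        # Only non-real-time calls may borrow a reserved channel temporarily.
        if not real_time and self.reserved_free > 0:
            self.reserved_free -= 1
            self.borrowed_nrt.append(call_id)
            return True
        return False  # blocked

    def admit_handoff(self, call_id):
        if self.free > 0:
            self.free -= 1
            return True
        if self.reserved_free > 0:
            self.reserved_free -= 1
            return True
        # Preempt a borrowing NRT call, if one exists, and queue it.
        if self.borrowed_nrt:
            victim = self.borrowed_nrt.pop(0)
            self.preempted_queue.append(victim)
            return True
        return False  # dropped

    def release_channel(self):
        # A freed channel first serves a queued, previously preempted NRT call.
        if self.preempted_queue:
            self.preempted_queue.popleft()
        else:
            self.free += 1
```

With one ordinary channel and one reserved channel, a real-time call fills the ordinary channel, a second real-time call is blocked, an NRT call borrows the reserved channel, and an arriving handoff call preempts that NRT call into the queue.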

Author(s):  
Gurpreet Singh ◽  
Manish Mahajan ◽  
Rajni Mohana

BACKGROUND: Cloud computing is an on-demand service model in which data center resources are delivered on a pay-per-use basis. Allocating resources so that user needs are satisfied requires an effective and reliable resource allocation method. With growing user demand, resource allocation has become a complex and challenging task: when a physical machine is overloaded, virtual machines share its load by utilizing the resources of other physical machines. Previous studies neglect the energy consumption and time overhead of keeping virtual machines on different servers powered on. AIM AND OBJECTIVE: The main aim of this research work is to propose an effective resource allocation scheme for allocating virtual machines from an ad hoc sub-server. EXECUTION MODEL: The work is carried out in two stages: first, virtual machines and physical machines are located on the server; subsequently, the allocation is cross-validated. The Modified Best Fit Decreasing algorithm is used to sort virtual machines, and Multi-Machine Job Scheduling is used during the placement of jobs on an appropriate host. An artificial neural network serves as the classifier that allocates jobs to hosts. Measures, viz. service level agreement (SLA) violation and energy consumption, are considered, and fruitful results are obtained: a 37.7% reduction in energy consumption and a 15% improvement in SLA violation.
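The sorting-and-placement step built around Best Fit Decreasing can be sketched as follows. The paper's specific "modified" criterion is not detailed in the abstract, so this sketch uses plain best fit on remaining CPU capacity; all names and data shapes are assumptions.

```python
def best_fit_decreasing(vms, hosts):
    """Best Fit Decreasing placement sketch. vms maps VM name to CPU
    demand, hosts maps host name to CPU capacity. Returns a dict mapping
    each VM to its chosen host, or to None if no host can fit it."""
    remaining = dict(hosts)
    placement = {}
    # Place VMs in decreasing order of demand so large VMs go first.
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        candidates = [(cap - demand, h) for h, cap in remaining.items()
                      if cap >= demand]
        if not candidates:
            placement[vm] = None      # no host can accommodate this VM
            continue
        _, best = min(candidates)     # host left with the least spare capacity
        remaining[best] -= demand
        placement[vm] = best
    return placement
```

For example, with demands {a: 50, b: 30, c: 30} and two hosts of capacity 60 each, the large VM lands alone on one host and the two small VMs pack onto the other.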


2018 ◽  
Vol 11 (2) ◽  
pp. 30-42
Author(s):  
Vinicius Da Silveira Segalin ◽  
Carina Friedrich Dorneles ◽  
Mario Antonio Ribeiro Dantas

Cloud computing is a paradigm that offers many advantages to both customers and service providers, such as low upfront investment, pay-per-use pricing, and ease of use, delivering scalable services over Internet technologies. Among the many types of services available today, Database as a Service (DBaaS) is the one in which a database is provided in the cloud in all its aspects, including data storage, resource management, and SLA maintenance. In this context, an important concern is resource management and performance, which can be addressed in many different ways and for several reasons, such as saving money and time and meeting the requirements agreed between client and provider as defined in the Service Level Agreement (SLA). An SLA usually aims to protect the customer from not receiving the contracted service and to ensure that the provider achieves the intended profit. This paper presents a classification based on three main parameters that aim to manage resources so as to enhance DBaaS performance and guarantee that the SLA is respected, to the benefit of both user and provider. The proposal is based on a survey of existing research efforts.


2021 ◽  
pp. 1-12
Author(s):  
Muhammad Iftikhar Hussain ◽  
Jingsha He ◽  
Nafei Zhu ◽  
Zulfiqar Ali Zardari ◽  
Fahad Razque ◽  
...  

The on-demand dynamicity of end-user requirements in cloud computing leads to a hybrid deployment model called a multi-cloud. A multi-cloud is a multi-tenant, multi-vendor heterogeneous cloud platform, in terms of both services and security, operated under a defined SLA (service level agreement). The diverse deployment of the multi-cloud model raises security risks. In this paper, we define a multi-cloud model that hybridizes vendors and security mechanisms to improve the end-user experience. The proposed model is a heterogeneous cloud paradigm combined with firewall tracts to overcome the rising security issues. The proposed work consists of three steps: first, all incoming traffic from the consumer end is classified into five major groups called ambient; second, a next-generation firewall (NGFW) topology is designed as a mixture of tree-based and demilitarized zone (DMZ) implications; third, the designed topology is tested by using a simple DMZ technique in the vendor-specific model and the NGFW in the hybrid, vendor-based multi-cloud model. The paper also outlines advantages of the NGFW in overcoming these concerns. The proposed work helps a new consumer define dynamic, secure cloud services under a single SLA before adopting a multi-cloud platform. Finally, results for the two cases are compared in terms of throughput and CPU utilization.
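The first step, classifying consumer traffic into groups and gating it at the firewall, can be sketched as below. The abstract does not name the five ambient groups, so the port-based categories and the allow-list here are placeholders, not the paper's actual taxonomy.

```python
# Hypothetical ambient groups; the paper's actual five categories are not
# named in the abstract, so these four labels plus "other" are placeholders.
AMBIENT_GROUPS = {
    "web": {80, 443},
    "mail": {25, 143, 993},
    "dns": {53},
    "file": {21, 22},
}

def classify(dst_port):
    """Assign incoming consumer traffic to one of five groups:
    four port-based placeholders plus a catch-all 'other'."""
    for group, ports in AMBIENT_GROUPS.items():
        if dst_port in ports:
            return group
    return "other"

def route(dst_port, ngfw_allowed=frozenset({"web", "dns"})):
    """Sketch of an NGFW-style decision: traffic passes only if its
    ambient group is on the allow-list; everything else is dropped at
    the DMZ edge. The allow-list is an illustrative assumption."""
    return classify(dst_port) in ngfw_allowed
```

A group-level allow-list like this is what distinguishes an NGFW-style policy from a flat, port-by-port DMZ rule set.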


2017 ◽  
Vol 25 (1) ◽  
pp. 61-66
Author(s):  
Nguyen Cao Phuong ◽  
Tran Hong Quan ◽  
Sang-Ho Lee ◽  
Jung-Mo Moon

Guaranteeing QoS over wireless infrastructures is of primary importance, and efficient service level agreement (SLA) management is becoming increasingly important to both service providers and customers. This paper presents traffic control schemes for improving QoS; the traffic model and performance evaluation are described. We define a new scheme for improving handoff call performance in wireless networks: a finite queueing scheme for handoff calls. SLA measurement calculates the packet delay (PD) parameter of handoff calls, and a handoff call is accepted into the queue only if its PD is smaller than the average waiting time of the queue. Important performance measures of the suggested scheme, such as the new call blocking probability and the handoff call dropping probability, are described and evaluated.
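The queue-admission test described above reduces to a single comparison. This is a minimal sketch under stated assumptions: the function name is illustrative, delays are taken in consistent time units, and an empty queue is assumed to admit unconditionally (the abstract does not specify that case).

```python
def admit_handoff_to_queue(packet_delay, queue_waits):
    """A handoff call enters the finite queue only if its measured packet
    delay (PD) is smaller than the average waiting time of the calls
    already queued. queue_waits lists those calls' waiting times."""
    if not queue_waits:          # empty queue: admit (assumption)
        return True
    avg_wait = sum(queue_waits) / len(queue_waits)
    return packet_delay < avg_wait
```

So a call with PD 5 joins a queue whose average wait is 15, while a call with PD 20 is rejected, bounding the delay of queued handoff calls.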


When a physical machine (PM) receives a job from a user, it intends to complete it at any cost. Virtual machines (VMs) help attain a maximum completion ratio, and the host-to-VM ratio increases as the workload on the system grows. Ambiguities in the VM allocation policy lead to overloaded PMs; this paper aims to reduce that overhead. For allocation, the Modified Best Fit Decreasing (MBFD) algorithm is used to check resource availability, and a Genetic Algorithm (GA) optimizes MBFD performance through its fitness function. For cross-validation, a Polynomial Support Vector Machine (P-SVM) is used for training and classification, and the parameters Service Level Agreement (SLA) violation and Job Completion Ratio (JCR) are evaluated accordingly. A comparative analysis drawn in this article depicts the effectiveness of the research work, with an improvement of 70% perceived.
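Since the stated goal is reducing PM overload, a GA over placements needs a fitness function that penalizes overloaded machines, plus a mutation operator. The abstract does not give the paper's actual fitness form, so the penalty below (total capacity excess, lower is better) and all names are assumptions.

```python
import random

def fitness(assignment, demands, capacities):
    """Illustrative GA fitness for a VM-to-PM assignment: the total amount
    by which physical machines exceed their capacity. Lower is better;
    zero means no PM is overloaded."""
    load = {pm: 0 for pm in capacities}
    for vm, pm in assignment.items():
        load[pm] += demands[vm]
    return sum(max(0, load[pm] - cap) for pm, cap in capacities.items())

def mutate(assignment, pms, rate=0.2, rng=random):
    """Reassign each VM to a random PM with probability `rate`."""
    return {vm: (rng.choice(pms) if rng.random() < rate else pm)
            for vm, pm in assignment.items()}
```

For two VMs of demand 5 and two PMs of capacity 6, stacking both VMs on one PM scores 4 (overloaded), while spreading them scores 0, so the GA is driven toward balanced placements.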


2020 ◽  
Vol 17 (9) ◽  
pp. 4213-4218
Author(s):  
H. S. Madhusudhan ◽  
T. Satish Kumar ◽  
G. Mahesh

Cloud computing provides on-demand services over the Internet using a network of remote servers. A pivotal role in any cloud environment is scheduling tasks, and virtual machine scheduling is key to maintaining Quality of Service (QoS) and Service Level Agreement (SLA) compliance. Task scheduling is the process of assigning tasks (user requests) to certain resources, and it is an NP-complete problem. The primary objectives of scheduling algorithms are to minimize the makespan and improve resource utilization. In this research work, an attempt is made to apply an Artificial Neural Network (ANN), a machine learning technique, to task scheduling. It is observed that a neural network trained with a genetic algorithm outperforms the default genetic algorithm by an average efficiency of 25.56%.
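The makespan objective named above is simple to state in code, and a greedy baseline shows the kind of assignment a learned scheduler (the paper's ANN/GA) would be searching over. This is a generic illustration of the objective, not the paper's method; names are assumptions.

```python
def makespan(assignment, task_times, n_machines):
    """Makespan of a task-to-machine assignment: the finishing time of
    the busiest machine, i.e. the quantity schedulers try to minimize."""
    loads = [0.0] * n_machines
    for task, machine in assignment.items():
        loads[machine] += task_times[task]
    return max(loads)

def greedy_schedule(task_times, n_machines):
    """Longest-processing-time baseline: assign each task, largest first,
    to the currently least-loaded machine."""
    loads = [0.0] * n_machines
    assignment = {}
    for task in sorted(task_times, key=task_times.get, reverse=True):
        m = loads.index(min(loads))
        assignment[task] = m
        loads[m] += task_times[task]
    return assignment
```

Since the problem is NP-complete, heuristics like this are approximate; a GA or trained network explores the assignment space beyond the greedy choice.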


2020 ◽  
Vol 12 (21) ◽  
pp. 9255
Author(s):  
Madhubala Ganesan ◽  
Ah-Lian Kor ◽  
Colin Pattinson ◽  
Eric Rondeau

Internet of Things (IoT) coupled with big data analytics is emerging as the core of smart and sustainable systems that bolster economic, environmental, and social sustainability. Cloud-based data centers provide the high-performance computing power needed to analyze voluminous IoT data and deliver invaluable insights that support decision making. However, the multifarious servers in data centers appear to be a black hole of superfluous energy consumption, contributing 23% of the global carbon dioxide (CO2) emissions of the ICT (Information and Communication Technology) industry. IoT-related energy research focuses on low-power sensors and enhanced machine-to-machine communication performance; to date, cloud-based data centers still face energy-related challenges that are detrimental to the environment. Virtual machine (VM) consolidation is a well-known approach to achieving energy-efficient cloud infrastructures. Although several research works demonstrate positive results for VM consolidation in simulated environments, there is a gap in investigations on real, physical cloud infrastructure for big data workloads. This research work addresses that gap by conducting experiments on a real physical cloud infrastructure, set up primarily to evaluate dynamic VM consolidation approaches that integrate algorithms from existing relevant research. The open-source VM consolidation framework OpenStack NEAT is adopted, and experiments are conducted on a multi-node OpenStack cloud with Apache Spark as the big data platform. Open-source OpenStack was deployed because it enables rapid innovation and boosts scalability as well as resource utilization. Additionally, this research work investigates performance based on service level agreement (SLA) metrics and the energy usage of compute hosts. Relevant results concerning the best-performing combination of algorithms are presented and discussed.
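Dynamic VM consolidation frameworks of the kind evaluated here combine per-host overload and underload detectors whose decisions trigger VM migration. The static-threshold variant below is one of the simplest such detectors; the window, thresholds, and function names are assumptions for illustration, not NEAT's exact algorithms.

```python
def overload_decision(cpu_history, threshold=0.8):
    """Static-threshold overload detector: flag a host when its recent
    mean CPU utilization exceeds the threshold, which would trigger
    migrating some VMs away to protect SLA metrics."""
    if not cpu_history:
        return False
    return sum(cpu_history) / len(cpu_history) > threshold

def underload_decision(cpu_history, threshold=0.2):
    """Complementary underload check: an underloaded host can have all
    its VMs migrated away and be switched to a low-power state,
    reducing data center energy usage."""
    return bool(cpu_history) and sum(cpu_history) / len(cpu_history) < threshold
```

The "best-performing combination" the study reports would pair detectors like these with VM-selection and placement algorithms; this sketch covers only the detection step.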

