Energy-Efficient Virtual Network Function Reconfiguration Strategy Based on Short-Term Resources Requirement Prediction

Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2287
Author(s):  
Yanyang Liu ◽  
Jing Ran ◽  
Hefei Hu ◽  
Bihua Tang

In Network Function Virtualization (NFV), the resource demand of a network service evolves with changes in network traffic, and dynamic VNF migration has become an effective way to improve network performance. However, for time-varying resource demand, how to minimize the long-term energy consumption of the network while guaranteeing the Service Level Agreement (SLA) remains a key issue that previous research has largely left open. To tackle this dilemma, this paper proposes an energy-efficient VNF reconfiguration algorithm based on short-term resource requirement prediction (RP-EDM). The algorithm uses an LSTM to predict VNF resource requirements in advance, eliminating the lag inherent in dynamic migration and determining when to migrate. RP-EDM avoids SLA violations by separating VNFs from potentially overloaded servers and consolidates lightly loaded servers in a timely manner to save energy. It also accounts for the power consumed when servers boot up, which exists in practice, so as to avoid switching servers on and off frequently. Simulation results suggest that RP-EDM performs well and remains stable under machine learning models of different accuracy. Moreover, the algorithm increases total service traffic by about 15% while keeping the SLA interruption rate low, and it reduces total energy cost by more than 20% compared with existing algorithms.
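The prediction step described in this abstract can be pictured with a minimal sketch, assuming a univariate per-VNF resource time series and a small PyTorch LSTM; the window length, layer sizes, and the dummy `history` tensor are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch (assumption): one-step-ahead prediction of a VNF's resource
# demand from a sliding window of recent measurements, using a small LSTM.
import torch
import torch.nn as nn

class ResourcePredictor(nn.Module):
    def __init__(self, window: int = 12, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, 1) normalized resource-usage samples
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predicted next-step demand

# Toy usage: forecast the next CPU-demand sample for one VNF (untrained model,
# random data) -- the forecast is what would trigger migration ahead of time.
model = ResourcePredictor()
history = torch.rand(1, 12, 1)
next_demand = model(history)
print(float(next_demand))
```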

2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Franco Callegati ◽  
Walter Cerroni ◽  
Chiara Contoli

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking, enables unprecedented levels of network virtualization that will reshape future network architectures, with legacy telco central offices replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications allows network operators to take advantage of the flexibility and reduced deployment costs typical of cloud computing. On the other hand, it poses a number of challenges in terms of virtual network performance and customer isolation. This paper provides insights into how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance. To this end, a set of experiments is presented covering scenarios inspired by the cloud computing and NFV paradigms, in both single-tenant and multitenant settings. The results highlight the potential and the limitations of running NFV on OpenStack.


2014 ◽  
Vol 40 (5) ◽  
pp. 1621-1633 ◽  
Author(s):  
Yongqiang Gao ◽  
Haibing Guan ◽  
Zhengwei Qi ◽  
Tao Song ◽  
Fei Huan ◽  
...  

2021 ◽  
Vol 11 (22) ◽  
pp. 10547
Author(s):  
Marios Gatzianas ◽  
Agapi Mesodiakaki ◽  
George Kalfas ◽  
Nikos Pleros ◽  
Francesca Moscatelli ◽  
...  

In order to cope with the ever-increasing traffic demands and stringent latency constraints, next generation, i.e., sixth generation (6G), networks are expected to leverage Network Function Virtualization (NFV) as an enabler for enhanced network flexibility. In such a setup, in addition to the traditional problems of user association and traffic routing, Virtual Network Function (VNF) placement needs to be jointly considered. To that end, in this paper, we focus on joint network and computational resource allocation, targeting low network power consumption while satisfying the Service Function Chain (SFC), throughput, and delay requirements. Unlike the State-of-the-Art (SoA), we also take the Access Network (AN) into account while formulating the problem as a general Mixed Integer Linear Program (MILP). Due to the high complexity of the proposed optimal solution, we also propose a low-complexity energy-efficient resource allocation algorithm, which is shown to significantly outperform the SoA, achieving up to 78% of the optimal energy efficiency with up to 742 times lower complexity. Finally, we present an Orchestration Framework for the automated orchestration of vertical-driven services in Network Slices and describe how it incorporates the proposed algorithm for optimized provisioning of heterogeneous computation and network resources across multiple network segments.
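As a rough illustration of the kind of placement MILP this abstract describes (not the paper's actual formulation, which additionally covers user association, routing, SFC ordering, throughput, and delay), the sketch below uses PuLP to place a few VNFs on servers while minimizing static server power; all capacities and power figures are made-up.

```python
# Toy MILP sketch (assumption): place VNFs on servers to minimize static server
# power, subject only to CPU capacity.
import pulp

vnfs = {"fw": 4, "nat": 2, "dpi": 6}            # CPU demand per VNF (made-up)
servers = {"s1": (8, 200.0), "s2": (8, 180.0)}  # (CPU capacity, power in W), made-up

prob = pulp.LpProblem("vnf_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(vnfs), list(servers)), cat="Binary")  # VNF on server
y = pulp.LpVariable.dicts("y", list(servers), cat="Binary")                # server powered on

# Objective: total power of active servers.
prob += pulp.lpSum(servers[s][1] * y[s] for s in servers)

for v in vnfs:                                   # each VNF placed exactly once
    prob += pulp.lpSum(x[v][s] for s in servers) == 1
for s in servers:                                # capacity enforced only on active servers
    prob += pulp.lpSum(vnfs[v] * x[v][s] for v in vnfs) <= servers[s][0] * y[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in vnfs:
    for s in servers:
        if x[v][s].value() > 0.5:
            print(f"{v} -> {s}")
```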


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Ran Xu

Network function virtualization (NFV) implements network functions in software, replacing the proprietary hardware devices of traditional networks. In response to the growing demand for resource-intensive services, software-based network functions pose a number of challenges for NFV cloud service providers, such as the dynamic deployment of virtual network functions (VNFs) and the efficient allocation of multiple resources. This study addresses the dynamic allocation and adjustment of multiple network resources and multiple types of flows for NFV. First, to provision new instances for overloaded VNFs proactively and ahead of time, a long short-term memory recurrent neural network (LSTM RNN) model is proposed to estimate flows. Then, based on the estimated flows, a cooperative and complementary resource allocation algorithm is designed to reduce resource fragmentation and improve utilization. The results demonstrate the advantage of the LSTM model in predicting network function flow requirements, and the algorithm achieves good performance improvements in dynamically scaling network functions and improving resource utilization.
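The allocation step can be pictured with a generic sketch (not the paper's cooperative-and-complementary algorithm): predicted per-VNF demands are split into instances and packed onto servers first-fit-decreasing, scaling out an overloaded VNF before its load actually arrives. All names, capacities, and flow figures below are illustrative assumptions.

```python
# Generic sketch (assumption): scale out VNFs whose *predicted* flow exceeds the
# capacity of one instance, then pack instances onto servers first-fit-decreasing.
from dataclasses import dataclass, field
from math import ceil

@dataclass
class Server:
    name: str
    capacity: float
    used: float = 0.0
    instances: list = field(default_factory=list)

def plan(predicted_flows: dict, per_instance_cap: float, servers: list):
    # Split each VNF into enough instances to absorb its predicted flow.
    instances = []
    for vnf, flow in predicted_flows.items():
        n = max(1, ceil(flow / per_instance_cap))
        instances += [(f"{vnf}#{i}", min(per_instance_cap, flow / n)) for i in range(n)]
    # First-fit-decreasing packing keeps the number of busy servers low.
    for name, demand in sorted(instances, key=lambda t: -t[1]):
        target = next((s for s in servers if s.capacity - s.used >= demand), None)
        if target is None:
            raise RuntimeError(f"no capacity left for {name}")
        target.used += demand
        target.instances.append(name)
    return servers

servers = [Server("s1", 10.0), Server("s2", 10.0)]
predicted = {"firewall": 12.0, "lb": 3.0}        # predicted flows (made-up units)
for s in plan(predicted, per_instance_cap=8.0, servers=servers):
    print(s.name, s.instances, round(s.used, 1))
```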


Effort estimation has been a challenging part of e-Learning because of continual changes in technology, and the e-Learning industry has to meet dynamic customer requirements. Content development goes through several stages, during which the initial SLA (Service Level Agreement) frequently changes and several problems arise in content delivery. Scope creep is the result of dynamic client expectations without any limit on time; it affects product delivery because the resources allocated to development were sized according to the initial SLA. This paper discusses the parameters that affect estimation, along with the computation of resource requirements. It further examines a traditional effort estimation technique while analyzing the scope creep life cycle for an e-Learning project. The investigation considers one of the leading mid-sized e-Learning organizations; the case study and the statistical analysis are carried out on data collected from that company. From the analysis, the amount of resource required to handle the dynamic data can be estimated.


Digital Twin ◽  
2021 ◽  
Vol 1 ◽  
pp. 5
Author(s):  
Xiaowen Sun ◽  
Cheng Zhou ◽  
Xiaodong Duan ◽  
Tao Sun

With the gradual development of 5G industry networks and applications, each industry application has its own network performance requirements, and customers hope to upgrade their industrial structures by leveraging 5G technologies. Guaranteeing service level agreement (SLA) requirements is becoming more and more important, especially SLA performance indicators such as delay, jitter, and bandwidth. To fulfill customers' requirements, network operators introduce emerging technologies such as time-sensitive networking (TSN), edge computing (EC), and network slicing into the mobile network to improve performance, which increases the complexity of network operation and maintenance (O&M) as well as the network cost. As a result, operators urgently need new solutions to achieve low-cost and high-efficiency SLA management. In this paper, a digital twin network (DTN) solution is proposed to achieve the mapping and full lifecycle management of the end-to-end physical network. Network operation policies such as configuration and modification can be generated and verified inside the digital twin network first, to make sure that SLA requirements can be fulfilled without affecting the related network environment or the performance of other network services, making network operation and maintenance more effective and accurate.
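The verify-before-apply workflow described above might look like the following sketch; the `simulate`/`apply` interfaces, the stub classes, and the SLA thresholds are hypothetical, since the abstract does not specify an API.

```python
# Hypothetical sketch of the DTN verify-before-apply loop: a candidate policy is
# first evaluated against the twin's model of the network; only if the predicted
# KPIs meet the SLA is it pushed to the physical network.
from dataclasses import dataclass

@dataclass
class Sla:
    max_delay_ms: float
    max_jitter_ms: float
    min_bandwidth_mbps: float

def meets(kpi: dict, sla: Sla) -> bool:
    return (kpi["delay_ms"] <= sla.max_delay_ms
            and kpi["jitter_ms"] <= sla.max_jitter_ms
            and kpi["bandwidth_mbps"] >= sla.min_bandwidth_mbps)

def rollout(policy: dict, twin, network, sla: Sla) -> bool:
    predicted_kpi = twin.simulate(policy)        # evaluate in the digital twin first
    if not meets(predicted_kpi, sla):
        return False                             # reject: would violate the SLA
    network.apply(policy)                        # safe to configure the real network
    return True

class StubTwin:
    def simulate(self, policy: dict) -> dict:
        # stand-in model: pretend the policy yields these KPIs
        return {"delay_ms": 8.0, "jitter_ms": 1.5, "bandwidth_mbps": 120.0}

class StubNetwork:
    def apply(self, policy: dict) -> None:
        print("applied", policy)

print(rollout({"slice": "urllc-1"}, StubTwin(), StubNetwork(),
              Sla(max_delay_ms=10, max_jitter_ms=2, min_bandwidth_mbps=100)))
```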


Author(s):  
Kethavath Prem Kumar ◽  
◽  
Thirumalaisamy Ragunathan ◽  
Devara Vasumathi ◽  
◽  
...  

Cloud computing is increasingly used to operate information technology services, offering a variety of benefits such as dynamically improved resource planning and a new service delivery model. In cloud computing, client devices access data over the internet from remote servers, computers, and databases; the front end (client device, network, browser, and software application) is connected to a back end consisting of servers, computers, and databases. To satisfy the demands of the Service Level Agreement (SLA), cloud service providers should reduce energy usage. Cloud providers offer capacity-reservation schemes that permit users to customize Virtual Machines (VMs) with a specified lifetime and geographic resources, reducing the amount paid for cloud services. To address this challenge, an Improved Spider Monkey Optimization (ISMO) approach is proposed for cloud data center optimization. The VM consolidation architecture based on the proposed ISMO algorithm decreases energy usage while attempting to prevent SLA breaches. Fitness measures the availability of hosts or virtual machines (VMs) for task execution, so that when the number of tasks to be handled increases, hosts and VMs remain available in the right state. The proposed VM consolidation architecture thus decreases energy usage while attempting to prevent SLA breaches, and it can be used to provide energy-efficient computing in data centers. The proposed ISMO method achieved an energy efficiency of 28266, whereas the existing algorithms showed energy efficiencies of 6009 and 10001.
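A hedged sketch of the kind of fitness function such a consolidation scheme might evaluate (the actual ISMO fitness is not specified in this abstract): it rewards low power draw and penalizes SLA violations for a candidate VM-to-host mapping. The linear power model, capacities, and penalty weight are assumptions.

```python
# Illustrative fitness for a VM-consolidation candidate (not the paper's exact
# formulation): lower is better. Power is modeled linearly in host utilization,
# and any host pushed past its capacity contributes an SLA-violation penalty.
IDLE_W, PEAK_W = 100.0, 250.0      # assumed per-host power figures
SLA_PENALTY = 1_000.0              # assumed penalty weight per overloaded host

def host_power(utilization: float) -> float:
    return 0.0 if utilization == 0 else IDLE_W + (PEAK_W - IDLE_W) * min(utilization, 1.0)

def fitness(mapping: dict, vm_load: dict, host_cap: dict) -> float:
    load = {h: 0.0 for h in host_cap}
    for vm, host in mapping.items():
        load[host] += vm_load[vm]
    power = sum(host_power(load[h] / host_cap[h]) for h in host_cap)
    violations = sum(1 for h in host_cap if load[h] > host_cap[h])
    return power + SLA_PENALTY * violations

# Toy comparison: consolidating both VMs on one host vs. spreading them out.
vm_load = {"vm1": 2.0, "vm2": 3.0}
host_cap = {"h1": 8.0, "h2": 8.0}
print(fitness({"vm1": "h1", "vm2": "h1"}, vm_load, host_cap))  # one active host
print(fitness({"vm1": "h1", "vm2": "h2"}, vm_load, host_cap))  # two active hosts
```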

