Application of a Long Short Term Memory neural predictor with asymmetric loss function for the resource allocation in NFV network architectures

2021 ◽  
Vol 193 ◽  
pp. 108104
Author(s):  
Vincenzo Eramo ◽  
Francesco Giacinto Lavacca ◽  
Tiziana Catena ◽  
Paul Jaime Perez Salazar

2021 ◽  
Author(s):  
Arif Ullah ◽  
Irshad Ahmed Abbasi ◽  
Muhammad Zubair Rehman ◽  
Tanweer Alam ◽  
Hanane Aznaoui

Abstract The infrastructure service model provides different kinds of virtual computing resources, such as networking, storage, and hardware, according to user demands. Host load prediction is an important element of cloud computing for improving resource allocation systems. Host initialization issues still exist in cloud computing; because of this problem, hardware resource allocation incurs several minutes of delay in the response process. To solve this issue, prediction techniques are used in the cloud data center to dynamically scale the cloud in order to maintain a high quality of service. Therefore, in this paper we propose a hybrid convolutional neural network and long short-term memory model for host load prediction. In the proposed hybrid model, the vector autoregression method is first applied to the input data to filter out the linear interdependencies among the multivariate data. The remaining residual data are then computed and fed into the convolutional neural network layer, which extracts complex features for each central processing unit and virtual machine usage component; after that, a long short-term memory layer is used, which is suitable for modeling the temporal information of irregular trends in the time series components. The main contribution throughout this process is the use of the scaled polynomial constant unit activation function, which is most suitable for this kind of model. Because of the high variability in data centers, accurate prediction is important in cloud systems. For this reason, two real-world load traces were used to evaluate the performance: one from a Google data center and the other from a traditional distributed system. The experimental results show that our proposed method achieves state-of-the-art performance, with higher accuracy on both datasets compared with ARIMA-LSTM, VAR-GRU, VAR-MLP, and CNN models.
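The first stage of the pipeline described in the abstract, fitting a vector autoregression and passing its residuals on to the neural stages, can be sketched as follows. This is a minimal illustration only, assuming a two-variable VAR(1) fitted by ordinary least squares; the function names and the toy setup are ours, not the paper's.

```python
# Minimal sketch of the VAR preprocessing stage (assumption: two metrics,
# e.g. CPU and VM usage, modeled as a VAR(1) fitted by least squares).
# The paper feeds the residuals of this fit into a CNN + LSTM; here we
# only illustrate fitting x_t ~ A @ x_{t-1} and extracting residuals.

def fit_var1(series):
    """Least-squares fit of a 2-variable VAR(1): x_t = A x_{t-1} + e_t."""
    Y = series[1:]   # targets x_1 .. x_{T-1}
    Z = series[:-1]  # lagged regressors x_0 .. x_{T-2}
    # Accumulate the 2x2 normal-equation matrices M1 = Y Z^T, M2 = Z Z^T.
    M1 = [[0.0, 0.0], [0.0, 0.0]]
    M2 = [[0.0, 0.0], [0.0, 0.0]]
    for y, z in zip(Y, Z):
        for i in range(2):
            for j in range(2):
                M1[i][j] += y[i] * z[j]
                M2[i][j] += z[i] * z[j]
    # Closed-form inverse of the 2x2 matrix M2.
    det = M2[0][0] * M2[1][1] - M2[0][1] * M2[1][0]
    inv = [[ M2[1][1] / det, -M2[0][1] / det],
           [-M2[1][0] / det,  M2[0][0] / det]]
    # A = M1 @ inv(M2)
    return [[sum(M1[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def var1_residuals(series, A):
    """Residuals e_t = x_t - A x_{t-1}; these would feed the CNN/LSTM stage."""
    return [[x[i] - sum(A[i][j] * z[j] for j in range(2)) for i in range(2)]
            for x, z in zip(series[1:], series[:-1])]
```

On a noiseless linear series the fit recovers the generating matrix and the residuals vanish; on real load traces the residuals carry exactly the nonlinear structure the CNN and LSTM layers are meant to model.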


Engineering ◽  
2021 ◽  
Vol 13 (03) ◽  
pp. 135-157
Author(s):  
Koné Kigninman Désiré ◽  
Kouassi Adlès Francis ◽  
Konan Hyacinthe Kouassi ◽  
Eya Dhib ◽  
Nabil Tabbane ◽  
...  

2017 ◽  
Vol 2017 ◽  
pp. 1-7 ◽  
Author(s):  
YuKang Jia ◽  
Zhicheng Wu ◽  
Yanyan Xu ◽  
Dengfeng Ke ◽  
Kaile Su

Long Short-Term Memory (LSTM) is a kind of Recurrent Neural Network (RNN) suited to time series, and it has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM that further optimizes the speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper we combine them with Connectionist Temporal Classification (CTC) to study continuous piano note recognition for robotics. Based on the Beijing Forestry University music library, we conduct experiments to compare the recognition rates and numbers of iterations of single-layer LSTM, single-layer LSTMP, and Deep LSTM (DLSTM, LSTM with multiple layers). The results show that single-layer LSTMP performs much better than single-layer LSTM in both training time and recognition rate: LSTMP has fewer parameters and therefore reduces the training time, and, benefiting from the projection layer, it also achieves better performance. The best recognition rate of LSTMP is 99.8%. As for DLSTM, the recognition rate can reach 100% thanks to the effectiveness of the deep structure, but compared with single-layer LSTMP, DLSTM needs more training time.
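The projection layer that distinguishes LSTMP from plain LSTM can be seen in a single time step. The sketch below uses the generic gate equations with biases omitted for brevity; the dimensions, weight layout, and function names are our illustrative assumptions, not the paper's implementation.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def lstmp_step(x, h, c, W, P):
    """One LSTMP time step (biases omitted for brevity).

    W maps the concatenated [x, h] to the four stacked gate pre-activations
    (input, forget, cell candidate, output); P is the projection matrix that
    maps the n_c-dimensional cell output down to an n_p-dimensional recurrent
    state -- the extra layer that distinguishes LSTMP from plain LSTM and
    shrinks the recurrent weight matrices when n_p < n_c.
    """
    n_c = len(c)
    z = matvec(W, x + h)                           # stacked pre-activations
    i = [sigmoid(v) for v in z[0:n_c]]             # input gate
    f = [sigmoid(v) for v in z[n_c:2*n_c]]         # forget gate
    g = [math.tanh(v) for v in z[2*n_c:3*n_c]]     # cell candidate
    o = [sigmoid(v) for v in z[3*n_c:4*n_c]]       # output gate
    c_new = [fk*ck + ik*gk for fk, ck, ik, gk in zip(f, c, i, g)]
    m = [ok * math.tanh(ck) for ok, ck in zip(o, c_new)]
    h_new = matvec(P, m)                           # projection: n_c -> n_p
    return h_new, c_new
```

Because the recurrent state `h` has the smaller projected dimension, every gate matrix shrinks accordingly, which is the source of the parameter and training-time savings the abstract reports.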

