Cloud Computing Virtual Machine Workload Prediction Method Based on Variational Autoencoder

Author(s):  
Fargana J. Abdullayeva

The paper proposes a method for predicting the workload of virtual machines in a cloud infrastructure. The reconstruction probability of a variational autoencoder is used to produce the prediction. Reconstruction probability is a probabilistic criterion that accounts for the variability in the distribution of the variables. In the proposed approach, the values of the reconstruction probability of the variational autoencoder indicate the workload level of the virtual machines. The experiments showed that variational autoencoders gave better results in predicting the workload of virtual machines than simple deep neural networks. The generative nature of the variational autoencoder allows the workload level to be determined from the reconstruction of the data.
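As a rough illustration of how a reconstruction probability can serve as a workload indicator, the sketch below scores VM metric vectors with a small VAE. The architecture, layer sizes, and number of Monte Carlo samples are assumptions for illustration, not the paper's exact model.

```python
# Minimal sketch: scoring VM metrics by VAE reconstruction probability.
# Layer sizes, latent dimension, and sample count are illustrative assumptions.
import torch
import torch.nn as nn

class WorkloadVAE(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.enc_mu = nn.Linear(32, latent_dim)
        self.enc_logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU())
        self.dec_mu = nn.Linear(32, n_features)
        self.dec_logvar = nn.Linear(32, n_features)

    def reconstruction_probability(self, x: torch.Tensor, n_samples: int = 10) -> torch.Tensor:
        """Average log-likelihood of x under the decoder, with latent codes
        sampled from the approximate posterior q(z|x)."""
        h = self.encoder(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        std = torch.exp(0.5 * logvar)
        log_probs = []
        for _ in range(n_samples):
            z = mu + std * torch.randn_like(std)          # reparameterization trick
            d = self.decoder(z)
            dec_mu = self.dec_mu(d)
            dec_std = torch.exp(0.5 * self.dec_logvar(d))
            # log-likelihood of the observed metrics under p(x|z)
            log_probs.append(torch.distributions.Normal(dec_mu, dec_std).log_prob(x).sum(dim=-1))
        return torch.stack(log_probs).mean(dim=0)

# Example: score a batch of VM metric vectors (e.g. CPU, memory, disk, network).
vae = WorkloadVAE(n_features=4)
scores = vae.reconstruction_probability(torch.rand(16, 4))
```

In this reading, low reconstruction probability marks metric vectors that the model considers atypical, which is the signal the abstract uses to characterize the workload level.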

2018
Vol 11 (4)
pp. 137-154
Author(s):
Lei Li
Min Feng
Lianwen Jin
Shenjin Chen
Lihong Ma
...

Online services are now commonly deployed via cloud computing, ranging from Infrastructure-as-a-Service (IaaS) to Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). However, workload is not constant over time, so guaranteeing quality of service (QoS) and resource cost-effectiveness, which are determined by on-demand workload resource requirements, is a challenging issue. In this article, the authors propose a neural network-based method termed domain knowledge embedding regularization neural networks (DKRNN) for large-scale workload prediction. Based on an analysis of the statistical properties of a real large-scale workload, domain knowledge, which provides extended information about workload changes, is embedded into artificial neural networks (ANN) for linear regression to improve prediction accuracy. Furthermore, noise-based regularization is combined with the model to improve the generalization ability of the network. The experiments demonstrate that the model achieves more accurate workload prediction, provisions resources more adaptively for higher cost-effectiveness, and has less impact on QoS.
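The abstract does not give the exact DKRNN formulation, so the following sketch only illustrates the general idea of combining noise-based regularization with a domain-knowledge penalty; the assumed periodicity prior, network shape, and hyperparameters are illustrative, not the authors' loss.

```python
# Hedged sketch: noise regularization plus an assumed domain-knowledge term
# (workload periodicity). Not the DKRNN loss from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WorkloadMLP(nn.Module):
    def __init__(self, window: int = 24):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(window, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def domain_regularized_loss(model, x, y, prev_period_value, lam=0.1, noise_std=0.01):
    """MSE on noise-perturbed inputs (noise regularization) plus a penalty
    tying the prediction to the workload observed one period earlier
    (an assumed form of embedded domain knowledge)."""
    x_noisy = x + noise_std * torch.randn_like(x)     # noise-based regularization
    pred = model(x_noisy)
    mse = F.mse_loss(pred, y)
    periodic_penalty = F.mse_loss(pred, prev_period_value)
    return mse + lam * periodic_penalty

# Example usage with random stand-in data.
model = WorkloadMLP()
x, y = torch.rand(32, 24), torch.rand(32, 1)
loss = domain_regularized_loss(model, x, y, prev_period_value=torch.rand(32, 1))
loss.backward()
```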


2020
Vol 185
pp. 02025
Author(s):
Guo Yanan
Cao Xiaoqun
Peng Kecheng

Atmospheric systems are typically chaotic, and their chaotic nature is an important limiting factor for weather forecasting and climate prediction. So far, there have been many studies on simulating and predicting chaotic systems with numerical simulation methods. However, numerical simulation faces many intractable problems in predicting chaotic systems, such as sensitivity to initial values, error accumulation, and unreasonable parameterization of physical processes, which often lead to forecast failure. With the continuous improvement of observational techniques, data assimilation has gradually become an effective method for improving numerical prediction. In addition, with the advent of big data and the growth of computing resources, machine learning has achieved great success. Studies have shown that deep neural networks are capable of mining and extracting the complex physical relationships behind large amounts of data to build very good forecasting models. Therefore, in this paper, we propose a prediction method for chaotic systems that combines deep neural networks and data assimilation. To test the effectiveness of the method, we perform forecasting experiments on the Lorenz96 model. The experimental results show that the method combining neural networks and data assimilation is very effective in predicting the state variables of the Lorenz96 model. However, Lorenz96 is a relatively simple model; our next step will be to continue the experiments on more complex system models to further test, optimize, and improve the proposed method.
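For reference, a minimal sketch of the Lorenz96 test bed follows. The hybrid neural-network/data-assimilation forecaster itself is not specified in the abstract, so only the reference model and a trajectory generator (which could supply training data and simulated observations) are shown; the dimension, forcing, and step size are conventional choices, not necessarily the paper's settings.

```python
# Lorenz96 reference model: dX_i/dt = (X_{i+1} - X_{i-2}) X_{i-1} - X_i + F,
# with cyclic indices; F = 8 gives chaotic behaviour.
import numpy as np

def lorenz96_rhs(x: np.ndarray, forcing: float = 8.0) -> np.ndarray:
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x: np.ndarray, dt: float = 0.01, forcing: float = 8.0) -> np.ndarray:
    """One fourth-order Runge-Kutta step of the Lorenz96 system."""
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_rhs(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Generate a trajectory usable as training data for a neural forecaster and
# as "truth" for simulated observations in a data-assimilation experiment.
state = 8.0 * np.ones(40)
state[0] += 0.01                      # small perturbation to trigger chaos
trajectory = []
for _ in range(2000):
    state = rk4_step(state)
    trajectory.append(state)
trajectory = np.asarray(trajectory)
```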


Measurement
2021
Vol 172
pp. 108878
Author(s):
Hua Ding
Liangliang Yang
Zeyin Cheng
Zhaojian Yang

The cloud computing paradigm has matured into a stable stage. Due to its enormous advantages, cloud-based services are attracting increasing adoption across diverse sectors of society. Because of the pay-per-use model, users prefer to execute data-intensive operations on high-end virtual machines. Optimized resource management, however, becomes critical in such scenarios: poor management of cloud resources not only reduces customer satisfaction but also wastes available cloud infrastructure. An optimized resource sharing mechanism for collaborative cloud computing environments is suggested here. The suggested resource sharing technique solves the starvation issue in the inter-cloud load balancing context. When starvation occurs, the technique resolves it by switching under-loaded and overloaded virtual machines between the intra-cloud and inter-cloud computing environments.
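A minimal sketch of the starvation-resolution idea, assuming simple utilisation thresholds and a one-task-at-a-time hand-off that prefers intra-cloud targets before falling back to a partner cloud; it is an illustration of the switching idea, not the paper's exact algorithm.

```python
# Illustrative sketch: offload tasks from overloaded VMs to under-loaded VMs,
# intra-cloud first, then inter-cloud. Thresholds and per-task cost are assumptions.
from dataclasses import dataclass, field

OVER, UNDER = 0.8, 0.3   # assumed utilisation thresholds

@dataclass
class VirtualMachine:
    name: str
    utilisation: float
    tasks: list = field(default_factory=list)

def rebalance(local_vms, remote_vms, task_cost=0.1):
    """Move one task off each overloaded local VM onto the least-loaded
    under-loaded VM, preferring intra-cloud targets over inter-cloud ones."""
    for vm in sorted(local_vms, key=lambda v: v.utilisation, reverse=True):
        if vm.utilisation <= OVER or not vm.tasks:
            continue
        targets = [t for t in local_vms if t is not vm and t.utilisation < UNDER] \
                  or [t for t in remote_vms if t.utilisation < UNDER]
        if not targets:
            continue
        target = min(targets, key=lambda t: t.utilisation)
        target.tasks.append(vm.tasks.pop())       # intra- or inter-cloud switch
        vm.utilisation -= task_cost
        target.utilisation += task_cost

# Example: an overloaded local VM offloads a task to an idle VM in a partner cloud.
local = [VirtualMachine("vm-a", 0.95, ["job1", "job2"]), VirtualMachine("vm-b", 0.5)]
remote = [VirtualMachine("vm-c", 0.1)]
rebalance(local, remote)
```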


Advances in cloud computing have gained the attention of several researchers by providing on-demand network access to users with shared resources. Cloud computing is an important research direction that can provide platforms and software to clients over the Internet. However, handling a huge number of tasks in a cloud infrastructure is complicated, so a load balancing method is needed to allocate tasks to Virtual Machines (VMs) without degrading system performance. This paper proposes a load balancing technique, named Elephant Herd Grey Wolf Optimization (EHGWO), for balancing the loads. The proposed EHGWO is designed by integrating Elephant Herding Optimization (EHO) into the Grey Wolf Optimizer (GWO) to select the optimal VMs for reallocation based on a newly devised fitness function. The proposed load balancing technique considers different parameters of VMs and Physical Machines (PMs) when selecting the tasks that initiate reallocation for load balancing. Here, two pick factors, named Task Pick Factor (TPF) and VM Pick Factor (VPF), are considered for allocating the tasks to balance the loads.
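The abstract does not define the fitness function or the pick-factor formulas, so the sketch below uses assumed load/capacity ratios purely to show where TPF, VPF, and the fitness evaluation would plug into a hybrid EHO/GWO search; none of these expressions are taken from the paper.

```python
# Illustrative placeholders for the pick factors and fitness used in an
# EHGWO-style load balancer; the formulas are assumptions, not the paper's.
import numpy as np

def task_pick_factor(task_loads: np.ndarray, vm_capacity: float) -> np.ndarray:
    """Assumed TPF: heavier tasks on a loaded VM are picked first for reallocation."""
    return task_loads / vm_capacity

def vm_pick_factor(vm_loads: np.ndarray, vm_capacities: np.ndarray) -> np.ndarray:
    """Assumed VPF: VMs with more spare capacity are preferred as targets."""
    return 1.0 - vm_loads / vm_capacities

def fitness(assignment: np.ndarray, task_loads: np.ndarray, vm_capacities: np.ndarray) -> float:
    """Assumed fitness: negative load imbalance across VMs (higher is better)."""
    per_vm = np.zeros(len(vm_capacities))
    np.add.at(per_vm, assignment, task_loads)     # accumulate load per VM
    utilisation = per_vm / vm_capacities
    return -float(np.std(utilisation))

# Candidate solutions (task -> VM assignments) would be evolved by the hybrid
# EHO + GWO search, each candidate evaluated with this fitness.
tasks = np.array([2.0, 1.5, 3.0, 0.5])
caps = np.array([4.0, 4.0])
print(fitness(np.array([0, 1, 0, 1]), tasks, caps))
```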


Author(s):  
Partha Ghosh ◽  
Arpan Losalka ◽  
Michael J. Black

Susceptibility of deep neural networks to adversarial attacks poses a major theoretical and practical challenge. All efforts to harden classifiers against such attacks have seen limited success so far. Two distinct categories of samples against which deep neural networks are vulnerable, "adversarial samples" and "fooling samples", have been tackled separately so far due to the difficulty posed when considered together. In this work, we show how one can defend against both under a unified framework. Our model has the form of a variational autoencoder with a Gaussian mixture prior on the latent variable, such that each mixture component corresponds to a single class. We show how selective classification can be performed using this model, thereby causing the adversarial objective to entail a conflict. The proposed method leads to the rejection of adversarial samples instead of misclassification, while maintaining high precision and recall on test data. It also inherently provides a way of learning a selective classifier in a semi-supervised scenario, which can similarly resist adversarial attacks. We further show how one can reclassify the detected adversarial samples by iterative optimization.
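A hedged sketch of the kind of rejection rule such a model enables: classify by the class component that best explains the input, and reject when neither the latent fit nor the reconstruction is good enough. The encoder/decoder, per-class prior means, scoring rule, and threshold below are assumptions, not the authors' implementation.

```python
# Sketch of selective classification with a class-conditional (GMM-style) latent
# prior: assign the class whose component best explains the input, reject otherwise.
import torch
import torch.nn as nn

class GMMPriorVAE(nn.Module):
    def __init__(self, n_features=784, latent_dim=16, n_classes=10):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                    nn.Linear(128, latent_dim))
        self.decode = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                    nn.Linear(128, n_features))
        # one mixture component (represented here by a learned mean) per class
        self.class_means = nn.Parameter(torch.randn(n_classes, latent_dim))

    def selective_classify(self, x: torch.Tensor, threshold: float = 50.0) -> torch.Tensor:
        """Return predicted class per sample, or -1 when the combined
        latent-distance + reconstruction score exceeds the threshold."""
        z = self.encode(x)
        recon_err = ((self.decode(z) - x) ** 2).sum(dim=-1)
        # squared distance of each latent code to every class component mean
        dist = ((z.unsqueeze(1) - self.class_means) ** 2).sum(dim=-1)
        score, label = (recon_err.unsqueeze(1) + dist).min(dim=1)
        label[score > threshold] = -1                 # reject suspicious inputs
        return label

# Example: classify (or reject) a small batch of flattened images.
model = GMMPriorVAE()
labels = model.selective_classify(torch.rand(4, 784))
```

Rejected inputs (label -1) correspond to samples the adversary would need to make both reconstructable and consistent with a single class component, which is the conflict the abstract describes.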

