Secure Cloud Auditability for Virtual Machines by Adaptive Characterization Using Machine Learning Methods

Author(s):  
Shesagiri Taminana ◽  
Lalitha Bhaskari ◽  
Arwa Mashat ◽  
Dragan Pamučar ◽  
...  

With the present-day increasing demand for higher performance, application developers have started considering cloud computing and cloud-based data centres as one of the prime options for hosting applications. A number of parallel research outcomes have shown that, for a data centre to be secure, its infrastructure must go through an auditing process. During auditing, auditors can access the VMs, the applications and the data deployed on the virtual machines. The downside is that the data in the VMs can be highly sensitive, and during an audit it is highly complex to grant permissions on a per-request basis, which can increase the total time taken to complete the tasks. Hence, selective and adaptive auditing is the need of current research. However, existing outcomes are criticised for higher time complexity and lower accuracy. Thus, this work proposes a predictive method that analyses the characteristics of the VM applications and the characteristics of the auditors, and finally grants access to the virtual machine by building a predictive regression model. The proposed algorithm demonstrates 50% less time complexity than other parallel research, making the cloud-based application development industry a safer and faster place.
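A minimal sketch of how such a predictive access decision could look, assuming a scikit-learn logistic-regression model; the feature names (VM sensitivity, workload criticality, auditor clearance, requested scope), the training data and the grant threshold are illustrative assumptions, not the authors' actual model:

```python
# Hypothetical sketch: predict whether an audit request should be granted
# from combined VM and auditor characteristics. Features and model choice
# are assumptions, not the paper's exact method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [vm_data_sensitivity, vm_workload_criticality,
#            auditor_clearance_level, requested_access_scope]
X_train = np.array([
    [0.9, 0.8, 0.3, 0.7],   # sensitive VM, low-clearance auditor -> deny
    [0.2, 0.1, 0.9, 0.4],   # low-sensitivity VM, trusted auditor -> grant
    [0.7, 0.6, 0.8, 0.5],
    [0.4, 0.3, 0.2, 0.9],
])
y_train = np.array([0, 1, 1, 0])  # 1 = grant access, 0 = deny

model = LogisticRegression().fit(X_train, y_train)

def decide(vm_features, auditor_features, threshold=0.5):
    """Grant access if the predicted grant probability exceeds the threshold."""
    x = np.array([vm_features + auditor_features])
    return model.predict_proba(x)[0, 1] >= threshold

print(decide([0.3, 0.2], [0.85, 0.4]))  # likely True for a trusted auditor
```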

2021 ◽  
Author(s):  
Salam Ismaeel

Increasing power efficiency is one of the most important operational factors for any data centre provider. In this context, one of the most useful approaches is to reduce the number of utilized Physical Machines (PMs) through optimal distribution and re-allocation of Virtual Machines (VMs) without affecting the Quality of Service (QoS). Dynamic VM provisioning makes use of monitoring tools, historical data, prediction techniques, as well as placement algorithms to improve VM allocation and migration. Consequently, the energy efficiency of the data centre increases.

In this thesis, we propose an efficient real-time dynamic provisioning framework to reduce energy in heterogeneous data centres. The framework consists of efficient workload preprocessing, systematic VM clustering, a multivariate prediction, and an optimal Virtual Machine Placement (VMP) algorithm. Additionally, it takes into consideration VM and user behaviours along with the existing state of the PMs. The framework is organized as a pipeline of successive subsystems, which can be used separately or combined to improve the accuracy, efficiency, and speed of workload clustering, prediction and provisioning.

The pre-processing and clustering subsystems use the current state and historical workload data to create efficient VM clusters. Efficient VM clustering means lower resource consumption, faster computing and improved accuracy. A modified multivariate Extreme Learning Machine (ELM)-based predictor is used to forecast the number of VMs in each cluster for the subsequent period. The prediction subsystem takes users' behaviour into consideration to exclude unpredictable VM requests.

The placement subsystem is a multi-objective placement algorithm based on a novel Machine Condition Index (MCI). The MCI represents a group of weighted components covering the data centre network, PMs, storage, power system and facilities used in any data centre. In this study it is used to measure the extent to which a PM is suited to handling a new and/or consolidated VM in large-scale heterogeneous data centres, and it serves as an efficient tool for comparing server energy consumption, augmenting the efficiency and manageability of data centre resources.

The components of the proposed framework are tested and evaluated separately with both synthetic and realistic data traces. Simulation results show that the proposed subsystems achieve efficient results compared to existing algorithms.
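As an illustration of the placement idea, the following sketch computes a weighted condition index for candidate PMs and picks the most suitable host; the component names, weights and scores are assumptions for demonstration only, not the values used in the thesis:

```python
# Illustrative weighted Machine Condition Index (MCI) sketch.
# Component names, weights and scores below are assumptions.
def machine_condition_index(pm, weights=None):
    """Combine normalised per-component scores (0..1, higher = better)
    into a single suitability score for hosting a new/consolidated VM."""
    weights = weights or {
        "cpu_headroom": 0.30,
        "memory_headroom": 0.25,
        "network_capacity": 0.20,
        "storage_capacity": 0.15,
        "power_efficiency": 0.10,
    }
    return sum(weights[k] * pm[k] for k in weights)

physical_machines = {
    "pm-1": {"cpu_headroom": 0.6, "memory_headroom": 0.5,
             "network_capacity": 0.8, "storage_capacity": 0.7,
             "power_efficiency": 0.9},
    "pm-2": {"cpu_headroom": 0.3, "memory_headroom": 0.4,
             "network_capacity": 0.5, "storage_capacity": 0.9,
             "power_efficiency": 0.6},
}

# Place the VM on the PM with the highest index.
best_pm = max(physical_machines,
              key=lambda name: machine_condition_index(physical_machines[name]))
print(best_pm, machine_condition_index(physical_machines[best_pm]))
```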


ITNOW ◽  
2021 ◽  
Vol 63 (4) ◽  
pp. 18-20
Author(s):  
John Booth

John Booth MBCS, Data Centre Energy Efficiency and Sustainability Consultant at Carbon3IT, explores the detrimental trajectory of data centre energy use against a backdrop of COP26, climate change and proposed EU directives.


10.29007/h27x ◽  
2019 ◽  
Author(s):  
Mohammed Alasmar ◽  
George Parisis

In this paper we present our work towards an evaluation platform for data centre transport protocols. We developed a simulation model for NDP, a modern data transport protocol for data centres, along with a FatTree network topology and per-packet ECMP load balancing. We also developed a data centre environment that can be used to evaluate and compare data transport protocols, such as NDP and TCP. We describe how we integrated our model with the INET Framework and present example simulations to showcase the workings of the developed framework. For that, we ran a comprehensive set of experiments and studied different components and parameters of the developed models.
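The paper's models are implemented within the INET/OMNeT++ framework (C++); the short standalone sketch below only illustrates the difference between per-packet and per-flow ECMP next-hop selection, with hypothetical uplink names:

```python
# Per-packet vs per-flow ECMP selection over equal-cost uplinks.
# Uplink names and the flow key are illustrative assumptions.
import itertools
import zlib

UPLINKS = ["core-0", "core-1", "core-2", "core-3"]  # equal-cost next hops

_rr = itertools.count()

def per_packet_ecmp():
    """Spray successive packets across all equal-cost uplinks (round robin)."""
    return UPLINKS[next(_rr) % len(UPLINKS)]

def per_flow_ecmp(src, dst, sport, dport):
    """Classic ECMP: hash the flow identifier so one flow sticks to one uplink."""
    key = f"{src}:{sport}-{dst}:{dport}".encode()
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

for _ in range(4):
    print(per_packet_ecmp(), per_flow_ecmp("10.0.0.1", "10.0.1.2", 4242, 80))
```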


Author(s):  
Aleksandra Kostic-Ljubisavljevic ◽  
Branka Mikavica

All vertically integrated participants in the content provisioning process are influenced by bandwidth requirements. Provisioning self-owned resources that satisfy peak bandwidth demand leads to network underutilization and is cost-ineffective, while under-provisioning leads to rejection of customers' requests. Vertically integrated providers therefore need to consider cloud migration in order to minimize costs and improve the Quality of Service and Quality of Experience of their customers. Cloud providers maintain large-scale data centres to offer storage and computational resources in the form of Virtual Machine instances, under different pricing plans: reservation, on-demand and spot pricing. To obtain an optimal integration charging strategy, Revenue Sharing, Cost Sharing and Wholesale Price models are frequently applied. The vertically integrated content provider's incentives for cloud migration can introduce significant complexity into integration contracts, and consequently improvements in costs and in the rate of rejected requests.
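For a rough feel of how the three pricing plans compare for a given expected demand, the sketch below contrasts hypothetical reservation, on-demand and spot costs for a single VM instance; all rates and the spot-interruption overhead are illustrative assumptions, not figures from the chapter:

```python
# Hypothetical monthly cost comparison of the three pricing plans.
def on_demand_cost(hours, rate=0.10):
    # Pay-as-you-go hourly rate, no commitment.
    return hours * rate

def reserved_cost(hours, upfront=40.0, rate=0.04):
    # Reservation: upfront payment plus a discounted hourly rate.
    return upfront + hours * rate

def spot_cost(hours, rate=0.03, interruption_overhead=1.10):
    # Spot: cheapest rate, with overhead for re-running interrupted work.
    return hours * rate * interruption_overhead

demand_hours = 500  # expected usage for the next month
for name, cost in [("on-demand", on_demand_cost(demand_hours)),
                   ("reserved", reserved_cost(demand_hours)),
                   ("spot", spot_cost(demand_hours))]:
    print(f"{name:10s} {cost:7.2f} $")
```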


2016 ◽  
pp. 709-732
Author(s):  
Rostyslav Zabolotnyi ◽  
Philipp Leitner ◽  
Schahram Dustdar

Cloud computing is gaining increasing attention from industry and research; however, there is a lack of advanced Cloud software development tools. While Platform as a Service (PaaS) provides a convenient software development platform, it often comes with limitations in application architecture and functionality and entails provider lock-in. The Infrastructure as a Service (IaaS) model may sound like a solution to these problems by enabling application development freedom; however, it necessitates operation at the lower level of virtual machines and snapshots. In this chapter, the authors present CloudScale: a low-overhead middleware framework that migrates Java applications seamlessly to the Cloud with minimal changes to the application code. They focus on the main ideas behind CloudScale and its influence on solving Cloud software development and deployment problems with minimal overhead and Cloud-awareness required from developers.


2017 ◽  
Vol 14 (4) ◽  
pp. 1-32 ◽  
Author(s):  
Shashank Gupta ◽  
B. B. Gupta

This article introduces a distributed intelligence network of fog computing nodes and cloud data centres that protects smart devices against XSS vulnerabilities in Online Social Networks (OSNs). The cloud data centres compute the features of the JavaScript code, inject them in the form of comments and save them in the script nodes of the Document Object Model (DOM) tree. The network of fog devices re-executes the feature computation and comment injection process on the HTTP response message and compares the resulting comments with those calculated in the cloud data centres. Any divergence observed signals the injection of XSS worms on the fog nodes located at the edge of the network. Such worms are mitigated by executing nested context-sensitive sanitization on the malicious JavaScript variables embedded in them. The prototype of the authors' work was developed in a Java development framework and installed on the virtual machines of cloud data centres (typically located at the core of the network) and on the fog device nodes (positioned at the edge of the network). Vulnerable OSN-based web applications were utilized to evaluate the XSS worm detection capability of the authors' framework, and the results revealed that it detects the injection of XSS worms with a high precision rate and low rates of false positives and false negatives.
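A simplified sketch of the comparison step described above: the cloud side embeds a feature digest of each script as a comment, and the fog node recomputes the digest over the script it actually received, flagging any mismatch. The feature set, digest format and function names are assumptions for illustration, not the article's implementation (which is in Java):

```python
# Cloud side annotates scripts with a feature digest; fog side re-verifies it.
import hashlib
import re

def script_features(js_source: str) -> str:
    """Derive a crude feature digest of a JavaScript snippet."""
    features = (
        js_source.count("eval("),
        js_source.count("document.cookie"),
        len(re.findall(r"<script", js_source, re.I)),
        len(js_source),
    )
    return hashlib.sha256(repr(features).encode()).hexdigest()

def annotate_at_cloud(js_source: str) -> str:
    """Cloud data centre: append the feature digest as a trailing comment."""
    return f"{js_source}\n/* features:{script_features(js_source)} */"

def verify_at_fog(annotated_js: str) -> bool:
    """Fog node: recompute the digest and compare with the embedded one."""
    match = re.search(r"/\* features:([0-9a-f]{64}) \*/\s*$", annotated_js)
    if not match:
        return False  # missing annotation is treated as suspicious
    body = annotated_js[:match.start()].rstrip("\n")
    return script_features(body) == match.group(1)

page_script = annotate_at_cloud("var x = 1;")
tampered = page_script.replace("var x = 1;", "eval(payload);")
print(verify_at_fog(page_script), verify_at_fog(tampered))  # True False
```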


Author(s):  
Mhafuzul Islam ◽  
Mizanur Rahman ◽  
Sakib Mahmud Khan ◽  
Mashrur Chowdhury ◽  
Lipika Deka

Connected vehicle (CV) application developers need a development platform to build, test, and debug real-world CV applications, such as safety, mobility, and environmental applications, in an edge-centric cyber-physical system (CPS). The objective of this paper is to develop and evaluate a scalable and secure CV application development platform (CVDeP) that enables application developers to build, test, and debug CV applications in real time while meeting the functional requirements of any CV application. The efficacy of CVDeP was evaluated using two types of CV applications (one safety and one mobility application), validated through field experiments at the South Carolina Connected Vehicle Testbed (SC-CVT). The analyses show that CVDeP satisfies the functional requirements with respect to latency and throughput of the selected CV applications while maintaining the scalability and security of the platform and applications.

