Mitigation of Insider Attacks through Multi-Cloud

Author(s):  
T Gunasekhar ◽  
K Thirupathi Rao ◽  
V Krishna Reddy ◽  
P Sai Kiran ◽  
B Thirumala Rao

A malicious insider can be an employee, a user, and/or a third-party business partner. In a cloud environment, clients may store sensitive organizational data in cloud data centers, and the cloud service provider should ensure the integrity, security, access control, and confidentiality of that stored data. Malicious insiders can steal sensitive data both in cloud storage and within organizations. Most organizations ignore insider attacks because they are harder to detect and mitigate, which makes this a major emerging problem in cloud data centers as well as in organizations. In this paper, we propose a method that ensures the security, integrity, access control, and confidentiality of cloud clients' sensitive data by employing multiple cloud service providers. The organization encrypts the sensitive data according to its security policies and procedures and stores the encrypted data in a trusted cloud. The keys used during the encryption process are themselves encrypted and stored in another cloud, so the organization retains only the keys for the keys of the encrypted data. The organization's administrator does not know what data is kept in the cloud, and if he accesses it, he is easily caught during auditing. Hence, only authorized users can access and use the data, and insider attacks can be mitigated by granting restricted privileges.
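The key-separation idea can be illustrated with a short sketch. This is a minimal illustration only, assuming the third-party Python `cryptography` package; the two dictionaries stand in for the trusted data cloud and the separate key cloud, and it is not the authors' exact protocol.

```python
# Minimal sketch of the split-storage idea described above (not the authors' exact
# protocol). Assumes the third-party "cryptography" package; the two dictionaries
# stand in for the trusted data cloud and the separate key cloud.
from cryptography.fernet import Fernet

trusted_cloud = {}   # cloud A: holds only encrypted data
key_cloud = {}       # cloud B: holds only the encrypted data key

def protect(name: str, plaintext: bytes) -> bytes:
    data_key = Fernet.generate_key()                        # encrypts the data itself
    trusted_cloud[name] = Fernet(data_key).encrypt(plaintext)

    key_wrapping_key = Fernet.generate_key()                # the "key for keys"
    key_cloud[name] = Fernet(key_wrapping_key).encrypt(data_key)
    return key_wrapping_key                                 # the only secret kept on-premise

def recover(name: str, key_wrapping_key: bytes) -> bytes:
    data_key = Fernet(key_wrapping_key).decrypt(key_cloud[name])
    return Fernet(data_key).decrypt(trusted_cloud[name])

kek = protect("payroll", b"sensitive record")
assert recover("payroll", kek) == b"sensitive record"
```

Because neither cloud alone holds both the ciphertext and the usable data key, an insider at either provider (or an administrator without the wrapping key) cannot recover the plaintext, which is the property the paper relies on.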

Author(s):  
Deepika T. ◽  
Prakash P.

The flourishing development of the cloud computing paradigm provides several services to the industrial business world. Power consumption by cloud data centers is one of the crucial issues for service providers in the cloud computing domain. Given the rapid technology enhancements in cloud environments and the expansion of data centers, power utilization in data centers is expected to grow unabated. A diverse set of connected devices, engaged with the ubiquitous cloud, results in unprecedented power utilization by data centers, accompanied by increased carbon footprints. Nearly a million physical machines (PMs) are running across data centers, along with 5-6 million virtual machines (VMs). In the next five years, the power needs of this domain are expected to spiral up to 5% of global power production. Reducing VM power consumption helps diminish PM power consumption, but data center power consumption still changes from year to year, so prediction methods can aid cloud vendors. Sudden fluctuations in power utilization can cause power outages in cloud data centers. This paper aims to forecast VM power consumption with the help of regressive predictive analysis, one of the Machine Learning (ML) techniques. The approach makes better predictions of future values using a Multi-layer Perceptron (MLP) regressor, which provides 91% accuracy during the prediction process.
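The abstract does not describe the implementation, but the regression step can be sketched with scikit-learn's MLPRegressor on synthetic VM telemetry; the feature choice and the toy linear power model below are assumptions made only for illustration, not the paper's data or architecture.

```python
# Illustrative sketch only: an MLP regressor trained on synthetic VM telemetry to
# forecast power draw. The features (CPU, memory, network utilisation) and the
# linear power model used to fabricate targets are assumptions, not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5000, 3))                   # [cpu, mem, net] utilisation
watts = 90 + 160 * X[:, 0] + 25 * X[:, 1] + 10 * X[:, 2]    # toy power model
watts += rng.normal(0, 5, size=watts.shape)                 # measurement noise

X_train, X_test, y_train, y_test = train_test_split(X, watts, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", round(r2_score(y_test, model.predict(X_test)), 3))
```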


2016 ◽  
Vol 25 ◽  
pp. 310-317 ◽  
Author(s):  
V. Tresa Mary George ◽  
S. Shamna ◽  
Jubilant J. Kizhakkethottam

2018 ◽  
Vol 56 (2) ◽  
pp. 118-126 ◽  
Author(s):  
Rajat Chaudhary ◽  
Gagangeet Singh Aujla ◽  
Neeraj Kumar ◽  
Joel J.P.C. Rodrigues

Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 218 ◽  
Author(s):  
Aisha Fatima ◽  
Nadeem Javaid ◽  
Ayesha Anjum Butt ◽  
Tanzeela Sultana ◽  
Waqar Hussain ◽  
...  

Cloud computing offers various services. Numerous cloud data centers are used to provide these services to users across the world. A cloud data center houses physical machines (PMs). Millions of virtual machines (VMs) are used to minimize the utilization rate of PMs. There is a chance of an unbalanced network due to the rapid growth of Internet services, so an intelligent mechanism is required to balance the network efficiently. Multiple techniques are used to solve the aforementioned issues optimally. VM placement is a great challenge for cloud service providers in fulfilling user requirements. In this paper, an enhanced levy-based multi-objective gray wolf optimization (LMOGWO) algorithm is proposed to solve the VM placement problem efficiently. An archive is used to store and retrieve the true Pareto front, a grid mechanism is used to improve the non-dominated VMs in the archive, and a further mechanism handles maintenance of the archive. The proposed algorithm mimics the leadership and hunting behavior of gray wolves (GWs) in a multi-objective search space. The proposed algorithm was tested on nine well-known bi-objective and tri-objective benchmark functions to verify the compatibility of the work done. LMOGWO was then compared with simple multi-objective gray wolf optimization (MOGWO) and multi-objective particle swarm optimization (MOPSO). Two scenarios were considered in the simulations to check the adaptivity of the proposed algorithm. The proposed LMOGWO outperformed MOGWO and MOPSO on University of Florida 1 (UF1), UF5, UF7 and UF8 for Scenario 1; however, MOGWO and MOPSO performed better than LMOGWO on UF2. For Scenario 2, LMOGWO outperformed the other two algorithms on UF5, UF8 and UF9, while MOGWO performed well on UF2 and UF4, and the results of MOPSO were also better than the proposed algorithm on UF4. Moreover, the PM utilization rate (%) was minimized by 30% with LMOGWO, 11% with MOGWO and 10% with MOPSO.
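Two of the building blocks named in the abstract, the Levy-flight perturbation and the non-dominated archive, can be sketched as follows. This is not the full LMOGWO algorithm; the step-size parameter and Mantegna's method for generating Levy-distributed steps are assumptions for illustration.

```python
# Sketch of two building blocks named in the abstract, not the full LMOGWO algorithm:
# a Levy-flight step (Mantegna's method) for perturbing wolf positions, and the
# dominance test used when deciding whether a solution enters the archive.
import math
import numpy as np

def levy_step(dim: int, beta: float = 1.5, rng=np.random.default_rng()) -> np.ndarray:
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)          # heavy-tailed step vector

def dominates(a, b) -> bool:
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only mutually non-dominated objective vectors."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]
```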


Author(s):  
Ayad I. Abdulsada ◽  
Dhafer G. Honi ◽  
Salah Al-Darraji

Many organizations and individuals are attracted to outsourcing their data to remote cloud service providers. To ensure privacy, sensitive data should be encrypted before being hosted. However, encryption disables the direct application of essential data management operations such as searching and indexing. Searchable encryption is a cryptographic tool that gives users the ability to search data while it remains encrypted. However, existing schemes either offer a single exact-match search, which loses the ability to handle misspelled keywords, or a multi-keyword search, which generates very long trapdoors. In this paper, we address the problem of designing a practical multi-keyword similarity scheme that provides short trapdoors and returns the correct results ranked by their similarity scores. To do so, each document is translated into a compressed trapdoor. Trapdoors are generated using key-based hash functions to ensure their privacy, so only authorized users can issue valid trapdoors. The similarity score of two textual documents is evaluated by computing the Hamming distance between their corresponding trapdoors. A robust security definition is provided together with its proof. Our experimental results illustrate that the proposed scheme improves search efficiency compared to existing schemes. Furthermore, it shows a high level of performance.
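The comparison step can be illustrated with a small sketch: each document is reduced to a short, key-dependent bit vector and similarity is scored by Hamming distance. The Bloom-filter-style encoding below is an assumption for demonstration, not the paper's exact trapdoor construction.

```python
# Illustration of the core comparison step only: each document is reduced to a short,
# key-dependent bit-vector "trapdoor", and similarity is scored by Hamming distance.
# The Bloom-filter-style encoding is an assumption for demonstration, not the paper's
# exact construction.
import hashlib
import hmac

TRAPDOOR_BITS = 256

def trapdoor(keywords, secret_key: bytes) -> int:
    bits = 0
    for word in keywords:
        digest = hmac.new(secret_key, word.lower().encode(), hashlib.sha256).digest()
        for i in range(3):                                    # set 3 positions per keyword
            position = int.from_bytes(digest[4 * i:4 * i + 4], "big") % TRAPDOOR_BITS
            bits |= 1 << position
    return bits

def hamming(a: int, b: int) -> int:
    return (a ^ b).bit_count()                                # Python 3.10+

key = b"shared-secret"
d1 = trapdoor(["cloud", "insider", "encryption"], key)
d2 = trapdoor(["cloud", "insider", "attack"], key)
print("Hamming distance:", hamming(d1, d2))                   # smaller = more similar
```

Because the hash is keyed, a server without the secret cannot forge or invert trapdoors, yet it can still rank stored documents by their distance to a query trapdoor.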


2022 ◽  
Vol 25 (1) ◽  
pp. 1-37
Author(s):  
Stefano Berlato ◽  
Roberto Carbone ◽  
Adam J. Lee ◽  
Silvio Ranise

To facilitate the adoption of the cloud by organizations, Cryptographic Access Control (CAC) is the obvious solution to control data sharing among users while preventing partially trusted Cloud Service Providers (CSPs) from accessing sensitive data. Indeed, several CAC schemes have been proposed in the literature. Despite their differences, available solutions are based on a common set of entities (e.g., a data storage service or a proxy mediating the access of users to encrypted data) that operate in different (security) domains (e.g., on-premise or the CSP). However, the majority of these CAC schemes assume a fixed assignment of entities to domains; this has security and usability implications that are not made explicit and can make a CAC scheme inappropriate for scenarios with specific trust assumptions and requirements. For instance, assuming that the proxy runs at the premises of the organization avoids the vendor lock-in effect but may give rise to other security concerns (e.g., malicious insider attackers). To the best of our knowledge, no previous work considers how to select the best possible architecture (i.e., the assignment of entities to domains) for deploying a CAC scheme under the trust assumptions and requirements of a given scenario. In this article, we propose a methodology to assist administrators in exploring different architectures for the enforcement of CAC schemes in a given scenario. We do this by identifying the possible architectures underlying the CAC schemes available in the literature and formalizing them in simple set theory. This allows us to reduce the problem of selecting the most suitable architectures satisfying a heterogeneous set of trust assumptions and requirements arising from the considered scenario to a decidable Multi-objective Combinatorial Optimization Problem (MOCOP) for which state-of-the-art solvers can be invoked. Finally, we show how we use the capability of solving the MOCOP to build a prototype tool that assists administrators in preliminarily performing a "What-if" analysis to explore the trade-offs among the various architectures and then using available standards and tools (such as TOSCA and Cloudify) for automated deployment in multiple CSPs.
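A toy version of the selection problem helps to see the shape of the MOCOP: enumerate assignments of entities to domains and keep the Pareto-optimal architectures under two competing objectives. The entity names, domains, and scoring weights below are illustrative assumptions, not the article's formalization, and a realistic instance would be handed to a multi-objective solver rather than brute-forced.

```python
# Toy version of the selection problem: enumerate assignments of entities to domains
# and keep the Pareto-optimal architectures under two hypothetical objectives
# ("exposure to the CSP" vs. "on-premise operational cost"). Entity names, domains
# and weights are illustrative assumptions, not the article's formalisation.
from itertools import product

ENTITIES = ["proxy", "metadata_storage", "data_storage"]
DOMAINS = ["on-premise", "CSP"]
EXPOSURE_WEIGHT = {"proxy": 3, "metadata_storage": 2, "data_storage": 1}
ON_PREM_COST = {"proxy": 1, "metadata_storage": 2, "data_storage": 3}

def objectives(assignment):
    exposure = sum(EXPOSURE_WEIGHT[e] for e, d in assignment.items() if d == "CSP")
    cost = sum(ON_PREM_COST[e] for e, d in assignment.items() if d == "on-premise")
    return (exposure, cost)                    # both minimised; they pull in opposite directions

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = [dict(zip(ENTITIES, combo)) for combo in product(DOMAINS, repeat=len(ENTITIES))]
scored = [(objectives(c), c) for c in candidates]
pareto = [(s, c) for s, c in scored
          if not any(dominates(other, s) for other, _ in scored if other != s)]
for score, arch in pareto:
    print(score, arch)                         # trade-off frontier of architectures
```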


2022 ◽  
Author(s):  
Tahereh Abbasi-khazaei ◽  
Mohammad Hossein Rezvani

One of the most important concerns of cloud service providers is balancing renewable and fossil energy consumption. On the other hand, the policy of organizations and governments is to reduce energy consumption and greenhouse gas emissions in cloud data centers. Recently, much research has been conducted on optimizing Virtual Machine (VM) placement on physical machines to minimize energy consumption. Many previous studies have not considered the deadlines and scheduling of IoT tasks, so earlier models are largely unsuited to IoT environments where requests are time-constrained. Unfortunately, both sub-problems, energy consumption minimization and scheduling, fall into the category of NP-hard issues. In this study, we propose a multi-objective VM placement approach that jointly minimizes energy costs and handles scheduling. After presenting a modified memetic algorithm, we compare its performance with baseline methods as well as state-of-the-art ones. Simulation results on the CloudSim platform show that the proposed method can reduce energy costs, carbon footprints, SLA violations, and the total response time of IoT requests.
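A memetic algorithm combines population-based (genetic) search with a local-search refinement of each offspring. The compact skeleton below, which minimizes the number of active PMs as a stand-in for energy cost, is a simplification of the general technique under assumed loads and capacities; it omits the deadlines, SLA, and carbon terms and is not the authors' modified algorithm.

```python
# Compact memetic-algorithm skeleton (genetic search plus a local-search step) for
# VM placement, minimising the number of active PMs as a stand-in for energy cost.
# Loads and capacities are assumed; deadlines, SLA and carbon terms are omitted.
import random

VM_LOAD = [0.3, 0.5, 0.2, 0.7, 0.4, 0.6, 0.1, 0.8]   # CPU demand per VM
PM_CAPACITY, N_PMS = 1.0, len(VM_LOAD)

def cost(placement):
    """Active-PM count, with a heavy penalty for overloaded hosts."""
    used = {}
    for vm, pm in enumerate(placement):
        used[pm] = used.get(pm, 0.0) + VM_LOAD[vm]
    overload = sum(max(0.0, load - PM_CAPACITY) for load in used.values())
    return len(used) + 100 * overload

def local_search(placement):
    """Memetic step: greedily try to move each VM onto an already-used PM."""
    best = list(placement)
    for vm in range(len(best)):
        for pm in set(best):
            trial = list(best)
            trial[vm] = pm
            if cost(trial) < cost(best):
                best = trial
    return best

def memetic(generations=50, pop_size=20):
    pop = [[random.randrange(N_PMS) for _ in VM_LOAD] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]                  # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(VM_LOAD))
            child = a[:cut] + b[cut:]                   # one-point crossover
            child[random.randrange(len(child))] = random.randrange(N_PMS)  # mutation
            children.append(local_search(child))        # refine each offspring
        pop = parents + children
    return min(pop, key=cost)

best = memetic()
print("placement:", best, "active PMs:", len(set(best)))
```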


2020 ◽  
Vol 32 (3) ◽  
pp. 23-36
Author(s):  
Kanniga Devi R. ◽  
Murugaboopathi Gurusamy ◽  
Vijayakumar P.

A cloud data center is a network of virtualized resources, namely virtualized servers. They provision on-demand services, ranging from virtual machines to virtualized storage and virtualized networks, to the sources of requests. Cloud data center service requests can come from different sources across the world. To enhance Quality of Service (QoS), otherwise captured by a service level agreement (SLA), an agreement between the cloud service requester and the cloud service provider on QoS, it is desirable to allocate the cloud data center closest to the source of requests. This article models a cloud data center network as a graph and proposes a modified Breadth First Search algorithm in which sources of requests are assigned to cloud data centers based on a cost threshold that limits the distance between them. Limiting the distance between cloud data centers and the sources of requests leads to faster service provisioning. The proposed algorithm is tested on various graph instances and is compared with modified Voronoi and modified graph-based K-Means algorithms, which assign sources of requests to cloud data centers without limiting the distance between them. The proposed algorithm outperforms the other two algorithms in terms of the average time taken to allocate a cloud data center to a source of requests, average cost, and load distribution.
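The assignment idea can be sketched as a breadth-first search from each data-center node that stops expanding once the accumulated hop cost reaches a threshold, so every request source ends up tied to a data center within that distance. The sample topology, unit edge costs, and threshold value below are illustrative assumptions, not the article's test instances.

```python
# Sketch of threshold-limited assignment via BFS: each data centre explores the graph
# up to a hop-cost threshold, and request sources keep the closest data centre found.
# The sample topology, unit edge costs and threshold are illustrative assumptions.
from collections import deque

def assign_sources(graph, data_centers, cost_threshold):
    """graph: adjacency dict {node: [neighbours]}; returns {source: (dc, hops)}."""
    assignment = {}
    for dc in data_centers:
        frontier, seen = deque([(dc, 0)]), {dc}
        while frontier:
            node, hops = frontier.popleft()
            if hops >= cost_threshold:
                continue                                    # threshold limits the search depth
            for nxt in graph[node]:
                if nxt in seen:
                    continue
                seen.add(nxt)
                if nxt not in data_centers:
                    best = assignment.get(nxt)
                    if best is None or hops + 1 < best[1]:
                        assignment[nxt] = (dc, hops + 1)    # keep the closer data centre
                frontier.append((nxt, hops + 1))
    return assignment

topology = {"dc1": ["a", "b"], "dc2": ["c"], "a": ["dc1", "c"],
            "b": ["dc1"], "c": ["a", "dc2", "d"], "d": ["c"]}
print(assign_sources(topology, {"dc1", "dc2"}, cost_threshold=2))
```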

