cloud provider
Recently Published Documents


TOTAL DOCUMENTS: 272 (five years: 148)

H-INDEX: 10 (five years: 4)

2022 ◽  
Vol 14 (1) ◽  
pp. 0-0

Cloud computing enables on-demand access to a shared public resource pool. Many businesses are migrating to the cloud because of its popularity and financial benefits, so selecting the most suitable Cloud Service Provider (CSP) has become a difficult task for cloud users. Several ranking methods, such as ANP, AHP and TOPSIS, have been proposed in the literature. However, most of these studies concentrate on quantitative data, whereas qualitative attributes are equally significant in applications where the user is more concerned with qualitative features. The aim of this article is to implement an MCDM approach for ranking and selecting the best provider in the market according to the qualitative needs of cloud users such as business organizations or cloud brokers. The ISO-approved SMI framework is available for the evaluation of CSPs, and the authors consider SMI attributes such as accountability and security as the evaluation criteria. The MCDM approach IVF-TOPSIS, which can handle the inherent vagueness in the cloud dataset, is implemented in this work.
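
As a point of reference, the sketch below runs the classic crisp TOPSIS procedure on a small, made-up decision matrix; the providers, criteria scores, and weights are purely hypothetical, and the interval-valued fuzzy extension (IVF-TOPSIS) used in the article is not reproduced here.

```python
# Minimal sketch of classic (crisp) TOPSIS ranking of cloud providers.
# Illustrative only: the paper uses an interval-valued fuzzy variant,
# and the providers, scores and weights below are purely hypothetical.
import numpy as np

# Rows: candidate CSPs, columns: SMI-style criteria (e.g. accountability, security)
scores = np.array([
    [7.0, 8.0],   # provider A
    [9.0, 6.0],   # provider B
    [6.0, 9.0],   # provider C
])
weights = np.array([0.4, 0.6])        # assumed criteria weights (sum to 1)

# 1. Vector-normalize the decision matrix and apply the weights
norm = scores / np.linalg.norm(scores, axis=0)
weighted = norm * weights

# 2. Ideal and anti-ideal solutions (both criteria treated as benefits)
ideal = weighted.max(axis=0)
anti_ideal = weighted.min(axis=0)

# 3. Closeness coefficient: distance to anti-ideal / total distance
d_pos = np.linalg.norm(weighted - ideal, axis=1)
d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)
closeness = d_neg / (d_pos + d_neg)

ranking = np.argsort(-closeness)      # best provider first
print("ranking (best first):", ranking, "closeness:", np.round(closeness, 3))
```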


Every cloud provider wishes to provide 99.9999% availability for the systems it provisions and operates for the customer; whether the model is SaaS, PaaS or IaaS, the availability of the system must be greater than 99.9999%. It therefore becomes vital for the provider to monitor the systems and take proactive measures to reduce downtime. In an ideal scenario, the support colleagues (24/7 technical support) should be aware of ongoing issues in the production systems before they are raised as incidents by the customer, but currently there is no effective alert-monitoring solution for this. The solution proposed in this paper is a central alert-monitoring tool for all cloud solutions offered by the cloud provider. The tool constantly observes the time-series database that contains metric values populated by HA and compares the incoming metric values with defined thresholds. When a metric value exceeds its threshold, the monitoring tool uses machine-learning techniques to decide on and take actions.
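
A minimal sketch of the threshold-checking loop such a central monitoring tool might run is given below; the metric names, thresholds, and the `fetch_latest_metrics` stub are assumptions standing in for the provider's time-series database and ML-based decision step.

```python
# Minimal sketch of the threshold-checking loop described above.
# The metric names, thresholds and fetch function are hypothetical; a real
# deployment would query the provider's time-series database and hand
# breaches to an ML-based decision step instead of printing an alert.
import time
from collections import deque

THRESHOLDS = {"cpu_util_pct": 90.0, "disk_latency_ms": 50.0}   # assumed values

def fetch_latest_metrics():
    """Placeholder for a query against the time-series database."""
    return {"cpu_util_pct": 93.2, "disk_latency_ms": 12.4}      # fake sample

def monitor(poll_seconds=60, history_len=10):
    history = {name: deque(maxlen=history_len) for name in THRESHOLDS}
    while True:
        metrics = fetch_latest_metrics()
        for name, value in metrics.items():
            history[name].append(value)
            if value > THRESHOLDS[name]:
                # In the paper's design an ML model would decide the action;
                # here we simply raise an alert for the support team.
                print(f"ALERT: {name}={value} exceeds {THRESHOLDS[name]}")
        time.sleep(poll_seconds)

# monitor()  # run the polling loop (blocking)
```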


Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 84
Author(s):  
Andrei Tchernykh ◽  
Mikhail Babenko ◽  
Arutyun Avetisyan ◽  
Alexander Yu. Drozdov

Storage-as-a-service offers cost savings, convenience, mobility, scalability, redundant locations with a backup solution, on-demand with just-in-time capacity, syncing and updating, etc. While this type of cloud service has opened many opportunities, there are important considerations. When one uses a cloud provider, their data are no longer on their controllable local storage. Thus, there are the risks of compromised confidentiality and integrity, lack of availability, and technical failures that are difficult to predict in advance. The contribution of this paper can be summarized as follows: (1) We propose a novel mechanism, En-AR-PRNS, for improving reliability in the configurable, scalable, reliable, and secure distribution of data storage that can be incorporated along with storage-as-a-service applications. (2) We introduce a new error correction method based on the entropy (En) paradigm to correct hardware and software malfunctions, integrity violation, malicious intrusions, unexpected and unauthorized data modifications, etc., applying a polynomial residue number system (PRNS). (3) We use the concept of an approximation of the rank (AR) of a polynomial to reduce the computational complexity of the decoding. En-AR-PRNS combines a secret sharing scheme and error correction codes with an improved multiple failure detection/recovery mechanism. (4) We provide a theoretical analysis supporting the dynamic storage configuration to deal with varied user preferences and storage properties to ensure high-quality solutions in a non-stationary environment. (5) We discuss approaches to efficiently exploit parallel processing for security and reliability optimization. (6) We demonstrate that the reliability of En-AR-PRNS is up to 6.2 times higher than that of the classic PRNS.
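
For intuition only, the following toy sketch shows the residue-number-system idea over integers: a value is split into residues modulo pairwise-coprime moduli (one share per storage location) and reconstructed with the Chinese Remainder Theorem. The moduli are arbitrary, and the polynomial arithmetic, secret sharing, and error correction of En-AR-PRNS are not modelled.

```python
# Simplified integer analogue of the residue-number-system idea behind PRNS:
# split a value into residues modulo coprime moduli (the "shares" held by
# different cloud providers) and reconstruct it with the CRT. The real
# En-AR-PRNS scheme works over polynomials and adds error correction.
from math import prod

MODULI = [7, 11, 13, 17]            # assumed pairwise-coprime moduli

def encode(value):
    """Split a value into residues, one per storage location."""
    return [value % m for m in MODULI]

def decode(residues):
    """Reconstruct the value from all residues via the CRT (Python 3.8+)."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

secret = 12345
shares = encode(secret)
assert decode(shares) == secret
print("shares:", shares, "-> reconstructed:", decode(shares))
```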


2021 ◽  
Vol 4 (4) ◽  
pp. 366-376
Author(s):  
Oleg N. Galchonkov ◽  
Mykola I. Babych ◽  
Andrey V. Plachinda ◽  
Anastasia R. Majorova

The transition of more and more companies from their own computing infrastructure to the cloud is driven by lower maintenance costs, broad scalability, and a large number of tools for automating activities. Accordingly, cloud providers offer a growing number of different computing resources and tools for working in the cloud. In turn, this gives rise to the problem of rationally choosing the types of cloud services that suit the tasks to be solved. One of the most popular directions of effort for cloud consumers is reducing rental costs, and its main basis is the use of spot resources. The article proposes a method for reducing the cost of renting computing resources in the cloud by dynamically managing the placement of computational tasks, taking into account possible underutilization of planned resources and the forecast of the appearance of spot resources and their cost. For each task, a state vector is generated that captures the duration of the task and the required deadline. Accordingly, for a suitable set of computing resources, availability forecast vectors are formed over a given time interval, counting from the current moment. At each discrete moment of time, the technique computes the most rational option for placing the task on one of the resources and the delay before starting the task on it. The placement option and launch delays are determined by minimizing the rental-cost function over the time interval using a genetic algorithm. One feature of spot resources is the auction mechanism by which the cloud provider offers them: if another consumer offers a more attractive rental price, the provider can warn the current consumer that the resource will be disconnected and perform the disconnection after the announced time. To minimize the consequences of such a shutdown, the technique involves preliminary preparation of tasks by dividing them into sub-stages with the ability to quickly save intermediate results and restart from the point of interruption. In addition, to increase the likelihood that a task will not be interrupted, a price forecast for the resource types in use is made, and a bid slightly above the forecast is offered in the cloud provider's auction. Using the Amazon Elastic Compute Cloud (EC2) environment of the cloud provider AWS as an example, the effectiveness of the proposed method is shown.
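
Two of the ideas above, bidding slightly above the forecast spot price and splitting a task into checkpointable sub-stages, are illustrated in the sketch below; the margin, prices, and stage functions are invented and do not reproduce the genetic-algorithm optimization from the article.

```python
# Toy sketch of two ideas from the abstract: (1) bid slightly above the
# forecast spot price to lower the chance of interruption, and (2) split a
# task into sub-stages that checkpoint their results so a preempted task can
# restart from the last completed stage. All names, margins and prices are
# hypothetical; a real system would use the provider's spot-price history.

BID_MARGIN = 1.05   # assumed: bid 5% above the forecast price

def choose_bid(forecast_price: float) -> float:
    """Offer a bid slightly above the forecasted spot price."""
    return round(forecast_price * BID_MARGIN, 4)

def run_with_checkpoints(stages, checkpoint):
    """Run sub-stages in order, skipping those already checkpointed."""
    state = checkpoint.get("state")
    for i, stage in enumerate(stages):
        if i < checkpoint.get("next_stage", 0):
            continue                      # already completed before preemption
        state = stage(state)              # compute this sub-stage
        checkpoint.update(next_stage=i + 1, state=state)   # persist progress
    return state

# Example: a task split into three sub-stages, resumed after an interruption.
stages = [lambda s: 1, lambda s: s + 10, lambda s: s * 2]
ckpt = {"next_stage": 1, "state": 1}      # stage 0 finished before preemption
print("bid:", choose_bid(0.0213), "result:", run_with_checkpoints(stages, ckpt))
```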


Author(s):  
Er. Krishan Kumar ◽  
Shipra

This research revolves around understanding the cloud storage services offered by the world's most famous cloud provider, Amazon Web Services (AWS). We cover the major cloud storage services EBS, S3 and EFS, but first let us understand more about AWS. These storage services should be chosen on a per-project basis, keeping in mind the key benefits of each. Amazon EBS provides high-performance block-level storage volumes for Amazon Elastic Compute Cloud (EC2) instances, and the data on a volume persists after the EC2 instance is stopped. Amazon EFS provides elastic file storage, also designed for EC2, which can serve as a common data source for applications and workloads; mount settings can be configured for each file system. The main difference between EBS and EFS is that an EBS volume is attached to a single EC2 instance in a specific Availability Zone, while an EFS file system can be mounted by multiple instances, even across Availability Zones.
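
A minimal boto3 sketch of the three services is shown below; the region, instance ID, bucket name, and sizes are placeholders, and the calls assume valid AWS credentials and existing resources.

```python
# Minimal boto3 sketch contrasting the three storage services mentioned above.
# The region, instance ID, bucket name and sizes are placeholders; credentials
# and the referenced resources must already exist for these calls to succeed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")
efs = boto3.client("efs", region_name="us-east-1")

# EBS: a block-level volume attached to a single EC2 instance in one AZ.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp3")
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",     # hypothetical instance
                  Device="/dev/sdf")

# S3: object storage, accessed over HTTPS rather than mounted as a disk.
s3.upload_file("report.csv", "my-example-bucket", "reports/report.csv")

# EFS: a shared file system; after creation it is mounted (e.g. via NFS)
# by any number of EC2 instances, which is the key difference from EBS.
fs = efs.create_file_system(CreationToken="example-token",
                            PerformanceMode="generalPurpose")
print("EBS volume:", volume["VolumeId"], "EFS file system:", fs["FileSystemId"])
```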


Author(s):  
Aleksandar Tošic ◽  
Jernej Vičič

To anonymize internet traffic, many popular protocols route traffic through a network of nodes in order to conceal information about the request. However, routing traffic through other nodes inherently introduces added latency. Over the past two decades, there have been many attempts to improve path selection in order to decrease latency with little or no trade-off in terms of security and anonymity. In this paper, we show the potential use of geo-sharding in decentralized routing networks to improve fault tolerance and latency. Such networks can be used as a communication layer for Edge devices computing huge amounts of data. Specifically, we focus our work on the Low Latency Anonymous Routing Protocol (LLARP), a protocol built on top of the Oxen blockchain that aims to achieve internet privacy. We analyse the existing network of Service Nodes (SNs), observe cloud-provider centralisation, and propose a high-level protocol that provides incentives for a better geographical distribution, mitigating potential cloud-provider or country-wide service dropouts. Additionally, the protocol-level information about geographical location can be used to improve the client's selection of a path (the string of nodes that will participate in the transaction), decreasing network latency. We show the feasibility of our approach by comparing it with random path selection in a simulated environment, and we observe marginal drops in average latency when selecting paths whose nodes are geographically closer to each other.
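
The contrast between random and geography-aware path selection can be sketched as follows; the node coordinates are invented, great-circle distance is used as a latency proxy, and LLARP's actual path-building constraints are not modelled.

```python
# Sketch of geography-aware path selection versus random selection: pick relay
# nodes that are pairwise close by great-circle distance (a crude latency proxy).
import math
import random
from itertools import combinations

NODES = {  # hypothetical service nodes: (latitude, longitude)
    "frankfurt": (50.11, 8.68), "paris": (48.86, 2.35), "london": (51.51, -0.13),
    "tokyo": (35.68, 139.69), "sydney": (-33.87, 151.21), "toronto": (43.65, -79.38),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def path_length(path):
    return sum(haversine_km(NODES[x], NODES[y]) for x, y in combinations(path, 2))

def random_path(k=3):
    return random.sample(list(NODES), k)

def geo_path(k=3):
    """Choose the k-node combination with the smallest pairwise distance."""
    return min(combinations(NODES, k), key=path_length)

print("random:", random_path(), "geo-aware:", geo_path())
```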


Author(s):  
Adrián Bernal ◽  
M. Emilia Cambronero ◽  
Alberto Núñez ◽  
Pablo C. Cañizares ◽  
Valentín Valero

Abstract In this paper, we investigate how to improve profits in cloud infrastructures by using price schemes and analyzing the user interactions with the cloud provider. For this purpose, we consider two different types of client behavior, namely regular and high-priority users. Regular users do not require a continuous service, and they can wait to be attended to. In contrast, high-priority users require a continuous service, e.g., a 24/7 service, and usually need an immediate answer to any request. A complete framework has been implemented, which includes a UML profile that allows us to define specific cloud scenarios and the automatic transformations to produce the code for the cloud simulations in the Simcan2Cloud simulator. The engine of Simcan2Cloud has also been modified by adding specific SLAs and price schemes. Finally, we present a thorough experimental study to analyze the performance results obtained from the simulations, making it possible to draw conclusions about how to improve the profit of the cloud under study by adjusting the different parameters and resource configuration.
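
As a rough illustration of the two client classes, the toy calculation below estimates monthly profit under an assumed price scheme; all prices, counts, and the cost model are invented, and the article's analysis relies on Simcan2Cloud simulations rather than such a closed-form estimate.

```python
# Toy sketch of the pricing idea described above: estimate provider profit
# for two client classes, regular users (can wait, pay per hour served) and
# high-priority users (24/7 reservation at a premium). All numbers are
# invented for illustration.

PRICES = {
    "regular": 0.05,        # assumed $/VM-hour, billed only while served
    "high_priority": 0.12,  # assumed $/VM-hour, reserved around the clock
}
COST_PER_VM_HOUR = 0.03     # assumed infrastructure cost

def profit(regular_vm_hours, high_priority_vms, hours=24 * 30):
    """Monthly profit: on-demand revenue plus reserved revenue minus cost."""
    revenue = (PRICES["regular"] * regular_vm_hours
               + PRICES["high_priority"] * high_priority_vms * hours)
    used_vm_hours = regular_vm_hours + high_priority_vms * hours
    return revenue - COST_PER_VM_HOUR * used_vm_hours

# Example: 10,000 regular VM-hours and 5 always-on high-priority VMs per month.
print(f"estimated monthly profit: ${profit(10_000, 5):,.2f}")
```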


2021 ◽  
Vol 2089 (1) ◽  
pp. 012074
Author(s):  
Shaik Sumi Anju ◽  
BSN Sravani ◽  
Srinivasa Rao Madala

Abstract As data processing advances, decentralized storage has been widely adopted for its ability to hold large amounts of data. By comparing an outsourced document against the dispersed repository, a cloud provider can verify the document's integrity without having to retrieve it. Given the significant computing cost of the checking process, outsourced auditing schemes have been proposed that let the customer delegate the heavy verification task to a third-party auditor (TPA). The first such schemes may deter a dishonest TPA, but they also give the auditing organization inspection rights over the users' outsourced data, which poses a significant risk to user privacy. This work presents a user-centric approach to outsourced auditing, which emphasizes that the service user retains control over her own data. Based on a user-centered design, the suggested methodology not only prevents the user's data from leaking to the TPA without depending on additional cryptographic algorithms, but also avoids the use of extra unpredictable randomness that is impractical to supply on a regular basis. We also extend our approach to support dynamic data changes. The privacy analysis and experimental evaluations show that the recommended scheme is both provably secure and practically efficient.
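
For background, the sketch below shows a bare-bones challenge-response integrity check in which the owner precomputes secret nonce/digest pairs per block; it is not the privacy-preserving, TPA-based scheme of the paper and supports only a limited number of audits over static data.

```python
# Highly simplified illustration of remote integrity auditing: the data owner
# precomputes secret (nonce, digest) pairs per block, then challenges the
# storage server to recompute a digest it can only produce if it still holds
# the block. Real protocols use homomorphic authenticators so a third-party
# auditor never sees the data and proofs stay compact; this is only the idea.
import hashlib
import os
import random

def precompute_challenges(blocks, n_rounds=3):
    """Owner: for each audit round keep one secret (nonce, digest) pair per block."""
    table = []
    for _ in range(n_rounds):
        round_pairs = []
        for block in blocks:
            nonce = os.urandom(16)
            round_pairs.append((nonce, hashlib.sha256(nonce + block).hexdigest()))
        table.append(round_pairs)
    return table

def server_respond(blocks, index, nonce):
    """Server: can only answer correctly if it still stores the block."""
    return hashlib.sha256(nonce + blocks[index]).hexdigest()

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]   # outsourced data
audit_table = precompute_challenges(blocks)                  # kept by the owner

round_no, idx = 0, random.randrange(len(blocks))
nonce, expected = audit_table[round_no][idx]
print("block intact:", server_respond(blocks, idx, nonce) == expected)
```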


2021 ◽  
Author(s):  
Sebastian Perez-Salazar ◽  
Ishai Menache ◽  
Mohit Singh ◽  
Alejandro Toriello

Motivated by maximizing the use of spot instances in shared cloud systems, in this work we consider the problem of taking advantage of unused resources in highly dynamic cloud environments while preserving users' performance. We introduce an online model for sharing resources that captures basic properties of cloud systems, such as unpredictable user demand patterns, very limited feedback from the system, and the service level agreement (SLA) between the users and the cloud provider. We provide a simple and efficient algorithm for the single-resource case. For any demand pattern, our algorithm guarantees near-optimal resource utilization as well as high user performance compared with the SLA baseline. In addition, we validate the performance of our algorithm empirically using synthetic data and data obtained from Microsoft's systems.
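
To make the setting concrete, the toy controller below lends spare capacity to spot workloads and adjusts the loan from coarse binary feedback; the multiplicative-increase/decrease rule, capacity, and demands are assumptions for illustration and are not the algorithm proposed in the paper.

```python
# Toy sketch of the online single-resource sharing problem described above:
# a controller lends unused capacity to spot workloads and adjusts the loan
# using only coarse feedback (did the primary, SLA-bound users slow down?).
# The increase/decrease rule below is a stand-in for illustration only.

CAPACITY = 100.0          # total units of the single resource
SLA_HEADROOM = 0.9        # assumed: primary users must keep 90% of their demand

def update_spot_share(spot_share, primary_satisfied,
                      increase=1.1, decrease=0.5, floor=1.0):
    """Grow the spot allocation gently; back off sharply after an SLA violation."""
    if primary_satisfied:
        return min(CAPACITY, spot_share * increase)
    return max(floor, spot_share * decrease)

spot_share = 5.0
for step, primary_demand in enumerate([60, 70, 95, 40, 30, 80]):   # made-up demands
    available = CAPACITY - spot_share
    primary_satisfied = available >= SLA_HEADROOM * primary_demand
    spot_share = update_spot_share(spot_share, primary_satisfied)
    print(f"step {step}: demand={primary_demand:>3} spot_share={spot_share:5.1f}")
```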

