application hosting
Recently Published Documents


TOTAL DOCUMENTS: 33 (five years: 5)
H-INDEX: 6 (five years: 1)


2021 ◽  
Author(s):  
Andreas Tsagkaropoulos ◽  
Yiannis Verginadis ◽  
Nikos Papageorgiou ◽  
Fotis Paraskevopoulos ◽  
Dimitris Apostolou ◽  
...  

While a multitude of cloud vendors today offer flexible application hosting services, the application adaptation capabilities they provide in terms of autoscaling are rather limited. In most cases, a static adaptation action with a fixed scaling response is used; where a dynamic adaptation action is provided, it is based on a single scaling variable. We propose Severity, a novel algorithmic approach that aids the adaptation of cloud applications. Based on input from DevOps, our approach detects situations, calculates their Severity and proposes adaptations which can lead to better application performance. Severity can be calculated for any number and any type of application QoS attributes, whether bounded or unbounded. Evaluation with four distinct workload types and a variety of monitoring attributes shows that QoS is improved for particular application categories. The efficacy of our approach is demonstrated with a prototype implementation of an application adaptation manager, for which the source code is provided.
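
The abstract does not spell out how Severity is computed. As a rough sketch only, the following snippet assumes Severity is derived from the normalized exceedance of each monitored QoS attribute over a DevOps-supplied threshold and aggregated with a Euclidean norm; the attribute names, thresholds and aggregation rule are illustrative assumptions, not the paper's actual formula.

```python
import math

def severity(observations, thresholds, bounds=None):
    """Illustrative severity score: the Euclidean norm of the normalized
    exceedance of each monitored QoS attribute over its threshold.

    observations -- dict of attribute name -> current measured value
    thresholds   -- dict of attribute name -> DevOps-supplied threshold
    bounds       -- optional dict of attribute name -> (min, max) for
                    bounded attributes; unbounded attributes are normalized
                    against their own threshold instead.
    """
    components = []
    for name, value in observations.items():
        threshold = thresholds[name]
        if bounds and name in bounds:                 # bounded attribute, e.g. CPU %
            lo, hi = bounds[name]
            exceed = max(0.0, (value - threshold) / (hi - lo))
        else:                                         # unbounded attribute, e.g. latency
            exceed = max(0.0, (value - threshold) / threshold)
        components.append(exceed)
    return math.sqrt(sum(c * c for c in components))

# Example: CPU utilization is bounded in [0, 100], response time is unbounded.
score = severity(
    observations={"cpu_util": 92.0, "response_time_ms": 450.0},
    thresholds={"cpu_util": 80.0, "response_time_ms": 300.0},
    bounds={"cpu_util": (0.0, 100.0)},
)
print(f"severity = {score:.3f}")  # a higher score would call for a stronger scaling response
```

A higher score would then map to a stronger adaptation, which is how the paper motivates moving beyond fixed, single-variable scaling rules.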


With the unbounded growth of application hosting infrastructure, rising consumer demand, and the trade-off between application availability and hosting cost, application providers are increasingly moving to the cloud. Cloud computing supports application development through dynamic load balancing, reduced energy costs, and freedom from on-premises hardware. Dynamic load balancing is made possible by migrating virtual machines (VMs): the migration process identifies heavily loaded hosts, candidate VMs to migrate, and possible target hosts to receive them. The challenge of migrating VMs from a source to a destination physical host is that, during migration, the VMs are exposed to the network and to other users who have access to the same communication channels. Moreover, the VM data to be brought live on the target system, commonly packaged as VM images, is treated as regular files until the VM is running. Hence, in the migration cycle, from the transfer of the VM image until the transferred VM is live, there is a security gap that needs to be closed. VM images often contain the application, the data it generates, and the data it consumes; all three components are critical and must be protected from unauthorised access. A number of research efforts have therefore proposed schemes that secure VM images with various encryption mechanisms, but these methods are criticised for consuming large amounts of computing capacity to encrypt and decrypt the images, prolonging application unavailability and violating service level agreements. Thus, this work proposes a novel method for encrypting large VM images in less time by deploying a progressive and adaptive encryption method. The work also demonstrates the improvement by evaluating the algorithm in terms of SLA violation reduction compared with existing methods.
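
The progressive and adaptive encryption method itself is not described in the abstract. The sketch below only illustrates the general idea of chunk-wise (progressive) encryption of a VM image, so that transfer of early chunks can overlap with encryption of later ones; it assumes the third-party cryptography package, and the chunk size and file names are placeholders.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per chunk; an assumed tuning parameter

def encrypt_image_progressively(image_path, out_path, key):
    """Encrypt a VM image chunk by chunk, so that already-encrypted chunks
    can be transferred while later chunks are still being encrypted,
    instead of encrypting the whole file before migration starts."""
    f = Fernet(key)
    with open(image_path, "rb") as src, open(out_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            token = f.encrypt(chunk)                  # each chunk is an independently decryptable token
            dst.write(len(token).to_bytes(8, "big"))  # length prefix for framing on the receiving side
            dst.write(token)

key = Fernet.generate_key()
# encrypt_image_progressively("guest.qcow2", "guest.qcow2.enc", key)
```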


Cloud computing can be defined as a computing paradigm in which various systems and large resource pools are connected to each other over private or public networks, with the aim of providing a dynamically scalable infrastructure for applications, data, and file storage. Cloud computing has reduced the cost of computation and application hosting, so that content storage and delivery services can be handled faster and more flexibly. Load balancing is one of the challenges that affect cloud performance, and overcoming it leads to better resource utilization and response time. The service broker policy plays an important role in accelerating the response time of customer requests by selecting data centers or optimizing the pattern of access to them. This paper investigates the effectiveness of different algorithms and approaches for improving cloud performance, showing that performance can be increased by relying on the criteria described in the paper. The results presented were obtained using the CloudAnalyst simulator, configuring parameters such as simulation duration, load balancing algorithms, and service broker policies.
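
As an illustration of how a service broker policy accelerates response time, the sketch below implements a "closest data centre" rule of the kind CloudAnalyst offers; the region names and latency figures are invented for the example.

```python
# Assumed latency matrix between user regions and data centres (milliseconds).
REGION_LATENCY_MS = {
    ("eu-user", "eu-dc"): 25,
    ("eu-user", "us-dc"): 110,
    ("us-user", "eu-dc"): 115,
    ("us-user", "us-dc"): 30,
}

def closest_datacenter(user_region, datacenters):
    """Route each request to the data centre with the lowest network latency
    from the user's region, in the spirit of a closest-data-centre broker policy."""
    return min(datacenters, key=lambda dc: REGION_LATENCY_MS[(user_region, dc)])

print(closest_datacenter("eu-user", ["eu-dc", "us-dc"]))  # -> eu-dc
```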


2019 ◽  
Vol 01 (02) ◽  
pp. 40-48 ◽  
Author(s):  
Bindhu V ◽  
Vijesh joe S

Cloud computing provides utility-based information technology services to users across the globe, making application hosting possible for business, consumer, and scientific domains. However, the data centres used for this purpose consume large amounts of energy, which increases operational costs and contributes a carbon footprint that affects the environment. With increasing energy scarcity and global climate change, it is essential to control power consumption. This paper presents a green cloud computing solution that addresses both operational cost reduction and shrinking the carbon footprint and its environmental impact. For this purpose we use data mining tools and auto-scaling formulated as a constraint satisfaction problem (CSP).
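
The abstract does not specify how the constraint satisfaction problem is formulated. One simple way an energy-aware auto-scaler could be cast as a CSP is sketched below: choose the smallest number of active servers that satisfies both capacity and latency constraints. The latency model and all numeric values are assumptions for illustration.

```python
def min_servers_for_load(demand_rps, server_capacity_rps, max_servers,
                         max_latency_ms, latency_model):
    """Return the smallest number of active servers that satisfies both the
    capacity and the latency constraint -- a tiny constraint-satisfaction view
    of energy-aware auto-scaling. latency_model(n) is an assumed estimator."""
    for n in range(1, max_servers + 1):
        capacity_ok = n * server_capacity_rps >= demand_rps
        latency_ok = latency_model(n) <= max_latency_ms
        if capacity_ok and latency_ok:
            return n          # fewest servers -> least energy, constraints still met
    return None               # no feasible assignment within max_servers

# Assumed latency model: latency grows with per-server utilization (illustrative only).
latency = lambda n: 50 + 400 * (1000 / (n * 300))
print(min_servers_for_load(1000, 300, 10, 250, latency))
```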


Author(s):  
Mohammad Sadegh Aslanpour ◽  
Seyed Ebrahim Dashti

Application providers (APs) host their applications in the cloud with the aim of reducing infrastructure purchase and maintenance costs. However, variation in the arrival rate of user requests on the one hand, and the attractive cloud resource auto-scaling feature on the other, has led APs to seek further savings in the cost of renting resources. Researchers generally focus on selecting parameters for scaling decisions, whereas analysing the history of those parameters appears to be more effective. This paper presents a proactive auto-scaling algorithm (PASA) equipped with a heuristic predictor. The predictor analyzes history using the following techniques: (1) double exponential smoothing (DES), (2) weighted moving average (WMA), and (3) Fibonacci numbers. Simulation results of PASA in CloudSim indicate its effectiveness: the algorithm reduces the AP's cost while maintaining web user satisfaction.
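
The abstract names the three prediction techniques but not how PASA combines them. The sketch below shows plain implementations of a one-step DES forecast and a Fibonacci-weighted moving average over the request-arrival history; the equal-weight blend of the two forecasts is an assumption, not the paper's rule.

```python
def double_exponential_smoothing(series, alpha=0.5, beta=0.5):
    """One-step-ahead DES forecast of the request arrival rate."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend

def fibonacci_weighted_moving_average(series, window=5):
    """WMA over the last `window` points with Fibonacci weights, so the most
    recent observations dominate the forecast."""
    weights = [1, 1]
    while len(weights) < window:
        weights.append(weights[-1] + weights[-2])
    recent = series[-window:]                          # chronological order, newest last
    return sum(w * x for w, x in zip(weights, recent)) / sum(weights)

arrivals = [120, 135, 150, 170, 160, 185, 210]         # requests per interval (made-up data)
forecast = (0.5 * double_exponential_smoothing(arrivals)
            + 0.5 * fibonacci_weighted_moving_average(arrivals))
print(f"predicted arrivals next interval: {forecast:.1f}")
```

The scaler would then provision capacity for the forecast rather than the last observed value, which is what makes the approach proactive.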


2017 ◽  
Vol 10 (1) ◽  
pp. 42-66 ◽  
Author(s):  
Frank Ulbrich ◽  
Mark Borman

Purpose: Organizations increasingly form or join collaborations to gain access to resources paramount for achieving a sustained competitive advantage. This paper aims to propose an extension to the established dependency network diagram (DND) technique to better facilitate analysis, design and, ultimately, strategic management of such collaborations.
Design/methodology/approach: Based on resource dependence theory, the constructs of power and secondary dependency are operationalized and integrated into the original DND technique. New rules and an updated algorithm for constructing extended DNDs are provided.
Findings: The value of the proposed extension of the DND technique is illustrated by analysis of an application hosting collaboration case study from the Australian financial services industry.
Research limitations/implications: This study provides preliminary evidence for strategically managing resource collaborations. Future research could empirically test the usefulness of the proposed extension and how much it contributes to a better understanding of resource collaborations.
Practical implications: The proposed extension enables managers to perform a broader analysis of dependencies among participants in a collaboration, helping them comprehend the relationships between the entities in their collaborative environment more accurately and thus placing them in a better position to strategically manage resource dependencies.
Originality/value: The proposed extension makes a central contribution to the extant literature by adding a strategic dimension to a visualization technique used to represent collaborative environments.
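
As a rough illustration of what the extended DND adds, the sketch below models a dependency network as directed edges annotated with power and secondary-dependency attributes; the participants, resources, and power scale are hypothetical and not drawn from the case study.

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    """One directed edge in an extended dependency network diagram (DND):
    'source' depends on 'target' for a resource, annotated with an assumed
    power rating and a flag for secondary (indirect) dependencies."""
    source: str
    target: str
    resource: str
    power: int               # relative power of the resource holder (assumed 1-5 scale)
    secondary: bool = False  # True if the dependency is mediated by another participant

collaboration = [
    Dependency("application provider", "hosting partner", "data centre capacity", power=4),
    Dependency("hosting partner", "network carrier", "connectivity", power=3),
    Dependency("application provider", "network carrier", "connectivity", power=3, secondary=True),
]

# Which participant holds the most power over the application provider directly?
direct = [d for d in collaboration if d.source == "application provider" and not d.secondary]
most_powerful = max(direct, key=lambda d: d.power)
print(most_powerful.target, most_powerful.power)
```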


2013 ◽  
Vol 79 (8) ◽  
pp. 1214-1229 ◽  
Author(s):  
Xuanhua Shi ◽  
Hongbo Jiang ◽  
Ligang He ◽  
Hai Jin ◽  
Chonggang Wang ◽  
...  
