APPLICATION OF CLOUD COMPUTING TECHNOLOGY FOR SOLVING THE PROBLEM OF ACCEPTABILITY REGIONS CONSTRUCTION AND ANALYSIS

2021 ◽  
pp. 53-66
Author(s):  
O.V. Abramov ◽  
D.A. Nazarov

The article deals with the application of cloud computing to solve the problems of construction and analysis of engineering system acceptability regions (AR). It is claimed that the cloud platform is the most appropriate architecture for consolidating high-performance computing resources, modeling software and data warehouses in multi-user mode. The main components of the cloud environment for AR construction and analysis within the framework of the SaaS and DaaS cloud service models are discussed.
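The abstract does not detail the construction procedure, but a common way to build an acceptability region is to sample the parameter tolerance box and keep the points where the system's performance specifications hold. The following Python sketch illustrates that idea with a hypothetical two-parameter model and made-up specification limits; it is not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): estimating an
# acceptability region (AR) by Monte Carlo sampling of a parameter
# tolerance box. The model and specification limits are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def performance(r, c):
    """Hypothetical system responses standing in for a real engineering model."""
    gain = r / (r + 1000.0)   # dimensionless
    tau = r * c               # time constant, seconds
    return gain, tau

# Parameter tolerance box and acceptability specifications (assumed values).
R_RANGE = (800.0, 1200.0)     # ohms
C_RANGE = (1e-4, 3e-4)        # farads
GAIN_SPEC = (0.45, 0.55)
TAU_SPEC = (0.10, 0.30)

samples = rng.uniform(low=[R_RANGE[0], C_RANGE[0]],
                      high=[R_RANGE[1], C_RANGE[1]],
                      size=(100_000, 2))
gain, tau = performance(samples[:, 0], samples[:, 1])
acceptable = ((gain >= GAIN_SPEC[0]) & (gain <= GAIN_SPEC[1]) &
              (tau >= TAU_SPEC[0]) & (tau <= TAU_SPEC[1]))

# Accepted points outline the AR; their fraction estimates its relative volume.
print(f"Estimated AR share of the tolerance box: {acceptable.mean():.2%}")
```

On a cloud platform, the sampling and model-evaluation loop is the part that would be farmed out to the consolidated high-performance resources the article describes.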

Author(s):  
Manoj Himmatrao Devare

Scientists, engineers, and researchers need high-performance computing (HPC) services for executing energy, engineering, environmental science, weather, and life science simulations. The virtual machine (VM) or Docker-enabled HPC Cloud service provides the advantages of consolidation and support for multiple users in a public cloud environment. Adding a hypervisor on top of bare-metal hardware brings challenges such as computational overhead due to virtualization, especially in an HPC environment. This chapter discusses the challenges, solutions, and opportunities arising from input-output and VMM overheads, interconnection overheads, VM migration problems, and scalability problems in the HPC Cloud. It portrays the HPC Cloud as a highly complex distributed environment comprising heterogeneous architectures with different processor types and interconnection techniques, and covers the problems of shared-memory, distributed-memory, and hybrid architectures in distributed computing, such as resilience, scalability, check-pointing, and fault tolerance.
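As a rough illustration of how the virtualization overhead mentioned above can be quantified, one can run the same compute-bound micro-benchmark on bare metal and inside a VM or Docker container and compare the timings. The Python sketch below is an assumption added for illustration, not a method from the chapter.

```python
# Minimal sketch (assumption, not from the chapter): a micro-benchmark one
# could run both on bare metal and inside a VM/Docker container to estimate
# the compute overhead that virtualization adds for HPC-style workloads.
import time
import numpy as np

def matmul_benchmark(n=2048, repeats=5):
    """Time a dense matrix multiply, a rough proxy for CPU/memory-bound HPC work."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        np.dot(a, b)
        timings.append(time.perf_counter() - start)
    return min(timings)  # best of N filters out transient noise

if __name__ == "__main__":
    t = matmul_benchmark()
    print(f"Best wall-clock time: {t:.3f} s")
    # Overhead estimate: (t_virtualized - t_bare_metal) / t_bare_metal,
    # computed from runs of this same script in both environments.
```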


2017 ◽  
Vol 10 (13) ◽  
pp. 445
Author(s):  
Purvi Pathak ◽  
Kumar R

High-performance computing (HPC) applications require high-end computing systems, but not all scientists have access to such powerful systems. Cloud computing provides an opportunity to run these applications on the cloud without investing in high-end parallel computing systems. The performance of HPC applications can be analyzed on private as well as public clouds, and the performance of a workload on the cloud can be measured using benchmarking tools such as the NAS Parallel Benchmarks and Rally. HPC workloads normally require many parallel computing systems in a physical setup, but this capacity is available in a cloud computing environment without the need to invest in physical machines. We aim to analyze the ability of the cloud to perform well when running HPC workloads, obtain detailed performance figures for these applications on a private cloud, and identify the pros and cons of running HPC workloads in a cloud environment.
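As an illustration of how raw benchmark output can be compared across setups, the sketch below (in Python, with hypothetical timings rather than results from the paper) turns wall-clock times such as those reported by the NAS Parallel Benchmarks into speedup and parallel-efficiency figures.

```python
# Minimal sketch (illustrative numbers, not results from the paper): turning
# wall-clock times reported by a benchmark suite such as the NAS Parallel
# Benchmarks into speedup and parallel-efficiency figures for comparison.
wall_clock = {          # seconds, hypothetical runs of one benchmark kernel
    1: 412.0,
    2: 215.0,
    4: 118.0,
    8: 71.0,
}

baseline = wall_clock[1]
print(f"{'procs':>5} {'time (s)':>9} {'speedup':>8} {'efficiency':>10}")
for procs, t in sorted(wall_clock.items()):
    speedup = baseline / t
    efficiency = speedup / procs
    print(f"{procs:>5} {t:>9.1f} {speedup:>8.2f} {efficiency:>10.0%}")
```

Collecting the same table for a private cloud, a public cloud, and a physical cluster makes the pros and cons discussed in the paper directly comparable.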


2020 ◽  
Vol 13 (3) ◽  
pp. 313-318 ◽  
Author(s):  
Dhanapal Angamuthu ◽  
Nithyanandam Pandian

Background: Cloud computing is the modern trend in high-performance computing. It has become very popular due to its characteristics of anywhere availability, elasticity, ease of use, cost-effectiveness, etc. Though the cloud grants various benefits, it has associated issues and challenges that prevent organizations from adopting it.

Objective: The objective of this paper is to cover several perspectives of cloud computing. This includes a basic definition of the cloud and a classification of clouds based on the delivery and deployment models. The broad classes of issues and challenges faced by organizations adopting the cloud computing model are explored, for example data-related issues and service-availability issues. The detailed sub-classifications of each of these issues and challenges are then discussed; for instance, data-related issues are further classified into data security, data integrity, data location, and multitenancy issues. The paper also covers the typical problem of vendor lock-in and analyzes the various possible insider attacks unique to the cloud environment.

Results: Guidelines and recommendations for the different issues and challenges are discussed. Most importantly, potential research areas in the cloud domain are explored.

Conclusion: This paper discussed cloud computing, its classifications, and the several issues and challenges faced in adopting the cloud. Guidelines and recommendations for these issues and challenges are covered, and potential research areas in the cloud domain are captured. This helps researchers, academicians, and industry to focus on and address the current challenges faced by customers.


Author(s):  
Mohammed Radi ◽  
Ali Alwan ◽  
Abedallah Abualkishik ◽  
Adam Marks ◽  
Yonis Gulzar

Cloud computing has become a practical solution for processing big data. Cloud service providers have heterogeneous resources and offer a wide range of services with various processing capabilities. Typically, cloud users set preferences when working on a cloud platform. Some users tend to prefer the cheapest services for the given tasks, whereas other users prefer solutions that ensure the shortest response time or seek services that deliver an acceptable response time at a reasonable cost. The main responsibility of the cloud service broker is identifying the best data centre for processing user requests. Therefore, to maintain a high level of quality of service, it is necessary to develop a service broker policy that is capable of selecting the best data centre, taking into consideration user preferences (e.g. cost, response time). This paper proposes an efficient and cost-effective plan for a service broker policy in a cloud environment based on the concept of VIKOR. The proposed solution relies on a multi-criteria decision-making technique aimed at generating an optimized solution that incorporates user preferences. The simulation results show that the proposed policy outperforms the most recent policies designed for the cloud environment in many aspects, including processing time, response time, and processing cost.
KEYWORDS: Cloud computing, data centre selection, service broker, VIKOR, user priorities
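The paper's exact broker policy, criteria, and weights are not reproduced in the abstract, so the following Python sketch only illustrates the general VIKOR ranking step it builds on: alternatives (data centres) are scored against weighted criteria such as cost and response time, and the one with the lowest Q value is selected. All numbers are hypothetical.

```python
# Minimal sketch of VIKOR-based data-centre ranking (the general VIKOR method,
# not the paper's specific policy). Criteria: processing cost and response
# time, both "lower is better"; the values below are hypothetical.
import numpy as np

def vikor_rank(matrix, weights, v=0.5):
    """Rank alternatives (rows) over cost-type criteria (columns) with VIKOR.

    matrix  : (n_alternatives, n_criteria), lower values are better
    weights : criteria weights summing to 1 (user preferences)
    v       : trade-off between group utility (S) and individual regret (R)
    """
    x = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    f_best = x.min(axis=0)          # ideal value per criterion
    f_worst = x.max(axis=0)         # anti-ideal value per criterion

    # Weighted, normalised distance of each alternative from the ideal.
    d = w * (x - f_best) / (f_worst - f_best)
    s = d.sum(axis=1)               # group utility
    r = d.max(axis=1)               # individual regret

    q = (v * (s - s.min()) / (s.max() - s.min())
         + (1 - v) * (r - r.min()) / (r.max() - r.min()))
    return np.argsort(q), q         # lower Q = better data centre

# Hypothetical data centres: columns = [cost per hour, response time in ms]
centres = [[0.12, 180], [0.20, 95], [0.16, 120]]
order, scores = vikor_rank(centres, weights=[0.6, 0.4])
print("Ranking (best first):", order.tolist(), "Q =", np.round(scores, 3))
```

The weight vector is where user preferences enter: a cost-sensitive user raises the cost weight, a latency-sensitive user raises the response-time weight.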


2021 ◽  
Vol 17 (2) ◽  
pp. 179-195
Author(s):  
Priyanka Bharti ◽  
Rajeev Ranjan ◽  
Bhanu Prasad

Cloud computing provisions and allocates resources, in advance or in real time, to dynamic applications planned for execution. This is a challenging task, as the Cloud Service Providers (CSPs) may not have sufficient resources at all times to satisfy the resource requests of the Cloud Service Users (CSUs). Further, the CSPs and CSUs have conflicting interests and may have different utilities. Service Level Agreement (SLA) negotiations among CSPs and CSUs can address these limitations. User Agents (UAs) negotiate for resources on behalf of the CSUs, helping to reduce the overall costs for the CSUs and enhance resource utilization for the CSPs. This research proposes a broker-based mediation framework to optimize the SLA negotiation strategies between UAs and CSPs in a Cloud environment. The impact of the proposed framework on utility, negotiation time, and request satisfaction is evaluated. The empirical results show that these strategies favor cooperative negotiation and achieve significantly higher utilities, higher satisfaction, and faster negotiation for all the entities involved.
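The abstract does not specify the negotiation strategy itself; as a minimal, hypothetical illustration of the kind of SLA price negotiation a User Agent and a CSP might run through a broker, the Python sketch below uses a standard time-dependent concession tactic, with both sides moving toward their reservation prices as the deadline approaches.

```python
# Minimal sketch (a generic time-dependent concession strategy, not the
# paper's broker framework): a user agent and a provider exchange price
# offers, each conceding toward its reservation value as the deadline nears.
def offer(initial, reservation, t, deadline, beta=1.0):
    """Offer at round t: concede from `initial` toward `reservation`."""
    concession = (t / deadline) ** beta
    return initial + (reservation - initial) * concession

def negotiate(deadline=10):
    for t in range(deadline + 1):
        user_bid = offer(initial=0.05, reservation=0.15, t=t, deadline=deadline)
        provider_ask = offer(initial=0.25, reservation=0.10, t=t, deadline=deadline)
        if user_bid >= provider_ask:          # offers have crossed: agreement
            price = (user_bid + provider_ask) / 2
            return t, round(price, 4)
    return None                               # deadline reached, no deal

print(negotiate())   # (8, 0.13) with the hypothetical values above
```

The beta parameter controls how cooperative an agent is: values below 1 concede early (conceder), values above 1 hold out until late (boulware), which is the kind of behaviour a mediation framework can steer.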


Author(s):  
Adrian Jackson ◽  
Michèle Weiland

This chapter describes experiences using Cloud infrastructures for scientific computing, both for serial and parallel computing. Amazon's High Performance Computing (HPC) Cloud computing resources were compared to traditional HPC resources to quantify performance and to assess the complexity and cost of using the Cloud. Furthermore, a shared Cloud infrastructure is compared to standard desktop resources for scientific simulations. Whilst this is only a small-scale evaluation of these Cloud offerings, it does allow some conclusions to be drawn: in particular, the Cloud currently cannot match the parallel performance of dedicated HPC machines for large-scale parallel programs, but it can match the serial performance of standard computing resources for serial and small-scale parallel programs. Also, the shared Cloud infrastructure cannot match dedicated computing resources on low-level benchmarks, although for an actual scientific code the performance is comparable.


Green computing is a contemporary research topic addressing climate and energy challenges. In this chapter, the authors envision the duality of green computing with technological trends in other fields of computing, such as High Performance Computing (HPC) and cloud computing on the one hand, and economy and business on the other. For instance, providing electricity for large-scale cloud infrastructures and reaching exascale computing require huge amounts of energy; thus, green computing is a challenge for the future of cloud computing and HPC. Conversely, clouds and HPC provide solutions for green computing and climate change. The authors discuss this proposition by looking at the technology in detail.

