Swirls: A Platform for Enabling Multicluster and Multicloud Execution of Parallel Programs

2021 ◽  
Author(s):  
Francisco Heron de Carvalho Junior ◽  
Allberson Bruno de Oliveira Dantas ◽  
Claro Henrique Silva Sales

Swirls is a general-purpose application for interactively building, deploying, and executing message-passing parallel programs that have multicluster and multicloud requirements. It is implemented on HPC Shelf, a cloud-based platform for providing HPC services. Swirls enables communication between MPI programs written in C#, C, C++, and Python across one or more clusters, whether on-premises or cloud-based. In the current implementation, Swirls users may employ clusters formed by virtual machines on Amazon Elastic Compute Cloud (EC2) and Google Cloud Platform (GCP).
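As an illustration of the kind of message-passing program Swirls targets, here is a minimal sketch in Python using mpi4py (the abstract lists Python among the supported languages; the use of mpi4py and the message contents below are assumptions for illustration, not part of Swirls itself):

    # Minimal mpi4py sketch of a message-passing program of the kind Swirls
    # deploys across clusters; ranks and message contents are illustrative.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    if rank == 0:
        # Rank 0 sends a greeting to every other rank.
        for dest in range(1, size):
            comm.send({"msg": "hello", "from": rank}, dest=dest, tag=0)
    else:
        data = comm.recv(source=0, tag=0)
        print(f"rank {rank}/{size} received {data['msg']} from rank {data['from']}")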

Author(s):  
Yuancheng Li ◽  
Pan Zhang ◽  
Daoxing Li ◽  
Jing Zeng

Background: Cloud platforms are widely used in the electric power field. Virtual machine co-resident attacks are one of the major security threats to existing power cloud platforms. Objective: This paper proposes a mechanism to defend against virtual machine co-resident attacks on a power cloud platform. Method: Our defense mechanism uses the DBSCAN algorithm to classify virtual machines and outputs the classification results through a random forest, and it deploys virtual machines using an improved deployment strategy that combines the advantages of the random round-robin strategy and the maximum/minimum resource strategy. Results: We ran a simulation experiment on the State Grid power cloud platform and verified the effectiveness of the proposed defense deployment strategy. Conclusion: With the improved virtual machine deployment strategy, virtual machine coverage is markedly reduced, which shows that our mechanism is effective in defending virtual machines against co-resident attacks.
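The classification step described above can be sketched as follows. This is not the paper's code: DBSCAN clusters VM behaviour features and a random forest is then trained on the resulting labels so that an incoming VM can be classified before deployment; the synthetic features (three assumed usage metrics) and the parameter values are placeholders.

    # Hedged sketch of the DBSCAN + random forest classification stage.
    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_blobs
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in for monitored VM features (e.g. CPU, memory, network use).
    features, _ = make_blobs(n_samples=200, centers=3, n_features=3,
                             cluster_std=0.5, random_state=0)

    labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(features)
    core = labels != -1                      # drop DBSCAN noise points
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[core], labels[core])

    print("predicted class for an incoming VM:", clf.predict(features[:1])[0])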


2015 ◽  
Vol 50 (10) ◽  
pp. 280-298 ◽  
Author(s):  
Hugo A. López ◽  
Eduardo R. B. Marques ◽  
Francisco Martins ◽  
Nicholas Ng ◽  
César Santos ◽  
...  

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3807 ◽  
Author(s):  
Haonan Sun ◽  
Rongyu He ◽  
Yong Zhang ◽  
Ruiyun Wang ◽  
Wai Hung Ip ◽  
...  

Today, cloud computing is widely used in various industries. While benefiting from the services provided by the cloud, users also face security issues such as information leakage and data tampering. Using trusted computing technology to strengthen cloud security mechanisms, an approach known as the trusted cloud, has become a hot research topic in cloud security. Currently, a virtual TPM (vTPM) is commonly used in a trusted cloud to protect the integrity of the cloud environment. However, existing vTPM schemes lack protection of the vTPM itself in its runtime environment. This paper proposes a novel scheme that introduces a new trusted cloud platform security component, the ‘enclave TPM’ (eTPM), to protect the cloud and employs Intel SGX to enhance the security of the eTPM itself. The eTPM is a software component that emulates the TPM functions used to build trust and security in the cloud and runs inside an ‘enclave’, an isolated memory region introduced by SGX. The eTPM can ensure its own security at runtime and protect the integrity of virtual machines (VMs) according to user-specified policies. Finally, a prototype of the eTPM scheme was implemented, and experiments demonstrated its effectiveness, security, and availability.
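The SGX enclave isolation at the heart of the scheme cannot be reproduced in a few lines, but the policy-driven integrity measurement an eTPM-like component performs for each VM can be sketched. The hash choice and policy format below are assumptions, not the paper's design:

    # Minimal sketch of a policy-driven VM integrity check (assumption:
    # measurement = SHA-256 of the VM image; SGX enclave protection of the
    # measuring component itself is not reproduced here).
    import hashlib

    def measure(image_path: str) -> str:
        h = hashlib.sha256()
        with open(image_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(image_path: str, policy: dict) -> bool:
        # policy maps VM image paths to expected digests supplied by the user.
        return measure(image_path) == policy.get(image_path)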


Author(s):  
Archana Singh ◽  
Rakesh Kumar

Load balancing is the process of distributing workload efficiently over various computing resources. It allows enterprises to manage different application or workload demands by allocating available resources among servers, computers, and networks. These services can be accessed and utilized either for home use or for business purposes. Due to excessive load on the cloud, it is sometimes not feasible to offer all these services to different users efficiently. To solve this problem, an efficient load balancing technique is used to offer satisfactory services to users as per their expectations, while also leading to efficient utilization of resources and applications on the cloud platform. This paper presents an enhanced load balancing algorithm, named the two-phase load balancing algorithm. It uses a two-phase approach: the first phase divides all virtual machines into two tables based on their state, that is, available or busy, while the second phase distributes the load equally among them. The parameters used to measure the performance of the proposed algorithm are cost, data center processing time, and response time. The CloudAnalyst simulation tool is used to simulate the algorithm. Simulation results demonstrate the superiority of the proposed algorithm over existing ones.
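A minimal sketch of the two-phase idea follows, assuming a simple per-VM state field and a round-robin hand-out in the second phase; the paper's exact bookkeeping is not reproduced.

    # Hedged sketch of the two-phase load balancing idea described above.
    from collections import deque

    def two_phase_assign(vms, requests):
        # Phase 1: split VMs into an available table and a busy table.
        available = deque(vm for vm in vms if vm["state"] == "available")
        busy = [vm for vm in vms if vm["state"] == "busy"]

        # Phase 2: hand out requests evenly over the available table.
        assignment = {}
        for req in requests:
            if not available:                 # all VMs busy: fall back to least loaded
                target = min(busy, key=lambda vm: vm["load"])
            else:
                target = available.popleft()
                available.append(target)      # rotate so load spreads evenly
            target["load"] += 1
            assignment[req] = target["id"]
        return assignment

    vms = [{"id": i, "state": "available" if i % 2 else "busy", "load": 0} for i in range(4)]
    print(two_phase_assign(vms, ["r1", "r2", "r3"]))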


2018 ◽  
Vol 11 (2) ◽  
pp. 88-109
Author(s):  
Devki Nandan Jha ◽  
Deo Prakash Vidyarthi

Cloud computing is a technological advancement that provides services in the form of a utility on a pay-per-use basis. As the cloud market expands, numerous service providers are joining the cloud platform with their services. This creates indecision among users about choosing an appropriate service provider, especially when the provider provisions diverse types of virtual machines. The problem becomes more challenging when the user has different jobs requiring specific qualities of service. To address this problem, this article applies a hybrid heuristic that uses the College Admission Problem and the Analytic Hierarchy Process (AHP) for stable matching of users' jobs with the cloud's virtual machines. A case study demonstrates the effectiveness of the proposed model.
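The deferred-acceptance step behind the College Admission Problem can be sketched as below; the preference lists are hypothetical stand-ins for the rankings an AHP stage would produce, and each virtual machine is given capacity one to keep the example short.

    # Deferred-acceptance sketch for matching jobs to virtual machines.
    def stable_match(job_prefs, vm_prefs):
        vm_rank = {vm: {job: r for r, job in enumerate(prefs)} for vm, prefs in vm_prefs.items()}
        free = list(job_prefs)                  # jobs still unmatched
        next_choice = {job: 0 for job in job_prefs}
        matched = {}                            # vm -> job
        while free:
            job = free.pop()
            vm = job_prefs[job][next_choice[job]]
            next_choice[job] += 1
            if vm not in matched:
                matched[vm] = job
            elif vm_rank[vm][job] < vm_rank[vm][matched[vm]]:
                free.append(matched[vm])        # bump the weaker match
                matched[vm] = job
            else:
                free.append(job)                # rejected, try next preference
        return {job: vm for vm, job in matched.items()}

    job_prefs = {"j1": ["vmA", "vmB"], "j2": ["vmA", "vmB"]}
    vm_prefs = {"vmA": ["j2", "j1"], "vmB": ["j1", "j2"]}
    print(stable_match(job_prefs, vm_prefs))    # {'j2': 'vmA', 'j1': 'vmB'}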


2016 ◽  
Vol 31 (6) ◽  
pp. 1985-1996 ◽  
Author(s):  
David Siuta ◽  
Gregory West ◽  
Henryk Modzelewski ◽  
Roland Schigas ◽  
Roland Stull

Abstract As cloud-service providers like Google, Amazon, and Microsoft decrease costs and increase performance, numerical weather prediction (NWP) in the cloud will become a reality not only for research use but for real-time use as well. The performance of the Weather Research and Forecasting (WRF) Model on the Google Cloud Platform is tested, and virtual machine configurations and optimizations are found that meet two main requirements of real-time NWP: 1) fast forecast completion (timeliness) and 2) cost effectiveness compared with traditional on-premises high-performance computing hardware. Optimum performance was obtained by using the Intel compiler collection with no more than eight virtual CPUs per virtual machine. With these configurations, real-time NWP on the Google Cloud Platform is found to be economically competitive with the purchase of local high-performance computing hardware for NWP needs. Cloud-computing services are becoming viable alternatives to on-premises compute clusters for some applications.
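A rough illustration of the break-even arithmetic behind such a cost comparison follows; every number is a placeholder, not a figure from the paper.

    # Illustrative break-even arithmetic only; all values are assumptions.
    vcpus_per_vm   = 8          # upper bound per VM suggested by the configuration above
    vms            = 12         # assumed cluster size for one WRF forecast
    hours_per_day  = 2.0        # assumed wall-clock time per forecast cycle
    price_per_vcpu = 0.04       # assumed on-demand $/vCPU-hour

    daily_cloud_cost = vcpus_per_vm * vms * hours_per_day * price_per_vcpu
    onprem_capex, onprem_lifetime_days = 60_000, 4 * 365   # assumed hardware cost and lifetime
    daily_onprem_cost = onprem_capex / onprem_lifetime_days

    print(f"cloud ${daily_cloud_cost:.2f}/day vs on-prem ${daily_onprem_cost:.2f}/day (amortized)")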


Author(s):  
Masaki Iwasawa ◽  
Daisuke Namekata ◽  
Keigo Nitadori ◽  
Kentaro Nomura ◽  
Long Wang ◽  
...  

Abstract We describe algorithms implemented in FDPS (Framework for Developing Particle Simulators) to make efficient use of accelerator hardware such as GPGPUs (general-purpose computing on graphics processing units). We developed FDPS so that researchers can build their own high-performance parallel particle-based simulation programs without spending large amounts of time on parallelization and performance tuning. FDPS provides a high-performance implementation of parallel algorithms for particle-based simulations in a “generic” form, so that researchers can define their own particle data structures and interparticle interaction functions. FDPS compiled with user-supplied data types and interaction functions provides all the functions necessary for parallelization, and researchers can thus write their programs as though they were writing simple non-parallel code. It has previously been possible to use accelerators with FDPS by writing an interaction function that uses the accelerator. However, the efficiency was limited by the latency and bandwidth of communication between the CPU and the accelerator, and by the mismatch between the degree of parallelism exposed by the interaction function and that offered by the hardware. We have modified the interface of the user-provided interaction functions so that accelerators are used more efficiently. We also implemented new techniques that reduce the amount of work on the CPU side and the amount of communication between the CPU and accelerators. We measured the performance of N-body simulations on a system with an NVIDIA Volta GPGPU using FDPS, and the achieved performance is around 27% of the theoretical peak. We constructed a detailed performance model and found that the current implementation can achieve good performance on systems with much smaller memory and communication bandwidth. Thus, our implementation will be applicable to future generations of accelerator systems.
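FDPS itself is a C++ framework; the following Python sketch only mirrors the shape of the idea, with a user-supplied batched interaction kernel and a framework-side sweep, and none of the names correspond to the real FDPS API.

    # Shape-only sketch of the user-kernel / framework-sweep split (illustrative).
    import numpy as np

    def user_gravity_kernel(pos_i, pos_j, mass_j, eps2=1e-6):
        # User-defined interaction: softened gravitational acceleration on batch i
        # from all particles j, computed for the whole batch at once
        # (the batched form is what makes accelerator offload worthwhile).
        dr = pos_j[None, :, :] - pos_i[:, None, :]           # (Ni, Nj, 3)
        r2 = (dr ** 2).sum(axis=-1) + eps2
        return (mass_j[None, :, None] * dr / r2[..., None] ** 1.5).sum(axis=1)

    def framework_sweep(pos, mass, kernel, batch=64):
        # Framework side: batch the targets so each kernel call is one bulk "offload".
        acc = np.zeros_like(pos)
        for start in range(0, len(pos), batch):
            acc[start:start + batch] = kernel(pos[start:start + batch], pos, mass)
        return acc

    pos = np.random.rand(256, 3); mass = np.ones(256)
    print(framework_sweep(pos, mass, user_gravity_kernel)[:2])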

