Efficient large-scale replica-exchange simulations on production infrastructure

Author(s): Abhinav Thota, André Luckow, Shantenu Jha

Replica-exchange (RE) algorithms are used to understand physical phenomena, ranging from protein folding dynamics to binding affinity calculations. They represent a class of algorithms that involve a large number of loosely coupled ensembles and are thus amenable to using distributed resources. We develop a framework for RE that supports different replica pairing (synchronous versus asynchronous) and exchange coordination mechanisms (centralized versus decentralized) and that can use a range of production cyberinfrastructures concurrently. We characterize the performance of both RE algorithms at unprecedented scales, in terms of both the number of replicas and the typical number of cores per replica employed, on production distributed infrastructure. We find that the asynchronous algorithms outperform the synchronous algorithms, although the details of the specific implementations are important determinants of performance.
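To make the synchronous variant concrete, here is a minimal Python sketch of a centralized parallel-tempering exchange step using the standard Metropolis acceptance criterion. It illustrates the general scheme only, not the authors' framework, and all names in it are made up.

    import math
    import random

    def exchange_probability(e_i, e_j, beta_i, beta_j):
        # Metropolis criterion for swapping the configurations held at
        # inverse temperatures beta_i and beta_j, with energies e_i, e_j.
        return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))

    def synchronous_exchange_step(energies, betas):
        # One centralized, synchronous exchange sweep: all replicas pause
        # at a barrier, then neighbouring temperature pairs attempt a swap.
        # order[t] is the index of the replica currently at temperature t.
        order = list(range(len(betas)))
        for t in range(len(betas) - 1):
            if random.random() < exchange_probability(
                    energies[order[t]], energies[order[t + 1]],
                    betas[t], betas[t + 1]):
                order[t], order[t + 1] = order[t + 1], order[t]
        return order

    energies = [random.uniform(-10.0, 0.0) for _ in range(8)]
    betas = [1.0 / t for t in (1.0, 1.2, 1.5, 1.9, 2.4, 3.0, 3.7, 4.5)]
    print(synchronous_exchange_step(energies, betas))

In the asynchronous variants the paper studies, replicas would not all pause at such a barrier; exchanges are instead attempted as pairs of replicas become ready.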

2010, Vol 18 (2), pp. 77-92
Author(s): Gideon Juve, Ewa Deelman, Karan Vahi, Gaurang Mehta

The development of grid and workflow technologies has enabled complex, loosely coupled scientific applications to be executed on distributed resources. Many of these applications consist of large numbers of short-duration tasks whose runtimes are heavily influenced by delays in the execution environment. Such applications often perform poorly on the grid because of the large scheduling overheads commonly found there. In this paper we present a provisioning system based on multi-level scheduling that improves workflow runtime by reducing scheduling overheads. The system reserves resources for the exclusive use of the application and gives applications control over scheduling policies. We describe our experiences with the system when running a suite of real workflow-based applications in astronomy, earthquake science, and genomics. Provisioning resources with Corral ahead of workflow execution reduced the runtime of the astronomy application by up to 78% (45% on average), and that of a genome mapping application by an order of magnitude, compared to traditional methods. We also show how provisioning can benefit applications on both a small local cluster and a large-scale campus resource.
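The multi-level idea can be illustrated with a toy pilot-job sketch: a provisioned slot pulls many short tasks from an application-level queue, so each task avoids a round trip through the grid scheduler. This is a schematic analogy only, not Corral's interface; all names below are invented.

    import queue
    import threading

    def pilot_worker(tasks, results):
        # A "pilot" holds a provisioned slot and drains short tasks from a
        # local queue, so no task pays the grid's per-job scheduling delay.
        while True:
            try:
                task = tasks.get_nowait()
            except queue.Empty:
                return
            results.append(task())  # run the task inside the reserved slot

    tasks = queue.Queue()
    for i in range(100):
        tasks.put(lambda i=i: i * i)  # stand-ins for short workflow tasks

    results = []
    workers = [threading.Thread(target=pilot_worker, args=(tasks, results))
               for _ in range(4)]    # 4 provisioned slots
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(len(results), "tasks executed inside provisioned slots")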


2021, Vol 54 (3), pp. 1-33
Author(s): Blesson Varghese, Nan Wang, David Bermbach, Cheol-Ho Hong, Eyal De Lara, ...

Edge computing is the next Internet frontier, one that will leverage computing resources located near users, sensors, and data stores to provide more responsive services. It is therefore envisioned that a large-scale, geographically dispersed, and resource-rich distributed system will emerge and play a key role in the future Internet. However, given the loosely coupled nature of such complex systems, their operational conditions are expected to change significantly over time. In this context, the performance characteristics of such systems will need to be captured rapidly for application deployment, resource orchestration, and adaptive decision-making; this is referred to as performance benchmarking. Edge performance benchmarking is a nascent research avenue that has gained momentum over the past five years. This article first reviews articles published over the past three decades to trace the history of performance benchmarking from tightly coupled to loosely coupled systems. It then systematically classifies previous research to identify the system under test, the techniques analyzed, and the benchmark runtime in edge performance benchmarking.


2022, Vol 12 (1), pp. 0-0

E-Governance is gaining momentum in India. Over the years, e-Governance has played a major part in every sphere of the economy. In this paper, we propose E-MODI (E-governance Model for Open Distributed Infrastructure), a centralized e-Governance system for the government of India. The implementation of this system is technically based on an open distributed infrastructure that brings the various government bodies into one single centralized unit. Our proposed model identifies three different patterns of cloud computing: DGC, SGC, and CGC. In addition, a readiness assessment determines which services need to migrate into the cloud. We also propose an energy-efficient VM allocation algorithm that achieves higher energy efficiency in large-scale cloud data centers when the system operates in optimum mode. Our objectives are explained in detail, and experiments were designed to demonstrate the robustness of the multi-layered security, which integrates the highly secure lightweight block cipher CSL with the BLAKE3 hashing function in order to maintain the information security triad.
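The abstract does not spell out the allocation algorithm, so the following is a hypothetical sketch of one common energy-aware placement heuristic (best fit by marginal power draw under a linear host power model); every name and constant in it is an assumption, not E-MODI's implementation.

    def power(util, p_idle=100.0, p_max=250.0):
        # Assumed linear host power model: idle draw plus a
        # utilization-proportional term. Not the paper's model.
        return p_idle + (p_max - p_idle) * util

    def allocate(vms, hosts):
        # Place each VM (largest first) on the feasible host whose power
        # draw increases the least; hosts maps id -> (used, capacity).
        placement = {}
        for vm_id, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
            best, best_delta = None, float("inf")
            for host_id, (used, cap) in hosts.items():
                if used + demand <= cap:
                    delta = power((used + demand) / cap) - power(used / cap)
                    if delta < best_delta:
                        best, best_delta = host_id, delta
            if best is None:
                raise RuntimeError("no capacity for VM " + vm_id)
            used, cap = hosts[best]
            hosts[best] = (used + demand, cap)
            placement[vm_id] = best
        return placement

    hosts = {"h1": (0.0, 16.0), "h2": (0.0, 16.0)}
    vms = {"vm1": 4.0, "vm2": 8.0, "vm3": 2.0}
    print(allocate(vms, hosts))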


Author(s): Makoto Yoshida, Kazumine Kojima

Large numbers of loosely coupled PCs can be organized into clusters and form desktop computing grids by sharing their processing power; the power of the PCs, transaction distributions, network scales, network delays, and code migration algorithms characterize the performance of such grids. This article describes design methodologies for workload management in distributed desktop computing grids. Based on code migration experiments, a transfer policy for computation was determined and several simulations of location policies were examined; the design methodologies for distributed desktop computing grids are derived from the simulation results. A language for distributed desktop computing is then designed to realize these methodologies.
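As an illustration of the two decisions involved, the sketch below separates a threshold-based transfer policy (whether to migrate a job) from a probing location policy (where to send it). The threshold, probe count, and names are illustrative assumptions, not the policies evaluated in the article.

    import random

    QUEUE_THRESHOLD = 5  # transfer policy: offload when the local queue exceeds this

    def should_transfer(local_queue_len):
        # Transfer policy: decide *whether* a newly arrived job should be
        # migrated instead of executed locally.
        return local_queue_len > QUEUE_THRESHOLD

    def pick_target(loads, probes=3):
        # Location policy: probe a few random nodes and pick the least
        # loaded one (random probing is just one possible policy).
        candidates = random.sample(list(loads), min(probes, len(loads)))
        return min(candidates, key=loads.get)

    loads = {"pc1": 7, "pc2": 2, "pc3": 9, "pc4": 1}
    if should_transfer(loads["pc1"]):
        print("migrate job from pc1 to", pick_target(loads))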


Author(s): Ghalem Belalem, Naima Belayachi, Radjaa Behidji, Belabbes Yagoubi

Data grids are current solutions to the needs of large-scale systems and provide a set of geographically distributed resources. Their goal is to offer substantial capacity for parallel computation, ensure effective and rapid data access, improve availability, and tolerate failures. In such systems, however, these advantages are possible only through replication, which raises the problem of maintaining consistency among replicas of the same data set. Guaranteeing the reliability of a replica set requires strong coherence, which in turn penalizes performance. In this paper, the authors propose to study the influence of load balancing on replica quality. To this end, a hybrid consistency management service is developed, which combines the pessimistic and optimistic approaches and is extended by a load balancing service to improve service quality. The service is built on a two-level hierarchical model.
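A minimal sketch of the hybrid idea, under the assumption that the first hierarchy level is kept pessimistically (synchronously) consistent while the second level converges optimistically in the background; class and field names are invented for illustration.

    import threading

    class HybridReplicaSet:
        # Two-level hybrid consistency sketch: a small core of replicas is
        # updated pessimistically (synchronously, under a lock), while the
        # remaining replicas converge optimistically (asynchronously).

        def __init__(self, core, edge):
            self.core = core      # name -> value, kept strongly consistent
            self.edge = edge      # name -> value, updated lazily
            self.lock = threading.Lock()
            self.pending = []     # updates awaiting optimistic propagation

        def write(self, value):
            with self.lock:                  # pessimistic: all-or-nothing
                for name in self.core:
                    self.core[name] = value
                self.pending.append(value)   # optimistic: propagate later

        def propagate(self):
            # Background reconciliation of second-level replicas.
            while self.pending:
                value = self.pending.pop(0)
                for name in self.edge:
                    self.edge[name] = value

    rs = HybridReplicaSet({"r1": 0, "r2": 0}, {"r3": 0, "r4": 0})
    rs.write(42)
    rs.propagate()
    print(rs.core, rs.edge)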


Big Data, 2016, pp. 1555-1581
Author(s): Gueyoung Jung, Tridib Mukherjee

In the modern information era, the amount of data has exploded, and current trends indicate that it will continue to grow exponentially. This enormous amount of data, referred to as big data, has given rise to the problem of finding the needle in the haystack, i.e., extracting meaningful information from big data, and many researchers and practitioners are focusing on big data analytics to address it. One of the major issues in this regard is the computational requirement of big data analytics. In recent years, the proliferation of loosely coupled distributed computing infrastructures (e.g., modern public, private, and hybrid clouds, high performance computing clusters, and grids) has made high computing capability available for large-scale computation. This has allowed the execution of big data analytics to gather pace across organizations and enterprises. However, even with this computing capability, efficiently extracting valuable information from vast data remains a big challenge; unprecedented scalability of performance is required to execute big data analytics. A key question in this regard is how to maximally leverage the computing capabilities of such loosely coupled distributed infrastructures to ensure fast and accurate execution of big data analytics. To that end, this chapter focuses on synchronous parallelization of big data analytics over a distributed system environment to optimize performance.
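One concrete aspect of synchronous parallelization is that a barrier makes the slowest node dominate completion time, so work should be split in proportion to node capability. The sketch below illustrates that idea under assumed relative speeds; it is a generic illustration, not the chapter's actual optimization.

    from concurrent.futures import ThreadPoolExecutor

    def partition(n_items, speeds):
        # Split the workload in proportion to node speed so that, under a
        # synchronous (barrier) model, all nodes finish at roughly the
        # same time and none waits on a straggler.
        total = sum(speeds)
        sizes = [int(n_items * s / total) for s in speeds]
        sizes[-1] += n_items - sum(sizes)  # absorb rounding remainder
        return sizes

    def analyze(chunk):
        return sum(chunk)  # stand-in for a real analytics kernel

    data = list(range(10_000))
    speeds = [1.0, 2.0, 4.0]          # assumed relative node capabilities
    sizes = partition(len(data), speeds)

    chunks, start = [], 0
    for size in sizes:
        chunks.append(data[start:start + size])
        start += size

    with ThreadPoolExecutor(len(chunks)) as pool:
        partials = list(pool.map(analyze, chunks))  # implicit barrier at gather
    print(sum(partials))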


2016, Vol 13 (1), pp. 1-22
Author(s): Shuai Zhao, Bo Cheng, Le Yu, Shou-lu Hou, Yang Zhang, ...

With the development of the Internet of Things (IoT), large numbers of resources and applications atop them have emerged. However, most existing efforts are "silo" solutions in which the device and the application are tightly coupled. The paradigm for IoT and its corresponding infrastructure must move away from isolated solutions towards cooperative models. Recent works have focused on applying Service Oriented Architecture (SOA) to IoT service provisioning. Unlike traditional cyberspace services, which are oriented to a two-tuple problem domain, IoT services face a three-tuple problem domain of user requirements, cyberspace, and physical space. One challenge for existing works is the lack of an efficient mechanism to provision sensing information on demand in a loosely coupled, decentralized way and then dynamically coordinate the relevant services to respond rapidly to changes in the physical world. Another challenge is how to systematically and effectively access (plug in) heterogeneous devices without intrusive changes. This paper proposes a service provisioning platform that can access heterogeneous devices and expose device capabilities as lightweight services, and presents an event-based message interaction mode to facilitate the asynchronous, on-demand sharing of sensing information in a distributed, loosely coupled IoT environment. It provides the basic infrastructure for the IoT application pattern of high intra-domain autonomy and dynamic inter-domain coordination. The practicability of the platform is validated by experimental evaluations and a District Heating Control and Information System (DHCIS).
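The event-based interaction mode can be illustrated with a minimal topic-based publish/subscribe bus, sketched below: publishers and subscribers never reference each other directly, which is what keeps devices and applications loosely coupled. The class and topic names are invented, not the platform's API.

    from collections import defaultdict

    class EventBus:
        # Minimal topic-based publish/subscribe bus: devices publish
        # sensing events, applications subscribe by topic, and neither
        # side holds a reference to the other.

        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, payload):
            for handler in self.subscribers[topic]:
                handler(payload)  # on-demand delivery, no polling by the app

    bus = EventBus()
    bus.subscribe("heating/room42/temp",
                  lambda t: print("adjust valve, temp =", t))
    bus.publish("heating/room42/temp", 17.5)  # a device-side sensing event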


Electronics, 2019, Vol 8 (8), pp. 852
Author(s): Sajid Latif, Syed Mushhad Gilani, Rana Liaqat Ali, Misbah Liaqat, Kwang-Man Ko

The interconnected cloud (Intercloud) federation is an emerging paradigm that revolutionizes the scalable provision of geographically distributed resources as services. Large-scale distributed resources require well-coordinated and automated frameworks to facilitate service provision in a seamless and systematic manner. Standalone service providers must communicate and federate their cloud sites with other vendors to enable effectively unlimited pooling of resources. Pooling these resources serves the growing population of cloud users more efficiently, with uninterrupted service, and ensures improved Service Level Agreements (SLAs). However, research on Intercloud resource management is in its infancy, so standard interfaces, protocols, and uniform architectural components need to be developed for seamless interaction among federated clouds. In this study, we propose a distributed meta-brokering-enabled scheduling framework for provisioning user application services in a federated cloud environment. The modularized architecture of the proposed system, with uniform configuration across participating resource sites, orchestrates the critical operations of resource management effectively and forms the federation schema. Overlaid meta-brokering instances are implemented on top of local resource brokers to keep the global functionality isolated. These instances communicate in a P2P manner over an overlay topology to maintain decentralization, high scalability, and load manageability. The proposed framework has been implemented and evaluated by extending the Java-based CloudSim 3.0.3 simulation application programming interfaces (APIs). The presented results validate the proposed model and its efficiency in facilitating user application execution with the desired QoS parameters.
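A much-simplified sketch of the meta-brokering layer: an instance sits above local resource brokers and ranks the sites that can satisfy a request, here by a single price-like QoS attribute. The P2P overlay and the CloudSim integration are omitted, and all names are illustrative assumptions, not the framework's classes.

    class LocalBroker:
        def __init__(self, site, free_cores, price):
            self.site, self.free_cores, self.price = site, free_cores, price

        def can_host(self, cores):
            return self.free_cores >= cores

    class MetaBroker:
        # A meta-brokering instance above local resource brokers; peers
        # would exchange such state over a P2P overlay (omitted here).

        def __init__(self, brokers):
            self.brokers = brokers

        def schedule(self, cores):
            feasible = [b for b in self.brokers if b.can_host(cores)]
            if not feasible:
                return None  # would be forwarded to a peer meta-broker
            chosen = min(feasible, key=lambda b: b.price)  # simple QoS rank
            chosen.free_cores -= cores
            return chosen.site

    mb = MetaBroker([LocalBroker("siteA", 64, 0.12),
                     LocalBroker("siteB", 16, 0.08)])
    print(mb.schedule(32))  # -> siteA (siteB lacks the capacity)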


2019, Vol 9 (1)
Author(s): C. R. S. V. Boas, B. Focassio, E. Marinho, D. G. Larrude, M. C. Salvadori, ...

New techniques to manipulate the electronic properties of few-layer 2D materials, unveiling new physical phenomena as well as possibilities for new device applications, have brought renewed interest to these systems. The quest for reproducible methods for large-scale synthesis, as well as for the manipulation, characterization, and deeper understanding of these structures, is therefore a very active field of research. We here report the production of nitrogen-doped bilayer graphene in a fast single step (2.5 minutes), at reduced temperature (760 °C), using microwave plasma-enhanced chemical vapor deposition (MW-PECVD). Raman spectroscopy confirmed that nitrogen-doped bilayer structures were produced by this method. XPS analysis showed that we achieved control over the concentration of nitrogen dopants incorporated into the final samples. We performed state-of-the-art parameter-free simulations to investigate the cause of an unexpected splitting of the XPS signal as the concentration of nitrogen defects increased. We show that this splitting is due to the formation of interlayer bonds mediated by nitrogen defects on the layers of the material. The occurrence of these bonds may result in very specific electronic and mechanical properties of the bilayer structures.

