Usage of the Open Science Grid

Author(s): Brian Bockelman

Author(s): Christina Koch, Carrie Brown, Mats Rynge, Emelie Fuchs, Lauren Michael

2012, pp. 862-880
Author(s): Russ Miller, Charles Weeks

Grids are an emerging technology that links geographically and organizationally distributed resources (e.g., computer systems, data repositories, sensors, and imaging systems) in a fashion that is transparent to the user. The New York State Grid (NYS Grid) is an integrated computational and data grid that provides access to a wide variety of resources for users from around the world. NYS Grid can be accessed via a Web portal through which users work with their data sets and applications without needing to know which storage or computational devices are employed to solve their problems. Grid-enabled versions of the SnB and BnP programs, which implement the Shake-and-Bake method of molecular structure (SnB) and substructure (BnP) determination, respectively, have been deployed on NYS Grid. Further, through the Grid Portal, SnB has been run simultaneously on all computational resources on NYS Grid, as well as on more than 1,100 of the over 3,000 processors available through the Open Science Grid.
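Runs at this scale are possible because Shake-and-Bake trials are independent of one another. As an illustration only (the paper describes submission through the Grid Portal, not this mechanism), the sketch below shows how such an embarrassingly parallel batch might be queued to OSG resources using the HTCondor Python bindings (v2 submit API, HTCondor 8.9 or later); the snb executable name, its command-line flags, and the input file are hypothetical.

```python
import htcondor

# One job per Shake-and-Bake trial; HTCondor expands $(Process) to
# 0, 1, 2, ... for each queued job. Executable name and flags are
# hypothetical stand-ins, not taken from the paper.
description = htcondor.Submit({
    "executable": "snb",
    "arguments": "--trial $(Process) --input structure.hkl",
    "transfer_input_files": "structure.hkl",
    "output": "trial_$(Process).out",
    "error": "trial_$(Process).err",
    "log": "snb.log",
    "request_cpus": "1",
    "request_memory": "1GB",
})

# Queue 1100 independent trials, matching the scale reported above.
schedd = htcondor.Schedd()
result = schedd.submit(description, count=1100)
print(f"Submitted cluster {result.cluster()}")
```

Because each trial is self-contained, throughput simply scales with however many matching processors the grid can offer at the time.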


2012, Vol. 396 (5), pp. 052062
Author(s): Alain Roy

2008, Vol. 119 (6), pp. 062001
Author(s): B. Abbott, A. Baranovski, M. Diesburg, G. Garzoglio, T. Kurca, ...

2010, Vol. 219 (6), pp. 062024
Author(s): R. Pordes, the Open Science Grid Executive Board, J. Weichel

2014, Vol. 513 (3), pp. 032057
Author(s): T. Levshina, A. Guru

2013, Vol. 15 (4), pp. 20-29
Author(s): Gideon Juve, Mats Rynge, Ewa Deelman, Jens-S. Vöckler, G. Bruce Berriman

First Monday, 2007
Author(s): Paul Avery

In this paper I describe the creation and operation of the Open Science Grid (OSG [1]), a distributed shared cyberinfrastructure driven by the milestones of a diverse group of research communities. The effort is fundamentally collaborative, with domain scientists, computer scientists, and technology specialists and providers from more than 70 U.S. universities, national laboratories, and organizations contributing resources, tools, and expertise. The evolving OSG facility provides computing and storage resources for particle and nuclear physics, gravitational-wave experiments, digital astronomy, molecular genomics, nanoscience, and applied mathematics. The OSG consortium also partners with campus and regional grids, with large projects such as TeraGrid [2], Earth System Grid [3], and Enabling Grids for E-sciencE (EGEE [4]) in Europe, and with related efforts in South America and Asia to facilitate interoperability across national and international boundaries. OSG's experience broadly illustrates the breadth and scale of effort that a diverse, evolving collaboration must undertake to build and sustain large-scale cyberinfrastructure serving multiple communities. Scalability, in resource size, number of member organizations, and application diversity, remains a central concern. As a result, many interesting [5] challenges continue to emerge, and their resolution requires engaged partners and creative adjustments.


2007, Vol. 15 (4), pp. 249-268
Author(s): Gurmeet Singh, Karan Vahi, Arun Ramakrishnan, Gaurang Mehta, Ewa Deelman, ...

In this paper we examine the problem of optimizing disk usage and scheduling large-scale, data-intensive scientific workflows onto distributed resources with limited storage. Our approach is two-fold: we minimize the amount of space a workflow requires during execution by removing data files at runtime as soon as they are no longer needed, and we demonstrate that workflows may have to be restructured to reduce their overall data footprint. We show the results of our data management and workflow restructuring solutions using a Laser Interferometer Gravitational-Wave Observatory (LIGO) application and an astronomy application, Montage, running on a large-scale production grid, the Open Science Grid. We show that dynamic data cleanup techniques alone reduce the data footprint of Montage by 48%, whereas LIGO Scientific Collaboration workflows require additional restructuring to achieve a 56% reduction in data space usage. We also examine the cost of the workflow restructuring in terms of the application's runtime.
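The dynamic-cleanup idea can be made concrete with a small sketch (not the paper's implementation, which is built into a production workflow system): walk the workflow in topological order, record the last task that touches each file, and delete the file as soon as that task finishes, keeping only final products. All task and file names below are illustrative.

```python
from collections import defaultdict

# Toy three-stage workflow in the spirit of Montage; names are
# illustrative, not taken from the paper.
tasks = {
    "reproject":  {"reads": ["raw.fits"],  "writes": ["proj.fits"]},
    "background": {"reads": ["proj.fits"], "writes": ["corr.fits"]},
    "coadd":      {"reads": ["corr.fits"], "writes": ["mosaic.fits"]},
}
order = ["reproject", "background", "coadd"]  # topological order of the DAG
final_products = {"mosaic.fits"}              # outputs that must survive

# The last task (in execution order) to touch a file marks the point
# after which that file can be safely removed.
last_use = {}
for task in order:
    for f in tasks[task]["reads"] + tasks[task]["writes"]:
        last_use[f] = task

# Attach a cleanup action to that task for every non-final file.
cleanup_after = defaultdict(list)
for f, task in last_use.items():
    if f not in final_products:
        cleanup_after[task].append(f)

for task in order:
    print(f"run {task}")
    for f in cleanup_after[task]:
        print(f"  cleanup: remove {f}")
```

In a real workflow system the cleanup actions become extra nodes in the DAG, dependent on the last consumers of each file, which is what keeps the peak storage footprint low while the workflow runs.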

