A run control framework to streamline profiling, porting, and tuning simulation runs and provenance tracking of geoscientific applications

2018 ◽  
Vol 11 (7) ◽  
pp. 2875-2895
Author(s):  
Wendy Sharples ◽  
Ilya Zhukov ◽  
Markus Geimer ◽  
Klaus Goergen ◽  
Sebastian Luehrs ◽  
...  

Abstract. Geoscientific modeling is constantly evolving, with next-generation geoscientific models and applications placing large demands on high-performance computing (HPC) resources. These demands are being met by new developments in HPC architectures, software libraries, and infrastructures. In addition to the challenge of new massively parallel HPC systems, reproducibility of simulation and analysis results is of great concern, because next-generation geoscientific models are based on complex model implementations and on profiling, modeling, and data processing workflows. Thus, in order to reduce both the duration and the cost of code migration and to aid in the development of new models or model components, while ensuring reproducibility and sustainability over the complete data life cycle, an automated approach to profiling, porting, and provenance tracking is necessary. We propose a run control framework (RCF) integrated with a workflow engine as a best practice approach to automating profiling, porting, provenance tracking, and simulation runs. To address these issues, our RCF encompasses all stages of the modeling chain: (1) input preprocessing, (2) code compilation (including code instrumentation with performance analysis tools), (3) the simulation run, and (4) postprocessing and analysis. Within this RCF, the workflow engine is used to create and manage benchmark or simulation parameter combinations, and it performs the documentation and data organization needed for reproducibility. In this study, we outline this approach and highlight subsequent developments, born out of the extensive profiling of ParFlow, that are scheduled for implementation. We show that in using our run control framework, testing, benchmarking, profiling, and running models is less time consuming and more robust than running geoscientific applications in an ad hoc fashion, resulting in more efficient use of HPC resources, more strategic code development, and enhanced data integrity and reproducibility.
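The abstract does not give implementation details, but the core idea of a workflow engine managing benchmark parameter combinations can be illustrated with a minimal Python sketch. Everything here (the function names, the example parameters, the directory layout) is a hypothetical stand-in, not the paper's actual framework:

```python
import itertools
import json
from pathlib import Path

# Hypothetical benchmark space: names and values are illustrative only,
# not taken from the paper.
PARAMETER_SPACE = {
    "processor_topology": ["2x2", "4x4", "8x8"],
    "grid_cells": [1_000_000, 8_000_000],
    "instrumentation": ["none", "profiling"],
}

def expand_runs(space):
    """Expand a parameter space into one settings dict per benchmark run."""
    names = sorted(space)
    for values in itertools.product(*(space[n] for n in names)):
        yield dict(zip(names, values))

def prepare_run_directories(space, root="runs"):
    """Create one directory per combination and record its settings,
    so every run documents itself (a simple form of provenance)."""
    for i, params in enumerate(expand_runs(space)):
        run_dir = Path(root) / f"run_{i:04d}"
        run_dir.mkdir(parents=True, exist_ok=True)
        (run_dir / "params.json").write_text(json.dumps(params, indent=2))

if __name__ == "__main__":
    prepare_run_directories(PARAMETER_SPACE)
```

The point of the sketch is the separation of concerns the abstract describes: the engine, not the scientist, enumerates the combinations and leaves a machine-readable record of each one.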

2017 ◽  
Author(s):  
Wendy Sharples ◽  
Ilya Zhukov ◽  
Markus Geimer ◽  
Klaus Goergen ◽  
Stefan Kollet ◽  
...  

Abstract. Geoscientific modeling is constantly evolving, with next-generation geoscientific models and applications placing high demands on high-performance computing (HPC) resources. These demands are being met by new developments in HPC architectures, software libraries, and infrastructures. New HPC developments require new programming paradigms, leading to substantial investment in porting, tuning, and refactoring complicated legacy code in order to use these resources effectively. In addition to the challenge of new massively parallel HPC systems, reproducibility of simulation and analysis results is of great concern, as next-generation geoscientific models are based on complex model implementations and on profiling, modeling, and data processing workflows. Thus, in order to reduce both the duration and the cost of code migration and to aid in the development of new models or model components, while ensuring reproducibility and sustainability over the complete data life cycle, a streamlined approach to profiling, porting, and provenance tracking is necessary. To address these issues, we propose a run control framework (RCF) integrated with a workflow engine, encompassing all stages of the modeling chain: (1) preprocessing of input, (2) compilation of code (including code instrumentation with performance analysis tools), (3) the simulation run, and (4) postprocessing and analysis. Within this RCF, the workflow engine is used to create and manage benchmark or simulation parameter combinations and performs the documentation and data organization for reproducibility. This approach automates the process of porting and tuning, profiling, testing, and running a geoscientific model. We show that in using our run control framework, testing, benchmarking, profiling, and running models is less time consuming and more robust, resulting in more efficient use of HPC resources, more strategic code development, and enhanced data integrity and reproducibility.
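The provenance-tracking side of such a framework often reduces to writing a machine-readable record per run: checksums of the exact inputs plus the execution environment. A minimal Python sketch of that building block follows; the record fields and function names are assumptions for illustration, not the paper's schema:

```python
import hashlib
import json
import platform
import time
from pathlib import Path

def file_checksum(path):
    """SHA-256 of an input file, tying a run to its exact inputs."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_provenance(run_dir, input_files, extra=None):
    """Record when, where, and with which inputs a run was executed.
    The field names here are illustrative, not the paper's schema."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "host": platform.node(),
        "python": platform.python_version(),
        "inputs": {str(p): file_checksum(p) for p in input_files},
    }
    record.update(extra or {})
    Path(run_dir, "provenance.json").write_text(json.dumps(record, indent=2))
```

With such records in place, any result file can be traced back to the inputs and environment that produced it, which is the reproducibility guarantee the abstract argues for.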


2015 ◽  
Author(s):  
Pierre Carrier ◽  
Bill Long ◽  
Richard Walsh ◽  
Jef Dawson ◽  
Carlos P. Sosa ◽  
...  

High-performance computing (HPC) best practice offers opportunities to apply lessons learned from areas such as computational chemistry and physics to genomics workflows, specifically next-generation sequencing (NGS) workflows. In this study we will briefly describe how distributed-memory parallelism can be an important enhancement to the performance and resource utilization of NGS workflows. We will illustrate this point by showing results on the parallelization of the Inchworm module of the Trinity RNA-Seq pipeline for de novo transcriptome assembly. We show that these types of applications can scale to thousands of cores. Time scaling as well as memory scaling will be discussed at length using two RNA-Seq datasets, targeting Mus musculus (mouse) and the axolotl (Mexican salamander). Details about the efficient MPI communication and its impact on performance will also be shown. We hope to demonstrate that this type of parallelization approach can be extended to most types of bioinformatics workflows, with substantial benefits. The efficient, distributed-memory parallel implementation eliminates memory bottlenecks and dramatically accelerates NGS analysis. We further include a summary of programming paradigms available to the bioinformatics community, such as C++/MPI.
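The published implementation is C++/MPI; purely to illustrate the distributed-memory idea behind a parallel k-mer table like Inchworm's, here is a toy mpi4py sketch. Each k-mer is assigned to an owning rank by a deterministic hash, so the global table is partitioned across nodes rather than held in one node's memory. The reads are made up, and this is a conceptual stand-in, not the paper's code:

```python
from collections import Counter
import zlib

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

K = 5
# Stand-in for this rank's share of the reads; a real run would read a
# FASTQ partition from disk.
local_reads = ["ACGTACGTAC", "TTGCAACGTA"] if rank == 0 else ["ACGTATTGCA"]

# Route every k-mer to the rank that "owns" it (hash-based partitioning),
# so each distinct k-mer is counted in exactly one place.
outbound = [[] for _ in range(size)]
for read in local_reads:
    for i in range(len(read) - K + 1):
        kmer = read[i:i + K]
        # crc32 is deterministic across processes (unlike Python's salted
        # hash()), so all ranks agree on who owns a given k-mer.
        outbound[zlib.crc32(kmer.encode()) % size].append(kmer)

# All-to-all exchange: each rank receives exactly the k-mers it owns.
inbound = comm.alltoall(outbound)
counts = Counter(kmer for batch in inbound for kmer in batch)
print(f"rank {rank}: owns {len(counts)} distinct k-mers")
```

Because no rank ever needs the whole table, memory use per node drops as ranks are added, which is the memory-scaling behavior the abstract reports.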


2016 ◽  
Vol 45 (6) ◽  
pp. 1240-1258 ◽  
Author(s):  
Ashlea Kellner ◽  
Keith Townsend ◽  
Adrian Wilkinson ◽  
David Greenfield ◽  
Sandra Lawrence

Purpose: The purpose of this paper is to develop understanding of the "HRM process" as defined by Bowen and Ostroff (2004). The authors clarify the construct of "HRM philosophy" and suggest it is communicated to employees through "HRM messages". Interrelationships between these concepts and other elements of the HRM-performance relationship are explored. The study identifies commonalities in the HRM philosophy and messages underscoring high-performing HRM systems, and highlights the function of a "messenger" in delivering messages to staff.
Design/methodology/approach: A case study of eight Australian hospitals with top-performing HRM systems, combining primary interview data with independent healthcare accreditor reports.
Findings: All cases share an HRM philosophy of achieving high-performance outcomes through the HRM system, and employees are provided with messages about continuous improvement, best practice and innovation. The philosophy was instilled primarily by executive-level managers, whereby distinctiveness, consensus and consistency of communications were important characteristics.
Research limitations/implications: The research is limited by the omission of low or average performers, a single-industry and single-country design, and the exclusion of employee perspectives.
Practical implications: The findings reinforce the importance of identifying the HRM philosophy and its key communicators within the organisation, and of ensuring it is aligned with strategy, climate and the HRM system, particularly during periods of organisational change.
Originality/value: The authors expand Bowen and Ostroff's seminal work and develop the concepts of HRM philosophy and messages, offering the model to clarify key relationships. The findings underscore problems associated with a best practice approach that disregards HRM process elements essential for optimising performance.


Author(s):  
Sarah Richmond ◽  
Chantal Huijbers

Recent technologies have enabled consistent and continuous collection of ecological data at high resolutions across large spatial scales. The challenge remains, however, to bring these data together and expose them to methods and tools to analyse the interaction between biodiversity and the environment. These challenges are mostly associated with the accessibility, visibility and interoperability of data, and the technical computation needed to interpret the data. Australia has invested in digital research infrastructures through the National Collaborative Research Infrastructure Strategy (NCRIS). Here we present two platforms that provide easy access to global biodiversity, climate and environmental datasets integrated with a suite of analytical tools and linked to high-performance cloud computing infrastructure. The Biodiversity and Climate Change Virtual Laboratory (BCCVL) is a point-and-click online platform for modelling species responses to environmental conditions, which provides an easy introduction to the scientific concepts of models without the need for the user to understand the underlying code. For ecologists who write their own modelling scripts, we have developed ecocloud: a new online environment that provides access to data connected with command-line analysis tools like RStudio and Jupyter Notebooks as well as a virtual desktop environment using Australia's national cloud computing infrastructure. ecocloud is built through collaborations among key facilities within the ecosciences domain, establishing a collective long-term vision of creating an ecosystem of infrastructure that provides the capability to enable reliable prediction of future environmental outcomes. Underpinning these tools is an innovative training program, ecoEd, which provides cohesive training and skill development to enhance the translation of Australia's digital research infrastructures to the ecoscience community by educating and upskilling the next generation of environmental scientists and managers. Both of these platforms are built using a best-practice microservice model that allows for complete flexibility, scalability and stability in a cloud environment. Both the BCCVL and ecocloud are open-source developments and provide opportunities for interoperability with other platforms (e.g. Atlas of Living Australia). In Australia, the same technical infrastructure is also used for a platform for the humanities and social science domain, indicating that the underlying technologies are not domain specific. We therefore welcome collaborations with other organisations to further develop these platforms for the wider bio- and ecoinformatics community. This presentation will showcase the tools, services, and underpinning infrastructure alongside our training and engagement framework as an exemplar in building platforms for next generation biodiversity science.
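To give a flavour of the kind of open biodiversity data such platforms aggregate, the short Python sketch below queries the public GBIF occurrence API. This is illustrative only: GBIF stands in for the open data services mentioned in the abstract, and this is not the BCCVL, ecocloud, or Atlas of Living Australia API:

```python
import requests

# Public GBIF occurrence search endpoint; used here only as an example of
# an open biodiversity data service, not as the platforms' own API.
GBIF_SEARCH = "https://api.gbif.org/v1/occurrence/search"

def fetch_occurrences(species, limit=10):
    """Fetch up to `limit` Australian occurrence records for a species."""
    resp = requests.get(
        GBIF_SEARCH,
        params={"scientificName": species, "country": "AU", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    # Example species chosen arbitrarily for illustration.
    for rec in fetch_occurrences("Phascolarctos cinereus"):
        print(rec.get("decimalLatitude"), rec.get("decimalLongitude"))
```

Records like these (species name plus coordinates) are exactly the input that point-and-click species distribution modelling tools such as the BCCVL combine with environmental layers.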


No other talent process has been the subject of such great debate and emotion as performance management (PM). For decades, different strategies have been tried to improve PM processes, yielding an endless cycle of reform to capture the next “flavor-of-the-day” PM trend. The past 5 years, however, have brought novel thinking that is different from past trends. Companies are reducing their formal processes, driving performance-based cultures, and embedding effective PM behavior into daily work rather than relying on annual reviews to drive these. Through case studies provided from leading organizations, this book illustrates the range of PM processes that companies are using today. These show a shift away from adopting someone else’s best practice; instead, companies are designing bespoke PM processes that fit their specific strategy, climate, and needs. Leading PM thought leaders offer their views about the state of PM today, what we have learned and where we need to focus future efforts, including provocative new research that shows what matters most in driving high performance. This book is a call to action for talent management professionals to go beyond traditional best practice and provide thought leadership in designing PM processes and systems that will enhance both individual and organizational performance.


Soft Matter ◽  
2021 ◽  
Author(s):  
Yang Yu ◽  
Fengjin Xie ◽  
Xinpei Gao ◽  
Liqiang Zheng

The next generation of high-performance flexible electronics has put forward new demands to the development of ionic conductive hydrogels. In recent years, many efforts have been made toward developing double-network...


Author(s):  
Chenhui WANG ◽  
Nobuyuki Sakai ◽  
Yasuo Ebina ◽  
Takayuki KIKUCHI ◽  
Monika Snowdon ◽  
...  

Lithium-sulfur batteries have high promise for application in next-generation energy storage. However, further advances have been hindered by various intractable challenges, particularly three notorious problems: the “shuttle effect”, sluggish kinetics...


Crystals ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 229
Author(s):  
Roberto Bergamaschini ◽  
Elisa Vitiello

The quest for high-performance and scalable devices required for next-generation semiconductor applications inevitably passes through the fabrication of high-quality materials and complex designs [...]

