Calibration approaches for distributed hydrologic models using high performance computing: implication for streamflow projections under climate change

2014, Vol. 11 (9), pp. 10273-10317
Author(s): S. Wi, Y. C. E. Yang, S. Steinschneider, A. Khalil, C. M. Brown

Abstract. This study utilizes high performance computing to test the performance and uncertainty of calibration strategies for a spatially distributed hydrologic model in order to improve model simulation accuracy and understand prediction uncertainty at interior ungaged sites of a sparsely gaged watershed. The study is conducted using a distributed version of the HYMOD hydrologic model (HYMOD_DS) applied to the Kabul River basin. Several calibration experiments are conducted to understand the benefits and costs associated with different calibration choices, including (1) whether multisite gaged data should be used simultaneously or in a step-wise manner during model fitting, (2) the effects of increasing parameter complexity, and (3) the potential to estimate interior watershed flows using only gaged data at the basin outlet. The implications of the different calibration strategies are considered in the context of hydrologic projections under climate change. Several interesting results emerge from the study. The simultaneous use of multisite data is shown to improve the calibration over a step-wise approach, and both multisite approaches far exceed a calibration based on only the basin outlet. The basin outlet calibration can lead to projections of mid-21st century streamflow that deviate substantially from projections under multisite calibration strategies, supporting the use of caution when using distributed models in data-scarce regions for climate change impact assessments. Surprisingly, increased parameter complexity does not substantially increase the uncertainty in streamflow projections, even though parameter equifinality does emerge. The results suggest that increased (excessive) parameter complexity does not always lead to increased predictive uncertainty if structural uncertainties are present. The largest uncertainty in future streamflow results from variations in projected climate between climate models, which substantially outweighs the calibration uncertainty.
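To make the simultaneous multisite strategy concrete, the sketch below is an illustration only, not the authors' HYMOD_DS code: the model() stand-in, the gauge keys, and the equal weighting are assumptions. It aggregates a Nash-Sutcliffe score across all gaged sites into a single calibration objective.

    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1.0 indicates a perfect fit to observations."""
        sim = np.asarray(sim, dtype=float)
        obs = np.asarray(obs, dtype=float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def multisite_objective(params, model, forcing, observed_by_gauge):
        """Single objective for simultaneous multisite calibration.

        model(params, forcing) is a hypothetical stand-in for a distributed
        rainfall-runoff simulation returning flow series keyed by gauge name.
        """
        simulated_by_gauge = model(params, forcing)
        scores = [nse(simulated_by_gauge[gauge], obs)
                  for gauge, obs in observed_by_gauge.items()]
        # Equal weights across gauges assumed; minimize the negative mean NSE.
        return -float(np.mean(scores))

A step-wise variant would instead fit each upstream sub-basin to its own gauge first and hold those parameters fixed while calibrating downstream reaches, which is the alternative the study compares against.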

2015, Vol. 19 (2), pp. 857-876
Author(s): S. Wi, Y. C. E. Yang, S. Steinschneider, A. Khalil, C. M. Brown

Abstract. This study tests the performance and uncertainty of calibration strategies for a spatially distributed hydrologic model in order to improve model simulation accuracy and understand prediction uncertainty at interior ungaged sites of a sparsely gaged watershed. The study is conducted using a distributed version of the HYMOD hydrologic model (HYMOD_DS) applied to the Kabul River basin. Several calibration experiments are conducted to understand the benefits and costs associated with different calibration choices, including (1) whether multisite gaged data should be used simultaneously or in a stepwise manner during model fitting, (2) the effects of increasing parameter complexity, and (3) the potential to estimate interior watershed flows using only gaged data at the basin outlet. The implications of the different calibration strategies are considered in the context of hydrologic projections under climate change. To address the research questions, high-performance computing is utilized to manage the computational burden that results from high-dimensional optimization problems. Several interesting results emerge from the study. The simultaneous use of multisite data is shown to improve the calibration over a stepwise approach, and both multisite approaches far exceed a calibration based on only the basin outlet. The basin outlet calibration can lead to projections of mid-21st century streamflow that deviate substantially from projections under multisite calibration strategies, supporting the use of caution when using distributed models in data-scarce regions for climate change impact assessments. Surprisingly, increased parameter complexity does not substantially increase the uncertainty in streamflow projections, even though parameter equifinality does emerge. The results suggest that increased (excessive) parameter complexity does not always lead to increased predictive uncertainty if structural uncertainties are present. The largest uncertainty in future streamflow results from variations in projected climate between climate models, which substantially outweighs the calibration uncertainty.
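The high-performance computing referred to above addresses the cost of evaluating many candidate parameter sets during high-dimensional optimization. A minimal, hedged sketch of that pattern (process-level parallelism with a placeholder objective, not the authors' cluster setup) is:

    from concurrent.futures import ProcessPoolExecutor

    import numpy as np

    def evaluate(params):
        """Hypothetical stand-in for one full model run: in practice this would
        drive the distributed model and return its multisite objective value."""
        params = np.asarray(params, dtype=float)
        return float(np.sum((params - 0.5) ** 2))  # placeholder objective

    def calibrate_parallel(candidate_sets, workers=8):
        """Score independent candidate parameter sets in parallel; keep the best."""
        with ProcessPoolExecutor(max_workers=workers) as pool:
            scores = list(pool.map(evaluate, candidate_sets))
        best = int(np.argmin(scores))
        return candidate_sets[best], scores[best]

    if __name__ == "__main__":
        rng = np.random.default_rng(seed=42)
        candidates = list(rng.uniform(0.0, 1.0, size=(500, 12)))  # 12 parameters assumed
        best_params, best_score = calibrate_parallel(candidates)
        print(best_score)

In practice each evaluation would launch a full HYMOD_DS simulation, and the candidate sets would come from the chosen search algorithm rather than the random sampling used here for brevity.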


Green computing is a contemporary research topic that addresses climate and energy challenges. In this chapter, the authors examine the duality between green computing and technological trends in other fields of computing, such as High Performance Computing (HPC) and cloud computing, on the one hand, and economy and business on the other. For instance, providing electricity for large-scale cloud infrastructures and reaching exascale computing requires enormous amounts of energy, so green computing is a challenge for the future of cloud computing and HPC. Conversely, clouds and HPC provide solutions for green computing and climate change. The authors discuss this proposition by looking at the technology in detail.


2013, Vol. 98, pp. 131-135
Author(s): Jean-André Vital, Michael Gaurut, Romain Lardy, Nicolas Viovy, Jean-François Soussana, ...

2020
Author(s): Maria Moreno de Castro, Stephan Kindermann, Sandro Fiore, Paola Nassisi, Guillaume Levavasseur, ...

Earth System observational and model data volumes are constantly increasing, and it can be challenging to discover, download, and analyze data if scientists do not have the required computing and storage resources at hand. This is especially the case for detection and attribution studies in climate change research, which require multi-source and cross-disciplinary comparisons for datasets with high spatial and large temporal coverage. Researchers and end users are therefore looking for access to cloud solutions and high performance computing facilities. The Earth System Grid Federation (ESGF, https://esgf.llnl.gov/) maintains a global system of federated data centers that provide access to the largest archive of model climate data worldwide. ESGF portals provide free access to the output of the Coupled Model Intercomparison Project, which contributes to the next assessment report of the Intergovernmental Panel on Climate Change. To give users direct access to high performance computing facilities for analyses such as the detection and attribution of climate change and its impacts, the EU Commission funded a new service within the infrastructure of the European Network for Earth System Modelling (ENES, https://portal.enes.org/data/data-metadata-service/analysis-platforms). This service is designed to reduce data transfer issues, speed up computational analysis, provide storage, and ensure resource access and maintenance. Furthermore, the service is free of charge and only requires a lightweight application. We will present a demo showing how flexibly climate indices can be calculated from different ESGF datasets covering a wide range of temporal and spatial scales, using cdo (Climate Data Operators, https://code.mpimet.mpg.de/projects/cdo/) and Jupyter notebooks running directly at the ENES partner high performance computing centers: DKRZ (Germany), JASMIN (UK), CMCC (Italy), and IPSL (France).
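As a hedged illustration of the notebook workflow sketched in the abstract (the input file name and the chosen index are assumptions for the example, not part of the abstract), a simple climate index such as the annual global mean near-surface temperature can be computed with the python-cdo bindings inside a Jupyter notebook:

    from cdo import Cdo  # python-cdo wrapper around the Climate Data Operators

    cdo = Cdo()

    # Hypothetical CMIP-style file name; on the ENES analysis platforms the data
    # pools are mounted locally, so no download is needed.
    infile = "tas_Amon_MODEL_historical_r1i1p1f1_gn_185001-201412.nc"

    # Operator chain: select near-surface air temperature, average over the
    # globe, then aggregate to annual means; the result is a small NetCDF file.
    cdo.yearmean(input=f"-fldmean -selname,tas {infile}",
                 output="tas_annual_global_mean.nc")

The equivalent shell command, cdo yearmean -fldmean -selname,tas <infile> <outfile>, runs the same operator chain outside the notebook.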


MRS Bulletin, 1997, Vol. 22 (10), pp. 5-6
Author(s): Horst D. Simon

Recent events in the high-performance computing industry have left scientists and the general public concerned about a crisis, or a lack of leadership, in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

