Flood Prediction Model Simulation With Heterogeneous Trade-Offs In High Performance Computing Framework

Author(s):  
Antonio Portero ◽  
Radim Vavrik ◽  
Stepan Kuchar ◽  
Martin Golasowski ◽  
Vit Vondrak ◽  
...  


2021 ◽  
Author(s):  
Mohsen Hadianpour ◽  
Ehsan Rezayat ◽  
Mohammad-Reza Dehaqani

Abstract Due to drastic progress in neurophysiological recording technologies, neuroscientists face various complexities in dealing with unstructured large-scale neural data. In the neuroscience community, these complexities can create serious bottlenecks in storing, sharing, and processing neural datasets. In this article, we developed a distributed high-performance computing (HPC) framework called the Big Neuronal Data Framework (BNDF) to overcome these complexities. BNDF is based on the open-source big data frameworks Hadoop and Spark, providing a flexible and scalable structure. We examined BNDF on three different large-scale electrophysiological recording datasets from nonhuman primate brains. Our results exhibit faster runtimes and scalability due to the distributed nature of BNDF. Compared with a widely used platform such as MATLAB on equivalent computational resources, BNDF performs spike sorting, a common neuroscience application, more than five times faster.
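As a concrete illustration of the map-style parallelism that Hadoop and Spark provide, the sketch below distributes a simple spike-detection step over recording channels with PySpark. This is a minimal sketch, not BNDF's actual API: the file layout, data path, and threshold rule are assumptions for illustration.

```python
# Minimal sketch (not BNDF's actual API): distributing a simple
# spike-detection step over channels with PySpark. Assumes one
# NumPy .npy file per recording channel in a shared directory.
import glob
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spike-detection-sketch").getOrCreate()
sc = spark.sparkContext

def detect_spikes(path, k=4.0):
    """Return (channel file, spike sample indices) using a simple
    k * robust-std threshold-crossing rule."""
    x = np.load(path)
    sigma = np.median(np.abs(x)) / 0.6745       # robust noise estimate
    threshold = -k * sigma                      # negative-going spikes
    crossings = np.flatnonzero((x[1:] < threshold) & (x[:-1] >= threshold))
    return path, crossings.tolist()

# Hypothetical data layout: one file per electrode channel.
channel_files = glob.glob("/data/session01/channel_*.npy")
spikes = sc.parallelize(channel_files).map(detect_spikes).collect()
spark.stop()
```

A real spike-sorting pipeline adds filtering, waveform extraction, and clustering; the point here is only that each channel becomes an independent task, so the work scales out with the number of executors, consistent with the scalability reported above.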


2016 ◽  
Vol 141 ◽  
pp. 22-30 ◽  
Author(s):  
Shuangshuang Jin ◽  
Yousu Chen ◽  
Ruisheng Diao ◽  
Zhenyu (Henry) Huang ◽  
William Perkins ◽  
...  

2014 ◽  
Vol 11 (9) ◽  
pp. 10273-10317 ◽  
Author(s):  
S. Wi ◽  
Y. C. E. Yang ◽  
S. Steinschneider ◽  
A. Khalil ◽  
C. M. Brown

Abstract. This study utilizes high performance computing to test the performance and uncertainty of calibration strategies for a spatially distributed hydrologic model in order to improve model simulation accuracy and understand prediction uncertainty at interior ungaged sites of a sparsely-gaged watershed. The study is conducted using a distributed version of the HYMOD hydrologic model (HYMOD_DS) applied to the Kabul River basin. Several calibration experiments are conducted to understand the benefits and costs associated with different calibration choices, including (1) whether multisite gaged data should be used simultaneously or in a step-wise manner during model fitting, (2) the effects of increasing parameter complexity, and (3) the potential to estimate interior watershed flows using only gaged data at the basin outlet. The implications of the different calibration strategies are considered in the context of hydrologic projections under climate change. Several interesting results emerge from the study. The simultaneous use of multisite data is shown to improve the calibration over a step-wise approach, and both multisite approaches far exceed a calibration based on only the basin outlet. The basin outlet calibration can lead to projections of mid-21st century streamflow that deviate substantially from projections under multisite calibration strategies, supporting the use of caution when using distributed models in data-scarce regions for climate change impact assessments. Surprisingly, increased parameter complexity does not substantially increase the uncertainty in streamflow projections, even though parameter equifinality does emerge. The results suggest that increased (excessive) parameter complexity does not always lead to increased predictive uncertainty if structural uncertainties are present. The largest uncertainty in future streamflow results from variations in projected climate between climate models, which substantially outweighs the calibration uncertainty.
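The abstract does not reproduce the calibration code; as a hedged sketch of the "simultaneous multisite" strategy it describes, the snippet below aggregates Nash-Sutcliffe efficiency (NSE) across all gaged sites into one objective and hands it to a global optimizer. The model stub, site names, and parameter bounds are hypothetical stand-ins for HYMOD_DS and the Kabul River gages.

```python
# Hypothetical sketch of a simultaneous multisite calibration objective.
# `toy_model` stands in for the real distributed model, assumed to map a
# parameter vector to simulated flow series at every gaged site.
import numpy as np
from scipy.optimize import differential_evolution

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multisite_objective(params, model, observed):
    """Average NSE over all gaged sites, negated for a minimizer.
    Using all sites in one objective is the 'simultaneous' strategy."""
    simulated = model(params)  # dict: site name -> simulated flow series
    return -np.mean([nse(simulated[s], observed[s]) for s in observed])

# Usage sketch with synthetic observations and a toy stand-in model:
rng = np.random.default_rng(0)
observed = {f"gage_{i}": rng.gamma(2.0, 5.0, 365) for i in range(3)}

def toy_model(p):
    # Toy stand-in: scaled and shifted copies of the observations.
    return {s: obs * p[0] + p[1] for s, obs in observed.items()}

result = differential_evolution(multisite_objective,
                                bounds=[(0.5, 1.5), (-5.0, 5.0)],
                                args=(toy_model, observed), seed=0)
print(result.x, -result.fun)  # best parameters and their mean NSE
```

A step-wise calibration would instead optimize sites one at a time, freezing upstream parameters between runs; a single aggregated objective is what lets the simultaneous strategy trade off errors across all sites at once.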


2020 ◽  
Vol 245 ◽  
pp. 07006 ◽  
Author(s):  
Cécile Cavet ◽  
Martin Souchal ◽  
Sébastien Gadrat ◽  
Gilles Grasseau ◽  
Andrea Sartirana ◽  
...  

The High Performance Computing (HPC) domain aims to optimize code in order to exploit the latest multicore and parallel technologies, including specific processor instructions. In this computing framework, portability and reproducibility are key concepts. One way to meet these requirements is to use Linux containers. These "light virtual machines" encapsulate an application together with its environment in Linux processes. Containers have recently been rediscovered for their ability to provide both a multi-infrastructure environment for developers and system administrators and reproducibility through image build files. Two container solutions are emerging: Docker for microservices and Singularity for computing applications. We present here the status of the ComputeOps project, which studies the benefits of containers for HPC applications.
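For readers unfamiliar with the two runtimes named above, this is a minimal sketch of launching the same containerized workload under Docker and under Singularity; the image name and the benchmark command are hypothetical placeholders, not ComputeOps artifacts.

```python
# Hedged sketch: running one HPC workload under both container
# runtimes named in the text. The image "myorg/hpc-bench:1.0"
# and the benchmark binary are hypothetical placeholders.
import subprocess

IMAGE = "myorg/hpc-bench:1.0"           # hypothetical image
CMD = ["mpirun", "-n", "4", "./bench"]  # hypothetical workload

def run_docker():
    # Docker: common for microservices; requires a root-owned daemon.
    subprocess.run(["docker", "run", "--rm", IMAGE] + CMD, check=True)

def run_singularity():
    # Singularity: daemonless and unprivileged at runtime, which is
    # why it is favored on shared HPC clusters. It can pull the same
    # image from a Docker registry.
    subprocess.run(["singularity", "exec", f"docker://{IMAGE}"] + CMD,
                   check=True)
```

Because the image pins the entire user-space environment, the same build file reproduces the run on a laptop and on a cluster, which is the portability and reproducibility argument made above; Singularity's daemonless, unprivileged execution model is what makes it the better fit for shared HPC systems.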


To improve the productivity of any task, we require a framework that provides high performance along with flexibility and cost effectiveness for the client. Cloud computing, as we are all aware, has become popular over the past decade. To build a high-performance distributed system, we need to leverage cloud computing. In this paper, we first give an introduction to high performance computing frameworks. By examining them, we then investigate trends in green and sustainable computing that can improve the performance of a cloud framework. Finally, after presenting the future scope, we conclude the paper by suggesting a way to achieve a green, high-performance cloud framework.


2016 ◽  
Vol 141 ◽  
pp. 372-380 ◽  
Author(s):  
Cosmin G. Petra ◽  
Victor M. Zavala ◽  
Elias D. Nino-Ruiz ◽  
Mihai Anitescu
