From offshore to onshore probabilistic tsunami hazard assessment via efficient Monte-Carlo sampling

2021 ◽  
Author(s):  
Gareth Davies ◽  
Rikki Weber ◽  
Kaya Wilson ◽  
Phil Cummins

Offshore Probabilistic Tsunami Hazard Assessments (offshore PTHAs) provide large-scale analyses of earthquake-tsunami frequencies and uncertainties in the deep ocean, but do not provide high-resolution onshore tsunami hazard information as required for many risk-management applications. To understand the implications of an offshore PTHA for the onshore hazard at any site, in principle the tsunami inundation should be simulated locally for every scenario in the offshore PTHA. In practice this is rarely feasible due to the computational expense of inundation models, and the large number of scenarios in offshore PTHAs. Monte-Carlo methods offer a practical and rigorous alternative for approximating the onshore hazard, using a random subset of scenarios. The resulting Monte-Carlo errors can be quantified and controlled, enabling high-resolution onshore PTHAs to be implemented at a fraction of the computational cost. This study develops novel Monte-Carlo sampling approaches for offshore-to-onshore PTHA. Modelled offshore PTHA wave heights are used to preferentially sample scenarios that have large offshore waves near an onshore site of interest. By appropriately weighting the scenarios, the Monte-Carlo errors are reduced without introducing any bias. The techniques are applied to a high-resolution onshore PTHA for the island of Tongatapu in Tonga. In this region, the new approaches lead to efficiency improvements equivalent to using 4–18 times more random scenarios, as compared with stratified sampling by magnitude, which is commonly used for onshore PTHA. The greatest efficiency improvements are for rare, large tsunamis, and for calculations that represent epistemic uncertainties in the tsunami hazard. To facilitate the control of Monte-Carlo errors in practical applications, this study also provides analytical techniques for estimating the errors both before and after inundation simulations are conducted.
Before inundation simulation, this enables a proposed Monte-Carlo sampling scheme to be checked, and potentially improved, at minimal computational cost. After inundation simulation, it enables the remaining Monte-Carlo errors to be quantified at onshore sites, without additional inundation simulations. In combination these techniques enable offshore PTHAs to be rigorously transformed into onshore PTHAs, with full characterisation of epistemic uncertainties, while controlling Monte-Carlo errors.
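
The core sampling idea described above, preferentially drawing scenarios with large offshore waves and then reweighting them so hazard estimates remain unbiased, can be sketched in a few lines of Python. The scenario rates, wave heights, and the 3 m threshold below are invented for illustration and are not taken from the study:

```python
import random

# Hypothetical scenario database: each entry is (annual_rate, offshore_wave_height_m).
# All numbers are illustrative placeholders, not values from the study.
random.seed(0)
scenarios = [(10 ** random.uniform(-6, -2), random.uniform(0.1, 5.0)) for _ in range(10000)]

# Importance-sample scenarios with probability proportional to rate * wave height,
# so scenarios with large offshore waves are preferentially chosen.
weights_unnorm = [r * h for r, h in scenarios]
wsum = sum(weights_unnorm)
probs = [w / wsum for w in weights_unnorm]

n = 500  # far fewer inundation simulations than the 10000 scenarios
picks = random.choices(range(len(scenarios)), weights=probs, k=n)

def exceedance_rate(threshold):
    """Monte-Carlo estimate of the annual rate of waves exceeding `threshold`."""
    est = 0.0
    for i in picks:
        rate, height = scenarios[i]
        if height > threshold:
            est += rate / (n * probs[i])  # importance weight keeps the estimate unbiased
    return est

direct = sum(r for r, h in scenarios if h > 3.0)  # exact value, for comparison
approx = exceedance_rate(3.0)
```

Because each sampled scenario is weighted by `rate / (n * selection probability)`, the estimator stays unbiased however the sampling probabilities are chosen, which is the property the abstract relies on.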

2016 ◽  
Vol 9 (11) ◽  
pp. 4185-4208 ◽  
Author(s):  
Reindert J. Haarsma ◽  
Malcolm J. Roberts ◽  
Pier Luigi Vidale ◽  
Catherine A. Senior ◽  
Alessio Bellucci ◽  
...  

Abstract. Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. 
The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950–2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, “what are the origins and consequences of systematic model biases?”, but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.


2009 ◽  
Vol 48 (10) ◽  
pp. 2069-2085 ◽  
Author(s):  
Cesar Azorin-Molina ◽  
Bernadette H. Connell ◽  
Rafael Baena-Calatrava

Abstract The aim of this study was to identify clear-air boundaries and to obtain the spatial distribution of convective areas associated with the sea breeze over the Iberian Mediterranean zone and the island of Mallorca, both in Spain. Daytime Advanced Very High Resolution Radiometer (AVHRR) data from National Oceanic and Atmospheric Administration (NOAA) polar-orbiting satellites were collected for May–October 2004. A cloud detection algorithm was used to identify clouds and derive daytime sea-breeze cloud-frequency composites over land. The high-resolution composites aided in identifying the location of five preferential sea-breeze convergence zones (SBCZ) in relation to the shape of the coastline and orographic effects. Additionally, eight regimes were designated using mean boundary layer wind speed and direction to provide statistics on the effect of prevailing large-scale flows on sea-breeze convection over the five SBCZ. The offshore SW to W and NW to N regimes were characterized by high cloud frequencies parallel to the coast. Small differences in mean cloud frequency between morning and afternoon composites were detected under these regimes because sea-breeze fronts tended to form early and persist into the afternoon. Just the opposite occurred under the onshore NE to E and SE to S regimes. It was found that light to moderate (≤5.1 m s−1) winds aloft result in more clouds at the leading edge of sea breezes, whereas strong synoptic-scale (>5.1 m s−1) flows weaken boundary layer convergence. The results of this satellite meteorology study could have practical applications for weather forecasters and for those who use forecasts to make decisions related to energy use, fishing, recreation, or agricultural activities, as well as for estimating pollution or issuing warnings for heavy rain or flash flooding.


2013 ◽  
Vol 62 (7) ◽  
pp. 3019-3038 ◽  
Author(s):  
Tanumay Datta ◽  
N. Ashok Kumar ◽  
A. Chockalingam ◽  
B. Sundar Rajan

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Xiao Kong ◽  
Jianing Zhuang ◽  
Liyan Zhu ◽  
Feng Ding

Abstract. To fully understand the kinetics of graphene growth, large-scale atomic simulation of graphene island evolution up to macroscopic sizes (i.e., graphene islands of a few micrometers, or with billions of carbon atoms) during growth and etching is essential, but it remains a great challenge. In this paper, we develop a low-computational-cost, large-scale kinetic Monte Carlo (KMC) algorithm that includes all possible events of carbon attachment and detachment at the various edge sites of a graphene island. The method allows us to simulate the evolution of graphene islands with sizes up to tens of micrometers, during either growth or etching, on a single CPU core. With this approach and carefully fitted parameters, we have reproduced the experimentally observed evolution of graphene islands during both growth and etching on the Pt(111) surface, and revealed further atomic details of graphene growth and etching. Based on the atomic simulations, we discovered a complementary relationship between graphene growth and etching: the route of graphene island shape evolution during growth is exactly the same as that of the etching of a hole in graphene, and the route of graphene island etching is exactly the same as that of hole growth. This complementary relation provides a basic principle for understanding the growth and etching of graphene and other 2D materials from the atomic scale to macroscopic sizes, and the KMC algorithm is expected to be further developed into a standard simulation package for investigating the growth mechanisms of 2D materials on various substrates.
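
As a rough illustration of the event-driven core that a kinetic Monte Carlo algorithm of this kind needs, the sketch below implements a standard rejection-free (Gillespie-type) KMC step: pick an event with probability proportional to its rate and advance the clock by an exponentially distributed waiting time. The event names and rates are placeholders, not the paper's fitted Pt(111) parameters:

```python
import math
import random

random.seed(1)

def kmc_step(events):
    """One rejection-free KMC step. events: list of (name, rate).
    Returns the chosen event name and the time increment."""
    total = sum(rate for _, rate in events)
    # Choose an event with probability proportional to its rate.
    r = random.uniform(0.0, total)
    acc = 0.0
    for name, rate in events:
        acc += rate
        if r <= acc:
            chosen = name
            break
    # Advance the clock by an exponential waiting time with mean 1/total.
    dt = -math.log(random.random()) / total
    return chosen, dt

# Example: competing attachment/detachment events at graphene edge sites
# (rates are invented for illustration).
events = [("attach_armchair", 5.0), ("attach_zigzag", 2.0), ("detach_edge", 0.5)]
t = 0.0
counts = {"attach_armchair": 0, "attach_zigzag": 0, "detach_edge": 0}
for _ in range(10000):
    name, dt = kmc_step(events)
    counts[name] += 1
    t += dt
```

In a full simulator the event list would be rebuilt after every step, since attaching or detaching an atom changes which edge sites (and hence which events and rates) exist.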


2021 ◽  
Author(s):  
Xuebin Zhao ◽  
Andrew Curtis ◽  
Xin Zhang

<p>Seismic travel time tomography is used widely to image the Earth's interior structure and to infer subsurface properties. Tomography is an inverse problem, and computationally expensive nonlinear inverse methods are often deployed in order to understand uncertainties in the tomographic results. Monte Carlo sampling methods estimate the posterior probability distribution which describes the solution to Bayesian tomographic problems, but they are computationally expensive and often intractable for high-dimensional model spaces and large data sets due to the curse of dimensionality. We therefore introduce a new method of variational inference to solve Bayesian seismic tomography problems using optimization methods, while still providing fully nonlinear, probabilistic results. The new method, known as normalizing flows, warps a simple, known distribution (for example a uniform or Gaussian distribution) into an optimal approximation to the posterior distribution through a chain of invertible transforms. These transforms are selected from a library of suitable functions, some of which invoke neural networks internally. We test the method using both synthetic and field data. The results show that normalizing flows can produce mean and uncertainty maps similar to those obtained from both Monte Carlo and another variational method (Stein variational gradient descent), at significantly decreased computational cost. In our tomographic tests, normalizing flows improve both accuracy and efficiency, producing maps of UK surface wave speeds and their uncertainties at the finest resolution and the lowest computational cost to date, allowing results to be interrogated efficiently and quantitatively for subsurface structure.</p>
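
The change-of-variables mechanics underlying normalizing flows can be shown with a deliberately simple one-dimensional example: an invertible affine map warps a standard Gaussian base distribution, and the transformed density follows from the Jacobian term. This toy flow stands in for the chains of neural-network transforms the abstract describes:

```python
import math
import random

random.seed(2)

def base_logpdf(z):
    """Log-density of the standard Gaussian base distribution."""
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def flow(z, a=2.0, b=1.0):
    """A minimal invertible transform x = a*z + b (requires a != 0)."""
    return a * z + b

def flow_logpdf(x, a=2.0, b=1.0):
    """Density of x under the flow, via the change-of-variables formula:
    log p_x(x) = log p_z(f^{-1}(x)) - log|df/dz|."""
    z = (x - b) / a
    return base_logpdf(z) - math.log(abs(a))

# Samples pushed through the flow should match the transformed density:
# here a Gaussian with mean b = 1 and variance a**2 = 4.
xs = [flow(random.gauss(0.0, 1.0)) for _ in range(50000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```

In a real normalizing flow the parameters of many such invertible maps are optimized so that the warped density approximates the Bayesian posterior; the affine map here only illustrates the sampling-plus-Jacobian bookkeeping.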


2020 ◽  
Author(s):  
Lars Gebraad ◽  
Andrea Zunino ◽  
Andreas Fichtner ◽  
Klaus Mosegaard

<div>We present a framework to solve geophysical inverse problems using the Hamiltonian Monte Carlo (HMC) method, with a focus on Bayesian tomography. Recent work in the geophysical community has shown the potential of gradient-based Monte Carlo sampling for a wide range of inverse problems across several fields.</div><div> </div><div>Many high-dimensional (non-linear) problems in geophysics have readily accessible gradient information that goes unused in classical probabilistic inversions. HMC is a way to improve on traditional Monte Carlo sampling while increasing the scalability of inference problems, giving access to uncertainty quantification for problems with many free parameters (>10,000). The result of HMC sampling is a collection of models representing the posterior probability density function, from which not only "best" models can be inferred but also uncertainties and potentially different plausible scenarios, all compatible with the observed data. However, the number of tuning parameters required by HMC, as well as the complexity of existing statistical modeling software, has limited the geophysical community in widely adopting a specific tool for performing efficient large-scale Bayesian inference.</div><div> </div><div>This work takes a step towards filling that gap by providing an HMC sampler tailored to geophysical inverse problems (e.g. by supplying relevant priors and visualizations), combined with a set of different forward models ranging from elastic and acoustic wave propagation to magnetic anomaly modeling, travel times, etc. The framework is coded in the didactic but performant languages Julia and Python, and users can combine it with their own forward models, which are linked to the sampler routines through proper interfaces. In this way, we hope to illustrate the usefulness and potential of HMC in Bayesian inference.
Tutorials featuring an array of physical experiments are written with the aim of both showcasing Bayesian inference and demonstrating successful HMC usage. They additionally include examples of how to speed up HMC, e.g. with automated tuning techniques and GPU computations.</div>
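
For readers unfamiliar with HMC, the sketch below shows the essential leapfrog-plus-Metropolis loop for a one-dimensional standard Gaussian target. It is a didactic stand-in, not the authors' Julia/Python framework:

```python
import math
import random

random.seed(3)

def U(q):
    """Potential energy: negative log of the (unnormalized) target density."""
    return 0.5 * q * q

def grad_U(q):
    """Gradient of U; this is the information HMC exploits."""
    return q

def hmc_sample(q0, n_samples, eps=0.2, n_leapfrog=20):
    samples, q = [], q0
    for _ in range(n_samples):
        p = random.gauss(0.0, 1.0)                # resample momentum
        q_new, p_new = q, p
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new -= 0.5 * eps * grad_U(q_new)
        for _ in range(n_leapfrog - 1):
            q_new += eps * p_new
            p_new -= eps * grad_U(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_U(q_new)
        # Metropolis accept/reject on the total energy (corrects integration error).
        h_old = U(q) + 0.5 * p * p
        h_new = U(q_new) + 0.5 * p_new * p_new
        if random.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new
        samples.append(q)
    return samples

samples = hmc_sample(0.0, 5000)
mean = sum(samples) / len(samples)                            # expect ~0
var = sum((s - mean) ** 2 for s in samples) / len(samples)    # expect ~1
```

The step size `eps` and trajectory length `n_leapfrog` are exactly the tuning parameters the abstract mentions; automated tuning of these is one of the things the described framework's tutorials cover.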


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Mahdi Shadab Far ◽  
Yuan Wang

Structural load types, on the one hand, and structural capacity to withstand these loads, on the other, are of a probabilistic nature, as they cannot be calculated and presented in a fully deterministic way. As such, the past few decades have witnessed the development of numerous probabilistic approaches to the analysis and design of structures. Among the conventional methods used to assess structural reliability, the Monte Carlo sampling method has proved to be very convenient and efficient. However, it suffers from certain disadvantages, the biggest being the requirement of a very large number of samples to handle small probabilities, which leads to a high computational cost. In this paper, a simple algorithm is proposed to estimate low failure probabilities using a small number of samples in conjunction with the Monte Carlo method. The revised approach is then presented in a step-by-step flowchart for the purpose of easy programming and implementation.
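
The baseline that such methods improve upon is plain Monte Carlo estimation of a failure probability, where the sample count needed grows rapidly as the probability shrinks. The limit-state function and distributions below are made up for illustration and are not from the paper:

```python
import random

random.seed(4)

def limit_state(capacity, load):
    """Failure occurs when the limit-state function g = capacity - load is negative."""
    return capacity - load

def mc_failure_probability(n):
    """Plain Monte Carlo estimate of P(g < 0) from n random samples."""
    failures = 0
    for _ in range(n):
        capacity = random.gauss(10.0, 1.0)   # structural resistance (illustrative units)
        load = random.gauss(6.0, 1.0)        # applied demand
        if limit_state(capacity, load) < 0.0:
            failures += 1
    return failures / n

# True probability here is P(N(4, 2) < 0) = Phi(-4/sqrt(2)) ~ 2.3e-3, so a
# 1000-sample run sees only ~2 failures on average and is very noisy, while
# 200000 samples give a far more stable estimate.
pf_small_n = mc_failure_probability(1000)
pf_large_n = mc_failure_probability(200000)
```

The contrast between the two estimates illustrates the computational-cost problem the abstract describes: the relative error of plain Monte Carlo scales like 1/sqrt(n * p_f), so rare failures demand very many samples.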


Author(s):  
Judhajit Roy ◽  
Nianfeng Wan ◽  
Angshuman Goswami ◽  
Ardalan Vahidi ◽  
Paramsothy Jayakumar ◽  
...  

A new framework for route guidance, as part of a path decision support tool for off-road driving scenarios, is presented in this paper. The algorithm accesses information gathered prior to and during a mission, which is stored as layers of a central map. It incorporates a priori knowledge from low-resolution soil and elevation data together with real-time, high-resolution information from on-board sensors. The high computational cost of finding the optimal path over a large-scale, high-resolution map is mitigated by the proposed hierarchical path planning algorithm. A dynamic programming (DP) method generates a globally optimal path approximation based on the low-resolution information: the optimal cost-to-go from each grid cell to the destination is calculated by back-stepping from the target and stored. A model predictive control (MPC) algorithm then operates locally on the vehicle to find the optimal path over a moving radial horizon, using the stored global optimal cost-to-go map in addition to high-resolution, locally available information. The efficacy of the developed algorithm is demonstrated in scenarios simulating static and moving obstacle avoidance, path finding in condition-time-variant environments, eluding adversarial line-of-sight detection, and connected fleet cooperation.
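
The back-stepping DP stage described above can be sketched on a toy grid: compute the optimal cost-to-go from every cell to the destination, here with Dijkstra's algorithm run outward from the target, which is equivalent to backward DP for non-negative cell costs. The terrain costs are invented for illustration:

```python
import heapq

# Toy low-resolution cost map: the cost of entering each cell (e.g. soil difficulty).
terrain = [
    [1, 1, 2, 2],
    [1, 5, 5, 2],
    [1, 1, 5, 1],
    [2, 1, 1, 1],
]
rows, cols = len(terrain), len(terrain[0])
target = (0, 3)

# Back-step from the target: cost_to_go[cell] is the cheapest accumulated
# terrain cost from that cell to the target.
cost_to_go = {target: 0.0}
frontier = [(0.0, target)]
while frontier:
    cost, (r, c) = heapq.heappop(frontier)
    if cost > cost_to_go.get((r, c), float("inf")):
        continue  # stale queue entry
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            new_cost = cost + terrain[nr][nc]  # pay the cost of the cell stepped into
            if new_cost < cost_to_go.get((nr, nc), float("inf")):
                cost_to_go[(nr, nc)] = new_cost
                heapq.heappush(frontier, (new_cost, (nr, nc)))
```

A local planner like the MPC stage can then steer greedily downhill on `cost_to_go` while reacting to high-resolution obstacles the coarse map does not contain.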


2021 ◽  
Vol 21 (12) ◽  
pp. 3789-3807
Author(s):  
Dimitra M. Salmanidou ◽  
Joakim Beck ◽  
Peter Pazak ◽  
Serge Guillas

Abstract. The potential for a full-margin rupture along the Cascadia subduction zone poses a significant threat to a populous region of North America. Previous probabilistic tsunami hazard assessment studies produced hazard curves based on simulated predictions of tsunami waves, either at low resolution, at high resolution for a local area, under limited ranges of scenarios, or at a high computational cost to generate hundreds of scenarios at high resolution. We use the graphics processing unit (GPU)-accelerated tsunami simulator VOLNA-OP2 with a detailed representation of topographic and bathymetric features. To overcome the large computational burden, we replace the simulator with a Gaussian process emulator at each output location. The emulators are statistical approximations of the simulator's behaviour. We train the emulators on a set of input–output pairs and use them to generate approximate output values over a six-dimensional scenario parameter space, e.g. uplift/subsidence ratio and maximum uplift, that represents the seabed deformation. We implement an advanced sequential design algorithm for the optimal selection of only 60 simulations. The low cost of emulation provides additional flexibility in the shape of the deformation, which we illustrate here by considering two families (buried rupture and splay faulting) of 2000 potential scenarios. This approach allows the first emulation-accelerated computation of probabilistic tsunami hazard in the region of the city of Victoria, British Columbia.
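
The emulation idea, training a cheap statistical surrogate on a handful of expensive simulator runs, can be illustrated with a toy one-dimensional Gaussian process regression written from scratch. The "simulator", kernel settings, and design points below are placeholders; the study's emulators operate over a six-dimensional scenario space:

```python
import math

def simulator(x):
    """Placeholder for an expensive tsunami simulation run."""
    return math.sin(x)

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / lengthscale) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Train on a small design of expensive input-output pairs.
train_x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
train_y = [simulator(x) for x in train_x]
noise = 1e-8  # jitter for numerical stability
K = [[rbf(a, b) + (noise if i == j else 0.0)
      for j, b in enumerate(train_x)] for i, a in enumerate(train_x)]
alpha = solve(K, train_y)

def emulate(x):
    """GP posterior mean k(x, X) K^{-1} y: a cheap stand-in for the simulator."""
    return sum(rbf(x, xi) * a for xi, a in zip(train_x, alpha))
```

Once trained, `emulate` can be evaluated millions of times at negligible cost, which is what makes probabilistic hazard calculations over thousands of scenarios tractable after only a few real simulations.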

