Bayesian seismic inversion for stratigraphic horizon, lithology, and fluid prediction

Geophysics ◽  
2020 ◽  
Vol 85 (3) ◽  
pp. R207-R221
Author(s):  
Odd Kolbjørnsen ◽  
Arild Buland ◽  
Ragnar Hauge ◽  
Per Røe ◽  
Abel Onana Ndingwan ◽  
...  

We have developed an efficient methodology for Bayesian prediction of lithology, pore fluid, and layer-bounding horizons, in which we include and use spatial geologic prior knowledge such as the vertical ordering of stratigraphic layers, the possible lithologies and fluids within each stratigraphic layer, and layer thicknesses. The solution includes probabilities for lithologies, fluids, and horizons, together with their associated uncertainties. The computational cost related to the inversion of large-scale, spatially coupled models is a severe challenge. Our approach is to evaluate all possible lithology and fluid configurations within a local neighborhood around each sample point and combine these into a consistent result for the complete trace. We use a one-step nonstationary Markov prior model for lithology and fluid probabilities. This enables prediction of horizon times, which we couple laterally to decrease the uncertainty. We have tested the algorithm on a synthetic case, in which we compare the inverted lithology and fluid probabilities to results from other algorithms. We have also run the algorithm on a real case, in which we find that we can make high-resolution predictions of horizons, even for horizons within tuning distance of each other. The methodology gives accurate predictions, and its performance makes it suitable for full-field inversions.
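The vertical coupling described in this abstract can be pictured with a small sketch. The following is a minimal illustration, not the authors' code: a one-step, nonstationary Markov chain over hypothetical lithology/fluid classes along a single trace, with depth-dependent transition matrices whose zero entries encode a vertical-ordering constraint.

```python
# Minimal sketch (not the authors' code): a one-step, nonstationary Markov
# prior over lithology/fluid classes along a single trace. The classes and
# transition matrices are hypothetical and only illustrate how vertical
# ordering constraints can be encoded (zeros forbid moving back up-section).
import numpy as np

classes = ["shale", "brine_sand", "oil_sand"]           # assumed classes
n_samples = 200                                         # samples per trace

def transition_matrix(k):
    """Hypothetical depth-dependent transition probabilities at sample k."""
    stay = 0.95 if k < 100 else 0.90                    # thicker layers shallow
    T = np.full((3, 3), (1.0 - stay) / 2.0)
    np.fill_diagonal(T, stay)
    T[1, 0] = 0.0                                       # forbid sand -> overlying shale
    T /= T.sum(axis=1, keepdims=True)                   # renormalise rows
    return T

# Forward propagation of the prior marginals p(class at sample k)
p = np.array([1.0, 0.0, 0.0])                           # start in the top (shale) layer
prior = [p]
for k in range(1, n_samples):
    p = p @ transition_matrix(k)
    prior.append(p)
prior = np.array(prior)                                 # (n_samples, 3) prior probabilities
```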

2016 ◽  
Author(s):  
R. J. Haarsma ◽  
M. Roberts ◽  
P. L. Vidale ◽  
C. A. Senior ◽  
A. Bellucci ◽  
...  

Abstract. Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centers and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other MIPs. Increases in High Performance Computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950–2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulation. HighResMIP thereby focuses on one of the CMIP6 broad questions: “what are the origins and consequences of systematic model biases?”, but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. R185-R195 ◽  
Author(s):  
Daniel O. Pérez ◽  
Danilo R. Velis ◽  
Mauricio D. Sacchi

A new inversion method to estimate high-resolution amplitude-versus-angle (AVA) attributes such as intercept and gradient from prestack data is presented. The proposed technique promotes sparse-spike reflectivities that, when convolved with the source wavelet, fit the observed data. The inversion is carried out using a hybrid two-step strategy that combines the fast iterative shrinkage-thresholding algorithm (FISTA) and a standard least-squares (LS) inversion. FISTA, which can be viewed as an extension of the classical gradient algorithm, provides sparse solutions by minimizing the misfit between the modeled and the observed data, and the ℓ1-norm of the solution. FISTA is used to estimate the location in time of the main reflectors. Then, LS is used to retrieve the appropriate reflectivity amplitudes that honor the data. FISTA, like other iterative solvers for ℓ1-norm regularization, does not require matrices in explicit form, making it easy to apply, economic in computational terms, and adequate for solving large-scale problems. As a consequence, the FISTA+LS strategy represents a simple and cost-effective new procedure to solve the AVA inversion problem. Results on synthetic and field data show that the proposed hybrid method can obtain high-resolution AVA attributes from noisy observations, making it an interesting alternative to conventional methods.
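As a rough illustration of the two-step strategy, the sketch below implements a generic FISTA pass (soft thresholding plus a momentum step) followed by a least-squares refit restricted to the detected spikes. The wavelet, regularization weight, and synthetic data are placeholder choices for the example, not the paper's setup.

```python
# Illustrative sketch only (not the authors' implementation): FISTA for a
# sparse-spike reflectivity estimate, followed by least squares restricted to
# the detected support. G (wavelet convolution matrix), d, lam and n_iter are
# placeholders chosen for the example.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_then_ls(G, d, lam=0.05, n_iter=300):
    L = np.linalg.norm(G, 2) ** 2          # Lipschitz constant of the gradient
    m = np.zeros(G.shape[1])
    y, t = m.copy(), 1.0
    for _ in range(n_iter):
        grad = G.T @ (G @ y - d)
        m_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        y = m_new + ((t - 1.0) / t_new) * (m_new - m)    # momentum step
        m, t = m_new, t_new
    support = np.flatnonzero(np.abs(m) > 1e-6)           # spike locations
    m_ls = np.zeros_like(m)
    if support.size:                                     # refit amplitudes by LS
        m_ls[support], *_ = np.linalg.lstsq(G[:, support], d, rcond=None)
    return m_ls

# Example with a synthetic convolution matrix and two spikes
rng = np.random.default_rng(0)
n = 100
wavelet = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
G = np.array([np.convolve(np.eye(n)[i], wavelet, mode="same") for i in range(n)]).T
true = np.zeros(n); true[30], true[60] = 1.0, -0.7
d = G @ true + 0.01 * rng.standard_normal(n)
est = fista_then_ls(G, d)
```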


2016 ◽  
Vol 9 (11) ◽  
pp. 4185-4208 ◽  
Author(s):  
Reindert J. Haarsma ◽  
Malcolm J. Roberts ◽  
Pier Luigi Vidale ◽  
Catherine A. Senior ◽  
Alessio Bellucci ◽  
...  

Abstract. Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950–2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, “what are the origins and consequences of systematic model biases?”, but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.


2021 ◽  
Author(s):  
Gareth Davies ◽  
Rikki Weber ◽  
Kaya Wilson ◽  
Phil Cummins

Offshore Probabilistic Tsunami Hazard Assessments (offshore PTHAs) provide large-scale analyses of earthquake-tsunami frequencies and uncertainties in the deep ocean, but do not provide high-resolution onshore tsunami hazard information as required for many risk-management applications. To understand the implications of an offshore PTHA for the onshore hazard at any site, in principle the tsunami inundation should be simulated locally for every scenario in the offshore PTHA. In practice this is rarely feasible due to the computational expense of inundation models, and the large number of scenarios in offshore PTHAs. Monte-Carlo methods offer a practical and rigorous alternative for approximating the onshore hazard, using a random subset of scenarios. The resulting Monte-Carlo errors can be quantified and controlled, enabling high-resolution onshore PTHAs to be implemented at a fraction of the computational cost. This study develops novel Monte-Carlo sampling approaches for offshore-to-onshore PTHA. Modelled offshore PTHA wave heights are used to preferentially sample scenarios that have large offshore waves near an onshore site of interest. By appropriately weighting the scenarios, the Monte-Carlo errors are reduced without introducing any bias. The techniques are applied to a high-resolution onshore PTHA for the island of Tongatapu in Tonga. In this region, the new approaches lead to efficiency improvements equivalent to using 4-18 times more random scenarios, as compared with stratified-sampling by magnitude, which is commonly used for onshore PTHA. The greatest efficiency improvements are for rare, large tsunamis, and for calculations that represent epistemic uncertainties in the tsunami hazard. To facilitate the control of Monte-Carlo errors in practical applications, this study also provides analytical techniques for estimating the errors both before and after inundation simulations are conducted. Before inundation simulation, this enables a proposed Monte-Carlo sampling scheme to be checked, and potentially improved, at minimal computational cost. After inundation simulation, it enables the remaining Monte-Carlo errors to be quantified at onshore sites, without additional inundation simulations. In combination these techniques enable offshore PTHAs to be rigorously transformed into onshore PTHAs, with full characterisation of epistemic uncertainties, while controlling Monte-Carlo errors.
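The weighting idea can be sketched in a few lines. The example below is conceptual only: the scenario rates, offshore wave heights, and the stand-in onshore response are made-up placeholders, but the weights rate_i / (N * p_i) illustrate how preferential sampling toward large offshore waves can be kept unbiased.

```python
# Conceptual sketch (assumptions, not the study's code): importance sampling of
# offshore PTHA scenarios. Scenarios with larger modelled offshore wave heights
# near the site are sampled more often, and each sampled scenario carries a
# weight rate_i / (N * p_i) so exceedance-rate estimates remain unbiased.
import numpy as np

rng = np.random.default_rng(1)
n_scenarios = 10_000
rate = rng.exponential(1e-4, n_scenarios)          # placeholder occurrence rates (events/yr)
offshore_h = rng.lognormal(-1.0, 1.0, n_scenarios) # placeholder offshore wave heights (m)

# Sampling probabilities biased toward scenarios with large offshore waves
p = rate * offshore_h
p /= p.sum()

N = 500                                            # affordable number of inundation runs
idx = rng.choice(n_scenarios, size=N, replace=True, p=p)
weight = rate[idx] / (N * p[idx])                  # importance weights

# Unbiased Monte Carlo estimate of the rate of exceeding a threshold onshore.
# Here the "onshore" response is a stand-in function of offshore height; in a
# real study it would come from inundation simulations of the sampled scenarios.
onshore_depth = 0.3 * offshore_h[idx]
threshold = 0.5
exceedance_rate = np.sum(weight * (onshore_depth > threshold))
```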


2009 ◽  
Vol 137 (9) ◽  
pp. 2736-2757 ◽  
Author(s):  
Arindam Chakraborty ◽  
T. N. Krishnamurti

Abstract This study addresses seasonal forecasts of rains over India using the following components: high-resolution rain gauge–based rainfall data covering the years 1987–2001, rain-rate initialization, four global atmosphere–ocean coupled models, a regional downscaling of the multimodel forecasts, and a multimodel superensemble that includes a training and a forecast phase at high resolution over the internal India domain. The results of monthly and seasonal forecasts of rains for the member models and for the superensemble are presented here. The main findings, assessed via the use of RMS error, anomaly correlation, equitable threat score, and ranked probability skill score, are that (i) the downscaled superensemble-based seasonal forecasts achieved higher forecast skills than the direct use of large-scale model forecasts; (ii) very high scores for rainfall forecasts have been noted separately for dry and wet years, for different regions over India, and especially for heavier rains in excess of 15 mm day⁻¹; and (iii) the superensemble forecast skills exceed those of the benchmark observed climatology. The availability of reliable measures of high-resolution rain gauge–based rainfall was central to this study. Overall, the proposed algorithms, added together, show very promising results for the prediction of monsoon rains on the seasonal time scale.
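For readers unfamiliar with the superensemble idea, a minimal single-grid-point sketch is given below. The training data are synthetic placeholders, and the regression-based weighting only illustrates the training/forecast split described in the abstract.

```python
# Minimal sketch (assumed setup, not the paper's code): a multimodel
# superensemble at one grid point. Weights are fit by least squares on model
# anomalies during a training phase and then applied in the forecast phase.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_models = 120, 4                          # e.g. monthly means, 4 coupled models
truth_train = rng.standard_normal(n_train)          # placeholder "observed" anomalies
models_train = truth_train[:, None] + 0.5 * rng.standard_normal((n_train, n_models))

clim = truth_train.mean()
A = models_train - models_train.mean(axis=0)        # model anomalies (training phase)
b = truth_train - clim                              # observed anomalies
weights, *_ = np.linalg.lstsq(A, b, rcond=None)

# Forecast phase: combine new member forecasts with the trained weights
models_fcst = rng.standard_normal(n_models)
superensemble_fcst = clim + (models_fcst - models_train.mean(axis=0)) @ weights
```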


2017 ◽  
Vol 10 (9) ◽  
pp. 3567-3589 ◽  
Author(s):  
Simon F. B. Tett ◽  
Kuniko Yamazaki ◽  
Michael J. Mineter ◽  
Coralia Cartis ◽  
Nathan Eizenberg

Abstract. Optimisation methods were successfully used to calibrate parameters in an atmospheric component of a climate model using two variants of the Gauss–Newton line-search algorithm: (1) a standard Gauss–Newton algorithm in which, in each iteration, all parameters were perturbed and (2) a randomised block-coordinate variant in which, in each iteration, a random sub-set of parameters was perturbed. The cost function to be minimised used multiple large-scale multi-annual average observations and was constrained to produce net radiative fluxes close to those observed. These algorithms were used to calibrate the HadAM3 (third Hadley Centre Atmospheric Model) model at N48 resolution and the HadAM3P model at N96 resolution. For the HadAM3 model, cases with 7 and 14 parameters were tried. All ten 7-parameter cases using HadAM3 converged to cost function values similar to that of the standard configuration. For the 14-parameter cases several failed to converge, with the random variant in which 6 parameters were perturbed being most successful. Multiple sets of parameter values were found that produced multiple models very similar to the standard configuration. HadAM3 cases that converged were coupled to an ocean model and run for 20 years starting from a pre-industrial HadCM3 (3rd Hadley Centre Coupled model) state resulting in several models whose global-average temperatures were consistent with pre-industrial estimates. For the 7-parameter cases the Gauss–Newton algorithm converged in about 70 evaluations. For the 14-parameter algorithm, with 6 parameters being randomly perturbed, about 80 evaluations were needed for convergence. However, when 8 parameters were randomly perturbed, algorithm performance was poor. Our results suggest the computational cost for the Gauss–Newton algorithm scales between P and P², where P is the number of parameters being calibrated. For the HadAM3P model three algorithms were tested. Algorithms in which seven parameters were perturbed and three out of seven parameters randomly perturbed produced final configurations comparable to the standard hand-tuned configuration. An algorithm in which 6 out of 13 parameters were randomly perturbed failed to converge. These results suggest that automatic parameter calibration using atmospheric models is feasible and that the resulting coupled models are stable. Thus, automatic calibration could replace human-driven trial and error. However, convergence and costs are likely sensitive to details of the algorithm.


2017 ◽  
Author(s):  
Simon F. B. Tett ◽  
Kuniko Yamazaki ◽  
Michael J. Mineter ◽  
Coralia Cartis ◽  
Nathan Eizenberg

Abstract. Optimisation methods were successfully used to calibrate parameters in an atmospheric component of a climate model using two variants of the Gauss-Newton line-search algorithm: (1) a standard Gauss-Newton algorithm in which, in each iteration, all parameters were perturbed; and (2) a randomized block-coordinate variant in which, in each iteration, a random sub-set of parameters was perturbed. The cost function to be minimized used multiple large-scale observations and was constrained to produce net radiative fluxes close to those observed. These algorithms were used to calibrate the HadAM3 (3rd Hadley Centre Atmospheric Model) model at N48 resolution and the HadAM3P model at N96 resolution. For the HadAM3 model, cases with seven and fourteen parameters were tried. All ten 7-parameter cases using HadAM3 converged to cost function values similar to that of the standard configuration. For the 14-parameter cases several failed to converge, with the random variant in which 6 parameters were perturbed being most successful. Multiple sets of parameter values were found that produced multiple models very similar to the standard configuration. HadAM3 cases that converged were coupled to an ocean model and run for 20 years starting from a pre-industrial HadCM3 (3rd Hadley Centre Coupled model) state, resulting in several models whose global-average temperatures were consistent with pre-industrial estimates. For the 7-parameter cases the Gauss-Newton algorithm converged in about 70 evaluations. For the 14-parameter algorithm, with 6 parameters being randomly perturbed, about 80 evaluations were needed for convergence. However, when 8 parameters were randomly perturbed, algorithm performance was poor. Our results suggest the computational cost for the Gauss-Newton algorithm scales between P and P², where P is the number of parameters being calibrated. For the HadAM3P model three algorithms were tested. Algorithms in which seven parameters were perturbed and three out of seven parameters randomly perturbed produced final configurations comparable to the standard hand-tuned configuration. An algorithm in which six out of thirteen parameters were randomly perturbed failed to converge. These results suggest that automatic parameter calibration using atmospheric models is feasible and that the resulting coupled models are stable. Thus, automatic calibration could replace human-driven trial and error. However, convergence and costs are likely sensitive to details of the algorithm.
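A compact sketch of the two Gauss-Newton variants described in these two records is given below. It is an illustration under stated assumptions, not the HadAM3/HadAM3P tuning code: simulate() is a stand-in for a model run, the finite-difference step and line-search step sizes are arbitrary, and the block argument switches between perturbing all parameters and a random subset.

```python
# Illustrative sketch under stated assumptions (not the climate-model tuning
# code): a Gauss-Newton line search in which each iteration perturbs either all
# parameters or a random subset (the block-coordinate variant). simulate() is a
# hypothetical cheap surrogate for running the model.
import numpy as np

def simulate(params):
    # Stand-in "model run" producing three simulated observations.
    return np.array([params[0] ** 2 + params[1],
                     np.sin(params[2]) + params[3],
                     params.sum()])

def gauss_newton(params, obs, n_iter=20, block=None, step_sizes=(1.0, 0.5, 0.25), rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(n_iter):
        r = simulate(params) - obs
        active = (np.arange(params.size) if block is None
                  else rng.choice(params.size, size=block, replace=False))
        # Finite-difference Jacobian: one perturbed run per active parameter
        J = np.zeros((r.size, active.size))
        for j, k in enumerate(active):
            pert = params.copy(); pert[k] += 1e-3
            J[:, j] = (simulate(pert) - obs - r) / 1e-3
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)   # Gauss-Newton step on the block
        # Line search over a few step sizes, keep the best trial
        best, best_cost = params, np.sum(r ** 2)
        for s in step_sizes:
            trial = params.copy(); trial[active] += s * delta
            cost = np.sum((simulate(trial) - obs) ** 2)
            if cost < best_cost:
                best, best_cost = trial, cost
        params = best
    return params

obs = np.array([1.2, 0.3, 2.0])
tuned = gauss_newton(np.array([0.5, 0.5, 0.5, 0.5]), obs, block=2)
```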


Author(s):  
Judhajit Roy ◽  
Nianfeng Wan ◽  
Angshuman Goswami ◽  
Ardalan Vahidi ◽  
Paramsothy Jayakumar ◽  
...  

A new framework for route guidance, as part of a path decision support tool, for off-road driving scenarios is presented in this paper. The algorithm accesses information gathered prior to and during a mission, which is stored as layers of a central map. The algorithm incorporates a priori knowledge in the form of low-resolution soil and elevation information and real-time high-resolution information from on-board sensors. The challenge of the high computational cost of finding the optimal path over a large-scale high-resolution map is mitigated by the proposed hierarchical path planning algorithm. A dynamic programming (DP) method generates the globally optimal path approximation based on low-resolution information. The optimal cost-to-go from each grid cell to the destination is calculated by back-stepping from the target and stored. A model predictive control (MPC) algorithm operates locally on the vehicle to find the optimal path over a moving radial horizon. The MPC algorithm uses the stored globally optimal cost-to-go map in addition to high-resolution, locally available information. The efficacy of the developed algorithm is demonstrated in scenarios simulating static and moving obstacle avoidance, path finding in condition-time-variant environments, eluding adversarial line-of-sight detection, and connected fleet cooperation.
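The hierarchical split between a global cost-to-go map and a local planner can be sketched as follows; the traversal costs, grid size, and the greedy local step standing in for the MPC horizon are illustrative choices, not the paper's implementation.

```python
# Rough sketch of the hierarchical idea (placeholder costs, not the paper's
# implementation): a coarse dynamic-programming pass computes the optimal
# cost-to-go from every grid cell to the goal; a local planner can then follow
# the stored cost-to-go while reacting to high-resolution sensor information.
import heapq
import numpy as np

def cost_to_go(traversal_cost, goal):
    """Backward Dijkstra/DP from the goal over a 4-connected grid."""
    rows, cols = traversal_cost.shape
    ctg = np.full((rows, cols), np.inf)
    ctg[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        c, (r, col) = heapq.heappop(heap)
        if c > ctg[r, col]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, ncol = r + dr, col + dc
            if 0 <= nr < rows and 0 <= ncol < cols:
                new_cost = c + traversal_cost[nr, ncol]
                if new_cost < ctg[nr, ncol]:
                    ctg[nr, ncol] = new_cost
                    heapq.heappush(heap, (new_cost, (nr, ncol)))
    return ctg

# Low-resolution soil/elevation map turned into a traversal cost (placeholder)
rng = np.random.default_rng(3)
coarse_cost = 1.0 + rng.random((50, 50))
ctg_map = cost_to_go(coarse_cost, goal=(49, 49))

# A single local step (stand-in for the MPC horizon): from the current cell,
# move to the neighbour with the lowest stored cost-to-go.
def greedy_step(cell, ctg):
    r, c = cell
    neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= r + dr < ctg.shape[0] and 0 <= c + dc < ctg.shape[1]]
    return min(neighbours, key=lambda rc: ctg[rc])

next_cell = greedy_step((0, 0), ctg_map)
```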


2019 ◽  
Author(s):  
Wout Bittremieux ◽  
Kris Laukens ◽  
William Stafford Noble

Abstract. Open modification searching (OMS) is a powerful search strategy to identify peptides with any type of modification. OMS works by using a very wide precursor mass window to allow modified spectra to match against their unmodified variants, after which the modification types can be inferred from the corresponding precursor mass differences. A disadvantage of this strategy, however, is the large computational cost, because each query spectrum has to be compared against a multitude of candidate peptides. We have previously introduced the ANN-SoLo tool for fast and accurate open spectral library searching. ANN-SoLo uses approximate nearest neighbor indexing to speed up OMS by selecting only a limited number of the most relevant library spectra to compare to an unknown query spectrum. Here we demonstrate how this candidate selection procedure can be further optimized using graphics processing units. Additionally, we introduce a feature hashing scheme to convert high-resolution spectra to low-dimensional vectors. Based on these algorithmic advances, along with low-level code optimizations, the new version of ANN-SoLo is up to an order of magnitude faster than its initial version. This makes it possible to efficiently perform open searches on a large scale to gain a deeper understanding of the protein modification landscape. We demonstrate the computational efficiency and identification performance of ANN-SoLo based on a large data set of the draft human proteome. ANN-SoLo is implemented in Python and C++. It is freely available under the Apache 2.0 license at https://github.com/bittremieux/ANN-SoLo.
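A simplified version of the feature-hashing step is sketched below; the bin width, output dimensionality, and hash function are illustrative choices rather than ANN-SoLo's exact parameters.

```python
# Simplified sketch of the feature-hashing idea (illustrative parameters, not
# ANN-SoLo's exact scheme): high-resolution fragment m/z bins are hashed into a
# fixed low-dimensional vector that can be fed to an approximate
# nearest-neighbour index.
import hashlib
import numpy as np

def hash_spectrum(mz, intensity, bin_width=0.02, dim=800):
    vec = np.zeros(dim)
    for m, i in zip(mz, intensity):
        fine_bin = int(m / bin_width)                      # high-resolution bin
        h = hashlib.md5(str(fine_bin).encode()).digest()   # deterministic hash of the bin
        vec[int.from_bytes(h[:4], "little") % dim] += i    # fold into the low-dim vector
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec                     # unit length for cosine search

# Example query spectrum (made-up peaks)
mz = np.array([175.119, 276.167, 389.251, 502.335])
intensity = np.array([0.3, 1.0, 0.7, 0.4])
query_vec = hash_spectrum(mz, intensity)
```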


2016 ◽  
Vol 29 (23) ◽  
pp. 8589-8610 ◽  
Author(s):  
Colin M. Zarzycki

Abstract Tropical cyclones (TCs), particularly those that are intense and/or slow moving, induce sea surface temperature (SST) reductions along their tracks (commonly referred to as cold wakes) that provide a negative feedback on storm energetics by weakening surface enthalpy fluxes. While computing gains have allowed for simulated TC intensity to increase in global climate models as a result of increased horizontal resolution, many configurations utilize prescribed, noninteractive SSTs as a surface boundary condition to minimize computational cost and produce more accurate TC climatologies. Here, an idealized slab ocean is coupled to a 0.25° variable-resolution version of the Community Atmosphere Model (CAM) to improve closure of the surface energy balance and reproduce observed Northern Hemisphere cold wakes. This technique produces cold wakes that are realistic in structure and evolution and with magnitudes similar to published observations, without impacting large-scale SST climatology. Multimember ensembles show that the overall number of TCs generated by the model is reduced by 5%–9% when allowing for two-way air–sea interactions. TC intensity is greatly impacted; the strongest 1% of all TCs are 20–30 hPa (4–8 m s⁻¹) weaker, and the number of simulated Saffir–Simpson category 4 and 5 TCs is reduced by 65% in slab ocean configurations. Reductions in intensity are in line with published thermodynamic theory. Additional offline experiments and sensitivity simulations demonstrate this response is both significant and robust. These results imply caution should be exercised when assessing high-resolution prescribed SST climate simulations capable of resolving intense TCs, particularly if discrete analysis of extreme events is desired.
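The slab-ocean feedback can be illustrated with a minimal mixed-layer energy balance; the mixed-layer depth, relaxation timescale, and flux values below are assumptions for the example and not the CAM configuration used in the study.

```python
# Minimal sketch of a slab-ocean SST update (illustrative values; not the CAM
# configuration used in the paper): the mixed layer cools in response to the
# net surface heat flux, here dominated by storm-enhanced enthalpy fluxes, and
# relaxes back toward a prescribed climatological SST.
rho, cp = 1025.0, 3990.0        # seawater density (kg/m^3) and heat capacity (J/kg/K)
h_mix = 50.0                    # assumed mixed-layer depth (m)
tau_relax = 20.0 * 86400.0      # relaxation timescale back to climatology (s)
dt = 1800.0                     # time step (s)

def step_sst(sst, sst_clim, q_net):
    """One slab-ocean step: d(SST)/dt = Q_net/(rho*cp*h) + (SST_clim - SST)/tau."""
    return sst + dt * (q_net / (rho * cp * h_mix) + (sst_clim - sst) / tau_relax)

# A passing TC enhances the upward enthalpy flux (negative Q_net into the ocean)
sst, sst_clim = 29.0, 29.0
for hour in range(48):
    q_net = -800.0 if 12 <= hour <= 24 else -50.0   # W/m^2, storm passage window
    for _ in range(int(3600 / dt)):
        sst = step_sst(sst, sst_clim, q_net)
# sst now shows a cold-wake-like depression of a fraction of a degree
```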

