Identification of hydraulic conductivity via normal-score ensemble smoother with multiple data assimilation (NS-MDA) by assimilating hydraulic head or concentration

Author(s):  
Vanessa A. Godoy ◽  
Gian Franco Napa-García ◽  
Jaime Gómez-Hernández

<p>In this study, we compare the capability of the normal-score ensemble smoother with multiple data assimilation (NS-MDA) to identify hydraulic conductivity when it assimilates either hydraulic heads or concentrations. The study is performed in a two-dimensional numerical single-point contamination experiment on a vertical cross-section of an aquifer. Reference hydraulic conductivity maps are generated using geostatistics, and the groundwater flow and transport equations are solved to produce reference state-variable data (hydraulic head and concentration). The data assimilated in the inverse problem are sampled in time, at a limited number of points, from the reference aquifer response. A prior variogram function of hydraulic conductivity is assumed and equally likely realizations are generated. Stochastic inverse modelling is run using the NS-MDA to identify hydraulic conductivity under two scenarios: 1) assimilating hydraulic heads only, and 2) assimilating concentrations only. Besides a qualitative analysis of the identified hydraulic conductivity maps, the results are quantified using the average absolute bias (AAB), a measure of the accuracy of the inversely identified values with respect to the reference values for each scenario. The updated parameters reproduce the reference aquifer ones quite well for both scenarios investigated, with better results for scenario 1, indicating that NS-MDA is an effective approach for identifying hydraulic conductivities.</p>
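The multiple-data-assimilation update at the core of the method above can be sketched as follows. This is a minimal illustration of a generic ES-MDA loop on a toy linear forward model, not the paper's aquifer flow/transport solver, and the normal-score transform that distinguishes NS-MDA is omitted for brevity; all sizes and the linear operator `G` are made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): a linear forward model g(m) = G @ m
# stands in for the groundwater flow and transport solver.
Ne, Nm, Nd = 100, 8, 20           # ensemble size, parameters, observations
G = rng.normal(size=(Nd, Nm))
m_true = rng.normal(size=Nm)
R = 0.01 * np.eye(Nd)             # observation-error covariance
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(Nd), R)

Na = 4                            # number of assimilations
alphas = [Na] * Na                # inflation factors; sum of 1/alpha must equal 1

M0 = rng.normal(size=(Nm, Ne))    # prior ensemble of parameters
M = M0.copy()
for alpha in alphas:
    D = G @ M                                         # forecast observations
    # Perturb observations with inflated error (the MDA ingredient)
    E = rng.multivariate_normal(np.zeros(Nd), alpha * R, size=Ne).T
    Duc = d_obs[:, None] + E
    # Ensemble (cross-)covariances from anomalies
    Ma = M - M.mean(axis=1, keepdims=True)
    Da = D - D.mean(axis=1, keepdims=True)
    Cmd = Ma @ Da.T / (Ne - 1)
    Cdd = Da @ Da.T / (Ne - 1)
    K = Cmd @ np.linalg.inv(Cdd + alpha * R)          # Kalman-like gain
    M = M + K @ (Duc - D)                             # update every member

# Average absolute bias (AAB) of the ensemble mean, before and after
aab_prior = np.abs(M0.mean(axis=1) - m_true).mean()
aab_post = np.abs(M.mean(axis=1) - m_true).mean()
```

In the paper's setting, the forecast step `G @ M` would be replaced by running the flow (scenario 1, heads) or transport (scenario 2, concentrations) model for each realization, after back-transforming the normal-score variables.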

2021 ◽  
Vol 150 ◽  
pp. 104722
Author(s):  
Thiago M.D. Silva ◽  
Sinesio Pesco ◽  
Abelardo Barreto Jr. ◽  
Mustafa Onur

Energies ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 3137
Author(s):  
Amine Tadjer ◽  
Reider B. Bratvold ◽  
Remus G. Hanea

Production forecasting is the basis for decision making in the oil and gas industry and can be quite challenging, especially in terms of complex geological modeling of the subsurface. To help solve this problem, assisted history matching built on ensemble-based analysis, such as the ensemble smoother and the ensemble Kalman filter, is useful for estimating models that preserve geological realism and have predictive capabilities. These methods tend, however, to be computationally demanding, as they require a large ensemble size for stable convergence. In this paper, we propose a novel method of uncertainty quantification and reservoir model calibration with much-reduced computation time. The approach sequentially combines nonlinear dimensionality reduction techniques (t-distributed stochastic neighbor embedding or the Gaussian process latent variable model) and K-means clustering with the data assimilation method ensemble smoother with multiple data assimilation. The cluster analysis with t-distributed stochastic neighbor embedding or the Gaussian process latent variable model is used to reduce the number of initial geostatistical realizations and to select a set of optimal reservoir models whose production performance is similar to that of the reference model. We then apply the ensemble smoother with multiple data assimilation to provide reliable assimilation results. Experimental results based on the Brugge field case data verify the efficiency of the proposed approach.
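The ensemble-reduction step described above (embed, cluster, pick representatives) can be sketched as below. The data are a hypothetical stand-in for the Brugge production responses, and the cluster count and perplexity are illustrative choices, not the paper's settings; the GP-LVM alternative is not shown.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical stand-in data: 200 geostatistical realizations, each
# summarized by a 50-step production time series.
n_real, n_t = 200, 50
responses = np.cumsum(rng.normal(size=(n_real, n_t)), axis=1)

# 1) Nonlinear dimensionality reduction of the responses to 2-D.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(responses)

# 2) K-means in the embedded space; keep one representative per cluster,
#    chosen as the realization closest to its cluster centroid.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(emb)
reps = []
for c in range(k):
    idx = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(emb[idx] - km.cluster_centers_[c], axis=1)
    reps.append(int(idx[np.argmin(dists)]))

# `reps` indexes the reduced ensemble that would be passed on to ES-MDA.
```

The design choice here is to cluster in the embedded space rather than on the raw time series, so that realizations with similar production behavior, rather than similar pointwise values, end up in the same cluster.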


2019 ◽  
Vol 131 ◽  
pp. 32-40
Author(s):  
Valeria Todaro ◽  
Marco D'Oria ◽  
Maria Giovanna Tanda ◽  
J. Jaime Gómez-Hernández

2019 ◽  
Author(s):  
Patrick N. Raanes ◽  
Andreas S. Stordal ◽  
Geir Evensen

Abstract. Ensemble randomized maximum likelihood (EnRML) is an iterative (stochastic) ensemble smoother used for large and nonlinear inverse problems, such as history matching and data assimilation. Its current formulation is overly complicated and has issues with computational costs, noise, and covariance localization, even causing some practitioners to omit crucial prior information. This paper resolves these difficulties and streamlines the algorithm, without changing its output. These simplifications are achieved through careful treatment of the linearizations and subspaces. For example, it is shown (a) how ensemble linearizations relate to average sensitivity, and (b) that the ensemble does not lose rank during updates. The paper also draws significantly on the theory of the (deterministic) iterative ensemble Kalman smoother (IEnKS). Comparative benchmarks with these two smoothers and the ensemble smoother using multiple data assimilation (ES-MDA) are obtained on the Lorenz-96 model.
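Point (a), the link between ensemble linearizations and average sensitivity, can be illustrated on a toy linear model: the regression of observation anomalies on parameter anomalies recovers the true sensitivity matrix when the anomalies span parameter space. This toy setup is mine, not the paper's Lorenz-96 benchmark.

```python
import numpy as np

rng = np.random.default_rng(2)

# For a linear model g(m) = G @ m, the ensemble "linearization"
# Da @ pinv(Ma) is the least-squares sensitivity estimate restricted
# to the ensemble subspace.
Nm, Nd, Ne = 6, 4, 30
G = rng.normal(size=(Nd, Nm))
M = rng.normal(size=(Nm, Ne))
D = G @ M

Ma = M - M.mean(axis=1, keepdims=True)   # parameter anomalies
Da = D - D.mean(axis=1, keepdims=True)   # observation anomalies
G_ens = Da @ np.linalg.pinv(Ma)          # ensemble sensitivity estimate
```

With `Ne - 1 >= Nm` the anomalies have full row rank, so `Ma @ pinv(Ma)` is the identity and `G_ens` equals `G` exactly; for a nonlinear model the same construction yields an ensemble-averaged sensitivity.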
