Fracture network connectivity devolution monitoring using transdimensional data assimilation

Author(s):  
Márk Somogyvári ◽  
Mohammadreza Jalali ◽  
Irina Engelhardt ◽  
Sebastian Reich

In fractured aquifers, the permeability of open fractures can change over time due to mineral precipitation and hydrothermal mineral growth. These processes can lead to the clogging of individual fractures and to a complete rearrangement of flow and transport pathways. Existing fractured-rock characterization techniques often neglect these dynamics and treat the reconstruction as a static inversion problem; the dynamic changes are then added to the model later as an independent forward-modeling task. In this research we provide a new data-assimilation-based methodology to monitor and predict the dynamic changes of fractured aquifers due to mineralization in a quasi-real-time manner.

We formulate the inverse problem as a dynamic hidden Markov process in which the underlying model dynamics are only partly known. Data assimilation methods are specifically designed to model such systems under strong uncertainty. A typical example of such a problem is weather forecasting, where the combination of nonlinear processes and partial observations makes forecasting challenging. To handle the strongly random behavior, data assimilation approaches use stochastic algorithms. In this study we combine DFN-based stochastic aquifer reconstruction techniques with data assimilation algorithms to provide a dynamic inverse modeling framework for fractured reservoirs. We use the transdimensional DFN inversion of Somogyvári et al. (2017) to initialize the data assimilation. This method uses a transdimensional MCMC approach to identify the most probable DFN geometries given the observations. Because the method is transdimensional, it can adjust the number of model parameters, i.e., the number of fractures within the DFN. We develop this idea further by enhancing a particle filter algorithm with transdimensional model updates, allowing us to infer DFN models with changing fracture numbers.

We demonstrate the applicability of this new approach on outcrop-based synthetic fractured-aquifer models. To create a dynamic DFN example, we simulate solute transport in a 2-D fracture network model using an advection-dispersion algorithm. We simulate fracture sealing stochastically: we define a limit concentration above which a fracture may seal with a predefined probability at any timestep. At the initial timestep, a hydraulic tomography experiment is performed to capture the initial aquifer structure, which is then reconstructed by the transdimensional DFN inversion. At predefined timesteps, hydraulic tests are performed at different parts of the aquifer to obtain information about the new state of the synthetic model. These observations are then processed by the data assimilation algorithm, which updates the underlying DFN models to better fit the observations.
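As a concrete illustration of the assimilation loop described above, the following is a minimal sketch of a transdimensional particle filter over DFN models. The fracture representation as (x, y, length, orientation) tuples, the Gaussian likelihood, and the `forward_model` placeholder are assumptions for illustration; the authors' actual hydraulic solver and proposal design are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# A particle is one candidate DFN: an (n_fractures, 4) array with rows
# (x, y, length, orientation). n_fractures may differ between particles.

def log_likelihood(particle, d_obs, forward_model, sigma=0.1):
    """Gaussian misfit between simulated and observed hydraulic-test data."""
    d_sim = forward_model(particle)
    return -0.5 * np.sum((d_sim - d_obs) ** 2) / sigma**2

def transdimensional_update(particle, domain=(0.0, 100.0)):
    """Propose a birth (add a fracture) or death (remove a fracture) move,
    mimicking fracture opening/sealing between assimilation steps."""
    if rng.random() < 0.5 and len(particle) > 1:          # death move
        return np.delete(particle, rng.integers(len(particle)), axis=0)
    new = [rng.uniform(*domain), rng.uniform(*domain),    # birth move
           rng.lognormal(2.0, 0.5), rng.uniform(0.0, np.pi)]
    return np.vstack([particle, new])

def assimilate(particles, d_obs, forward_model):
    """One filter cycle: transdimensional proposal, reweighting, resampling."""
    proposed = [transdimensional_update(p) for p in particles]
    logw = np.array([log_likelihood(p, d_obs, forward_model) for p in proposed])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(proposed), size=len(proposed), p=w)
    return [proposed[i].copy() for i in idx]
```

The birth/death proposal is what makes the filter transdimensional: the number of fractures in each particle can change between assimilation steps, mirroring the stochastic sealing in the synthetic truth.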

SPE Journal ◽  
2020 ◽  
Vol 25 (05) ◽  
pp. 2729-2748
Author(s):  
Xiaopeng Ma ◽  
Kai Zhang ◽  
Chuanjin Yao ◽  
Liming Zhang ◽  
Jian Wang ◽  
...  

Summary Efficient identification and characterization of fracture networks are crucial for the exploitation of fractured media such as naturally fractured reservoirs. Using the information obtained from borehole logs, core images, and outcrops, fracture geometries can be roughly estimated. However, this estimation always carries uncertainty, which can be reduced using inverse modeling. Following the Bayesian framework, a common practice in inverse modeling is to sample from the posterior distribution of the uncertain parameters given the observational data. However, a challenge for fractured reservoirs is that fractures often occur on different scales and form an irregular network structure that is difficult to model and predict. In this work, a multiscale parameterization method is developed to model the fracture network. Based on this parameterization, we present a novel history-matching approach that uses a data-driven evolutionary algorithm to explore the Bayesian posterior space and decrease the uncertainties of the model parameters. Empirical studies on hypothetical and outcrop-based cases demonstrate that the proposed method can model and estimate complex multiscale fracture networks on a limited computational budget.
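As a rough sketch of what such a multiscale parameterization can look like, the snippet below draws a two-scale fracture network from simple position, length, and orientation distributions. All distributions and names here are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Fracture:
    x: float       # midpoint x-coordinate
    y: float       # midpoint y-coordinate
    length: float  # fracture length
    theta: float   # orientation in radians

def sample_fracture_network(n_large, n_small, domain=100.0, seed=None):
    """Draw one realization of a two-scale network: a few long,
    large-scale fractures plus many short, small-scale ones."""
    rng = np.random.default_rng(seed)

    def draw(n, mean_log_length):
        return [Fracture(rng.uniform(0, domain), rng.uniform(0, domain),
                         rng.lognormal(mean_log_length, 0.4),
                         rng.uniform(0.0, np.pi))
                for _ in range(n)]

    return draw(n_large, 3.0) + draw(n_small, 1.0)
```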


SPE Journal ◽  
2021 ◽  
pp. 1-22
Author(s):  
Kai Zhang ◽  
Jinding Zhang ◽  
Xiaopeng Ma ◽  
Chuanjin Yao ◽  
Liming Zhang ◽  
...  

Summary Although researchers have applied many methods to history matching, such as Monte Carlo methods, ensemble-based methods, and optimization algorithms, history matching fractured reservoirs is still challenging. The key challenges are representing the fracture network effectively and coping with the large number of reservoir-model parameters. With increasing numbers of fractures, the dimension grows, resulting in heavy computational work in the inversion of fractures. This paper proposes a new characterization method for the multiscale fracture network and a powerful dimensionality-reduction method for the model parameters by means of an autoencoder. The characterization of the fracture network is based on the length, orientation, and position of fractures, covering both large-scale and small-scale fractures. To significantly reduce the dimension of the parameters, the deep sparse autoencoder (DSAE) transforms the input to low-dimensional latent variables through encoding and decoding. Integrated with the greedy layer-wise algorithm, we set up a DSAE and then take the latent variables as the optimization variables. The performance of the DSAE with fewer active nodes is excellent because it reduces the redundant information of the input and avoids overfitting. We then adopt the ensemble smoother (ES) with multiple data assimilation (ES-MDA) to solve this minimization problem. We test our proposed method on three synthetic reservoir history-matching problems, comparing it with a no-dimensionality-reduction approach and principal-component analysis (PCA). The numerical results show that the characterization method integrated with the DSAE can simplify the fracture network, preserve the distribution of fractures during the update, and improve the quality of history matching for naturally fractured reservoirs.
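The ES-MDA update itself is standard (Emerick and Reynolds, 2013). Below is a minimal NumPy sketch of one assimilation step, here imagined as acting on DSAE latent variables; the `forward_model` (decode the latents, then simulate) is a placeholder, and ensemble sizes and noise levels are illustrative.

```python
import numpy as np

def esmda_step(M, forward_model, d_obs, C_e, alpha, rng):
    """One ES-MDA step on an ensemble M of shape (n_params, n_ensemble)."""
    D = np.column_stack([forward_model(m) for m in M.T])  # predicted data
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    n_e = M.shape[1]
    C_md = dM @ dD.T / (n_e - 1)                          # cross-covariance
    C_dd = dD @ dD.T / (n_e - 1)                          # data covariance
    # Perturb observations with noise inflated by sqrt(alpha).
    D_obs = d_obs[:, None] + np.sqrt(alpha) * rng.multivariate_normal(
        np.zeros(len(d_obs)), C_e, size=n_e).T
    K = C_md @ np.linalg.inv(C_dd + alpha * C_e)          # Kalman-type gain
    return M + K @ (D_obs - D)

# Over N_a assimilation steps, the inflation factors must satisfy
# sum(1 / alpha_i) = 1; a common choice is alpha_i = N_a for all i.
```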


2019 ◽  
Vol 147 (5) ◽  
pp. 1429-1445 ◽  
Author(s):  
Yuchu Zhao ◽  
Zhengyu Liu ◽  
Fei Zheng ◽  
Yishuai Jin

Abstract We performed parameter estimation in the Zebiak–Cane model for the real-world scenario using ensemble Kalman filter (EnKF) data assimilation with observational sea surface temperature and wind stress analyses. With real-world data assimilation in the coupled model, our study shows that the model parameters converge toward stable values. Furthermore, the new parameters improve the real-world ENSO prediction skill, with the skill improved most by the parameter of highest climate sensitivity (gam2), which controls the strength of the anomalous upwelling advection term in the SST equation. The improved prediction skill is found to come mainly from the improvement in the model dynamics and secondarily from the improvement in the initial field. Finally, geographically dependent parameter optimization further improves the prediction skill across all regions. Our study suggests that parameter optimization using ensemble data assimilation may provide an effective strategy to improve climate models and their real-world climate predictions in the future.
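A common way to estimate parameters with an EnKF, as done here, is state augmentation: the uncertain parameters are stacked onto the state vector so that state–parameter cross-covariances carry observational information into the parameter update. A minimal stochastic-EnKF sketch of that idea follows; the linear observation operator `H` is a simplifying assumption.

```python
import numpy as np

def enkf_parameter_update(X, theta, y_obs, H, R, rng):
    """Update state ensemble X (n_x, n_e) and parameter ensemble
    theta (n_p, n_e) jointly from observations y_obs."""
    Z = np.vstack([X, theta])                    # augmented ensemble
    Y = H @ X                                    # predicted observations
    dZ = Z - Z.mean(axis=1, keepdims=True)
    dY = Y - Y.mean(axis=1, keepdims=True)
    n_e = Z.shape[1]
    C_zy = dZ @ dY.T / (n_e - 1)                 # augmented cross-covariance
    C_yy = dY @ dY.T / (n_e - 1)
    K = C_zy @ np.linalg.inv(C_yy + R)           # Kalman gain
    Y_obs = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=n_e).T     # perturbed observations
    Z_new = Z + K @ (Y_obs - Y)
    n_x = X.shape[0]
    return Z_new[:n_x], Z_new[n_x:]              # updated state, parameters
```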


2021 ◽  
Author(s):  
Natalia Hanna ◽  
Estera Trzcina ◽  
Maciej Kryza ◽  
Witold Rohm

A numerical weather model starts from an initial state of the Earth's atmosphere at a given place and time. The initial state is created by blending previous forecast runs (the first guess) with observations from different platforms. The better the initial state, the better the forecast; hence, it is worthwhile to incorporate new observation types. The GNSS tomography technique, developed in recent years, provides a 3-D field of humidity in the troposphere and has shown positive results in the monitoring of severe weather events. However, to assimilate the tomographic outputs into a numerical weather model, a proper observation operator needs to be built.

This study demonstrates the TOMOREF operator, dedicated to the assimilation of GNSS tomography-derived 3-D fields of wet refractivity in the Weather Research and Forecasting (WRF) model Data Assimilation (DA) system. The new tool was tested on wet refractivity fields derived during a very intense precipitation event. The results were validated against radiosonde observations, synoptic data, ERA5 reanalysis, and radar data. In the presented experiment, a positive impact of GNSS tomography data assimilation on the forecast of relative humidity (RH) was observed (an improvement in root-mean-square error of up to 0.5%). Moreover, within 1 hour after assimilation, the GNSS data reduced the precipitation bias by up to 0.1 mm. Additionally, the assimilation of GNSS tomography data had more influence on the WRF model than Zenith Total Delay (ZTD) observations, which confirms the potential of GNSS tomography data for weather forecasting.
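The abstract does not spell out TOMOREF's internals, but any wet-refractivity observation operator ultimately evaluates the standard two-term formula from the model's humidity and temperature fields. A minimal sketch, using commonly cited coefficient values (e.g., Bevis et al., 1994):

```python
def wet_refractivity(e_hpa, t_k, k2p=22.1, k3=3.739e5):
    """Wet refractivity N_wet (in N-units) from water vapour partial
    pressure e (hPa) and temperature T (K):
    N_wet = k2' * e / T + k3 * e / T**2.
    The coefficients are commonly cited empirical estimates."""
    return k2p * e_hpa / t_k + k3 * e_hpa / t_k**2
```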


2012 ◽  
Vol 27 (1) ◽  
pp. 124-140 ◽  
Author(s):  
Bin Liu ◽  
Lian Xie

Abstract Accurately forecasting a tropical cyclone's (TC) track and intensity remains one of the top priorities in weather forecasting. A dynamical downscaling approach based on the scale-selective data assimilation (SSDA) method is applied to demonstrate its effectiveness in TC track and intensity forecasting. The SSDA approach retains the merits of global models in representing large-scale environmental flows and of regional models in describing small-scale characteristics. The regional model is driven from the model domain interior by assimilating large-scale flows from global models, as well as from the model lateral boundaries by conventional sponge-zone relaxation. Using Hurricane Felix (2007) as a demonstration case, it is shown that, by assimilating large-scale flows from the Global Forecast System (GFS) forecasts into the regional model, the SSDA experiments perform better than both the original GFS forecasts and the control experiments, in which the regional model is driven only by lateral boundary conditions. The overall mean track forecast error for the SSDA experiments is reduced by over 40% relative to the control experiments and by about 30% relative to the GFS forecasts. In terms of TC intensity, benefiting from the higher grid resolution that better represents regional and small-scale processes, both the control and SSDA runs outperform the GFS forecasts. The SSDA runs show approximately 14% less overall mean intensity forecast error than the control runs. It should be noted that, for the Felix case, the advantage of SSDA becomes more evident for forecasts with lead times longer than 48 h.
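The core of SSDA is a scale separation: only the large-scale spectral component of the regional solution is drawn toward the global model, leaving the small scales free to develop. A schematic sketch of that filter-and-nudge step is below; the cutoff wavenumber and blending weight are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def large_scale_component(field, cutoff):
    """Low-pass filter a 2-D field in spectral space, keeping only
    normalized wavenumbers at or below the cutoff (the large scales)."""
    F = np.fft.fft2(field)
    kx = np.fft.fftfreq(field.shape[0])[:, None]
    ky = np.fft.fftfreq(field.shape[1])[None, :]
    mask = np.sqrt(kx**2 + ky**2) <= cutoff
    return np.real(np.fft.ifft2(F * mask))

def ssda_nudge(regional, global_interp, cutoff, weight=0.5):
    """Pull the regional field's large scales toward the global model's
    (interpolated to the regional grid), keeping small scales untouched."""
    reg_large = large_scale_component(regional, cutoff)
    glo_large = large_scale_component(global_interp, cutoff)
    return regional - weight * (reg_large - glo_large)
```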


2012 ◽  
Vol 12 (12) ◽  
pp. 3719-3732 ◽  
Author(s):  
L. Mediero ◽  
L. Garrote ◽  
A. Chavez-Jimenez

Abstract. The opportunities offered by high-performance computing hold significant promise for enhancing the performance of real-time flood forecasting systems. In this paper, a real-time framework for probabilistic flood forecasting through data assimilation is presented. The distributed rainfall-runoff real-time interactive basin simulator (RIBS) model is selected to simulate the hydrological processes in the basin. Although the RIBS model is deterministic, it is run in a probabilistic way using the results of a calibration developed in previous work by the authors, which identified the probability distribution functions that best characterize the most relevant model parameters. Adaptive techniques improve flood forecasts because the model can be adapted to observations in real time as new information becomes available. The new adaptive forecast model, based on genetic programming as a data assimilation technique, is compared with the previously developed flood forecast model based on the calibration results. Both models are probabilistic, as they generate an ensemble of hydrographs that accounts for the different uncertainties inherent in any forecast process. The Manzanares River basin was selected as a case study; the process is computationally intensive, as it requires the simulation of many replicas of the ensemble in real time.
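The probabilistic use of the deterministic RIBS model can be sketched in a few lines: each ensemble member draws its parameters from the calibrated distributions and produces one hydrograph. The parameter names and distributions below are hypothetical placeholders, and `run_ribs` stands in for the actual simulator.

```python
import numpy as np

def ensemble_forecast(run_ribs, n_members=100, seed=None):
    """Generate an ensemble of hydrographs by sampling model parameters
    from their calibrated probability distributions (placeholders here)."""
    rng = np.random.default_rng(seed)
    hydrographs = []
    for _ in range(n_members):
        params = {
            # Hypothetical calibrated distributions for illustration:
            "manning_channel": rng.lognormal(np.log(0.035), 0.3),
            "manning_hillslope": rng.lognormal(np.log(0.10), 0.3),
        }
        hydrographs.append(run_ribs(params))
    return np.array(hydrographs)  # shape: (n_members, n_timesteps)
```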


2014 ◽  
Vol 14 (23) ◽  
pp. 32233-32323 ◽  
Author(s):  
M. Bocquet ◽  
H. Elbern ◽  
H. Eskes ◽  
M. Hirtl ◽  
R. Žabkar ◽  
...  

Abstract. Data assimilation is used in atmospheric chemistry models to improve air quality forecasts, construct reanalyses of three-dimensional chemical (including aerosol) concentrations, and perform inverse modeling of input variables or model parameters (e.g., emissions). Coupled chemistry meteorology models (CCMM) are atmospheric chemistry models that simulate meteorological processes and chemical transformations jointly. They offer the possibility to assimilate both meteorological and chemical data; however, because CCMM are fairly recent, data assimilation in CCMM has been limited to date. We review here the current status of data assimilation in atmospheric chemistry models, with a particular focus on future prospects for data assimilation in CCMM. We first review the methods available for data assimilation in atmospheric models, including variational methods, ensemble Kalman filters, and hybrid methods. Next, we review past applications of chemical data assimilation in chemical transport models (CTM) and in CCMM. Observational data sets available for chemical data assimilation are described, including surface data, surface-based remote sensing, airborne data, and satellite data. Several case studies of chemical data assimilation in CCMM are presented to highlight the benefits obtained by assimilating chemical data in CCMM, and a case study of data assimilation to constrain emissions is also presented. There are few examples to date of joint meteorological and chemical data assimilation in CCMM, and the potential difficulties associated with data assimilation in CCMM are discussed. As the number of variables being assimilated increases, it is essential to characterize the errors correctly; in particular, the specification of error cross-correlations may be problematic. In some cases, offline diagnostics are necessary to ensure that data assimilation can truly improve model performance. However, the main challenge is likely to be the paucity of chemical data available for assimilation in CCMM.
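For reference, the variational methods surveyed here minimize a cost function of the standard 3D-Var form, which in a CCMM spans both meteorological and chemical control variables:

\[
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}} \mathbf{B}^{-1} (\mathbf{x}-\mathbf{x}_b)
+ \tfrac{1}{2}\,\big(\mathbf{y}-H(\mathbf{x})\big)^{\mathsf{T}} \mathbf{R}^{-1} \big(\mathbf{y}-H(\mathbf{x})\big),
\]

where \(\mathbf{x}_b\) is the background state, \(\mathbf{B}\) the background error covariance, \(\mathbf{y}\) the observations, \(H\) the observation operator, and \(\mathbf{R}\) the observation error covariance. The error cross-correlations flagged as problematic above enter through the off-diagonal (e.g., meteorology–chemistry) blocks of \(\mathbf{B}\).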


2015 ◽  
Vol 37 (1) ◽  
pp. 29-42
Author(s):  
Nguyen Thanh Don ◽  
Nguyen Van Que ◽  
Tran Quang Hung ◽  
Nguyen Hong Phong

Around the world, the data assimilation framework has been reported to be of great interest for weather forecasting, oceanography modeling, and shallow-water flows, particularly for flood models. For flood models, this method is a powerful tool to identify time-independent parameters (e.g., Manning coefficients and initial conditions) and time-dependent parameters (e.g., inflow). This paper demonstrates the efficiency of the method in identifying a time-dependent parameter, the inflow discharge, for a real complex case: the Red River. First, we briefly discuss current methods for determining flow rate, including new technologies, and then present the ability of this method to recover the flow rate. For the case of very long time series, a temporal strategy with overlapping windows is suggested to decrease the amount of memory required. In addition, several other aspects of data assimilation are illustrated by this case.
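The overlapping-window strategy can be sketched in a few lines: the long assimilation period is cut into windows that share an overlap, so each variational solve only needs to hold one window in memory. Window length and overlap below are illustrative, not the paper's settings.

```python
def overlapping_windows(n_steps, window, overlap):
    """Split n_steps timesteps into windows of the given length that
    overlap by `overlap` timesteps (all quantities in timesteps)."""
    starts = range(0, max(n_steps - overlap, 1), window - overlap)
    return [(s, min(s + window, n_steps)) for s in starts]

# Example: overlapping_windows(100, 30, 10)
# -> [(0, 30), (20, 50), (40, 70), (60, 90), (80, 100)]
```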

