Multidisciplinary Analysis of Hydraulic Stimulation and Production Effects within the Niobrara and Codell Reservoirs, Wattenberg Field, Colorado: Part 2 — Analysis of Hydraulic Fracturing and Production

2021, pp. 1-53
Author(s): Matthew Bray, Jakob Utley, Yanrui Ning, Angela Dang, Jacquelyn Daves, et al.

Enhanced hydrocarbon recovery is essential for continued economic development of unconventional reservoirs. Our study focuses on dynamic characterization of the Niobrara and Codell Formations in Wattenberg Field through the development and analysis of a fully integrated reservoir model. We demonstrate the effectiveness of hydraulic fracturing and production with two seismic monitor surveys, surface microseismic, completion data, and production data. The two monitor surveys were recorded after stimulation and again after two years of production. Identifying reservoir deformation due to hydraulic fracturing and production improves reservoir models by mapping non-stimulated and non-producing zones. Monitoring these time-variant changes improves the predictive capability of reservoir models, which in turn leads to improved well and stage placement. We quantify dynamic reservoir changes with time-lapse P-wave seismic data, using prestack inversion and velocity-independent layer stripping to estimate velocity and attenuation changes within the Niobrara and Codell reservoirs. A 3D geomechanical model and production data are history matched, and a simulation is run for two years of production. Results are integrated with the time-lapse seismic data to illustrate the effects of hydraulic fracturing and production. Our analyses show that chalk facies have significantly higher hydraulic fracture efficiency and production performance than marl facies. Additionally, structural and hydraulic complexity associated with faults generates spatial variability in a well's total production.
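At each trace location, the layer-stripping idea referenced above reduces to differencing interval traveltimes between surveys so that overburden effects cancel. A minimal sketch, assuming picked two-way times to the top and base of the reservoir interval on each survey; all values are hypothetical, not from the study:

```python
# Minimal sketch of interval (layer-stripped) time-lapse attribute estimation,
# assuming picked two-way times (in ms) to the top and base of a reservoir
# interval on the baseline and monitor surveys. Values are hypothetical.
import numpy as np

def interval_velocity_change(t_top_base, t_base_base, t_top_mon, t_base_mon):
    """Fractional interval velocity change from the interval time shift.

    Uses the common time-lapse approximation dV/V ~= -d(dt)/dt, applied
    per interval so that overburden changes are stripped out.
    """
    dt_base = t_base_base - t_top_base      # baseline interval two-way time
    dt_mon = t_base_mon - t_top_mon         # monitor interval two-way time
    return -(dt_mon - dt_base) / dt_base    # positive => velocity speed-up

# Hypothetical picks (ms) for a Niobrara-like interval at one trace location:
dv_v = interval_velocity_change(1800.0, 1860.0, 1800.5, 1862.0)
print(f"Fractional velocity change: {dv_v:.4f}")  # negative => slow-down
```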

2021, pp. 1-41
Author(s): Matthew Bray, Jacquelyn Daves, Daniel Brugioni, Asm Kamruzzaman, Tom Bratton, et al.

In the Wattenberg Field, the Reservoir Characterization Project at the Colorado School of Mines and Occidental Petroleum Corporation (Oxy) (formerly the Anadarko Petroleum Corporation) collected time-lapse seismic data to characterize changes in the reservoir caused by hydraulic fracturing and production in the Niobrara Formation and the Codell Sandstone member of the Carlile Formation. We acquired three multicomponent seismic surveys to understand the dynamic reservoir changes caused by hydraulic fracturing and production of 11 horizontal wells within a 1 mi2 section (the Wishbone Section). The first survey was acquired immediately after the wells were drilled, the second after stimulation, and the third after two years of production. In addition, we integrate core data, petrophysical properties, fault and fracture characteristics, and P-wave seismic data to illustrate reservoir properties prior to stimulation and production. Core analysis indicates extensive bioturbation in zones of high total organic content (TOC). Petrophysical analysis of logs and core samples indicates that chalk intervals have high TOC (>2%) and the lowest clay content in the reservoir interval. Core petrophysical characterization included X-ray diffraction analysis, mercury intrusion capillary pressure, N2 gas adsorption, and field emission scanning electron microscopy. Reservoir fractures follow four regional orientations, and chalk facies contain higher fracture density than marl facies. Integration of these data assists in enhanced well targeting and reservoir simulation.
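As a rough illustration of the chalk/marl distinction drawn from the petrophysical analysis, the sketch below flags chalk-like intervals from TOC and clay-volume curves. The >2% TOC criterion follows the abstract; the clay cutoff and the log values are hypothetical:

```python
# Minimal sketch of a chalk/marl facies flag from log curves, assuming TOC
# (wt%) and clay volume (v/v) curves are available. The >2% TOC criterion
# follows the abstract; the 0.35 clay cutoff is a hypothetical threshold.
import numpy as np

def flag_chalk(toc_wt_pct, vclay, clay_cutoff=0.35):
    """Return a boolean mask that is True where the interval looks chalk-like."""
    toc = np.asarray(toc_wt_pct)
    vcl = np.asarray(vclay)
    return (toc > 2.0) & (vcl < clay_cutoff)

# Hypothetical samples along a well:
toc = np.array([1.2, 2.6, 3.1, 0.8])
vcl = np.array([0.50, 0.20, 0.30, 0.55])
print(flag_chalk(toc, vcl))  # [False  True  True False]
```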


Geophysics, 2019, Vol. 85 (1), pp. M15-M31
Author(s): Mingliang Liu, Dario Grana

We have developed a time-lapse seismic history matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data assimilation method, the ensemble smoother with multiple data assimilation (ES-MDA), is adopted to iteratively update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements, and to quantify the model uncertainty of the posterior reservoir models. To address the computational and numerical challenges of applying ensemble-based optimization methods to large seismic data volumes, we develop a deep representation learning method, namely a deep convolutional autoencoder. This method reduces the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features that capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic data set, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and the low-dimensional hidden features extracted from the seismic measurements. We test our methodology on two synthetic data sets: a simplified 2D reservoir used for method validation and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient in sparsely representing the seismic data and that the reservoir models can be accurately updated according to the production data and the reparameterized time-lapse seismic data.
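The core of the assimilation loop is the ES-MDA update, applied here to production data concatenated with the low-dimensional seismic features. A minimal numpy sketch of one update step, with a hypothetical linear forward model standing in for the reservoir simulator and the autoencoder:

```python
# Minimal numpy sketch of one ES-MDA update step, assuming the seismic data
# have already been compressed to a low-dimensional feature vector (e.g. by
# an autoencoder) and concatenated with the production data. The shapes and
# the linear forward model are hypothetical stand-ins.
import numpy as np

def esmda_step(M, D, d_obs, C_e, alpha, rng):
    """One ES-MDA update.

    M: (Nm, Ne) ensemble of model parameters (columns are realizations)
    D: (Nd, Ne) ensemble of predicted data for those models
    d_obs: (Nd,) observed data (production + latent seismic features)
    C_e: (Nd, Nd) observation-error covariance
    alpha: inflation coefficient for this iteration (sum of 1/alpha_k = 1)
    """
    Ne = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (Ne - 1)               # model/data cross-covariance
    C_dd = dD @ dD.T / (Ne - 1)               # data auto-covariance
    # Perturb observations with inflated noise, as ES-MDA prescribes:
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_e, size=Ne).T
    K = C_md @ np.linalg.solve(C_dd + alpha * C_e, np.eye(len(d_obs)))
    return M + K @ (d_obs[:, None] + E - D)

rng = np.random.default_rng(0)
M = rng.normal(size=(50, 100))                 # 50 parameters, 100 members
G = rng.normal(size=(8, 50))                   # hypothetical linear forward model
D = G @ M
d_obs = G @ rng.normal(size=50)                # synthetic "truth" data
M_new = esmda_step(M, D, d_obs, 0.01 * np.eye(8), alpha=4.0, rng=rng)
print(M_new.shape)  # (50, 100)
```

In practice the step is repeated for a fixed schedule of alpha values whose reciprocals sum to one, re-running the forward model between updates.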


Geophysics, 2016, Vol. 81 (6), pp. KS207-KS217
Author(s): Jeremy D. Pesicek, Konrad Cieślik, Marc-André Lambert, Pedro Carrillo, Brad Birkelo

We have determined source mechanisms for nine high-quality microseismic events induced during hydraulic fracturing of the Montney Shale in Canada. Seismic data were recorded using a dense, regularly spaced grid of sensors at the surface. The design and geometry of the survey are such that the recorded P-wave amplitudes essentially map the upper focal hemisphere, allowing the source mechanism to be interpreted directly from the data. Given the inherent difficulties of computing reliable moment tensors (MTs) from high-frequency microseismic data, the surface amplitude and polarity maps provide important additional confirmation of the source mechanisms. This is especially critical when interpreting non-shear source processes, which are notoriously susceptible to artifacts due to incomplete or inaccurate source modeling. We have found that most of the nine events contain significant non-double-couple (DC) components, as evident in the surface amplitude data and the resulting MT models. Furthermore, we found that source models constrained to be purely shear do not explain the data for most events. Thus, even though non-DC components of MTs can often be attributed to modeling artifacts, we argue that they are required by the data in some cases, and can be reliably computed and confidently interpreted under favorable conditions.
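A common way to quantify the non-DC components discussed above is to decompose the moment tensor eigenvalues into isotropic, DC, and CLVD fractions. A minimal sketch using one standard convention (epsilon as the ratio of the smallest to largest deviatoric eigenvalue); the input tensor is hypothetical, not one of the nine Montney events:

```python
# Minimal sketch of a standard moment-tensor decomposition into isotropic,
# double-couple (DC), and CLVD parts from eigenvalues. The input tensor is
# hypothetical; decomposition conventions vary slightly across the literature.
import numpy as np

def decompose_mt(M):
    """Return (iso_pct, dc_pct, clvd_pct) for a symmetric 3x3 moment tensor."""
    eig = np.linalg.eigvalsh(M)
    iso = eig.sum() / 3.0
    dev = eig - iso                              # deviatoric eigenvalues
    dev_abs = dev[np.argsort(np.abs(dev))]       # sorted by magnitude
    eps = dev_abs[0] / abs(dev_abs[2])           # 0 for pure DC, +-0.5 for CLVD
    m_iso = abs(iso)
    m_dev = abs(dev_abs[2])
    iso_pct = 100.0 * m_iso / (m_iso + m_dev)
    clvd_pct = (100.0 - iso_pct) * 2.0 * abs(eps)
    dc_pct = 100.0 - iso_pct - clvd_pct
    return iso_pct, dc_pct, clvd_pct

# Hypothetical tensor with a tensile (opening) component:
M = np.array([[1.2, 0.1, 0.0],
              [0.1, -0.9, 0.3],
              [0.0, 0.3, 0.2]])
print(decompose_mt(M))
```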


SPE Journal, 2006, Vol. 11 (04), pp. 464-479
Author(s): B. Todd Hoffman, Jef K. Caers, Xian-Huan Wen, Sebastien B. Strebelle

Summary

This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. The method is then applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work required to history-match real reservoirs using this method. A geological description of the reservoir case study is then provided, along with the procedure to build 3D reservoir models conditioned only to the static data. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data.

Introduction

Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem: the relationship between the reservoir model parameters and the dynamic data is highly nonlinear, and multiple solutions are available. Therefore, history matching is often done by trial and error. In real-world applications, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built on geological and seismic data. While attempts are usually made to honor these other data as much as possible, the history-matched models are often unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells, devoid of any geological considerations. The primary focus lies in obtaining a history match.
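The key property of the PPM described above is that a single parameter r in [0, 1] perturbs the facies probability field rather than the facies values themselves, interpolating between the current realization (r = 0, no change) and a fresh draw from the prior (r = 1). A minimal sketch for a binary channel indicator, with an independent draw standing in for the multiple-point geostatistical sampler used in the paper:

```python
# Minimal sketch of the probability-perturbation idea for a binary facies
# (channel / no-channel) model. The probability field, not the facies values,
# is perturbed, so one parameter r in [0, 1] walks between the current
# realization (r = 0) and the prior (r = 1). A plain independent draw stands
# in for the multiple-point geostatistical simulation of the paper.
import numpy as np

def perturbed_probability(i_current, p_prior, r):
    """P(A | r) = (1 - r) * i_current + r * p_prior, elementwise."""
    return (1.0 - r) * i_current + r * p_prior

rng = np.random.default_rng(1)
p_prior = 0.3                                            # marginal channel proportion
i_cur = (rng.random((10, 10)) < p_prior).astype(float)   # current realization
for r in (0.0, 0.2, 1.0):
    p = perturbed_probability(i_cur, p_prior, r)
    i_new = (rng.random((10, 10)) < p).astype(float)
    frac_changed = np.mean(i_new != i_cur)
    print(f"r={r:.1f}: {frac_changed:.2f} of cells changed")
```

History matching then reduces to a one-dimensional search over r per perturbation step, which is what keeps the conceptual geologic model intact.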


SPE Journal, 2021, Vol. 26 (02), pp. 1011-1031
Author(s): Gilson Moura Silva Neto, Ricardo Vasconcellos Soares, Geir Evensen, Alessandra Davolio, Denis José Schiozer

Summary

Time-lapse seismic data assimilation has been drawing the reservoir-engineering community's attention over the past few years. One advantage of including this kind of data when improving reservoir-flow models is that it provides information complementary to the wells' production data. Ensemble-based methods are among the standard tools used to calibrate reservoir models with time-lapse seismic data. One drawback of assimilating time-lapse seismic data is the size of the data sets, especially for large reservoir models. This situation leads to high-dimensional problems that demand significant computational resources to process and store the matrices when conventional, straightforward methods are used. Another known issue with ensemble-based methods is the limited ensemble size, which causes spurious correlations between the data and the parameters and limits the degrees of freedom. In this work, we propose a data-assimilation scheme using an efficient implementation of the subspace ensemble randomized maximum likelihood (SEnRML) method with local analysis. This method reduces the computational requirements for assimilating large data sets because the number of operations scales linearly with the number of observed data points. Furthermore, by implementing it with local analysis, we reduce the memory requirements at each update step and mitigate the effects of the limited ensemble size. We test two local analysis approaches: one distance-based and one correlation-based. We apply these implementations to two synthetic time-lapse seismic data assimilation cases: a 2D example and a field-scale application that mimics some real-field challenges. We compare the results with reference solutions and with the known ensemble smoother with multiple data assimilation (ES-MDA) using Kalman gain distance-based localization. The results show that our method can efficiently assimilate time-lapse seismic data, leading to updated models comparable with those from other, more straightforward methods. The correlation-based local analysis approach provided results similar to the distance-based approach, with the advantage that the former can be applied to data and parameters that do not have specific spatial positions.
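Distance-based localization of the kind compared above typically tapers the influence of each observation on each parameter with a compactly supported correlation function. A minimal sketch using the Gaspari-Cohn taper applied elementwise to a stand-in Kalman gain; the grid geometry, radius, and gain values are hypothetical:

```python
# Minimal sketch of distance-based localization: a Gaspari-Cohn taper applied
# elementwise to the Kalman gain, so each parameter is only updated by
# observations within a critical radius. Geometry and radius are hypothetical.
import numpy as np

def gaspari_cohn(d, L):
    """Gaspari-Cohn fifth-order taper; 1 at d=0, 0 beyond d=2L."""
    r = np.abs(d) / L
    taper = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    taper[m1] = (-0.25*r[m1]**5 + 0.5*r[m1]**4 + 0.625*r[m1]**3
                 - (5/3)*r[m1]**2 + 1.0)
    taper[m2] = (r[m2]**5/12 - 0.5*r[m2]**4 + 0.625*r[m2]**3
                 + (5/3)*r[m2]**2 - 5.0*r[m2] + 4.0 - (2/3)/r[m2])
    return taper

# Hypothetical 1D positions of 50 grid parameters and 8 observations:
x_param = np.linspace(0.0, 5000.0, 50)
x_obs = np.linspace(500.0, 4500.0, 8)
dist = np.abs(x_param[:, None] - x_obs[None, :])   # (50, 8) distances in m
rho = gaspari_cohn(dist, L=750.0)                  # localization weights

K = np.random.default_rng(2).normal(size=(50, 8))  # stand-in Kalman gain
K_loc = rho * K                                    # Schur (elementwise) product
print(K_loc.shape)
```

The correlation-based alternative mentioned in the abstract replaces the physical distance with a data-driven correlation measure, which is what lets it handle parameters without spatial positions.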

