Semiquantitative 4D seismic interpretation integrated with reservoir simulation: Application to the Norne field

2018, Vol 6 (3), pp. T601-T611
Author(s): Juliana Maia Carvalho dos Santos, Alessandra Davolio, Denis Jose Schiozer, Colin MacBeth

Time-lapse (or 4D) seismic attributes are extensively used as inputs to history-matching workflows. However, this integration can create problems if performed incorrectly: uncertainties in seismic acquisition, processing, and interpretation can be inadvertently incorporated into the reservoir simulation model, yielding erroneous production forecasts. Very often, the information provided by 4D seismic is noisy or ambiguous. For this reason, it is necessary to estimate the level of confidence in the data before transferring them to the simulation model. The methodology presented in this paper aims to diagnose which information from the 4D seismic data we are confident enough to include in the model. Two passes of seismic interpretation are proposed: the first to understand the character and quality of the seismic data, and the second to compare the simulation-to-seismic synthetic response with the observed seismic signal. The methodology is applied to the Norne field benchmark case, in which we find several examples of inconsistencies between the synthetic and real responses and evaluate whether these are caused by inaccuracies in the simulation model or by uncertainties in the observed seismic. After a careful qualitative and semiquantitative analysis, the confidence level of the interpretation is determined, and simulation model updates can be suggested according to the outcome of this analysis. The main contribution of this work is a diagnostic step that classifies the reliability of the seismic interpretation in light of the uncertainties inherent in these data. The results indicate that medium to high interpretation confidence can be achieved even for poorly repeated data.
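
As a rough illustration of how such a diagnostic classification might be implemented, the sketch below scores a 4D anomaly by the sign agreement between observed and synthetic difference maps, down-weighted by a repeatability (NRMS) penalty. The thresholds, the weighting scheme, and the function name are illustrative assumptions, not the authors' published procedure.

```python
# Hypothetical sketch of a semiquantitative confidence classification for a
# 4D anomaly. Thresholds and the NRMS-based weighting are assumptions made
# for illustration only.
import numpy as np

def interpretation_confidence(observed, synthetic, nrms):
    """Classify confidence in a 4D anomaly from the sign agreement between
    observed and simulation-to-seismic synthetic maps, penalized by NRMS."""
    observed = np.asarray(observed, dtype=float)
    synthetic = np.asarray(synthetic, dtype=float)
    # Fraction of samples where the maps agree on softening vs. hardening.
    sign_agreement = np.mean(np.sign(observed) == np.sign(synthetic))
    # Poorly repeated data (high NRMS) lowers the usable confidence.
    repeatability_weight = max(0.0, 1.0 - nrms)  # nrms as a fraction, e.g. 0.25
    score = sign_agreement * repeatability_weight
    if score > 0.6:
        return "high", score
    if score > 0.3:
        return "medium", score
    return "low", score

label, score = interpretation_confidence(
    observed=[0.8, 0.5, -0.1, 0.7], synthetic=[0.6, 0.4, 0.2, 0.9], nrms=0.25)
print(label, round(score, 2))
```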

SPE Journal, 2010, Vol 15 (04), pp. 1077-1088
Author(s): F. Sedighi, K. D. Stephen

Summary Seismic history matching is the process of modifying a reservoir simulation model to reproduce observed production data in addition to information gained through time-lapse (4D) seismic data. The search for good predictions requires that many models be generated, particularly if there are interactions between the properties that we change and their effect on the misfit to observed data. In this paper, we introduce a method of improving search efficiency by estimating such interactions and partitioning the set of unknowns into noninteracting subspaces. We use regression analysis to identify the subspaces, which are then searched separately but simultaneously with an adapted version of the quasiglobal stochastic neighborhood algorithm. We have applied this approach to the Schiehallion field, located on the UK continental shelf. The field model, supplied by the operator, contains a large number of barriers that affect flow at different times during production, and their transmissibilities are highly uncertain. We find that we can successfully represent the misfit function as a second-order polynomial in the changes in barrier transmissibility. First, this enables us to identify the most important barriers; second, we can modify their transmissibilities efficiently by searching subgroups of the parameter space. Once the regression analysis has been performed, we reduce the number of models required to find a good match by an order of magnitude. By using 4D seismic data to condition saturation and pressure changes in history matching, we have gained greater insight into reservoir behavior and have been able to predict flow more accurately with an efficient inversion tool. We can now determine unswept areas and make better business decisions.
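
A minimal sketch of this idea, assuming (as the paper reports) that the misfit over parameter changes is well approximated by a second-order polynomial: fit the quadratic response surface by least squares, then group parameters whose pairwise interaction coefficients exceed a tolerance into connected components, which become the independently searchable subspaces. The tolerance and helper names below are illustrative assumptions.

```python
# Sketch: quadratic misfit fit + partitioning into noninteracting subspaces.
import itertools
import numpy as np

def fit_quadratic_misfit(X, m):
    """Least-squares fit of m ~ c0 + sum b_i x_i + sum a_ij x_i x_j (i <= j);
    returns the cross-term coefficients a_ij for i != j."""
    n, p = X.shape
    pairs = list(itertools.combinations_with_replacement(range(p), 2))
    cols = [np.ones(n)] + [X[:, i] for i in range(p)] + \
           [X[:, i] * X[:, j] for i, j in pairs]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), m, rcond=None)
    return {(i, j): coef[1 + p + k]
            for k, (i, j) in enumerate(pairs) if i != j}

def partition_parameters(interactions, p, tol=1e-3):
    """Connected components of the interaction graph = noninteracting subspaces."""
    groups = [{i} for i in range(p)]
    for (i, j), a in interactions.items():
        if abs(a) > tol:
            gi = next(g for g in groups if i in g)
            gj = next(g for g in groups if j in g)
            if gi is not gj:
                groups.remove(gj)
                gi |= gj
    return groups

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))          # sampled transmissibility changes
m = (X[:, 0] - 0.5)**2 + X[:, 0] * X[:, 1] + X[:, 2]**2 + 2 * X[:, 3]**2
print(partition_parameters(fit_quadratic_misfit(X, m), p=4))
# -> [{0, 1}, {2}, {3}]  (x0 and x1 interact; x2 and x3 are independent)
```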


2015, Vol 3 (2), pp. SP35-SP52
Author(s): Zhen Yin, Milana Ayzenberg, Colin MacBeth, Tao Feng, Romain Chassagne

We have found that dynamic reservoir interpretation can be enhanced by directly correlating the seismic amplitudes from many repeated 4D seismic monitors with the field production and injection history from wells. This "well2seis" crosscorrelation was achieved by defining a linear relationship between the 4D seismic signals and changes in the cumulative fluid volumes at the wells. We also found that the distribution of the well2seis correlation attribute can reveal key reservoir connectivity features, such as the sealing behavior of faults, fluid pathways, and communication between neighboring compartments. It can therefore enhance dynamic reservoir description. Based on this enhanced interpretation, we have developed a workflow to close the loop between 4D seismic and reservoir engineering data. First, the reservoir model was directly updated using quantitative information extracted from multiple surveys, by positioning and placing known barriers or conduits to flow. After this process, seismic-assisted history matching was applied using the well2seis attribute to honor data from the seismic and engineering domains while remaining consistent with the fault interpretation. Compared with traditional history matching, which attempts to match individual seismic time-lapse amplitudes and production data, our approach used an attribute that condensed the available data to effectively enhance the signal. In addition, the approach was observed to improve history-matching efficiency as well as model predictability. The proposed methodology was applied to a North Sea field whose production is controlled by fault compartmentalization. It successfully detected the communication pathways and sealing properties of key faults that are known to be major factors influencing reservoir development. After history matching, the desired loops were closed by efficiently updating the reservoir simulation model, as indicated by a 90% reduction in the misfit errors and an 89% lowering of the corresponding uncertainty bounds.
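
A minimal sketch of the well2seis idea, under the stated linear relationship: at each map location, correlate the vector of 4D amplitude changes across the repeated monitors with the corresponding changes in cumulative fluid volumes at a well. The array shapes and the use of a plain Pearson correlation are assumptions for illustration; the published attribute may differ in detail.

```python
# Sketch of a well-to-seismic correlation attribute over repeated monitors.
import numpy as np

def well2seis(amplitude_maps, cum_volume_changes):
    """amplitude_maps: (n_monitors, ny, nx) 4D difference maps.
    cum_volume_changes: (n_monitors,) change in cumulative volume per survey.
    Returns an (ny, nx) correlation attribute in [-1, 1]."""
    a = np.asarray(amplitude_maps, dtype=float)
    v = np.asarray(cum_volume_changes, dtype=float)
    a_dev = a - a.mean(axis=0)                 # deviations over the monitor axis
    v_dev = v - v.mean()
    num = np.tensordot(v_dev, a_dev, axes=(0, 0))
    den = np.sqrt((a_dev**2).sum(axis=0)) * np.sqrt((v_dev**2).sum())
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(den > 0, num / den, 0.0)

# Three monitors over a 2x2 map: the left column tracks injection, the right does not.
maps = np.array([[[1.0, 0.1], [0.9, 0.0]],
                 [[2.1, 0.0], [2.0, 0.1]],
                 [[3.0, 0.2], [3.1, 0.0]]])
print(well2seis(maps, cum_volume_changes=[1.0, 2.0, 3.0]).round(2))
```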


2021
Author(s): Bjørn Egil Ludvigsen, Mohan Sharma

Abstract Well performance calibration after history matching a reservoir simulation model ensures that the wells deliver realistic rates during the prediction phase. The calibration involves adjusting well model parameters to match observed production rates at specified backpressure(s). This process is usually very time consuming, such that traditional approaches using one reservoir model with hundreds of high-productivity wells would take months to calibrate. The application of uncertainty-centric workflows for reservoir modeling and history matching results in many acceptable matches for phase rates and flowing bottomhole pressure (BHP). This makes well calibration even more challenging for an ensemble of many simulation models, as the existing approaches are not scalable. The productivity index (PI) integrates reservoir and well performance, and most of the pressure drop occurs within one to two grid blocks around the well, depending on the model resolution. A workflow has been set up to fix the history-to-prediction transition by calibrating the PI of each well in a history-matched simulation model. The simulated PI can be modified by changing the permeability-thickness (Kh) or skin, or by applying a PI multiplier as a correction. For a history-matched ensemble with a range in water cut and gas-oil ratio, the proposed workflow runs flowing-gradient calculations for each well, using the observed THP and the simulated phase rates, to calculate a target BHP. A PI multiplier is then calculated for that well and model that shifts the simulated BHP to the target BHP as a local update, reducing the extent of the jump. The history-matched models in an ensemble, having different water cuts and gas-oil ratios, each require a BHP unique to that case. With the well calibration performed correctly, the jump observed in rates when switching from history to prediction can be eliminated or significantly reduced. The prediction then yields reliable rates if the wells are run on pressure control and a reliable plateau if the wells are run on group control. This reduces the risk of under- or over-predicting the ultimate hydrocarbon recovery from the field and the project's cash flow. It also allows running sensitivities to backpressure, tubing design, and other equipment constraints to optimize reservoir performance and facilities design. The proposed workflow, which dynamically couples reservoir simulation and well performance modeling, takes a few seconds to run per well, making it fit for purpose for a large ensemble of simulation models with a large number of wells.
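
A minimal sketch of the PI-multiplier correction described above. In the workflow the target BHP would come from a flowing-gradient (outflow) calculation at the observed THP and the simulated phase rates; here it is passed in directly, and a straight-line inflow model PI = q / (p_res - p_bhp) is an assumption for illustration.

```python
# Sketch: PI multiplier that moves the simulated BHP onto the target BHP
# while preserving the simulated rate at the history/prediction transition.
def pi_multiplier(liquid_rate, p_res, bhp_sim, bhp_target):
    drawdown_sim = p_res - bhp_sim        # drawdown implied by the simulator
    drawdown_target = p_res - bhp_target  # drawdown implied by well hydraulics
    if min(drawdown_sim, drawdown_target) <= 0:
        raise ValueError("BHP must lie below reservoir pressure")
    pi_sim = liquid_rate / drawdown_sim
    pi_target = liquid_rate / drawdown_target
    return pi_target / pi_sim             # == drawdown_sim / drawdown_target

# Example: the simulator flows the well at 150 bar BHP, but the gradient
# calculation says 170 bar is needed to deliver the same rate at observed THP.
print(round(pi_multiplier(liquid_rate=2000.0, p_res=250.0,
                          bhp_sim=150.0, bhp_target=170.0), 3))  # 1.25
```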


2021
Author(s): Mohamed Shams

Abstract This paper presents a field application of the bee colony optimization algorithm to assist the history matching of a real reservoir simulation model. Bee colony optimization is an optimization technique inspired by the natural foraging behavior of honeybees. The way honeybees search for food sources in the vicinity of their nest inspired computer science researchers to apply the same principles to create optimization models and techniques. In this work, the bee colony optimization mechanism is used as the optimization algorithm in an assisted history matching workflow applied to a reservoir simulation model of the WD-X field, which has been producing since 2004. The resulting history-matched model is compared with those obtained using one of the most widely applied commercial assisted history matching (AHM) software tools. The results indicate that using the bee colony algorithm as the optimization technique in the assisted history matching workflow provides a noticeable enhancement in both match quality and the time required to achieve a reasonable match.
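
A minimal artificial-bee-colony sketch for minimizing a history-match misfit, with the standard employed/onlooker/scout phases. The misfit function, bounds, and all control parameters are illustrative assumptions; in the paper the objective would be the mismatch between simulated and observed production data of the WD-X model.

```python
# Sketch of artificial bee colony (ABC) minimization of a misfit function.
import numpy as np

def abc_minimize(misfit, bounds, n_food=20, limit=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    food = rng.uniform(lo, hi, size=(n_food, dim))   # candidate parameter sets
    fit = np.array([misfit(x) for x in food])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food - 1)
        k += k >= i                                  # random peer different from i
        j = rng.integers(dim)
        cand = food[i].copy()
        cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
        cand = np.clip(cand, lo, hi)
        f = misfit(cand)
        if f < fit[i]:
            food[i], fit[i], trials[i] = cand, f, 0  # greedy replacement
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):                      # employed bees: scan all sources
            try_neighbor(i)
        q = 1.0 / (1.0 + fit - fit.min())            # onlookers favor low misfit
        for i in rng.choice(n_food, size=n_food, p=q / q.sum()):
            try_neighbor(i)
        for i in np.where(trials > limit)[0]:        # scouts replace stale sources
            food[i] = rng.uniform(lo, hi)
            fit[i], trials[i] = misfit(food[i]), 0

    best = int(fit.argmin())
    return food[best], fit[best]

# Toy misfit with optimum at (1, 2); stands in for the simulation mismatch.
x, f = abc_minimize(lambda x: (x[0] - 1)**2 + (x[1] - 2)**2,
                    bounds=[(-5, 5), (-5, 5)])
print(x.round(3), round(f, 6))
```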


2015, Vol 3 (2), pp. SP11-SP19
Author(s): Oghogho Effiom, Robert Maskall, Edwin Quadt, Kazeem A. Lawal, Raphael Afolabi, ...

To improve the management of a Nigerian deepwater field, two vintages of 4D data have been acquired since field start-up in 2005. The first Nigerian 4D seismic survey in water depths greater than 1000 m (monitor-I) was acquired over this field in 2008, and a second monitor (monitor-II) was acquired in 2012. Monitor-II achieved better geometric repeatability than monitor-I because the lessons learned from the first survey were incorporated into its design: the final normalized root mean square (NRMS) of the monitor-II fast-track volume was 12%, compared with 25% for monitor-I. The improved quality is attributed to improvements in the acquisition methodology and in the prediction of the effects of currents. Seismic interpretation of the field revealed two distinct turbidite depositional settings: (1) an unconfined amalgamated lobe system with low-relief, high net-to-gross reservoir sands that exhibit fairly homogeneous waterflooding patterns on 4D, and (2) an erosional canyon setting filled with meander belts having a more complex 3D connectivity within and between the channels, resulting in a challenging 4D interpretation. The time-lapse data were instrumental in better understanding the reservoir architecture, enabling improved well and reservoir management practices, the identification of infill opportunities, and more mature subsurface models. We evaluated the seismic acquisition and the 4D interpretation of the deepwater 4D seismic data, highlighting the merits of a multidisciplinary, collaborative approach to time-lapse seismic. At present, the value of information of the 4D monitor-II is conservatively estimated at 101 million US dollars, equivalent to the cost of a well in this deepwater operating environment.
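
The NRMS repeatability figures quoted above follow the standard definition NRMS = 200 * RMS(m - b) / (RMS(m) + RMS(b)) between a monitor trace m and a base trace b. The sketch below computes it over whole traces for simplicity; in practice it is typically evaluated in windows away from the production-affected zone, and the synthetic noise level used here is an assumption.

```python
# Sketch of the normalized RMS (NRMS) repeatability metric.
import numpy as np

def nrms(monitor, base):
    """NRMS repeatability in percent between two co-located traces/volumes."""
    m = np.asarray(monitor, dtype=float)
    b = np.asarray(base, dtype=float)
    rms = lambda x: np.sqrt(np.mean(x**2))
    return 200.0 * rms(m - b) / (rms(m) + rms(b))

rng = np.random.default_rng(1)
base = rng.standard_normal(1000)
monitor = base + 0.12 * rng.standard_normal(1000)  # small non-repeatable noise
print(round(nrms(monitor, base), 1))  # low values indicate good repeatability
```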

