Practical Application of Integrated Simulation Technologies for Asset Development and Surface Facilities Design of Gas Fields

2021 ◽  
Author(s):  
Mikhail Zhuravlev ◽  
Anastasiia Novikova ◽  
Aleksandra Cherkasova ◽  
Dmitry Shakhov ◽  
Alexander Kharkovsky ◽  
...  

Abstract The main goal of this paper is to describe an automated process for assessing asset design solutions against expected production levels as they evolve over time. The integrated model contains embedded sub-models: pipeline networks, compression facilities, gas treatment units, reservoir simulation models for generating production profiles, and an economic model that provides an instant investment estimate. A continuous data flow between all the component models allows a quick assessment of how different variables influence the overall efficiency of an integrated asset development option; this approach makes it possible both to rapidly expand the range of options considered and to increase the depth of analysis. We describe this approach using the example of a development project for a group of gas assets, which integrates the following surface facilities: pipeline gathering networks for well pads with the corresponding booster compressor stations, and the transport network that delivers well product to the gas processing unit. The work provides recommendations on how to set up the optimal configuration of an integrated model (type and composition of sub-models, linking algorithms, data exchange directions, etc.) to address various long-term planning issues. In addition, we show an example of standardizing the management of sub-models so that the integrated model can be updated quickly when new production data arrives or when the surface facilities concept changes, and so that the approach can be transferred more easily to other, similar projects. The novelty of the work lies in a unique approach to conceptual design issues based on flexible configuration of an integrated model for specific tasks. This approach includes processing production data in different formats and the ability to connect an economic model to obtain an instant investment assessment of a surface facilities option within a comprehensive analysis.
In addition, it includes the ability to connect detailed models of the gas processing unit and booster compressor station, with prospective economic efficiency reassessed whenever the production profiles are updated. The integrated model example and the overall approach presented in this paper are unique due to the following factors:
– "flexibility" of the model, which changes its configuration depending on the task;
– prompt updating of the project's economic indicators;
– explicit accounting for transport and process facilities through the use of detailed models of pipeline and processing systems, including booster compressor stations.
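The "flexible configuration" described above, with a continuous data flow between sub-models and an economic model attached for instant investment assessment, can be sketched as pluggable components exchanged each timestep. Everything below (class names, rates, prices, the simple exponential decline law) is illustrative, not the authors' actual model.

```python
# Minimal sketch of an integrated model: sub-models (reservoir, gathering
# network, economics) are pluggable components that exchange data every
# timestep. All names and numbers are illustrative.

class ReservoirSubModel:
    def __init__(self, plateau_rate, decline):
        self.rate = plateau_rate           # MMscf/d (illustrative)
        self.decline = decline             # fraction per year

    def step(self):
        q = self.rate
        self.rate *= (1.0 - self.decline)  # toy exponential decline
        return q

class NetworkSubModel:
    def __init__(self, capacity):
        self.capacity = capacity           # MMscf/d gathering-system limit

    def step(self, q):
        return min(q, self.capacity)       # deliverability capped by surface network

class EconomicSubModel:
    def __init__(self, gas_price, discount_rate):
        self.gas_price = gas_price         # $ per Mscf
        self.discount_rate = discount_rate

    def npv(self, yearly_rates, capex):
        # Revenue per year from delivered gas, discounted; minus upfront capex.
        cash = [q * 1e3 * self.gas_price * 365 for q in yearly_rates]
        return -capex + sum(c / (1 + self.discount_rate) ** (t + 1)
                            for t, c in enumerate(cash))

def run_integrated_model(reservoir, network, economics, capex, years):
    """Continuous data flow: reservoir -> network -> economics each year."""
    delivered = [network.step(reservoir.step()) for _ in range(years)]
    return delivered, economics.npv(delivered, capex)
```

Swapping in a different network sub-model (or a detailed compressor-station model) only changes the component passed to `run_integrated_model`, which is the point of the flexible configuration.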

Author(s):  
Yue Xiang ◽  
Peng Wang ◽  
Bo Yu ◽  
Dongliang Sun

The numerical simulation efficiency of large-scale natural gas pipeline networks is usually unsatisfactory. In this paper, Graphics Processing Unit (GPU)-accelerated hydraulic simulations for large-scale natural gas pipeline networks are presented. First, based on the Decoupled Implicit Method for Efficient Network Simulation (DIMENS), presented in our previous study, a novel two-level parallel simulation process and the corresponding parallel numerical method for hydraulic simulations of natural gas pipeline networks are proposed. Then, the GPU implementation of the two-level parallel simulation is introduced in detail. Finally, numerical experiments are provided to test the performance of the proposed method. The results show that the proposed method achieves a notable speedup: for five large-scale pipe networks, compared with the well-known commercial simulation software SPS, the speedup ratio of the proposed method reaches up to 57.57 with comparable calculation accuracy. More encouragingly, the proposed method adapts well to large pipeline networks: the larger the pipeline network, the larger the speedup ratio. The speedup ratio of the GPU method depends approximately linearly on the total number of discrete points in the network.
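The two-level decomposition described above can be illustrated schematically: once the junction (boundary) variables are solved, the pipelines decouple and can be processed independently (level 1), and within each pipeline the update of all interior discrete points is a data-parallel operation (level 2). The sketch below uses NumPy vectorization as a stand-in for GPU threads and reduces the hydraulics to a toy diffusion-like pressure update; it is not the DIMENS scheme itself, and all numbers are illustrative.

```python
import numpy as np

def update_pipeline(p, alpha=0.25):
    """Level-2 parallelism: update all interior discrete points of one
    pipeline at once (toy diffusion-like update, not real gas hydraulics)."""
    p_new = p.copy()
    p_new[1:-1] = p[1:-1] + alpha * (p[2:] - 2.0 * p[1:-1] + p[:-2])
    return p_new

def step_network(pipelines, junction_pressures):
    """Level-1 parallelism: after imposing junction pressures as boundary
    conditions, each pipeline update is independent of the others, so each
    iteration of this loop could run as its own GPU block."""
    out = []
    for p, (p_left, p_right) in zip(pipelines, junction_pressures):
        p = p.copy()
        p[0], p[-1] = p_left, p_right     # decoupling boundary conditions
        out.append(update_pipeline(p))
    return out
```

The observation that speedup grows roughly linearly with the total number of discrete points is consistent with this structure: more points per pipeline means more work per data-parallel update and better GPU utilization.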


2021 ◽  
Author(s):  
Rena Alia Ramdzani ◽  
Oluwole A. Talabi ◽  
Adeline Siaw Hui Chua ◽  
Edwin Lawrence

Abstract Field X, located offshore South East Asia, is a deepwater, turbidite natural gas greenfield currently being developed with a subsea tieback production system. It is part of a group of fields anticipated to be developed together as a cluster. Given the nature of this development, several key challenges were foreseen: i) subsurface uncertainty; ii) the impact of the production network on system deliverability and flow assurance; iii) efficient use of high-frequency data in managing production. The objective of this study was to demonstrate a flexible and robust methodology to address these challenges by integrating multiple realizations of the reservoir model with surface network models and showing how this could be linked to "live" production data in the future. This paper describes the development and deployment of the solutions to those challenges. Furthermore, it presents the results and key observations, with recommendations for moving forward to field digitalization. The process started with a quality check of the base case dynamic reservoir model to improve performance and enable multiple-realization runs in a reasonable timeframe. This was followed by sensitivity and uncertainty analysis to obtain 10 realizations of the subsurface model, which were integrated with the steady-state surface network model. Optimization under uncertainty was then performed on the integrated model to evaluate three illustrative development scenarios. To demonstrate extensibility, two additional candidate reservoirs for future development were also tied into the system and modelled as a single integrated asset model to meet the anticipated gas delivery targets. Next, the subsurface model was integrated with a multiphase transient network model to show how it can be used to evaluate the risk of hydrate formation along the pipeline during planned production start-up.
As a final step, the built-in application programming interface (API) of the integration software was used for automation, enabling the integrated model to be triggered and run automatically while being updated with sample "live" production data. At the conclusion of the study, reservoir simulation performance had improved, with runtime reduced by a factor of four and no significant change in base case results. The coupled reservoir and steady-state network simulation and optimization showed that the network could constrain reservoir deliverability by up to 4% in all realizations due to back pressure, and that the optimal development scenario was to delay first gas production and operate for a shorter duration at high separator pressure. With the additional reservoirs in the integrated model, the production plateau could be extended up to 15 years beyond the base case without exceeding the specified water handling limit. For hydrate risk analysis, the difference between the hydrate formation temperature and the fluid temperature indicated a potential risk of hydrate formation, which could be reduced by increasing the inhibitor concentration. Finally, the automation process was successfully tested with sample data: updated production forecast profiles were generated as "new" production data was fed into the database, enabling immediate analysis. This study demonstrated an approach to improve forecasting and scenario evaluation by using multiple realizations of the reservoir model coupled to a surface network. It also demonstrated that this integrated model can be carried forward to improve management of the field in the future when combined with "live" data and automation logic, creating a foundation for a digital field deployment.
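The automation pattern described in this abstract can be sketched generically: poll a data store for new "live" production records and, when new data arrives, re-run the model to refresh the forecast. The study used the integration software's built-in API; the names below (`rebuild_forecast`, `automation_cycle`, the trivial decline extrapolation) are placeholders, not that API.

```python
def rebuild_forecast(history):
    """Placeholder for the coupled reservoir/network re-run: here, a trivial
    5-step decline extrapolation of the last observed rate."""
    last = history[-1]["rate"]
    return [last * 0.95 ** t for t in range(1, 6)]

def automation_cycle(database, last_seen):
    """One polling cycle. Returns (new_last_seen, forecast or None): the
    model is only re-run when records newer than last_seen have arrived."""
    new_records = [r for r in database if r["t"] > last_seen]
    if not new_records:
        return last_seen, None               # nothing new -> no re-run
    latest = max(r["t"] for r in new_records)
    history = sorted((r for r in database if r["t"] <= latest),
                     key=lambda r: r["t"])
    return latest, rebuild_forecast(history)
```

In a deployment, `automation_cycle` would run on a schedule (or on a database trigger), which is what makes the forecast refresh "immediate" as new data lands.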


2013 ◽  
Vol 16 (04) ◽  
pp. 412-422
Author(s):  
A.M. Farid ◽  
Ahmed H. El-Banbi ◽  
A.A. Abdelwaly

Summary The depletion performance of gas/condensate reservoirs is highly influenced by changes in fluid composition below the dewpoint. Long-term prediction of gas/condensate reservoir behavior is therefore difficult because of the complexity of both compositional variation and two-phase-flow effects. In this paper, an integrated model was developed to simulate gas/condensate reservoir and well behavior. The model couples the compositional material-balance or generalized material-balance equations for reservoir behavior, the two-phase pseudopressure integral for near-wellbore behavior, and outflow correlations for wellbore behavior. An optimization algorithm was also combined with the integrated model so that it can be run in history-matching mode to estimate original gas in place (OGIP), original oil in place (OOIP), and productivity-index (PI) parameters for gas/condensate wells. The model can also predict production performance under variable tubinghead pressure (THP) and variable production rate; it runs fast and requires minimal input. The developed model was validated against different simulation cases generated with a commercial compositional reservoir simulator for a variety of reservoir and well conditions. The results show good agreement between the simulation cases and the integrated model. After validation against the simulated cases, the model was used to analyze production data for a rich gas/condensate field (initial condensate/gas ratio of 180 bbl/MMscf). THP data for four wells were used along with basic reservoir and production data to obtain the original fluids in place and the PIs of the wells. The estimated parameters were then used to forecast gas and condensate production above and below the dewpoint. The model is also capable of predicting reservoir pressure, bottomhole flowing pressure, and THP, and can account for completion changes when they occur.
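The history-matching idea can be illustrated with the simplest (dry-gas) analogue of the material balance: p/z is linear in cumulative production Gp, and OGIP is the Gp-intercept of that line. The paper's model couples compositional material balance, near-wellbore pseudopressure, wellbore correlations, and an optimizer; the sketch below, using ordinary least squares on synthetic p/z data, is only a stand-in for that workflow.

```python
import numpy as np

def estimate_ogip(gp, p_over_z):
    """Fit p/z = a + b*Gp by least squares; OGIP is where the fitted line
    reaches p/z = 0, i.e. Gp = -a/b."""
    b, a = np.polyfit(gp, p_over_z, 1)   # slope, intercept
    return -a / b

# Synthetic "field data" (illustrative): true OGIP 500 Bscf, initial p/z 5000 psia.
gp = np.array([0.0, 50.0, 100.0, 150.0])   # cumulative production, Bscf
pz = 5000.0 * (1.0 - gp / 500.0)           # exact material-balance line
```

For a real gas/condensate well below the dewpoint, p/z versus Gp is no longer a straight line, which is exactly why the paper replaces this with the compositional/generalized material balance and fits OGIP, OOIP, and PI with an optimizer instead.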


2018 ◽  
Vol 1 (1) ◽  
pp. 39-46
Author(s):  
Victor Richardo ◽  
Dewi Agustini Santoso ◽  
Tita Talitha

The design of a facility layout aims to optimize productivity, processing time, worker fatigue, cost, and the distance material moves on the production floor. Optimization of material travel distance typically focuses on minimizing the distance between production units within the manufacturing company. This study conducted a case study at UD Utama Tires, a truck tire retreading company located in Semarang. The company has cross-movement problems between production units and a departmental layout that is not in line with the production process flow, while the available production space is only 30 x 15 meters for 12 departments. Observations show a material transfer distance of 111.96 m in the heat-processing unit and 100.63 m in the cold-processing unit. Optimization is needed to minimize the material travel distance between process units. The Blocplan algorithm was chosen as the method for rearranging the company layout, producing 20 alternative layout proposals from which the one with the highest layout score was selected. The result of this research is the 15th alternative, with the highest layout score of 0.82; the optimized layout reduces the material travel distance to 63.8 m in the heat-processing unit (an effectiveness rate of 43.02%) and to 69.1 m in the cold-processing unit (an effectiveness rate of 31.33%).
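The "effectiveness rate" quoted above is the fractional reduction in material travel distance, (old − new) / old, which can be checked directly against the reported numbers:

```python
def effectiveness_rate(old_m, new_m):
    """Fractional reduction in material travel distance."""
    return (old_m - new_m) / old_m

hot = effectiveness_rate(111.96, 63.8)    # heat-processing unit -> ~43.02%
cold = effectiveness_rate(100.63, 69.1)   # cold-processing unit -> ~31.33%
```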


2010 ◽  
Vol 113-116 ◽  
pp. 199-202
Author(s):  
Wen Xian Jiao

The environmental problems in the Hei River Basin of northwestern China, caused by interactions between natural and human systems, are very complex. Only through the development and application of integrated tools can we better describe and analyse the ongoing processes. In this paper, we introduce the Spatial Modeling Environment (SME) as a powerful tool to simplify integrated model building. After describing the function and architecture of SME, we discuss its application prospects in the Hei River Basin. It appears that, to achieve an integrated ecological-economic model that can explore the endogenous interactions between socio-economic and ecological dynamics, researchers should identify the main human factors and spatialize them so that natural and human factors share an identical, fixed spatial-temporal scale. Identifying human factors in environmental impact assessment is very complex; we introduce the IPAT identity as a useful framework for identifying the main human factors.
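The IPAT identity mentioned above decomposes environmental impact multiplicatively as I = P × A × T (population × affluence × technology). A toy example with purely illustrative numbers for a river-basin water budget:

```python
def ipat(population, affluence, technology):
    """Impact = Population * Affluence (output per capita) * Technology
    (impact per unit of output)."""
    return population * affluence * technology

# Illustrative: 2e6 people, $5,000 output per person, 0.1 m^3 of water per $.
impact = ipat(2e6, 5000.0, 0.1)   # total water demand, m^3
```

The identity's value for the spatialization argument above is that each factor (P, A, T) can be mapped as its own spatial layer and then multiplied cell by cell onto the same grid as the ecological variables.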


2016 ◽  
Vol 14 (02) ◽  
pp. 1641008 ◽  
Author(s):  
Dmitry Suplatov ◽  
Nina Popova ◽  
Sergey Zhumatiy ◽  
Vladimir Voevodin ◽  
Vytas Švedas

Rapid expansion of online resources providing access to genomic, structural, and functional information on biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to make optimal use of computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations that take at most several hours to analyze a common input on a modern desktop station; however, because they are invoked many times over a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new software tool, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within a parallel supercomputing environment. The Message Passing Interface (MPI) is used to exchange information between nodes. Two specialized threads are invoked on each processing unit, one for task management and communication and another for subtask execution, to avoid deadlock while using blocking MPI calls. mpiWrapper can launch any conventional Linux application without modification of its original source code and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper.
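mpiWrapper itself is a C++/MPI tool with a manager thread and an execution thread per node; as a portable, self-contained sketch of the same master-worker idea (farm many invocations of a non-parallel program out to workers and resubmit failed subtasks), here is an analogue using a Python thread pool instead of MPI. Since each real subtask would be an external program launch (I/O-bound), threads are a reasonable stand-in; all names and the trivial "subtask" are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subtask(task_id):
    """Stand-in for launching one unmodified Linux application on one input
    (mpiWrapper would exec the real program here)."""
    return task_id * task_id

def _attempt(fn, arg):
    """Run one subtask, converting any exception into a 'failed' marker."""
    try:
        return (fn(arg),)
    except Exception:
        return None

def run_with_resubmission(tasks, workers=4, max_attempts=3):
    """Master side: dispatch independent subtasks to the worker pool,
    collect results, and re-queue failures (mirroring mpiWrapper's
    resubmission of subtasks on node failure)."""
    results, pending = {}, list(tasks)
    for _ in range(max_attempts):
        if not pending:
            break
        failed = []
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for t, outcome in zip(pending,
                                  pool.map(lambda t: _attempt(run_subtask, t),
                                           pending)):
                if outcome is None:
                    failed.append(t)      # resubmit in the next round
                else:
                    results[t] = outcome[0]
        pending = failed
    return results
```

The essential property this preserves from the paper's design is that the programs being farmed out need no source-code changes: all parallelism lives in the wrapper.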

