Fast, Environmentally Sound and Efficient Well Clean-Up Operations: Lessons Learned and Best Practices from Operations Around the World

2021 ◽  
Author(s):  
Yakov Shumakov ◽  
Florian Hollaender ◽  
Alexander Zhandin

Abstract Well clean-up is one of the most complex operations performed at the wellsite today. During clean-up, a well flows for the first time after initial completion or workover operations, through temporary surface facilities, either to conduct a well test or simply to condition the well before connecting it to production facilities. Currently, there are no practical recommendations available that summarize clean-up experience and guide operating companies through the process of efficiently planning well clean-up operations. Conventional well clean-up operations are inherently challenging owing to the requirements for accurate data measurement and the safe handling and disposal of produced fluids (hydrocarbons, completion brine, water, and solids). Experience has shown that it is nearly impossible to perform a well clean-up within pre-defined constraints and target criteria without appropriate design, equipment selection, and operations planning that account for the specifics of each situation. Steady-state flow simulators have been the standard tool to model pressure and temperature changes along the wellbore and through the temporary production system during the clean-up process. These simulators assume either final stabilized conditions or a limited number of intermediate ones and have formed the basis for equipment selection. However, this approach has critical limitations in modelling flowing-well behavior and fast-changing flowing conditions, and therefore in assessing operational flow assurance risks and the dynamic capability of the surface plant to handle produced fluids. The paper describes in detail today's well clean-up challenges, in which operational safety, minimal environmental footprint, and flow assurance considerations have to be balanced against cost and production performance optimization. The paper provides practical recommendations and presents multiple case studies highlighting the results and lessons learned from applying a novel workflow based on a transient multiphase flow simulator. Combined with modern well-testing equipment such as modern test separators, remotely actuated adjustable chokes, and environmentally friendly fluid disposal techniques, such an advanced design allows clean-up operations to be performed efficiently while remaining within time, rate, pressure, or emission limits.
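To make the kind of design check described above concrete, the sketch below walks a choke bean-up schedule through a user-supplied transient-simulation step and flags any step that would exceed surface-handling capacity or breach a flow-assurance threshold. It is a minimal illustration only: the schedule format, the simulate_step callable, and the limit values are hypothetical placeholders, not the workflow or simulator used by the authors.

```python
# Minimal sketch, assuming a hypothetical transient-simulator wrapper.
from dataclasses import dataclass

@dataclass
class SurfaceLimits:
    max_gas_rate_mmscfd: float   # separator gas-handling capacity
    max_liquid_rate_bpd: float   # separator liquid-handling capacity
    min_wellhead_temp_c: float   # flow-assurance (e.g. hydrate margin) threshold

def screen_cleanup_schedule(choke_schedule, simulate_step, limits: SurfaceLimits):
    """Walk a choke bean-up schedule and flag steps that would exceed
    surface-handling capacity or violate a flow-assurance threshold.

    choke_schedule: list of (choke_size_64ths, duration_hr) tuples
    simulate_step:  callable wrapping a transient simulator; assumed to return
                    a dict with 'qg_mmscfd', 'ql_bpd', and 'wht_c'
    """
    warnings = []
    for step, (choke, duration) in enumerate(choke_schedule):
        result = simulate_step(choke=choke, duration_hr=duration)
        if result["qg_mmscfd"] > limits.max_gas_rate_mmscfd:
            warnings.append((step, "gas rate exceeds separator capacity"))
        if result["ql_bpd"] > limits.max_liquid_rate_bpd:
            warnings.append((step, "liquid rate exceeds separator capacity"))
        if result["wht_c"] < limits.min_wellhead_temp_c:
            warnings.append((step, "wellhead temperature below flow-assurance margin"))
    return warnings
```

In practice the same loop can be rerun for alternative choke schedules or separator configurations, which is the kind of what-if screening that steady-state snapshots cannot provide.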

2021 ◽  
Author(s):  
Abdulaziz Al-Qasim ◽  
Sharidah Alabduh ◽  
Muhannad Alabdullateef ◽  
Mutaz Alsubhi

Abstract Fiber-optic sensing (FOS) technology is gradually becoming a pervasive tool in the monitoring and surveillance toolkit of reservoir engineers. Traditionally, fiber-optic sensing in the form of distributed temperature sensing (DTS) or distributed acoustic sensing (DAS), and more recently distributed strain sensing (DSS), distributed flow sensing (DFS), and distributed pressure sensing (DPS), has been performed with the fiber permanently clamped behind either the casing or the production tubing. Distributed chemical sensing (DCS) is still in the development phase. The emergence of the composite carbon-rod (CCR) system, which can be easily deployed in and out of a well in a manner similar to wireline logging, has opened up a vista of possibilities for obtaining many FOS measurements in any well without a prior fiber-optic installation. Currently, combinations of distributed FOS data are being used for injection management, well integrity monitoring, well stimulation and production performance optimization, thermal recovery management, etc. Is it possible to integrate many of the distributed FOS measurements on the CCR, or in a hybrid combination with wireline, to obtain multiple measurements with a single FOS cable? Each FOS measurement has its own use in acquiring certain data, and combinations of measurements can be used for further interpretation. This paper reviews the state of the art of FOS technology and the gamut of current applications of FOS data in the upstream oil and gas industry. We present results of traditional FOS measurements for well integrity monitoring, assessment of production and injection flow profiles, crossflow behind casing, etc. We also propose some nontraditional applications of the technology and suggest a few ways through which it can be deployed to obtain key reservoir description and dynamics data for reservoir performance optimization.
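One of the simplest DTS interpretations mentioned above, locating fluid entry along a flow profile, can be pictured as comparing a flowing temperature trace with the shut-in (geothermal) baseline. The sketch below is a crude, hypothetical illustration of that idea; the threshold, array names, and synthetic data are assumptions, and real DTS interpretation is considerably more involved.

```python
# Minimal sketch, assuming synthetic DTS traces and a simple deviation threshold.
import numpy as np

def flag_inflow_zones(depth_m, t_flowing_c, t_shutin_c, threshold_c=1.5):
    """Return depth intervals where the flowing DTS trace departs from the
    shut-in baseline by more than `threshold_c` (a crude proxy for fluid entry)."""
    depth_m = np.asarray(depth_m)
    deviation = np.asarray(t_flowing_c) - np.asarray(t_shutin_c)
    anomalous = np.abs(deviation) > threshold_c
    zones, start = [], None
    for i, flag in enumerate(anomalous):
        if flag and start is None:
            start = depth_m[i]
        elif not flag and start is not None:
            zones.append((start, depth_m[i - 1]))
            start = None
    if start is not None:
        zones.append((start, depth_m[-1]))
    return zones

# Synthetic example: a 3 degC warm anomaly over a 20 m interval.
depth = np.arange(2000, 2100, 1.0)
shutin = 60 + 0.03 * (depth - 2000)      # linear geothermal baseline
flowing = shutin.copy()
flowing[40:60] += 3.0
print(flag_inflow_zones(depth, flowing, shutin))   # -> [(2040.0, 2059.0)]
```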


2021 ◽  
Author(s):  
Babalola Daramola

Abstract This publication presents how an oil asset unlocked idle production after numerous production upsets and a gas hydrate blockage. It also uses economics to justify facilities enhancement projects for flow assurance. Field F is an offshore oil field with eight subsea wells tied back to a third-party FPSO vessel. Field F was shut down for turnaround maintenance in 2015. After the field was brought back online, one of the production wells (F5) failed to flow. An evaluation of the reservoir, well, and facilities data suggested that there was a gas hydrate blockage in the subsea pipeline between the wellhead and the FPSO vessel. A subsea intervention vessel was then hired to execute a pipeline clean-out operation, which removed the gas hydrate and restored oil production from well F5. To minimise oil production losses due to flow assurance issues, the asset team evaluated the viability of installing a test pipeline and a second methanol umbilical as facilities enhancement projects. The pipeline clean-out operation restored 5,400 barrels of oil per day of production to the asset. The feasibility study suggested that installing a second methanol umbilical and a test pipeline is economically attractive. It is recommended that the new methanol umbilical be installed to guarantee oil flow from F5 and from future infill production wells. The test pipeline can be used to clean up new wells, to induce low-pressure wells, and for well testing, well sampling, water salinity evaluation, tracer evaluation, and production optimisation. This paper presents the production upset diagnosis and remediation steps actioned in a producing oil field, and it supports the justification of a methanol umbilical capacity upgrade and a test pipeline installation as facilities enhancement projects. It also indicates that gas hydrate blockage can be prevented by providing adequate methanol umbilical capacity for timely dosing of oil production wells.
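The economic justification referred to above typically rests on a discounted cash-flow screen. The sketch below shows a bare-bones NPV calculation of that kind; apart from the 5,400 bopd figure quoted in the abstract, the uptime gain, oil price, capital cost, and discount rate are placeholder assumptions and not the figures from the Field F study.

```python
# Minimal NPV sketch with placeholder economics; not the Field F evaluation.
def npv(capex_usd, annual_cashflow_usd, years, discount_rate):
    """Net present value of a single up-front investment with a flat annual cash flow."""
    discounted = sum(annual_cashflow_usd / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return discounted - capex_usd

restored_rate_bopd = 5400      # rate restored by the clean-out (from the abstract)
uptime_gain = 0.02             # hypothetical extra uptime from timely methanol dosing
oil_price_usd = 60.0           # placeholder oil price
annual_cashflow = restored_rate_bopd * 365 * uptime_gain * oil_price_usd

print(npv(capex_usd=10e6, annual_cashflow_usd=annual_cashflow,
          years=10, discount_rate=0.10))   # positive -> project screens as attractive
```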


2011 ◽  
Author(s):  
Victor Gerardo Vallejo ◽  
Aciel Olivares ◽  
Pablo Crespo Hdez ◽  
Eduardo R. Roman ◽  
Claudio Rogerio Tigre Maia ◽  
...  

2009 ◽  
Vol 17 (1-2) ◽  
pp. 135-151 ◽  
Author(s):  
Guochun Shi ◽  
Volodymyr V. Kindratenko ◽  
Ivan S. Ufimtsev ◽  
Todd J. Martinez ◽  
James C. Phillips ◽  
...  

The Cell Broadband Engine architecture is a revolutionary processor architecture well suited for many scientific codes. This paper reports on an effort to implement several traditional high-performance scientific computing applications on the Cell Broadband Engine processor, including molecular dynamics, quantum chromodynamics and quantum chemistry codes. The paper discusses data and code restructuring strategies necessary to adapt the applications to the intrinsic properties of the Cell processor and demonstrates performance improvements achieved on the Cell architecture. It concludes with the lessons learned and provides practical recommendations on optimization techniques that are believed to be most appropriate.
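A data-restructuring strategy commonly used when porting codes to vector architectures such as the Cell's SPEs is converting an array-of-structures layout into a structure-of-arrays, so that inner loops operate on contiguous, uniformly typed data. The sketch below illustrates that general idea in NumPy rather than Cell SPE intrinsics; it is a conceptual analogue only, and the specific restructuring strategies in the paper may differ.

```python
# Conceptual AoS-to-SoA illustration in NumPy (not the authors' Cell code).
import numpy as np

n = 100_000

# Array-of-structures: each record interleaves x, y, z, mass in memory.
aos = np.zeros(n, dtype=[("x", "f4"), ("y", "f4"), ("z", "f4"), ("m", "f4")])
aos["m"] = 1.0

# Structure-of-arrays: one contiguous array per field, friendly to vector loads.
soa = {name: np.ascontiguousarray(aos[name]) for name in ("x", "y", "z", "m")}

# A kernel such as a kinetic-energy reduction now runs over contiguous vectors.
v = {axis: np.random.rand(n).astype("f4") for axis in ("x", "y", "z")}
kinetic = 0.5 * soa["m"] * (v["x"] ** 2 + v["y"] ** 2 + v["z"] ** 2)
print(kinetic.sum())
```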


2021 ◽  
pp. 1-18
Author(s):  
Shaoqing Sun ◽  
David A. Pollitt

Summary Benchmarking the recovery factor and production performance of a given reservoir against applicable analogs is a key step in field development optimization and a prerequisite in understanding the necessary actions required to improve hydrocarbon recovery. Existing benchmarking methods are principally structured to solve specific problems in individual situations and, consequently, are difficult to apply widely and consistently. This study presents an alternative empirical analog benchmarking workflow that is based upon systematic analysis of more than 1,600 reservoirs from around the world. This workflow is designed for effective, practical, and repeatable application of analog analysis to all reservoir types, development scenarios, and production challenges. It comprises five key steps: (1) definition of problems and objectives; (2) parameterization of the target reservoir; (3) quantification of resource potential; (4) assessment of production performance; and (5) identification of best practices and lessons learned. Problems of differing nature and for different objectives require different sets of analogs. This workflow advocates starting with a broad set of parameters to find a wide range of analogs for quantification of resource potential, followed by a narrowly defined set of parameters to find relevant analogs for assessment of production performance. During subsequent analysis of the chosen analogs, the focus is on isolating specific critical issues and identifying a smaller number of applicable analogs that more closely match the target reservoir with the aim to document both best practices and lessons learned. This workflow aims to inform decisions by identifying the best-in-class performers and examining in detail what differentiates them. It has been successfully applied to improve hydrocarbon recovery for carbonate, clastic, and basement reservoirs globally. The case studies provided herein demonstrate that this workflow has real-world utility in the identification of upside recovery potential and specific actions that can be taken to optimize production and recovery.
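The broad-then-narrow screening idea at the heart of steps 3 and 4 can be illustrated with a simple parameter filter over an analog table, as sketched below. The column names, cut-offs, and toy data are hypothetical and do not represent the authors' parameterization or their database of more than 1,600 reservoirs.

```python
# Minimal sketch of broad-then-narrow analog screening with placeholder criteria.
import pandas as pd

def screen_analogs(db, criteria):
    """Keep reservoirs whose parameters fall inside each (min, max) range."""
    mask = pd.Series(True, index=db.index)
    for column, (lo, hi) in criteria.items():
        mask &= db[column].between(lo, hi)
    return db[mask]

# Toy stand-in for an analog database.
reservoir_db = pd.DataFrame({
    "porosity_pct":        [12, 22, 27, 18, 30],
    "api_gravity":         [38, 31, 29, 41, 26],
    "permeability_md":     [20, 150, 400, 80, 600],
    "depth_m":             [1500, 2500, 3100, 2800, 3900],
    "recovery_factor_pct": [18, 35, 42, 28, 25],
})

# Broad screen for resource potential, then a narrower screen for performance benchmarking.
broad = screen_analogs(reservoir_db, {"porosity_pct": (15, 30), "api_gravity": (25, 45)})
narrow = screen_analogs(broad, {"permeability_md": (50, 500), "depth_m": (2000, 3500)})
print(narrow["recovery_factor_pct"].describe())   # benchmark range for the target reservoir
```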


2021 ◽  
Author(s):  
Zhihua Wang ◽  
Aqib Qureshi ◽  
Tarik A Abdelfattah ◽  
Joshua R Snitkoff

Abstract The re-development of a giant offshore field in the United Arab Emirates (UAE) consists predominantly of four artificial islands requiring, in most cases, extremely long horizontal laterals to reach the reservoir targets. Earlier SPE technical papers (1,2) have introduced the development, testing, qualification, and deployment of plugged liner technology using dissolvable plugged nozzles (DPNs). The use of DPN plugged liner technology has resulted in CAPEX savings and enhanced production performance. The benefits of DPN technology are its simplicity and cost effectiveness. However, the dissolvable material has some limitations, such as pressure rating and dissolution time, which are dependent on fluid chemistry. To overcome these limits, a new Pressure Actuated Isolation Nozzle Assembly (PAINA) was developed as an alternative to the plugged liner tool for applications where a higher pressure rating is required, as well as on-demand opening. Furthermore, the new PAINA also functions as a flow control device during injection and production, enhancing acid jetting effects during bullhead stimulation and reducing brine losses during liner installation. Liners with PAINAs can be run to TD like blank pipe: fluids can be circulated through the inside of the liner without the need for a wash pipe. Once on bottom, non-aqueous drilling fluid is displaced to brine without actuating the isolation mechanism. When the well is ready for production or injection, pressure is applied and the isolation mechanism is activated to establish communication between well and reservoir. These tools were successfully run as flow control devices in water-alternating-gas (WAG) pilot wells. The planning and execution of the initial application will be discussed, along with the tool development, qualification testing, and lessons learned. The key advantage of this technology is in extending plugged liner applications to cases where other pressure-operated tools are included as part of the liner lower completion. Pressure can be applied to the well multiple times without activating the isolation mechanism as long as the applied pressure remains below the actuation pressure.
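The operating principle described above, repeated pressure applications that stay below the actuation threshold until opening is intended, lends itself to a simple planning check like the sketch below. The pressure values, event names, and safety margin are hypothetical illustrations, not vendor specifications or field data.

```python
# Minimal sketch: checking a planned surface-pressure schedule against a
# hypothetical nozzle actuation pressure with a placeholder safety margin.
def check_pressure_schedule(planned_events_psi, actuation_pressure_psi, margin_psi=250):
    """Return (event_name, pressure) pairs that approach or exceed actuation."""
    risky = []
    for name, pressure in planned_events_psi:
        if pressure >= actuation_pressure_psi - margin_psi:
            risky.append((name, pressure))
    return risky

events = [("liner pressure test", 2500),
          ("displacement to brine", 1800),
          ("intentional actuation", 4200)]   # final step is meant to open the nozzles

print(check_pressure_schedule(events, actuation_pressure_psi=4000))
```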


2020 ◽  
Author(s):  
Florian Hollaender ◽  
Mahmoud Basioni ◽  
Ahmed Yahya Al Blooshi ◽  
Ahmed Elmahdi ◽  
Sohdy Sayed ◽  
...  

2020 ◽  
Vol 13 (1) ◽  
pp. 195 ◽  
Author(s):  
Luis Paipa-Galeano ◽  
César A. Bernal-Torres ◽  
Luís Mauricio Agudelo Otálora ◽  
Yavar Jarrah Nezhad ◽  
Heither A. González-Blanco

Purpose: The purpose of this paper is to analyze the conditions under which continuous improvement practices are developed and to determine which success factors and barriers affect the sustainability of these practices, in order to establish strategies that reduce the risk of failure of improvement proposals in companies. Design/methodology/approach: The paper presents a rigorous review of the success factors and barriers in the implementation of continuous improvement models in companies and a multiple case study in which four successful companies located in Bogota, Colombia, were compared using Bessant's maturity model. Findings: The results suggest the existence of systematic improvement processes in the four companies analysed, in support of improved business competitiveness. After a convergence exercise between the success factors identified in the literature and the routines of the evaluation model used to assess the companies' improvement maturity, five strategic fronts were identified for achieving sustainable improvement proposals: (1) have management commit to the improvement and guarantee resources, (2) define a methodology to implement, (3) facilitate and systematize information on the interventions, (4) design training programs and incentives to encourage employee involvement, and (5) generate a verification and control system to provide real-time feedback on the progress of the improvement actions. Research limitations/implications: This research was limited to the analysis of four large Colombian companies, which restricts the generalizability of the findings. Nevertheless, the study offers empirical insights into the lessons learned from continuous improvement practices, supporting managers in better decision making and helping academics better understand the drivers of continuous improvement. Originality/value: The present investigation provides a conceptual framework for future studies related to the sustainability of continuous improvement in industry, approaching this topic from both a theoretical and a practical perspective.


Author(s):  
Fang Zhao

The previous chapters have included a comprehensive discussion of general issues concerning e-partnership management from both technology and people perspectives; continuing this theme, this chapter presents extended and systematic multiple case studies that allow a more profound exploration of the way in which companies have partnered in e-business. It also contains an in-depth examination of specific issues and problems raised in e-partnerships. The cases selected for the case studies represent a broad range of interests, from big-brand dotcoms like Yahoo! and Google to a small manufacturer that has embraced e-business and e-partnership technologies and practices. The case studies are followed by a cross-case analysis of the key issues in relation to the development of e-partnerships. Key success factors are identified from the successful cases, along with the hard lessons learned from failure.

