Large Scale Placement For Multilateral Wells Using Network Optimization

2021 ◽  
Author(s):  
Ghazi D. AL-Qahtani ◽  
Noah Berlow

Abstract Multilateral wells are an evolution of horizontal wells in which several wellbore branches radiate from the main borehole. In the last two decades, multilateral wells have been increasingly utilized in producing hydrocarbon reservoirs. The main advantage of using such technology over conventional and single-bore wells comes from the additional access to reservoir rock, maximizing reservoir contact with fewer resources. Today, multilateral wells are rapidly becoming more complex in both design and architecture (i.e., extended reach wells, maximum reservoir contact, and extreme reservoir contact wells). Certain multilateral design templates prevail in the industry, such as fork and fishbone types, which tend to be populated throughout the reservoir of interest with no significant changes to the original architecture and, therefore, may not fully realize the reservoir's potential. Placement of optimal multilateral wells is a multivariable problem: it is a function of determining the best well locations and trajectories in a hydrocarbon reservoir with the ultimate objectives of maximizing productivity and recovery. The placement of multilateral wells can be subject to many constraints, such as the number of wells required, maximum length limits, and overall economics. This paper introduces a novel technology for placement of multilateral wells in hydrocarbon reservoirs utilizing a transshipment network optimization approach. This method generates scenarios of multiple wells with different designs honoring the most favorable completion points in a reservoir. In addition, the algorithm was developed to find the most favorable locations and trajectories for the multilateral wells in both local and global terms. A partitioning algorithm is uniquely utilized to reduce the computational cost of the process. The proposed method will not only create different multilateral designs; it will also justify the trajectory of every borehole section generated. The innovative method is capable of constructing hundreds of multilateral wells with design variations in large-scale reservoirs. As reservoir complexity (e.g., active forces that influence fluid mobility) and heterogeneity dictate variability in performance in different areas of the reservoir, multilateral wells should be constructed to capture the most productive zones. The new method also allows different levels of branching for the laterals (i.e., laterals can emanate from the motherbore, from other laterals, or from subsequent branches). These features set the stage for a new generation of multilateral wells to achieve the most effective reservoir contact.
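As a rough illustration of the transshipment framing (a minimal sketch under assumed node names, quality scores, and a networkx-based solver, not the authors' algorithm or partitioning scheme): candidate completion points become network nodes, feasible borehole segments become arcs whose costs reward high-quality rock, and a minimum-cost flow solution selects the segments the laterals should follow.

```python
# Minimal sketch, assuming made-up completion points and quality scores; the paper's
# transshipment formulation, partitioning step and constraints are not reproduced here.
import networkx as nx

G = nx.DiGraph()
G.add_node("heel", demand=-2)   # motherbore heel supplies two units of "lateral" flow
G.add_node("sink", demand=2)    # aggregate terminal for lateral toes

# Candidate completion points with a quality score (e.g., from reservoir properties).
quality = {"A": 0.9, "B": 0.4, "C": 0.8, "D": 0.2}
for cp in quality:
    G.add_node(cp, demand=0)    # pure transshipment nodes

# Arcs = feasible borehole segments; cost penalizes low-quality targets,
# capacity 1 keeps each segment usable by at most one lateral.
segments = [("heel", "A"), ("heel", "B"), ("A", "C"), ("B", "D"),
            ("C", "sink"), ("D", "sink")]
for u, v in segments:
    cost = int(10 * (1.0 - quality.get(v, 0.0)))
    G.add_edge(u, v, weight=cost, capacity=1)

flow = nx.min_cost_flow(G)      # arcs carrying flow = selected borehole segments
selected = [(u, v) for u in flow for v, f in flow[u].items() if f > 0]
print(selected)
```

In a real reservoir model, node quality would come from simulated or static reservoir properties and the arc set from drilling feasibility constraints.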

2019 ◽  
Vol 9 (18) ◽  
pp. 3758 ◽  
Author(s):  
Xiang Li ◽  
Xiaojie Wang ◽  
Chengli Zhao ◽  
Xue Zhang ◽  
Dongyun Yi

Locating the source of a diffusion-like process is a fundamental and challenging problem in complex networks, and solving it can help inhibit the outbreak of epidemics among humans, suppress the spread of rumors on the Internet, prevent cascading failures of power grids, etc. However, our ability to accurately locate the diffusion source is strictly limited by incomplete information about the nodes and the inevitable randomness of the diffusion process. In this paper, we propose an efficient optimization approach via maximum likelihood estimation to locate the diffusion source in complex networks with limited observations. By modeling the informed times of the observers, we derive an optimal source localization solution for arbitrary trees and then extend it to general graphs via proper approximations. Numerical analyses on synthetic and real networks all indicate that our method is superior to several benchmark methods in terms of average localization accuracy, high-precision localization, and approximate area localization. In addition, its low computational cost enables our method to be widely applied to the source localization problem in large-scale networks. We believe that our work can provide valuable insights into the interplay between information diffusion and source localization in complex networks.
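A minimal sketch of the observer-based idea (assuming unit mean delay per edge, a least-squares likelihood surrogate, and networkx; this is not the paper's exact estimator): each candidate source is scored by how well shortest-path distances to the observers explain their informed times.

```python
# Hedged sketch: score every candidate source by the least-squares mismatch between
# observed informed times and (fitted start time + hop distance); not the paper's method.
import networkx as nx
import numpy as np

def locate_source(G, observed):             # observed: {observer_node: informed_time}
    observers = list(observed)
    times = np.array([observed[o] for o in observers], dtype=float)
    best, best_score = None, np.inf
    for s in G.nodes:
        d = nx.single_source_shortest_path_length(G, s)
        if any(o not in d for o in observers):
            continue                         # source must reach every observer
        dist = np.array([d[o] for o in observers], dtype=float)
        t0 = np.mean(times - dist)           # unknown start time, fitted per candidate
        score = np.sum((times - (t0 + dist)) ** 2)
        if score < best_score:
            best, best_score = s, score
    return best

# Toy usage: a small tree whose observers report times roughly proportional to distance.
T = nx.balanced_tree(2, 3)
print(locate_source(T, {7: 3.1, 10: 2.9, 14: 3.2}))   # recovers the root (node 0)
```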


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Marcelo V. W. Zibetti ◽  
Gabor T. Herman ◽  
Ravinder R. Regatte

Abstract In this study, a fast data-driven optimization approach, named bias-accelerated subset selection (BASS), is proposed for learning efficacious sampling patterns (SPs) with the purpose of reducing scan time in large-dimensional parallel MRI. BASS is applicable when Cartesian fully-sampled k-space measurements of specific anatomy are available for training and the reconstruction method for undersampled measurements is specified; such information is used to define the efficacy of any SP for recovering the values at the non-sampled k-space points. BASS produces a sequence of SPs with the aim of finding one of a specified size with (near) optimal efficacy. BASS was tested with five reconstruction methods for parallel MRI based on low-rankness and sparsity that allow a free choice of the SP. Three datasets were used for testing: two of high-resolution brain images ($$\text {T}_{2}$$-weighted and $$\text {T}_{1\rho }$$-weighted images, respectively) and one of knee images for quantitative mapping of the cartilage. The proposed approach has low computational cost and fast convergence; in the tested cases it obtained SPs up to 50 times faster than the currently best greedy approach. Reconstruction quality increased by up to 45% over that provided by variable-density and Poisson-disk SPs, for the same scan time. Optionally, the scan time can be nearly halved without loss of reconstruction quality. Quantitative MRI and prospective accelerated MRI results show improvements. Compared with greedy approaches, BASS rapidly learns effective SPs for various reconstruction methods, using larger SPs and larger datasets, enabling better selection of sampling-reconstruction pairs for specific MRI problems.
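To make the search problem concrete, the sketch below assumes an externally supplied efficacy callable (e.g., reconstruction quality on training k-space given a candidate SP) and refines a fixed-size index set by improving swaps; it illustrates the interface of the subset-selection task rather than the published BASS update rule.

```python
# Minimal sketch, assuming efficacy(sp) -> float to maximize; the improving-swap
# local search is a placeholder, not the BASS bias-acceleration mechanism.
import numpy as np

def select_pattern(n_points, n_keep, efficacy, n_iters=200, n_swap=4, seed=0):
    rng = np.random.default_rng(seed)
    sp = rng.choice(n_points, size=n_keep, replace=False)   # initial sampling pattern
    best = efficacy(sp)
    for _ in range(n_iters):
        drop = rng.choice(sp, size=n_swap, replace=False)          # points to remove
        pool = np.setdiff1d(np.arange(n_points), sp)
        add = rng.choice(pool, size=n_swap, replace=False)         # points to insert
        trial = np.concatenate([np.setdiff1d(sp, drop), add])
        score = efficacy(trial)
        if score > best:                                           # keep improving swaps only
            sp, best = trial, score
    return np.sort(sp), best

# Toy usage: an "efficacy" that favors evenly spread samples over a 1-D k-space line.
toy = lambda sp: -np.var(np.diff(np.sort(sp)))
pattern, score = select_pattern(256, 64, toy)
```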


2020 ◽  
Author(s):  
Lungwani Muungo

The purpose of this review is to evaluate progress in molecular epidemiology over the past 24 years in cancer etiology and prevention to draw lessons for future research incorporating the new generation of biomarkers. Molecular epidemiology was introduced in the study of cancer in the early 1980s, with the expectation that it would help overcome some major limitations of epidemiology and facilitate cancer prevention. The expectation was that biomarkers would improve exposure assessment, document early changes preceding disease, and identify subgroups in the population with greater susceptibility to cancer, thereby increasing the ability of epidemiologic studies to identify causes and elucidate mechanisms in carcinogenesis. The first generation of biomarkers has indeed contributed to our understanding of risk and susceptibility related largely to genotoxic carcinogens. Consequently, interventions and policy changes have been mounted to reduce risk from several important environmental carcinogens. Several new and promising biomarkers are now becoming available for epidemiologic studies, thanks to the development of high-throughput technologies and theoretical advances in biology. These include toxicogenomics, alterations in gene methylation and gene expression, proteomics, and metabonomics, which allow large-scale studies, including discovery-oriented as well as hypothesis-testing investigations. However, most of these newer biomarkers have not been adequately validated, and their role in the causal paradigm is not clear. There is a need for their systematic validation using principles and criteria established over the past several decades in molecular cancer epidemiology.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daiji Ichishima ◽  
Yuya Matsumura

Abstract Large-scale computation by the molecular dynamics (MD) method is often challenging or even impractical due to its computational cost, in spite of its wide applications in a variety of fields. Although recent advances in parallel computing and the introduction of coarse-graining methods have enabled large-scale calculations, macroscopic analyses are still not realizable. Here, we present renormalized molecular dynamics (RMD), a renormalization group of MD in thermal equilibrium derived by using the Migdal–Kadanoff approximation. The RMD method improves computational efficiency drastically while retaining the advantages of MD. The computational efficiency is improved by a factor of $$2^{n(D+1)}$$ over conventional MD, where D is the spatial dimension and n is the number of applied renormalization transforms. We verify RMD by conducting two simulations: melting of an aluminum slab and collision of aluminum spheres. Both problems show that the expectation values of physical quantities are in good agreement after the renormalization, whereas the computation time is reduced as expected. To observe the behavior of RMD near the critical point, the critical exponent of the Lennard-Jones potential is extracted by calculating the specific heat at the mesoscale. The critical exponent is obtained as $$\nu =0.63\pm 0.01$$. In addition, the renormalization group of dissipative particle dynamics (DPD) is derived. Renormalized DPD is equivalent to RMD in isothermal systems under the condition that the Deborah number $$De\ll 1$$.
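As a concrete reading of that scaling (an illustrative calculation, not an additional result from the paper): with D = 3 spatial dimensions and n = 2 renormalization transforms, the stated factor is $$2^{n(D+1)} = 2^{2\cdot 4} = 256$$, i.e. roughly a 256-fold reduction in cost relative to conventional MD.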


Energies ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1513 ◽  
Author(s):  
Naser Golsanami ◽  
Xuepeng Zhang ◽  
Weichao Yan ◽  
Linjun Yu ◽  
Huaimin Dong ◽  
...  

Seismic data and nuclear magnetic resonance (NMR) data are two of the most trusted kinds of information in hydrocarbon reservoir engineering. Reservoir fluids influence the elastic wave velocity and also determine the NMR response of the reservoir. The current study investigates the contributions of different pore types, i.e., micro-, meso-, and macropores, to the elastic wave velocity using laboratory NMR and elastic experiments on coal core samples under different fluid saturations. Once a meaningful relationship was observed in the lab, the idea was applied at the field scale and the NMR transverse relaxation time (T2) curves were synthesized artificially. This task was done by dividing the area under the T2 curve into eight porosity bins and estimating each bin's value from the seismic attributes using neural networks (NN). Moreover, the functionality of two statistical ensembles, i.e., Bag and LSBoost, was investigated as an alternative to conventional estimation techniques for petrophysical characteristics, and the results were compared with those from a deep learning network. Herein, NMR permeability was used as the estimation target and porosity was used as a benchmark to assess the reliability of the models. The final results indicated that, by using the incremental porosity under the T2 curve, this curve could be synthesized from the seismic attributes. The results also proved the functionality of the selected statistical ensembles as reliable tools in the petrophysical characterization of hydrocarbon reservoirs.
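For illustration, a sketch of the ensemble-estimation step under assumed data shapes and a synthetic target: scikit-learn's BaggingRegressor and GradientBoostingRegressor stand in for the "Bag" and "LSBoost" ensembles named above (MATLAB fitrensemble methods), with seismic attributes as predictors and an NMR-derived quantity (permeability or a porosity-bin value) as the target.

```python
# Hedged sketch with synthetic stand-in data; attribute count, sample count and the
# target are illustrative assumptions, not the study's dataset or tuned models.
import numpy as np
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                              # 6 seismic attributes, 500 samples
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)    # stand-in porosity-bin / permeability target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "Bag": BaggingRegressor(n_estimators=200, random_state=0),
    "LSBoost-like": GradientBoostingRegressor(loss="squared_error", n_estimators=200),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R^2 =", round(model.score(X_te, y_te), 3))
```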


Author(s):  
Zahra Homayouni ◽  
Mir Saman Pishvaee ◽  
Hamed Jahani ◽  
Dmitry Ivanov

Abstract Adoption of carbon regulation mechanisms facilitates an evolution toward green and sustainable supply chains, accompanied by increased complexity. Through the development and usage of a multi-choice goal programming model solved by an improved algorithm, this article investigates sustainability strategies for carbon regulation mechanisms. We first propose a sustainable logistics model that considers assorted vehicle types and the gas emissions involved with product transportation. We then construct a bi-objective model that minimizes total cost as the first objective and addresses environmental considerations in the second. With our novel robust-heuristic optimization approach, we seek to support decision-makers in comparing and selecting carbon emission policies for supply chains in complex settings with assorted vehicle types and demand and economic uncertainty. We deploy our model in a case study to evaluate and analyse two carbon reduction policies, i.e., carbon-tax and cap-and-trade policies. The results demonstrate that our robust-heuristic methodology can efficiently deal with demand and economic uncertainty, especially in large-scale problems. Our findings suggest that governmental incentives for a cap-and-trade policy would be more effective for supply chains in lowering pollution through investment in cleaner technologies and adoption of greener practices.
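For orientation only, a toy weighted goal programming model in the spirit of the bi-objective trade-off described above, written with PuLP; the variables, coefficients, and the emission goal standing in for a cap are invented, and the paper's multi-choice goal programming formulation and robust-heuristic solver are not reproduced.

```python
# Toy goal program: one goal on logistics cost, one on transport emissions, with
# overrun (deviation) variables penalised in the objective. All numbers are made up.
import pulp

prob = pulp.LpProblem("cost_vs_emissions", pulp.LpMinimize)
x_truck = pulp.LpVariable("truck_shipments", lowBound=0)
x_rail = pulp.LpVariable("rail_shipments", lowBound=0)
d_cost = pulp.LpVariable("cost_overrun", lowBound=0)       # deviation above cost goal
d_co2 = pulp.LpVariable("emission_overrun", lowBound=0)    # deviation above emission goal

prob += 1.0 * d_cost + 4.0 * d_co2                         # weighted goal deviations
prob += x_truck + x_rail >= 100                            # demand to be served
prob += 120 * x_truck + 80 * x_rail - d_cost <= 9000       # cost goal (currency units)
prob += 2.5 * x_truck + 0.9 * x_rail - d_co2 <= 150        # emission goal (tCO2), cap-like
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v.name: v.value() for v in prob.variables()})
```

Raising the weight on the emission deviation relative to the cost deviation mimics a stricter carbon policy and shifts the solution toward the cleaner transport mode.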


2017 ◽  
Vol 139 (5) ◽  
Author(s):  
Sara Benyakhlef ◽  
Ahmed Al Mers ◽  
Ossama Merroun ◽  
Abdelfattah Bouatem ◽  
Hamid Ajdad ◽  
...  

Reducing the levelized electricity cost of concentrated solar power (CSP) plants can have great potential in accelerating the market penetration of these sustainable technologies. Linear Fresnel reflectors (LFRs) are one of the CSP technologies that may contribute to such cost reduction. However, owing to limited prior research, LFRs are considered a low-efficiency technology. For this type of solar collector, a variety of design approaches exist for optimizing such systems. The present paper tackles a new research axis: studying the variability of heliostat curvature as an approach to optimizing small- and large-scale LFRs. Numerical investigations based on a ray-tracing model have demonstrated that LFR constructors should adopt a uniform curvature for small-scale LFRs and a variable curvature per row for large-scale LFRs. Better optical performance was obtained for LFRs with these adopted curvature types. An optimization approach based on uniform heliostat curvature for small-scale LFRs led to a system cost reduction by reducing the receiver surface and height.
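As a back-of-the-envelope illustration of the uniform-versus-variable curvature choice (an assumption-laden sketch using the first-order focusing rule R ≈ 2 × slant distance to the receiver, not the paper's ray-tracing optimization):

```python
# Hedged sketch: per-row curvature radii for a variable-curvature LFR versus a single
# compromise radius for a uniform design. Geometry values are made up for illustration.
import math

receiver_height = 8.0                      # m, receiver above the mirror plane (assumed)
row_offsets = [-6.0, -3.0, 0.0, 3.0, 6.0]  # m, lateral position of each mirror row (assumed)

slant = [math.hypot(x, receiver_height) for x in row_offsets]
variable_R = [2.0 * f for f in slant]                  # per-row curvature radii (R ~ 2f)
uniform_R = 2.0 * (sum(slant) / len(slant))            # single compromise radius

for x, f, R in zip(row_offsets, slant, variable_R):
    print(f"row at {x:+.1f} m: slant {f:.2f} m, curvature radius {R:.2f} m")
print(f"uniform design would use R = {uniform_R:.2f} m for every row")
```

The spread between the per-row radii grows with field width, which is consistent with the conclusion that variable curvature matters most for large-scale LFRs.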

