A Multi-Level Non-Linear Solver for Complex Well Modelling

2021
Author(s):
Zhen Chen
Tareq Shaalan
Ali Dogru

Abstract Complex well models have proved important for capturing the full physics in the wellbore, including pressure losses, multiphase effects, and advanced device modelling. Numerical instability may be observed, especially when the well is produced at a low rate from a highly productive multiphase zone. In this paper, a new multi-level nonlinear solver is presented within a state-of-the-art parallel complex wellbore model to address some difficult numerical convergence problems. A sequential two-level nonlinear solver is implemented: an inner solver addresses convergence of the constraint rate equation, and an outer solver then solves the entire complex network. Finally, the wellbore model is coupled with the grid solution explicitly, sequentially, or implicitly. This formulation greatly improves the numerical stability that is otherwise degraded by the lagged computation of mixture density in the wellbore constraint rate equation and by the variation of fluid composition over Newton iterations in the network nonlinear solver. The numerical challenges in the complex well model and the performance improvement with the new nonlinear solver are demonstrated using reservoir simulation. Models with complex wells that run into convergence problems are constructed and simulated. With the new nonlinear solver, the simulations give much more reliable well-production results without numerical oscillations, at a much lower computational cost.
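
As a rough illustration of the sequential two-level structure described above, the sketch below nests an inner Newton loop for the constraint rate equation inside an outer loop for the full network; all residual/Jacobian callables, the state layout, and the tolerances are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

def newton(residual, jacobian, x0, tol=1e-8, max_iter=25):
    """Plain Newton loop; returns the (hopefully) converged state."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        r = np.atleast_1d(residual(x))
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(np.atleast_2d(jacobian(x)), r)
    return x

def solve_well_network(state, rate_res, rate_jac, net_res, net_jac):
    # Inner level: converge the constraint rate equation first, so the
    # mixture density it uses is consistent rather than lagged.
    state["bhp"] = newton(lambda p: rate_res(p, state),
                          lambda p: rate_jac(p, state), state["bhp"])
    # Outer level: solve the entire wellbore network with the updated
    # bottomhole pressure; coupling to the reservoir grid (explicit,
    # sequential, or implicit) would follow this call.
    state["unknowns"] = newton(lambda u: net_res(u, state),
                               lambda u: net_jac(u, state), state["unknowns"])
    return state
```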

Author(s):  
Yizhen Chen
Haifeng Hu

Most existing segmentation networks are built upon a “U-shaped” encoder–decoder structure, where the multi-level features extracted by the encoder are gradually aggregated by the decoder. Although this structure has been proven effective in improving segmentation performance, it has two main drawbacks. On the one hand, the introduction of low-level features brings a significant increase in computation without an obvious performance gain. On the other hand, general feature-aggregation strategies such as addition and concatenation fuse features without considering the usefulness of each feature vector, which mixes the useful information with massive noise. In this article, we abandon the traditional “U-shaped” architecture and propose Y-Net, a dual-branch joint network for accurate semantic segmentation. Specifically, it aggregates only the high-level, low-resolution features and utilizes the global context guidance generated by the first branch to refine the second branch. The dual branches are effectively connected through a Semantic Enhancing Module, which can be regarded as the combination of spatial attention and channel attention. We also design a novel Channel-Selective Decoder (CSD) to adaptively integrate features from different receptive fields by assigning specific channelwise weights, where the weights are input-dependent. Our Y-Net is capable of breaking through the limits of single-branch networks and attaining higher performance at lower computational cost than the “U-shaped” structure. The proposed CSD can better integrate useful information and suppress interference noise. Comprehensive experiments are carried out on three public datasets to evaluate the effectiveness of our method. Eventually, our Y-Net achieves state-of-the-art performance on the PASCAL VOC 2012, PASCAL Person-Part, and ADE20K datasets without pre-training on extra datasets.
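
The following PyTorch-style sketch conveys the general idea of input-dependent channelwise weighting for fusing two feature maps; the module name, the gating layout, and the reduction ratio are assumptions rather than the paper's exact CSD design.

```python
import torch
import torch.nn as nn

class ChannelSelectiveFusion(nn.Module):
    """Fuse two (N, C, H, W) feature maps with per-sample channel weights."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, feat_a, feat_b):
        ctx = (feat_a + feat_b).mean(dim=(2, 3))        # global context, (N, C)
        w = self.gate(ctx).unsqueeze(-1).unsqueeze(-1)  # channel weights in [0, 1]
        # Weighted fusion instead of plain addition or concatenation.
        return w * feat_a + (1.0 - w) * feat_b
```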


Author(s):  
Mehdi Tarkian
Johan Persson
Johan Ölvander
Xiaolong Feng

This paper presents a multidisciplinary design optimization framework for modular industrial robots. An automated design framework containing physics-based, high-fidelity models for dynamic simulation and structural strength analyses is utilized and seamlessly integrated with a geometry model. The proposed framework applies well-established methods such as metamodeling and multi-level optimization in order to speed up the design optimization process. The contribution of the paper is to show that by merging these well-established methods, the computational cost can be cut significantly, enabling the search for truly novel concepts.
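
A minimal sketch of the metamodeling step that such frameworks lean on, assuming a generic costly analysis function and a radial-basis-function surrogate (both placeholders): sample the expensive model sparsely, fit a cheap approximation, and optimize that instead.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def metamodel_optimize(expensive_f, bounds, n_samples=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    y = np.array([expensive_f(x) for x in X])   # the only costly calls
    surrogate = RBFInterpolator(X, y)           # cheap metamodel of the analysis
    res = minimize(lambda x: surrogate(x[None])[0],
                   x0=X[np.argmin(y)], bounds=list(zip(lo, hi)))
    return res.x, expensive_f(res.x)            # verify on the true model
```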


1999
Vol 103 (1028)
pp. 481-485
Author(s):
G. M. Robinson
A. J. Keane

Abstract This paper discusses how the inevitable limitations of the computing power available to designers have restricted the adoption of optimisation as an essential design tool. It is argued that this situation will continue until optimisation algorithms are developed which utilise the range of available analysis methods in a manner more like human designers. The concept of multi-level algorithms is introduced and a case is made for their adoption as the way forward. The issues to be addressed in the development of multi-level algorithms are highlighted. The paper goes on to discuss a system developed at Southampton University to act as a test bed for multi-level algorithms deployed on a realistic design task. The Southampton University multi-level wing design environment integrates drag estimation algorithms ranging from an empirical code to an Euler CFD code, covering a 150,000-fold difference in computational cost. A simple multi-level optimisation of a civil transport aircraft wing is presented.
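
A toy sketch of the multi-level idea, with hypothetical cheap_drag and expensive_drag callables standing in for the empirical and Euler CFD codes: rank every candidate with the cheap code and spend the expensive code only on the survivors.

```python
def multilevel_search(candidates, cheap_drag, expensive_drag, keep=5):
    """Two-level filter over wing designs (callable names are placeholders)."""
    ranked = sorted(candidates, key=cheap_drag)   # near-free per design
    shortlist = ranked[:keep]                     # survives level one
    return min(shortlist, key=expensive_drag)     # costly, but rarely called
```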


2016
Author(s):
Matthew J. McGrath
James Ryder
Bernard Pinty
Juliane Otto
Kim Naudts
...  

Abstract. In order to better simulate heat fluxes over multilayer ecosystems, in particular tropical forests and savannahs, the next generation of Earth system models will likely include vertically resolved vegetation structure and multi-level energy budgets. We present here a multi-level radiation transfer scheme that can be used in conjunction with such methods. It is based on a previously established scheme which encapsulates the three-dimensional nature of canopies through the use of a domain-averaged structure factor, referred to here as the effective leaf area index. The fluxes are tracked throughout the canopy in an iterative fashion until they escape into the atmosphere or are absorbed by the canopy or soil; this approach explicitly includes multiple scattering between the canopy layers. A series of tests shows that the results from the two-layer case are in acceptable agreement with those from the single-layer case, although the computational cost is necessarily increased by the iterations. The ten-layer case is less precise, but still provides results within an acceptable range. This new approach allows for the calculation of radiation transfer in vertically resolved vegetation canopies simulated in global circulation models.
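
A toy version of the iterative flux-tracking loop is sketched below; each layer is reduced to illustrative (absorptance, reflectance, transmittance) coefficients, far cruder than the scheme's effective-leaf-area-index parameterization, but it shows how multiple scattering is followed until all flux is absorbed or escapes.

```python
import numpy as np

def canopy_radiation(layers, incoming=1.0, soil_albedo=0.15, tol=1e-9):
    """layers: list of (a, r, t) per layer, with a + r + t = 1."""
    n = len(layers)
    down = np.zeros(n); down[0] = incoming  # flux hitting the top of each layer
    up = np.zeros(n)                        # flux hitting the bottom of each layer
    absorbed = np.zeros(n); soil_abs = escaped = 0.0
    while down.sum() + up.sum() > tol:      # iterate until residual flux is gone
        nd, nu = np.zeros(n), np.zeros(n)
        for i, (a, r, t) in enumerate(layers):
            absorbed[i] += a * (down[i] + up[i])
            upward = r * down[i] + t * up[i]     # heads toward the layer above
            downward = t * down[i] + r * up[i]   # heads toward the layer below
            if i > 0:
                nu[i - 1] += upward
            else:
                escaped += upward                # leaves through the canopy top
            if i < n - 1:
                nd[i + 1] += downward
            else:
                soil_abs += (1 - soil_albedo) * downward
                nu[i] += soil_albedo * downward  # soil scatters back upward
        down, up = nd, nu
    return absorbed, soil_abs, escaped
```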


Author(s):  
Recep M. Gorguluarslan
Seung-Kyum Choi
Hae-jin Choi

A methodology is proposed for uncertainty quantification to accurately predict the mechanical response of lattice structures fabricated by additive manufacturing. Effective structural properties of the lattice structures are characterized using a multi-level stochastic upscaling process that propagates the uncertainties quantified at the strut level to the lattice-structure level. To obtain realistic simulation models for the stochastic upscaling process, high-resolution finite element models of individual struts were reconstructed from micro-CT scan images of lattice structures fabricated by selective laser melting. The upscaling process yields homogenized strut properties of the lattice structure, reducing the computational cost of the detailed simulation model of the lattice structure. The Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that minimizes the experimental cost is also utilized to assess the predictive capability of the stochastic upscaling method at the strut and lattice-structure levels. In comparison with physical compression tests, the proposed methodology of linking uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure by accounting for the uncertainties introduced by the additive manufacturing process.
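
A minimal sketch of the BIC-based selection step, assuming generic SciPy candidate families rather than the study's actual choices: fit each candidate to the measured strut data by maximum likelihood and keep the family with the lowest criterion value.

```python
import numpy as np
from scipy import stats

def select_distribution(samples, candidates=("norm", "lognorm", "weibull_min")):
    best = None
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(samples)                    # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(samples, *params))
        bic = len(params) * np.log(len(samples)) - 2.0 * loglik
        if best is None or bic < best[2]:
            best = (name, params, bic)
    return best  # (family name, fitted parameters, BIC)
```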


2014
Vol 624
pp. 43-50
Author(s):
Giovanni Castellazzi
Cristina Gentilini
Susanna Casacci
Angelo Di Tommaso
Mathias J. Monaldi

Sequentially Linear Analysis (SLA) is an alternative method that avoids the convergence problems arising from classic nonlinear finite element analysis. Instead of using incremental-iterative schemes (arc-length control, Newton-Raphson), SLA is a sequential procedure consisting of a series of linear analyses that captures nonlinear behavior by reducing the Young's modulus according to a saw-tooth constitutive relation. In this paper, an investigation of all aspects of the method is presented using a new element suitable for SLA, considering the accuracy of the solutions and the computational cost, i.e., the time needed to reach satisfactory conclusions from the analysis. In order to test the efficiency of the proposed element, numerical results from different brittle problems, such as a glass beam and an idealized masonry tower, are used.
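
A skeleton of the SLA event loop, assuming a user-supplied routine that performs one linear finite element solve and returns element stresses under a unit reference load; the new element formulation itself is not reproduced here.

```python
import numpy as np

def sequentially_linear_analysis(solve_linear, E, strengths,
                                 reduction=0.5, n_events=50):
    """Each event: one LINEAR solve, scale the load until the most
    critical element reaches its strength, then cut that element's
    Young's modulus one saw-tooth step."""
    E = np.array(E, dtype=float)
    capacity_curve = []
    for _ in range(n_events):
        stresses = solve_linear(E)               # unit-load linear analysis
        crit = int(np.argmax(stresses / strengths))
        capacity_curve.append(strengths[crit] / stresses[crit])
        E[crit] *= reduction                     # saw-tooth stiffness drop
    return capacity_curve
```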


Author(s):  
Brage S. Kristoffersen
Mathias C. Bellout
Thiago L. Silva
Carl F. Berg

Abstract A data-driven automatic well planner procedure is implemented to develop complex well trajectories by efficiently adapting to near-well reservoir properties and geometry. The procedure draws inspiration from geosteering drilling operations, where modern logging-while-drilling tools enable the adjustment of well trajectories during drilling. Analogously, the proposed procedure develops well trajectories based on a selected geology-based fitness measure, using an artificial neural network as the decision maker in a virtual sequential drilling process within a reservoir model. While neural networks have seen extensive use in other areas of reservoir management, to the best of our knowledge, this work is the first to apply neural networks to well trajectory design within reservoir models. Importantly, both the input-data generation used to train the network and the actual trajectory design operations conducted by the trained network are efficient calculations, since these rely solely on geometric and initial properties of the reservoir and thus do not require additional simulations. Therefore, the main advantage over traditional methods is the highly articulated well trajectories adapted to reservoir properties using a low-order well representation. Well trajectories generated in a realistic reservoir by the automatic well planner are qualitatively and quantitatively compared to trajectories generated by a differential evolution algorithm. Results show that the resulting trajectories improve productivity compared to straight-line well trajectories, both for channelized and geometrically complex reservoirs. Moreover, the overall productivity with the resulting trajectories is comparable to well solutions obtained using differential evolution, but at a much lower computational cost.
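
A schematic of the virtual sequential drilling loop, where local_features and policy_net are placeholders for the geometric/initial-property inputs and the trained decision network; no reservoir simulation is invoked inside the loop, which is what keeps the procedure cheap.

```python
import numpy as np

def plan_trajectory(start, n_steps, step_len, local_features, policy_net):
    path = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        x = local_features(path[-1])        # near-well geometry and properties
        heading = np.asarray(policy_net(x), dtype=float)
        heading /= np.linalg.norm(heading)  # network output as a unit direction
        path.append(path[-1] + step_len * heading)
    return np.array(path)
```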


SPE Journal
2014
Vol 19 (03)
pp. 381-389
Author(s):
Zhitao Li
Mojdeh Delshad

Summary In applications of polymer flooding for enhanced oil recovery (EOR), polymer injectivity is of great concern because project economics is sensitive to injection rates. In-situ non-Newtonian polymer rheology is the most crucial factor affecting polymer injectivity. There are several ongoing polymer-injection field tests in which the field injectivities differ significantly from the simulation forecasts. We have developed an analytical model to more accurately calculate and predict polymer injectivity during field projects and thereby help devise optimal injection strategies. Significant viscosity variations during polymer flooding occur in the vicinity of wellbores, where velocities are high. As the size of a wellblock increases, the velocity field is smeared and polymer injectivity is therefore calculated erroneously. In the University of Texas Chemical Flooding Simulator (UTCHEM), the solution was to use an effective radius to capture this “grid effect,” which is empirical and impractical for large-scale field simulations with several hundred wells. Another approach is to use local grid refinement near wells, but this adds to the computational cost and limits the size of the problem. An attractive alternative to previous approaches is to extend the Peaceman well model (Peaceman 1983) to non-Newtonian polymer solutions. The polymer rheological model and its implementation in UTCHEM were validated by simulating single-phase polymer injection in coreflood experiments. On the basis of the Peaceman well model and the UTCHEM polymer rheological models covering both shear-thinning and shear-thickening polymers, an analytical polymer-injectivity model was developed. The analytical model was validated by comparing results across different gridblock sizes and against radial numerical simulation. We also tested a field case by comparing results of a fine-grid simulation and its upscaled coarse-grid model. A pilot-scale polymer flood was simulated to demonstrate the capability of the proposed analytical model. The model successfully captured polymer injectivity in all these cases with no need to introduce empirical parameters.
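
A hedged sketch of the core arithmetic behind such an extension: the classical Peaceman well index with equivalent wellblock radius r_o ≈ 0.2Δx (isotropic square block), divided by an apparent polymer viscosity evaluated at the near-well, rate-dependent shear. The mu_app callable stands in for a rheology correlation; the UTCHEM models themselves are not reproduced.

```python
import numpy as np

def polymer_injectivity(q, k, h, dx, rw, mu_app, skin=0.0):
    """Injectivity q / (p_wf - p_block) for a vertical well in an
    isotropic square gridblock (consistent units assumed)."""
    ro = 0.2 * dx                                     # Peaceman equivalent radius
    wi = 2.0 * np.pi * k * h / (np.log(ro / rw) + skin)
    return wi / mu_app(q)   # apparent viscosity at the injection rate q
```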


2020
Vol 11 (1)
Author(s):
Thomas F. Schranghamer
Aaryan Oberoi
Saptarshi Das

Abstract Memristive crossbar architectures are evolving as powerful in-memory computing engines for artificial neural networks. However, the limited number of non-volatile conductance states offered by state-of-the-art memristors is a concern for their hardware implementation, since trained weights must be rounded to the nearest conductance states, introducing error that can significantly limit inference accuracy. Moreover, the inability to update weights precisely can lead to convergence problems and slow down on-chip training. In this article, we circumvent these challenges by introducing graphene-based multi-level (>16) and non-volatile memristive synapses with arbitrarily programmable conductance states. We also show desirable retention and programming endurance. Finally, we demonstrate that graphene memristors enable weight assignment based on k-means clustering, which offers greater computing accuracy than uniform weight quantization for vector-matrix multiplication, an essential component of any artificial neural network.
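
A short sketch of the k-means weight-assignment idea (scikit-learn used for illustration): cluster the trained weights into as many levels as the device can be programmed to, then snap each weight to its cluster centroid, so the levels concentrate where the weight distribution is dense.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_weights_kmeans(weights, n_levels=16, seed=0):
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=n_levels, n_init=10, random_state=seed).fit(w)
    levels = km.cluster_centers_.ravel()   # the programmable conductance states
    return levels[km.labels_].reshape(np.shape(weights))
```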


2009
Vol 1 (4)
pp. 331-337
Author(s):
Amir Geranmayeh
Wolfgang Ackermann
Thomas Weiland

A fast, yet unconditionally stable, solution of the time-domain electric field integral equation (TD-EFIE) pertinent to the scattering analysis of uniformly meshed and/or periodic conducting structures is introduced. A one-dimensional discrete fast Fourier transform (FFT)-based algorithm is proffered to expedite the calculation of the recursive spatial convolution products of the Toeplitz–block–Toeplitz retarded interaction matrices in a new marching-without-time-variable scheme. Additional savings owing to the system periodicity are combined with the Toeplitz properties due to the uniform discretization in a multi-level sense. The total computational cost and storage requirements of the proposed method scale as O(N_t^2 N_s log N_s) and O(N_t N_s), respectively, as opposed to O(N_t^2 N_s^2) and O(N_t N_s^2) for classical marching-on-in-order methods, where N_t and N_s are the numbers of temporal and spatial unknowns, respectively. Simulation results for arrays of plate-like and cylindrical scatterers demonstrate the accuracy and efficiency of the technique.
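
The one-dimensional kernel behind such FFT acceleration is the classic circulant embedding of a Toeplitz matrix-vector product, sketched below; the marching scheme's block structure and retarded-time bookkeeping are omitted.

```python
import numpy as np

def toeplitz_matvec_fft(c, r, x):
    """y = T x in O(n log n), where T is Toeplitz with first column c
    and first row r (c[0] must equal r[0])."""
    n = len(x)
    circ = np.concatenate([c, r[:0:-1]])   # first column of the embedding circulant
    spec = np.fft.fft(circ) * np.fft.fft(x, len(circ))
    return np.fft.ifft(spec)[:n].real      # first n entries recover T @ x
```

For real inputs the result can be checked against scipy.linalg.toeplitz(c, r) @ x for small n.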

