Comparison of Various Discretization Schemes for Simulation of Large Field Case Reservoirs Using Unstructured Grids

2021 ◽  
Author(s):  
Samier Pierre ◽  
Raguenel Margaux ◽  
Darche Gilles

Abstract: Solving the equations governing multiphase flow in geological formations involves the generation of a mesh that faithfully represents the structure of the porous medium. This challenging mesh generation task can be greatly simplified by the use of unstructured (tetrahedral) grids that conform to the complex geometric features present in the subsurface. However, running a million-cell simulation problem using an unstructured grid on a real, faulted field case remains a challenge for two main reasons. First, the workflow typically used to construct and run the simulation problems has been developed for structured grids and needs to be adapted to the unstructured case. Second, the use of unstructured grids that do not satisfy the K-orthogonality property may require advanced numerical schemes that preserve the accuracy of the results and reduce potential grid orientation effects. These two challenges are at the center of the present paper. We describe in detail the steps of our workflow to prepare and run a large-scale unstructured simulation of a real field case with faults. We perform the simulation using four different discretization schemes, including the cell-centered Two-Point and Multi-Point Flux Approximation (respectively, TPFA and MPFA) schemes, the cell- and vertex-centered Vertex Approximate Gradient (VAG) scheme, and the cell- and face-centered hybrid Mimetic Finite Difference (MFD) scheme. We compare the results in terms of accuracy, robustness, and computational cost to determine which scheme offers the best compromise for the test case considered here.
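As a rough illustration of the simplest of the four schemes, the two-point flux between two neighboring cells can be sketched as below. The permeabilities and face geometry are made-up values for the example, not from the paper; TPFA is only consistent on K-orthogonal grids, which is exactly why the paper also considers MPFA, VAG, and MFD.

```python
# Minimal sketch of the Two-Point Flux Approximation (TPFA): the flux
# across a face is a transmissibility times the pressure drop between
# the two adjacent cell centers. Illustrative values only.

def half_transmissibility(perm, area, dist):
    """One cell's half-transmissibility toward the shared face."""
    return perm * area / dist

def tpfa_flux(p1, p2, t1, t2):
    """Flux from cell 1 to cell 2; harmonic average of half-transmissibilities."""
    t = t1 * t2 / (t1 + t2)
    return t * (p1 - p2)

t1 = half_transmissibility(perm=100.0, area=2.0, dist=0.5)  # cell 1 side
t2 = half_transmissibility(perm=50.0, area=2.0, dist=0.5)   # cell 2 side
print(tpfa_flux(p1=10.0, p2=8.0, t1=t1, t2=t2))
```

The harmonic average makes the flux continuous across the face even when the two cells have very different permeabilities.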

2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Yingjie Wu ◽  
Baokun Liu ◽  
Han Zhang ◽  
Jiong Guo ◽  
Fu Li ◽  
...  

The accurate prediction of transient behavior in coupled neutronic and thermal-hydraulic systems is important in nuclear reactor safety analysis, where a large-scale, strongly stiff nonlinear coupled system must be solved efficiently. To reduce the stiffness and the large computational cost of the coupled system, high-performance numerical techniques for solving the delayed neutron precursor equations are a key issue. In this work, a new precursor integral method with an exponential approximation is proposed and compared with the widely used Taylor approximation-based precursor integral methods. The truncation errors of the exponential and Taylor approximations are analyzed and compared. Moreover, a time-step control technique based on the flux exponential approximation is put forward. The procedure is tested on a 2D neutron kinetics benchmark and a simplified high-temperature gas-cooled reactor-pebble bed module (HTR-PM) multiphysics problem using the efficient Jacobian-free Newton–Krylov method. Results show that selecting an appropriate flux approximation in the precursor integral method improves efficiency and precision compared with the traditional method. At the same accuracy, the computation time for the HTR-PM model is reduced to one-ninth when the exponential integral method is applied together with the time-adaptive technique.
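The gain from the exponential approximation can be illustrated on a single time step of the precursor integral. The sketch below compares a linear (Taylor-style) and an exponential interpolation of the flux inside the one-step integral; the decay constant, step size, and flux growth rate are illustrative values, not taken from the paper's reactor models.

```python
import math

# One-step precursor integral: C(t+dt) = C(t)*exp(-lam*dt) + beta * I, with
# I = integral over [0, dt] of exp(-lam*(dt - s)) * phi(s) ds.
# We compare two interpolations of the flux phi between its endpoint values.

lam, dt = 0.08, 0.5
omega = 4.0                                   # true flux: phi(s) = exp(omega*s)
phi0, phi1 = 1.0, math.exp(omega * dt)        # flux at the step endpoints

def kernel_integral(phi, n=20000):
    """Midpoint-rule quadrature of exp(-lam*(dt-s)) * phi(s) over [0, dt]."""
    h = dt / n
    return sum(math.exp(-lam * (dt - (i + 0.5) * h)) * phi((i + 0.5) * h) * h
               for i in range(n))

exact = kernel_integral(lambda s: math.exp(omega * s))
taylor = kernel_integral(lambda s: phi0 + (phi1 - phi0) * s / dt)  # linear flux
omega_hat = math.log(phi1 / phi0) / dt        # rate fitted from the endpoints
expo = kernel_integral(lambda s: phi0 * math.exp(omega_hat * s))

print(abs(taylor - exact) / exact, abs(expo - exact) / exact)
```

For a flux that really does vary exponentially over the step, the fitted exponential interpolation reproduces the integral almost exactly, while the linear interpolation carries a sizable truncation error; this is the mechanism behind the efficiency gain reported above.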


2010 ◽  
Vol 13 (01) ◽  
pp. 56-71 ◽  
Author(s):  
Yan Chen ◽  
Dean S. Oliver

Summary: In this paper, ensemble-based closed-loop optimization is applied to a large-scale SPE benchmark study. The Brugge field, a synthetic reservoir, is designed as a common platform to test different closed-loop reservoir management methods. The problem was designed to mimic real field management scenarios and, as a result, is by far the largest and most complex test case on closed-loop optimization. The Brugge field model consists of nine layers with a total of 44,550 active cells. It has one internal fault and seven rock regions with different relative permeability and capillary pressure functions. There are 20 producers and 10 injectors in the field. Noise-corrupted production data are provided monthly. Each well has three different completions that can be controlled independently. The producing life of the reservoir is 30 years, and the objective of optimization is to maximize the net present value (NPV) at the end of 30 years. Because of the complexity of this test case, several advanced techniques are used in order to improve the solution of the ensemble-based closed-loop optimization. First, covariance localization was used to obtain good model updates with a relatively small ensemble of reservoir models. Localization alleviated the effect of spurious correlations and made it possible to incorporate large amounts of data. Second, covariance inflation was used to compensate for the tendency of small ensembles to lose variability too quickly. When covariance inflation was used together with localization, variability in the ensemble was maintained. Third, regularization was also used in the ensemble-based optimization to reduce the effect of spurious correlations and to smooth the optimized control parameters. Fourth, normalized saturations were used in the state vector because different rock regions had different relative permeability endpoint saturations. Finally, the addition of global parameters such as relative permeability curves and initial oil/water contact (IOWC) reduced the tendency for overshoot. The resulting combination of ensemble-based data assimilation and optimization performed very well on the benchmark study, achieving an NPV within 1% of the value obtained by the test organizers with known geology.
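The covariance-localization step can be sketched in a few lines: the ensemble cross-covariance between parameters and predicted data is tapered by a distance-based weight before forming the Kalman-style gain, so that distant, spurious correlations cannot update the model. The 1-D parameter field, ensemble size, linear taper, and radius below are all illustrative stand-ins (practical implementations typically use a Gaspari-Cohn taper), not the Brugge setup.

```python
import numpy as np

# Toy ensemble update with distance-based covariance localization:
# one observation at cell obs_pos updates a 1-D field of 50 parameters.

rng = np.random.default_rng(0)
n_param, n_ens, obs_pos = 50, 20, 10
X = rng.normal(size=(n_param, n_ens))          # prior parameter ensemble
d_obs, sigma_d = 1.5, 0.1                      # observation and its noise std
H = np.zeros(n_param)
H[obs_pos] = 1.0                               # observation operator

Xp = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
Yp = H @ Xp                                    # predicted-data anomalies
cov_xy = Xp @ Yp / (n_ens - 1)                 # parameter-data cross-covariance
cov_yy = Yp @ Yp / (n_ens - 1) + sigma_d**2

dist = np.abs(np.arange(n_param) - obs_pos)
rho = np.clip(1.0 - dist / 8.0, 0.0, 1.0)      # linear taper, radius 8 cells
K = rho * cov_xy / cov_yy                      # localized Kalman gain
X_post = X + np.outer(K, d_obs - H @ X)        # update every ensemble member
```

Outside the taper radius the gain is exactly zero, so a small ensemble's sampling noise cannot leak into far-away parameters; this is what allows large amounts of data to be assimilated with only a few dozen ensemble members.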


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 218
Author(s):  
Ala’ Khalifeh ◽  
Khalid A. Darabkh ◽  
Ahmad M. Khasawneh ◽  
Issa Alqaisieh ◽  
Mohammad Salameh ◽  
...  

The advent of various wireless technologies has paved the way for the realization of new infrastructures and applications for smart cities. Wireless Sensor Networks (WSNs) are one of the most important among these technologies. WSNs are widely used in various applications in our daily lives. Due to their cost effectiveness and rapid deployment, WSNs can be used for securing smart cities by providing remote monitoring and sensing for many critical scenarios including hostile environments, battlefields, or areas subject to natural disasters such as earthquakes, volcano eruptions, and floods, or to large-scale accidents such as nuclear plant explosions or chemical plumes. The purpose of this paper is to propose a new framework where WSNs are adopted for remote sensing and monitoring in smart city applications. We propose using Unmanned Aerial Vehicles to act as data mules that offload the sensor nodes and transfer the monitoring data securely to the remote control center for further analysis and decision making. Furthermore, the paper provides insight into the implementation challenges in the realization of the proposed framework. In addition, the paper provides an experimental evaluation of the proposed design in outdoor environments, in the presence of different types of obstacles common to typical outdoor fields. The experimental evaluation revealed several inconsistencies between the performance metrics advertised in the hardware-specific data sheets and those measured in the field. In particular, we found mismatches between the advertised coverage distance and signal strength and our experimental measurements. Therefore, it is crucial that network designers and developers conduct field tests and device performance assessments before designing and implementing a WSN for application in a real field setting.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Peiran Zhang ◽  
Joseph Rufo ◽  
Chuyi Chen ◽  
Jianping Xia ◽  
Zhenhua Tian ◽  
...  

Abstract: The ability to precisely manipulate nano-objects on a large scale can enable the fabrication of materials and devices with tunable optical, electromagnetic, and mechanical properties. However, the dynamic, parallel manipulation of nanoscale colloids and materials remains a significant challenge. Here, we demonstrate acoustoelectronic nanotweezers, which combine the precision and robustness afforded by electronic tweezers with the versatility and large-field dynamic control granted by acoustic tweezing techniques, to enable the massively parallel manipulation of sub-100 nm objects with excellent versatility and controllability. Using this approach, we demonstrated the complex patterning of various nanoparticles (e.g., DNAs, exosomes, ~3 nm graphene flakes, ~6 nm quantum dots, ~3.5 nm proteins, and ~1.4 nm dextran), fabricated macroscopic materials with nano-textures, and performed high-resolution, single-nanoparticle manipulation. Various nanomanipulation functions, including transportation, concentration, orientation, pattern-overlaying, and sorting, have also been achieved using a simple device configuration. Altogether, acoustoelectronic nanotweezers overcome existing limitations in nano-manipulation and hold great potential for a variety of applications in the fields of electronics, optics, condensed matter physics, metamaterials, and biomedicine.


2021 ◽  
Vol 256 ◽  
pp. 112338
Author(s):  
Jie Zhao ◽  
Ramona Pelich ◽  
Renaud Hostache ◽  
Patrick Matgen ◽  
Wolfgang Wagner ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daiji Ichishima ◽  
Yuya Matsumura

Abstract: Large-scale computation by the molecular dynamics (MD) method is often challenging or even impractical due to its computational cost, in spite of its wide applications in a variety of fields. Although recent advances in parallel computing and the introduction of coarse-graining methods have enabled large-scale calculations, macroscopic analyses are still not realizable. Here, we present renormalized molecular dynamics (RMD), a renormalization group of MD in thermal equilibrium derived by using the Migdal–Kadanoff approximation. The RMD method improves the computational efficiency drastically while retaining the advantages of MD. The computational efficiency is improved by a factor of $$2^{n(D+1)}$$ over conventional MD, where D is the spatial dimension and n is the number of applied renormalization transforms. We verify RMD by conducting two simulations: the melting of an aluminum slab and the collision of aluminum spheres. Both problems show that the expectation values of physical quantities are in good agreement after the renormalization, while the computation time is reduced as expected. To observe the behavior of RMD near the critical point, the critical exponent of the Lennard-Jones potential is extracted by calculating the specific heat on the mesoscale, giving $$\nu = 0.63 \pm 0.01$$. In addition, the renormalization group of dissipative particle dynamics (DPD) is derived. Renormalized DPD is equivalent to RMD in isothermal systems under the condition that the Deborah number satisfies $$De \ll 1$$.
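The stated efficiency factor is easy to evaluate concretely. A quick numeric check of $$2^{n(D+1)}$$ for a three-dimensional system:

```python
# Speedup of renormalized MD over conventional MD, per the abstract's
# factor 2**(n*(D+1)), where D is the spatial dimension and n is the
# number of renormalization transforms applied.

def rmd_speedup(n, D=3):
    """Computational-efficiency gain after n renormalization transforms."""
    return 2 ** (n * (D + 1))

for n in (1, 2, 3):
    print(n, rmd_speedup(n))   # D = 3: 16, 256, 4096
```

So in 3-D, each additional transform buys a factor of 16; three transforms already yield a gain of several thousand over conventional MD.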


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, as most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using 1%–5% of the data.
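The core idea of adaptive data-domain downsampling can be sketched on a 1-D profile: sample densely where the field varies rapidly (the anomalies) and coarsely where it is quiet. The gradient threshold, decimation factor, and synthetic anomaly below are illustrative choices, not the paper's actual criterion.

```python
import numpy as np

# Toy adaptive downsampling of a 1-D potential-field profile: keep all
# points where the local gradient is large (signal anomalies), and only
# every k-th point in smooth/quiet regions.

x = np.linspace(0.0, 100.0, 2001)
data = 5.0 * np.exp(-(((x - 60.0) / 2.0) ** 2))   # one compact anomaly
grad = np.abs(np.gradient(data, x))               # local variability measure

threshold, k = 0.05, 25
keep = grad > threshold                           # dense sampling at anomaly
keep[::k] = True                                  # coarse background sampling
print(f"kept {keep.sum()} of {keep.size} points ({100.0 * keep.mean():.1f}%)")
```

The retained subset concentrates points around the anomaly, which is what lets an inversion recover the relevant model information from only a few percent of the original data.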


Author(s):  
Mahdi Esmaily Moghadam ◽  
Yuri Bazilevs ◽  
Tain-Yen Hsia ◽  
Alison Marsden

A closed-loop lumped parameter network (LPN) coupled to a 3D domain is a powerful tool that can be used to model the global dynamics of the circulatory system. Coupling a 0D LPN to a 3D CFD domain is a numerically challenging problem, often associated with instabilities, extra computational cost, and loss of modularity. A computationally efficient finite element framework has recently been proposed that achieves numerical stability without sacrificing modularity [1]. This type of coupling introduces new challenges in the linear algebraic equation solver (LS), producing a strong coupling between flow and pressure that leads to an ill-conditioned tangent matrix. In this paper, we exploit this strong coupling to obtain a novel and efficient algorithm for the LS. We illustrate the efficiency of this method on several large-scale cardiovascular blood flow simulation problems.
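To fix ideas about what a 0D LPN contributes at a coupling interface, the sketch below integrates a two-element Windkessel outlet (resistance plus compliance) driven by a pulsatile inflow standing in for the 3D domain. The parameter values, time step, and inflow waveform are illustrative only; they are not the paper's models or its coupling algorithm.

```python
import math

# Two-element Windkessel outlet: C * dP/dt = Q - P/R, integrated with
# explicit Euler. Q(t) mimics a pulsatile flow handed over by a 3D solver;
# the LPN returns the outlet pressure P(t).

R, C = 1.0, 1.5          # resistance and compliance (arbitrary units)
dt, T = 1e-3, 10.0       # time step and final time
P = 0.0                  # outlet pressure state

t = 0.0
while t < T:
    Q = 1.0 + 0.5 * math.sin(2.0 * math.pi * t)   # inflow from the "3D" side
    P += dt * (Q - P / R) / C                     # dP/dt = (Q - P/R) / C
    t += dt
print(P)   # after the RC transient, P oscillates around mean(Q) * R = 1.0
```

Even this single-compartment model shows the flow-pressure coupling the abstract refers to: the pressure fed back to the 3D domain depends on the flow history through the LPN state, which is what makes the monolithic tangent matrix strongly coupled.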


2006 ◽  
Vol 18 (12) ◽  
pp. 2959-2993 ◽  
Author(s):  
Eduardo Ros ◽  
Richard Carrillo ◽  
Eva M. Ortigosa ◽  
Boris Barbour ◽  
Rodrigo Agís

Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
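The basic structure of an event-driven scheme can be sketched with a priority queue: spikes are the only events, and each neuron's state is advanced lazily (here with a closed-form exponential decay, where ED-LUT would consult its precalculated tables) only when an event reaches it. The tiny 3-neuron network, weights, and delays below are illustrative, not the ED-LUT model itself.

```python
import heapq
import math

# Minimal event-driven spiking sketch: between events nothing is computed;
# when an event (t, target) is popped, the target's membrane state is
# decayed analytically from its last update time, then the synaptic
# weight is added and the threshold is checked.

n, tau, v_th, w, delay = 3, 20.0, 1.0, 0.6, 2.0
v = [0.0] * n                       # membrane state at time of last update
last = [0.0] * n                    # time of each neuron's last update
events = [(0.0, 0), (1.0, 0)]       # (time, target): two external inputs
heapq.heapify(events)
spikes = []

while events:
    t, i = heapq.heappop(events)
    v[i] *= math.exp(-(t - last[i]) / tau)   # lazy decay since last event
    last[i] = t
    v[i] += w                                # synaptic input arrives
    if v[i] >= v_th:                         # threshold crossed: emit spike
        v[i] = 0.0
        spikes.append((t, i))
        for j in range(n):                   # deliver to peers after a delay
            if j != i:
                heapq.heappush(events, (t + delay, j))
print(spikes)
```

Because state updates happen only at events, the cost scales with the number of spikes rather than with simulated time, which is exactly the property that makes event-driven schemes attractive for sparse activity; the two-stage queue in the paper refines how delayed deliveries like `t + delay` are managed at scale.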


Author(s):  
David Forbes ◽  
Gary Page ◽  
Martin Passmore ◽  
Adrian Gaylard

This study is an evaluation of the computational methods in reproducing experimental data for a generic sports utility vehicle (SUV) geometry and an assessment of the influence of fixed and rotating wheels for this geometry. Initially, comparisons are made in the wake structure and base pressures between several CFD codes and experimental data. It was shown that steady-state RANS methods are unsuitable for this geometry due to large-scale unsteadiness in the wake caused by separation at the sharp trailing edge and rear-wheel wake interactions. Unsteady RANS (URANS) offered no improvements in wake prediction despite a significant increase in computational cost. The detached-eddy simulation (DES) and Lattice–Boltzmann method (LBM) showed the best agreement with the experimental results in both the wake structure and base pressure, with LBM running in approximately a fifth of the time for DES. The study then continues by analysing the influence of rotating wheels and a moving ground plane over a fixed wheel and ground plane arrangement. The introduction of wheel rotation and a moving ground was shown to increase the base pressure and reduce the drag acting on the vehicle when compared to the fixed case. However, when compared to the experimental standoff case, variations in drag and lift coefficients were minimal but misleading, as significant variations to the surface pressures were present.

