Richardson's Iteration With Dynamic Parameters and the SIP Incomplete Factorization for the Solution of Linear Systems of Equations

1981 ◽  
Vol 21 (06) ◽  
pp. 699-708
Author(s):  
Paul E. Saylor

Abstract Reservoir simulation yields a system of linear algebraic equations, Ap = q, that may be solved by Richardson's iterative method, p(k+1) = p(k) + t_k r(k), where r(k) = q - Ap(k) is the residual and t_0, ..., t_k are acceleration parameters. The incomplete factorization, K_a, of the strongly implicit procedure (SIP) yields an improvement of Richardson's method, p(k+1) = p(k) + t_k K_a^(-1) r(k). The parameter a originates from SIP; the product of the L and U factors produced by SIP gives K_a = LU. The best values of the acceleration parameters t_k may be computed dynamically by an efficient algorithm; the best value of a must be found by trial and error, which is not difficult since only one value is needed. The advantages of the method are that (1) it always converges, (2) with the exception of the parameter a, all parameters are computed dynamically, and (3) convergence is efficient for test problems characterized by heterogeneities and transmissibilities varying over 10 orders of magnitude. The test problems originate from field data and were suggested by industry personnel as particularly difficult. Dynamic computation of parameters is also a feature of the conjugate gradient method, but the iteration described here does not require A to be symmetric. Matrix K_a^(-1) A must be such that the real part of each eigenvalue is nonnegative, or the real part of each is nonpositive, but not a mixture of positive and negative; it is in this sense that the method always converges. This condition is satisfied by many simulator-generated matrices. The method may also be applied to matrices arising from the simulation of other processes, such as chemical flooding.

Introduction The solution of a linear algebraic system, Ap = q, is a basic, costly step in the numerical simulation of a hydrocarbon reservoir. Many current solution methods are impractical for large linear systems arising from three-dimensional simulations or from reservoirs characterized by widely varying and discontinuous physical parameters.
An iterative solution is described with these two main advantages: it is efficient for difficult problems, and the selection of iteration parameters is straightforward. The method is Richardson's method applied to a preconditioned linear system. Matrix A may be symmetric or nonsymmetric; in the simulation of multiphase flow, it is usually nonsymmetric. Convergence behavior is shown for four examples. Two of these, Examples 3 and 4, were provided by an industry laboratory (Exxon Production Research Co.) and were suggested by personnel as especially difficult to solve; SIP failed to converge, and only the diagonal method [1] was effective. Convergence of Richardson's method is compared with the diagonal method using data from a laboratory run. The other two examples are: Example 1, a matrix generated from field data that is not difficult to solve, and Example 2, a variant of a difficult matrix described by Stone [2]. The easy matrix of Example 1 is included to show the performance of Richardson's method (with preconditioning) on a simple problem.
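The preconditioned Richardson iteration described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the SIP incomplete-factorization solve (K_a = LU) is stood in for by an arbitrary preconditioner callable, and the acceleration parameter t_k is chosen here by a standard minimal-residual rule, which is one common way to compute the parameter dynamically; the paper's own parameter algorithm may differ.

```python
import numpy as np

def richardson_precond(A, q, apply_Kinv, tol=1e-8, max_iter=500):
    """Preconditioned Richardson: p(k+1) = p(k) + t_k * Kinv(r(k)).

    apply_Kinv is any preconditioner solve (a stand-in for the SIP
    LU factorization).  t_k is chosen to minimize the residual norm
    along the preconditioned direction at each step.
    """
    p = np.zeros_like(q, dtype=float)
    for _ in range(max_iter):
        r = q - A @ p                      # residual r(k) = q - A p(k)
        if np.linalg.norm(r) < tol * np.linalg.norm(q):
            break
        d = apply_Kinv(r)                  # preconditioned residual
        Ad = A @ d
        t = (r @ Ad) / (Ad @ Ad)           # minimize ||q - A(p + t d)||
        p = p + t * d
    return p

# Example: small nonsymmetric, diagonally dominant system with a
# Jacobi (diagonal) preconditioner standing in for SIP.
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
q = np.array([1.0, 2.0, 3.0])
p = richardson_precond(A, q, lambda r: r / np.diag(A))
```

Note that, as in the paper, nothing here requires A to be symmetric; the minimal-residual choice of t_k only needs the eigenvalues of the preconditioned matrix to lie in one half-plane.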

1995 ◽  
Vol 30 (6) ◽  
pp. 841-860 ◽  
Author(s):  
Julius S. Bendat ◽  
Robert N. Coppolino ◽  
Paul A. Palo

2019 ◽  
Vol 11 (2) ◽  
pp. 92
Author(s):  
Josip Soln

The complex particle energy appearing in this article is, with suggestive choices of physical parameters, transformed simply into a real particle energy. Then, with the bicubic-equation limiting-velocity formalism, one evaluates the three particle limiting velocities $c_{1}$, $c_{2}$ and $c_{3}$ (primary, obscure and normal) in terms of the ordinary particle velocity, $v$, and the derived positive $m_{+} = m > 0$ and negative $m_{-} = -m < 0$ particle masses, with $m_{+}^{2} = m_{-}^{2} = m^{2}$. In general, the important quantity in solving this bicubic equation is the real square value $z^{2}(m)$ of the congruent parameter, $z(m)$, which connects the real or complex value of the particle energy, $E$, and the real or complex value of the particle velocity squared, $v^{2}$, through $2Ez(m) = 3\sqrt{3}\,m v^{2}$. With real $z^{2}(m)$ one determines the real value of the discriminant, $D$, of the bicubic equation, and together they govern the connection between $E$ and $v^{2}$. Hence, when $z^{2} < 1$ and $D < 0$ one has simply that $E \gg m v^{2}$. However, with $D \geq 0$ and $z^{2} \geq 1$, both $E$ and $v^{2}$ may become complex simultaneously through the connecting relation $E = 3\sqrt{3}\,m v^{2}/2z(m)$, with their real values satisfying $\mathrm{Re}\,E \geq m\,(\mathrm{Re}\,v^{2})$, while $z^{2}$ stays the same and real. In this article, this new situation with $D \geq 0$ is discussed in detail by examining how to adjust the particle parameters so that $\mathrm{Im}\,E = 0$, with the implication that automatically also $\mathrm{Im}\,v^{2} = 0$. In fact, after having adjusted the particle parameters successfully in this way, one simply writes $\mathrm{Re}\,E = E$ and $\mathrm{Re}\,v^{2} = v^{2}$. One thus arrives at limiting velocities satisfying $c_{1} = c_{2} \neq c_{3}$, which exhibits the degeneracy of $c_{1}$ and $c_{2}$ as the same numerical limiting velocity for two particles. This degeneracy $c_{1} = c_{2}$ is due simply to the absence of $\mathrm{Im}\,E$; it would start disappearing with just an infinitesimal $\mathrm{Im}\,E$. Now, while $c_{1} = c_{2}$ is real, $c_{3}$ is imaginary, and all of them are associated with the same particle energy, $E$. With these velocity values the congruent parameter becomes quantized as $z(m_{\pm}) = 3\sqrt{3}\,m_{\pm} v^{2}/2E = \pm 1$, which, with the bicubic discriminant value $D = 0$, implies the quantization also of the particle mass, $m$, into $m_{\pm} = \pm m$ values. The numerically equal energies from $E = \mathrm{Re}\,E$ can be expressed as $E(c_{1,2}(m_{\pm})) = E(c_{3}(m_{\pm}))$, either directly in terms of $c_{1}(m_{\pm}) = c_{2}(m_{\pm})$ and $c_{3}(m_{\pm})$, or indirectly in terms of the particle velocity, $v$, as well as in Lorentzian fixed forms with $v^{2} \neq c_{1}^{2}$, $c_{2}^{2}$ or $c_{3}^{2}$, assuring a nonzero mass, $m \neq 0$. Finally, with the formalism developed here, one calculates, for a light sterile-neutrino dark-matter particle, the energies associated with the $m_{\pm}$ masses and the $c_{1,2}$ and $c_{3}$ limiting velocities.
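The defining relation of the congruent parameter quoted in the abstract, $2Ez(m) = 3\sqrt{3}\,m v^{2}$, and its quantized values $z(m_{\pm}) = \pm 1$ at the degenerate point can be checked numerically. This is a minimal sketch using only that relation; the mass and velocity values are illustrative, in arbitrary consistent units, and nothing else from the bicubic formalism is assumed.

```python
import math

SQRT27 = 3.0 * math.sqrt(3.0)  # the 3*sqrt(3) factor in 2 E z = 3*sqrt(3) m v^2

def congruent_parameter(E, m, v2):
    """z(m) = 3*sqrt(3) * m * v^2 / (2 E), from the defining relation."""
    return SQRT27 * m * v2 / (2.0 * E)

# At the degenerate point the abstract gives z(m_pm) = +/-1, i.e. the
# energy there must equal 3*sqrt(3) m v^2 / 2 (illustrative m, v^2):
m, v2 = 1.0, 0.5
E_deg = SQRT27 * m * v2 / 2.0
z_plus = congruent_parameter(E_deg, +m, v2)   # should be +1 for m_+
z_minus = congruent_parameter(E_deg, -m, v2)  # should be -1 for m_-
```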


2016 ◽  
Vol 104 ◽  
pp. 141-157 ◽  
Author(s):  
Yiming Bu ◽  
Bruno Carpentieri ◽  
Zhaoli Shen ◽  
Ting-Zhu Huang

1999 ◽  
Author(s):  
Alejandro Zaleta-Aguilar ◽  
Armando Gallegos-Muñoz ◽  
Antonio Valero ◽  
Javier Royo

Abstract This work builds on the previous work on the "Exergoeconomic Fuel-Impact" developed by Torres (1991) and Valero et al. (1994), and compares it with the Performance Test Codes (PTCs) actually applied in power plants (ASME/ANSI PTC-6, 1970), with the objective of proposing PTC procedures for power plants based on an exergoeconomic point of view. It was necessary to validate the fuel-impact theories and to improve the conceptual expression in order to make it more applicable to real conditions in the plant. By means of a program using simulation and field data, it was possible to validate and compare the procedures. This work analyzes an example of a 110-MW power plant, in which all the exergetic costs have been determined for the steam cycle and a fuel-impact analysis has been developed for the steam turbines at design and off-design conditions. The result of the fuel-impact analysis is compared with a classical procedure related to ASME PTC-6.


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 904 ◽  
Author(s):  
Afshin Babaei ◽  
Hossein Jafari ◽  
S. Banihashemi

A spectral collocation approach is constructed to solve a class of time-fractional stochastic heat equations (TFSHEs) driven by Brownian motion. Stochastic differential equations with additive noise play an important role in explaining some symmetry phenomena, such as symmetry breaking in molecular vibrations. Finding the exact solution of such equations is difficult in many cases. Thus, a collocation method based on sixth-kind Chebyshev polynomials (SKCPs) is introduced to approximate their numerical solutions. This collocation approach reduces the considered problem to a system of linear algebraic equations. The convergence and error analysis of the suggested scheme are investigated. Finally, numerical results and the order of convergence are evaluated for some numerical test problems to illustrate the efficiency and robustness of the presented method.


1996 ◽  
Vol 23 ◽  
pp. 382-387 ◽  
Author(s):  
I. Hansen ◽  
R. Greve

An approach to simulating the present Antarctic ice sheet with respect to its thermomechanical behaviour and the resulting features is made with the three-dimensional polythermal ice-sheet model designed by Greve and Hutter. It treats zones of cold and temperate ice as different materials with their own properties and dynamics. This is important because an underlying layer of temperate ice can influence the ice sheet as a whole; e.g., the cold ice may slide upon the less viscous binary ice-water mixture. Measurements indicate that the geothermal heat flux below the Antarctic ice sheet appears to be remarkably higher than the standard value of 42 mW m^-2 that is usually applied for Precambrian shields in ice-sheet modelling. Since the extent of temperate ice at the base is highly dependent on this heat input from the lithosphere, an adequate choice is crucial for realistic simulations. We present a series of steady-state results with varied geothermal heat flux and demonstrate that the real ice-sheet topography can be reproduced fairly well with a value in the range 50-60 mW m^-2. Thus, the physical parameters of ice (especially the enhancement factor in Glen's flow law) as used by Greve (1995) for polythermal Greenland ice-sheet simulations can be adopted without any change. The remaining disagreements may be explained by the neglected influence of the ice shelves, the rather coarse horizontal resolution (100 km), the steady-state assumption and possible shortcomings in the parameterization of the surface mass balance.


Author(s):  
Dumitru Serghiuta ◽  
John Tholammakkil ◽  
Naj Hammouda ◽  
Anthony O’Hagan

This paper discusses a framework for designing artificial test problems, evaluation criteria, and two of the benchmark tests developed under a research project initiated by the Canadian Nuclear Safety Commission to investigate approaches for qualification of tolerance limit methods and algorithms proposed for application in optimization of CANDU reactor protection trip setpoints for aged conditions. A significant component of this investigation has been the development of a series of benchmark problems of gradually increasing complexity, from simple "theoretical" problems up to complex problems closer to the real application. The first benchmark problem discussed in this paper is a simplified scalar problem which does not involve the extremal (maximum or minimum) operations typically encountered in real applications. The second benchmark is a high-dimensional, but still simple, problem for statistical inference of maximum channel power during normal operation. Bayesian algorithms have been developed for each benchmark problem to provide an independent way of constructing tolerance limits from the same data, allowing an assessment of how well different methods make use of those data and, depending on the type of application, an evaluation of their level of "conservatism". The Bayesian method is not, however, used as a reference method, or "gold" standard, but simply as an independent review method. The approach and the tests developed can be used as a starting point for developing a generic suite of empirical studies (generic in the sense of applying regardless of the proposed statistical method), with clear criteria for passing those tests. Some lessons learned are also discussed, in particular concerning the need to assure the completeness of the description of the application and the role of completeness of input information.
It is concluded that a formal process is needed and might provide the necessary confidence in the proposed statistical procedure; such a process should include extended and detailed benchmark tests, targeted to the context of the particular application and aimed at identifying the domain of validity of the proposed tolerance limit method and algorithm.
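The tolerance limit methods being qualified in this class of applications are often nonparametric order-statistic (Wilks-type) constructions. As a hedged illustration only, and not one of the project's benchmark algorithms, the following sketch computes the classic first-order one-sided 95%/95% sample size and takes the sample maximum as the upper tolerance limit:

```python
import math
import random

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n with 1 - coverage**n >= confidence: the sample maximum
    then bounds `coverage` of the population with probability
    `confidence` (first-order, one-sided Wilks formula)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

n = wilks_sample_size()                        # classic 95/95 result: 59
random.seed(0)                                 # synthetic stand-in data
sample = [random.gauss(100.0, 5.0) for _ in range(n)]
upper_limit = max(sample)                      # 95/95 upper tolerance limit
```

A Bayesian review method of the kind described in the paper would instead construct a credible upper bound from a posterior over the same 59 observations, which is what makes the comparison of "conservatism" between the two constructions meaningful.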

