Inexact Methods for Black-Oil Sequential Fully Implicit (SFI) Scheme

2021 ◽  
Author(s):  
Yifan Zhou ◽  
Jiamin Jiang ◽  
Pavel Tomin

Abstract The sequential fully implicit (SFI) scheme was introduced (Jenny et al. 2006) for solving coupled flow and transport problems. Each SFI time step consists of an outer loop, within which inner Newton loops implicitly and sequentially solve the pressure and transport sub-problems. In standard SFI, the sub-problems are usually fully solved at each outer iteration. This can result in wasted computations that contribute little towards the coupled solution. The issue is known as ‘over-solving’. Our objective is to minimize the cost while maintaining or improving the convergence of SFI by preventing ‘over-solving’. We first developed a framework based on nonlinear acceleration techniques (Jiang and Tchelepi 2019) to ensure robust outer-loop convergence. We then developed inexact-type methods that prevent ‘over-solving’ and minimize the cost of the inner solvers for SFI. The motivation is similar to that of the inexact Newton method, where the inner (linear) iterations are controlled so that the outer (Newton) convergence is not degraded but the overall computational effort is greatly reduced. We proposed an adaptive strategy that provides relative tolerances based on the convergence rates of the coupled problem. The developed inexact SFI method was tested using numerous simulation studies. We compared different strategies, such as fixed relaxations of the absolute and relative tolerances for the inner solvers. The test cases included synthetic as well as real-field models with complex flow physics and high heterogeneity. The results show that the basic SFI method is quite inefficient. When the coupling is strong, we observed that the outer convergence is mainly restricted by the initial residuals of the sub-problems. It was observed that the feedback from one inner solver can cause the residual of the other to rebound to a much higher level. Away from a coupled solution, additional accuracy achieved in the inner solvers is wasted, contributing little or no reduction of the overall residual. By comparison, the inexact SFI method adaptively provided relative tolerances adequate for the sub-problems. We show across a wide range of flow conditions that inexact SFI can effectively resolve the ‘over-solving’ issue and thus greatly improve the overall performance. The novel contributions of this paper are: 1) we found that for SFI, there is no need for one sub-problem to strive for perfection (‘over-solving’) while the coupled residual remains high because of the other sub-problem; 2) a novel inexact SFI method was developed to prevent ‘over-solving’ and minimize the cost of the inner solvers; 3) an adaptive strategy was proposed for relative tolerances based on the convergence rates of the coupled problem; and 4) a novel SFI framework was developed based on nonlinear acceleration techniques to ensure robust outer-loop convergence.
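
As a minimal illustration of the adaptive inexact idea (not the authors' implementation), the sketch below shows an outer SFI loop in which the inner solves are only asked for a residual reduction tied to the current coupled residual, so neither sub-problem is over-solved far from the coupled solution. The solver callables (solve_pressure, solve_transport, coupled_residual) and the tolerance-scaling rule are hypothetical placeholders.

    # Sketch of an inexact SFI outer loop with adaptive inner tolerances.
    # All solver callables and parameter names are illustrative placeholders.
    def inexact_sfi_step(state, solve_pressure, solve_transport, coupled_residual,
                         outer_tol=1e-6, max_outer=20, theta=0.1):
        """One time step of inexact SFI: the inner (pressure/transport) solves are
        only required to reduce their residuals proportionally to the current
        coupled residual, so neither sub-problem is 'over-solved'."""
        r0 = coupled_residual(state)
        r_prev = r0
        for outer_it in range(max_outer):
            r = coupled_residual(state)
            if r <= outer_tol * r0:
                return state, outer_it              # coupled problem converged
            # Adaptive relative tolerance: the slower the outer loop converges,
            # the looser the inner solves are allowed to be (Eisenstat-Walker-like).
            rel_tol = min(0.5, theta * r / max(r_prev, 1e-300))
            state = solve_pressure(state, rel_tol=rel_tol)    # inexact pressure solve
            state = solve_transport(state, rel_tol=rel_tol)   # inexact transport solve
            r_prev = r
        return state, max_outer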

2021 ◽  
Author(s):  
Jacques Franc ◽  
Olav Møyner ◽  
Hamdi A. Tchelepi

Abstract Sequential Fully Implicit (SFI) schemes have been proposed as an alternative to the Fully Implicit Method (FIM). A significant advantage of SFI is that scalable solution strategies can be employed for the flow and transport sub-problems. However, the primary disadvantage of SFI compared with FIM is that the splitting errors induced by the decoupling operator, which separates the pressure from the saturation(s), can lead to serious convergence difficulties for the overall nonlinear problem. Thus, it is important to quantify the coupling strength adaptively in both space and time. We present criteria that localize the computational cells where the pressure and saturation solutions are tightly coupled. Using terms of the FIM Jacobian matrix, we quantify the sensitivity of the mass- and volume-balance equations to changes in the pressure and the saturations. We identify three criteria that provide a measure of the coupling strength across the equations and variables. The standard CFL stability criteria, which are based entirely on the saturation equations, are a subset of the new criteria. Here, the pressure equation is solved using Algebraic MultiGrid (AMG) or a multiscale solver, such as the Multiscale Restriction-Smoothed Basis (MsRSB) approach. The transport equations are then solved using a fixed total velocity. These ‘coupling strength’ criteria are used to identify the cells where the pressure-saturation coupling is strong. The applicability of the derived coupling-strength criteria is tested on several test cases. The first test is a gravitational immiscible dead-oil lock-exchange with a unit mobility ratio and a large density difference. For this case, the SFI algorithm fails to converge to the fully coupled solution due to the large splitting errors. Introducing a fully coupled solution stage on local subdomains as an additional correction step restores nonlinear convergence. Detailed analysis of the ‘coupling strength’ criteria indicates that the criteria related to the sensitivity of the mass balance to changes in the pressure and the sensitivity of the volume balance to changes in the saturations are the most important ones to satisfy. Other test cases include an alternating gas-water-gas injection in a top layer of the SPE 10 test case and an injection-production scenario in a three-dimensional reservoir with layered, lognormally distributed permeability. We propose novel criteria to estimate the strength of the coupling between pressure and saturation. These CFL-like numbers are used to identify the cells that require fully implicit treatment in the nonlinear solution strategy. These criteria can also be used to improve the nonlinear convergence rates of Adaptive Implicit Methods (AIM).
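
The abstract notes that the standard saturation CFL numbers are a subset of the new coupling-strength criteria. As a point of reference only (the paper's Jacobian-based criteria are not reproduced here), the sketch below computes a per-cell CFL-like throughput number from hypothetical flux and pore-volume arrays and flags cells for fully implicit or locally coupled treatment when it exceeds a threshold.

    import numpy as np

    def flag_strongly_coupled_cells(total_outflux, pore_volume, dt, cfl_threshold=1.0):
        """Per-cell CFL-like number: (total volumetric outflux * dt) / pore volume.
        Cells above the threshold are marked for fully implicit treatment. This is
        only the standard CFL subset of the criteria discussed in the abstract."""
        cfl = total_outflux * dt / pore_volume
        return cfl > cfl_threshold, cfl

    # Example with made-up numbers: three cells, the second one is flagged.
    flags, cfl = flag_strongly_coupled_cells(
        total_outflux=np.array([0.2, 5.0, 0.8]),   # volumetric outflux per cell
        pore_volume=np.array([1.0, 1.0, 1.0]),
        dt=1.0)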


2020 ◽  
Vol 12 (7) ◽  
pp. 2767 ◽  
Author(s):  
Víctor Yepes ◽  
José V. Martí ◽  
José García

The optimization of the cost and CO2 emissions of earth-retaining walls is of practical relevance, since these structures are widely used in civil engineering. Cost optimization is essential for the competitiveness of the construction company, and emission optimization reduces the environmental impact of construction. To address the optimization, a black hole metaheuristic was used, along with a discretization mechanism based on min-max normalization. The stability of the algorithm was evaluated with respect to the solutions obtained, and the steel and concrete quantities obtained in both optimizations were analyzed. Additionally, the geometric variables of the structure were compared. Finally, the results were compared with those of another algorithm applied to the same problem. The results show that there is a trade-off between the use of steel and concrete: the solutions that minimize CO2 emissions rely more heavily on concrete than those that minimize cost. On the other hand, when comparing the geometric variables, most remain similar in both optimizations except for the distance between buttresses. In the comparison with the other algorithm, the black hole algorithm shows good optimization performance.
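
The discretization mechanism mentioned above maps the continuous positions produced by the black hole update onto a discrete design space. The sketch below shows one common way to do this with min-max normalization; the function, variable names, and rounding rule are assumptions for illustration, not the paper's exact procedure.

    import numpy as np

    def discretize_min_max(position, lower, upper, options):
        """Min-max normalize each continuous design variable to [0, 1] using its
        bounds, then map it to the nearest admissible discrete option for that
        variable (e.g. bar diameters, wall thicknesses). Illustrative only."""
        x = np.clip((np.asarray(position, float) - np.asarray(lower, float))
                    / (np.asarray(upper, float) - np.asarray(lower, float)), 0.0, 1.0)
        return [opts[int(np.rint(xi * (len(opts) - 1)))] for xi, opts in zip(x, options)]

    # Example: two design variables mapped onto their admissible discrete values.
    discretize_min_max([0.2, 7.9], lower=[0.0, 0.0], upper=[1.0, 10.0],
                       options=[[8, 10, 12], [0.25, 0.30, 0.35]])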


2020 ◽  
Vol 3 (1) ◽  
pp. 61
Author(s):  
Kazuhiro Aruga

In this study, two operational methodologies for extracting thinned wood were investigated in the Nasunogahara area, Tochigi Prefecture, Japan. Methodology one included manual extraction and light truck transportation. Methodology two included mini-forwarder forwarding and four-ton truck transportation. Furthermore, a newly introduced chipper was investigated. The costs of manual extraction within 10 m and 20 m were JPY942/m3 and JPY1040/m3, respectively. On the other hand, the forwarding cost of the mini-forwarder was JPY499/m3, which was significantly lower than the cost of manual extraction. Transportation costs with light trucks and four-ton trucks were JPY7224/m3 and JPY1298/m3, respectively, with 28 km transportation distances. Chipping operation costs were JPY1036/m3 and JPY1160/m3 with three and two persons, respectively. Finally, the total costs of methodologies one and two from extraction within 20 m to chipping were estimated as JPY9300/m3 and JPY2833/m3, respectively, with 28 km transportation distances and three-person chipping operations (EUR1 = JPY126, as of 12 August 2020).
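
As a quick reader-side consistency check (not part of the original study), the reported totals are simply the sum of the per-stage costs:

    # Reader-side check of the reported totals (JPY per m3, 28 km transport,
    # three-person chipping): extraction + transportation + chipping.
    methodology_one = 1040 + 7224 + 1036   # manual extraction within 20 m + light truck -> 9300
    methodology_two = 499 + 1298 + 1036    # mini-forwarder + four-ton truck -> 2833
    assert methodology_one == 9300 and methodology_two == 2833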


Games ◽  
2021 ◽  
Vol 12 (3) ◽  
pp. 53
Author(s):  
Roberto Rozzi

We consider an evolutionary model of social coordination in a 2 × 2 game where two groups of players prefer to coordinate on different actions. Players can pay a cost to learn their opponent’s group: if they pay it, they can condition their actions on the opponent’s group. We assess the long-run stability of outcomes using stochastic stability analysis. We find that three elements matter for equilibrium selection: the group sizes, the strength of preferences, and the cost of information. If the cost is too high, players never learn the group of their opponents in the long run. If one group has stronger preferences for its favorite action than the other, or its size is sufficiently large compared to the other group, every player plays that group’s favorite action. If both groups have sufficiently strong preferences, or if neither group’s size is large enough, players play their own favorite actions and miscoordinate in inter-group interactions. Lower levels of the cost favor coordination. Indeed, when the cost is low, players always coordinate on their favorite action in intra-group interactions, while in inter-group interactions they coordinate on the favorite action of the group that has stronger preferences or is large enough.
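
To make the setup concrete, the sketch below writes down one possible stage-game payoff of the kind described: coordinating on one's own group's favorite action pays more than coordinating on the other action, miscoordination pays nothing, and the information cost is deducted if paid. The specific numbers, names, and functional form are illustrative assumptions, not taken from the paper.

    def stage_payoff(my_group, my_action, opp_action, paid_info,
                     a=3.0, b=1.0, c=0.5, favorite={"G1": "A", "G2": "B"}):
        """Illustrative payoff for the 2 x 2 group-coordination game described above:
        a > b > 0 rewards coordinating on one's own favorite action more than on the
        other action; miscoordination pays 0; c is the optional information cost."""
        if my_action != opp_action:
            payoff = 0.0                      # miscoordination
        elif my_action == favorite[my_group]:
            payoff = a                        # coordinated on own favorite action
        else:
            payoff = b                        # coordinated on the other action
        return payoff - (c if paid_info else 0.0)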


2014 ◽  
Vol 665 ◽  
pp. 643-646
Author(s):  
Ying Liu ◽  
Yan Ye ◽  
Chun Guang Li

A metalearning algorithm learns the base learning algorithm, with the aim of improving the performance of the overall learning system. The incremental delta-bar-delta (IDBD) algorithm is such a metalearning algorithm. On the other hand, sparse algorithms are gaining popularity due to their good performance and wide applicability. In this paper, we propose a sparse IDBD algorithm that takes the sparsity of the system into account. An l1-norm penalty is added to the cost function of the standard IDBD, which is equivalent to adding a zero attractor to the iterations and can therefore speed up convergence if the system of interest is indeed sparse. Simulations demonstrate that the proposed algorithm is superior to the competing algorithms in sparse system identification.
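
A minimal sketch of what such a zero-attracting IDBD update can look like is given below. The per-weight step sizes follow Sutton's IDBD recursion, and the zero-attraction term (rho * sign(w)) represents the sparsity penalty's contribution to the weight update. The parameter values and the exact form of the attractor are assumptions for illustration, not necessarily the paper's formulation.

    import numpy as np

    def sparse_idbd_update(w, h, beta, x, y, theta=0.01, rho=1e-4):
        """One-sample update of a zero-attracting IDBD filter (illustrative sketch).
        w: weights, h: IDBD memory traces, beta: log step sizes (same shape as x)."""
        err = y - w @ x                                  # prediction error
        beta += theta * err * x * h                      # meta-learning of log step sizes
        alpha = np.exp(beta)                             # per-weight step sizes
        w += alpha * err * x - rho * np.sign(w)          # gradient step + zero attractor
        h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * err * x
        return w, h, beta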


2010 ◽  
Vol 26 (2) ◽  
pp. 170-174 ◽  
Author(s):  
Shin Yuh Ang ◽  
Rachel Woo Yin Tan ◽  
Mariko Siyue Koh ◽  
Jeremy Lim

Objectives: Endobronchial ultrasound (EBUS), encompassing endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) and endobronchial ultrasound transbronchial lung biopsy (EBUS-TBLB), has been proven to be a useful modality in the staging and diagnosis of lung cancer. However, there are limited publications on the cost-effectiveness of EBUS and no economic evaluations relevant to the Singapore setting. An economic evaluation using our hospital's data was used to assess the cost implications of EBUS substituting, where clinically appropriate, for transthoracic needle aspiration (TTNA), fluoroscopy-guided transbronchial lung biopsy (TBLB), and mediastinoscopy in the diagnosis and staging of lung cancer. Methods: The relationship between the clinical and economic implications of the alternative modalities was modeled using data inputs relevant to the Singapore setting. Two decision analytic models were constructed to evaluate the cost of EBUS compared with TTNA, TBLB, and staging mediastinoscopy. Only direct costs were imputed. Results: In the base-case analysis, TTNA was the most economical strategy (SGD3,335 = US$2,403), where clinically suitable, for the diagnosis of lung cancer compared with the other options: TBLB (SGD4,499) and EBUS-TBLB (SGD4,857). On the other hand, EBUS-TBNA resulted in expected cost savings of SGD1,214 per positive staging of lung cancer compared with mediastinoscopy. Conclusions: The use of EBUS-TBNA could result in cost savings of SGD1,214 per positive staging of lung cancer compared with mediastinoscopy. Although TTNA was the most economical intervention for the diagnosis of lung cancer compared with the other options, its main limitations are its suitability only for peripheral lung lesions and its high complication rate.
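
The comparison rests on straightforward decision-analytic expected-cost arithmetic. The sketch below shows the general form of such a calculation with hypothetical probabilities and follow-up costs; it is not the hospital's model or its actual inputs.

    def expected_cost(procedure_cost, p_diagnostic, followup_cost):
        """Expected cost of a diagnostic strategy in a two-branch decision tree:
        the procedure is always paid for; with probability (1 - p_diagnostic) it is
        non-diagnostic and a follow-up work-up is needed. Hypothetical inputs only."""
        return procedure_cost + (1.0 - p_diagnostic) * followup_cost

    # Illustrative comparison with made-up numbers (not the study's inputs):
    strategies = {
        "TTNA": expected_cost(2500, 0.90, 4000),
        "EBUS-TBLB": expected_cost(3500, 0.85, 4000),
    }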


PEDIATRICS ◽  
1990 ◽  
Vol 86 (2) ◽  
pp. 323-323
Author(s):  
CHUNG-PIN SHEIH ◽  
CHING-YUANG LIN

In Reply.— In our article, we reported on 645 renal abnormalities found in 132 686 school children screened through the use of renal ultrasonography. Of those with renal abnormalities, 50 patients had surgically correctable lesions. The other 595 cases have been examined fully to establish the correct diagnosis and the prevalence of renal abnormalities in school children. However, in this study, the cost-to-benefit ratio was calculated as the total expense divided by the number of surgically treatable diseases.


Author(s):  
Mingwen Yang ◽  
Zhiqiang (Eric) Zheng ◽  
Vijay Mookerjee

Online reputation has become a key marketing-mix variable in the digital economy. Our study helps managers decide on the effort they should devote to managing online reputation. We consider an online reputation race in which it is important to manage not just the absolute reputation but also the relative rating. That is, to stay ahead, a firm should try to have ratings that are better than those of its competitors. Our findings are particularly significant for platform owners (such as Expedia or Yelp) seeking to strategically grow their base of participating firms: growing the middle of the market (firms with average ratings) is the best option considering the goals of the platform and the other stakeholders, namely incumbents and consumers. For firms, we find that they should increase their effort when the mean market rating increases. Another key insight for firms is that, sometimes, adversity can come disguised as an opportunity. When an adverse event strikes the industry (such as a reduction in sales margin or an increase in the cost of effort), a firm’s profit can increase if it can manage this event better than its competitors.


2021 ◽  
Author(s):  
Coen Teunissen ◽  
Isabella Voce

This report estimates the cost of pure cybercrime to individuals in Australia in 2019. A survey was administered to a sample of 11,840 adults drawn from two online panels—one using probability sampling and the other non-probability sampling—with the resulting data weighted to better reflect the distribution of the wider Australian population. Thirty-four percent of respondents had experienced some form of pure cybercrime, with 14 percent being victimised in the last 12 months. This is equivalent to nearly 6.7 million Australian adults having ever been the victim of pure cybercrime, and 2.8 million Australians being victimised in the past year. Drawing on these population estimates, the total economic impact of pure cybercrime in 2019 was approximately $3.5b. This encompasses $1.9b in money directly lost by victims, $597m spent dealing with the consequences of victimisation, and $1.4b spent on prevention costs. Victims recovered $389m.

