Robust estimation of vertical symmetry axis models via joint migration inversion: Including multiples in anisotropic parameter estimation

Geophysics ◽  
2019 ◽  
Vol 84 (1) ◽  
pp. C57-C74 ◽  
Author(s):  
Abdulrahman A. Alshuhail ◽  
Dirk J. Verschuur

Because the earth is predominately anisotropic, the anisotropy of the medium needs to be included in seismic imaging to avoid mispositioning of reflectors and unfocused images. Deriving accurate anisotropic velocities from the seismic reflection measurements is a highly nonlinear and ambiguous process. To mitigate the nonlinearity and trade-offs between parameters, we have included anisotropy in the so-called joint migration inversion (JMI) method, in which we limit ourselves to the case of transverse isotropy with a vertical symmetry axis. The JMI method is based on strictly separating the scattering effects in the data from the propagation effects. The scattering information is encoded in the reflectivity operators, whereas the phase information is encoded in the propagation operators. This strict separation enables the method to be more robust, in that it can appropriately handle a wide range of starting models, even when the differences in traveltimes are more than a half cycle away. The method also uses internal multiples in estimating reflectivities and anisotropic velocities. Including internal multiples in inversion not only reduces the crosstalk in the final image, but it can also reduce the trade-off between the anisotropic parameters because internal multiples usually have more of an imprint of the subsurface parameters compared with primaries. The inverse problem is parameterized in terms of a reflectivity, vertical velocity, horizontal velocity, and a fixed [Formula: see text] value. The method is demonstrated on several synthetic models and a marine data set from the North Sea. Our results indicate that using JMI for anisotropic inversion makes the inversion robust in terms of using highly erroneous initial models. Moreover, internal multiples can contain valuable information on the subsurface parameters, which can help to reduce the trade-off between anisotropic parameters in inversion.
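The parameterization above, in terms of a vertical velocity, a horizontal velocity, and reflectivity, maps directly onto the standard Thomsen notation for VTI media. The short sketch below only applies that standard algebra (the exact relations between the two velocities, epsilon, delta, the zero-dip NMO velocity, and the anellipticity eta); it is an illustration, not the authors' JMI code, and the input values are hypothetical.

```python
import numpy as np

def thomsen_from_velocities(v_vert, v_horiz, delta):
    """Standard exact VTI (Thomsen) relations, shown only to illustrate the
    (vertical velocity, horizontal velocity) parameterization mentioned in the
    abstract; this is not the authors' JMI code."""
    eps = 0.5 * ((v_horiz / v_vert) ** 2 - 1.0)    # exact: v_horiz = v_vert * sqrt(1 + 2*eps)
    v_nmo = v_vert * np.sqrt(1.0 + 2.0 * delta)    # NMO velocity over a horizontal reflector
    eta = (eps - delta) / (1.0 + 2.0 * delta)      # anellipticity (Alkhalifah-Tsvankin)
    return eps, v_nmo, eta

# Example with hypothetical values for a mildly anisotropic layer
eps, v_nmo, eta = thomsen_from_velocities(v_vert=2000.0, v_horiz=2200.0, delta=0.05)
print(f"epsilon={eps:.3f}, V_nmo={v_nmo:.1f} m/s, eta={eta:.3f}")
```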

Geophysics ◽  
1999 ◽  
Vol 64 (4) ◽  
pp. 1219-1229 ◽  
Author(s):  
Pedro Contreras ◽  
Vladimir Grechka ◽  
Ilya Tsvankin

The transversely isotropic model with a horizontal symmetry axis (HTI media) has been extensively used in seismological studies of fractured reservoirs. In this paper, a parameter-estimation technique originally developed by Grechka and Tsvankin for the more general orthorhombic media is applied to horizontal transverse isotropy. Our methodology is based on the inversion of azimuthally dependent P-wave normal-moveout (NMO) velocities from horizontal and dipping reflectors. If the NMO velocity of a given reflection event is plotted in each azimuthal direction, it forms an ellipse determined by three combinations of medium parameters. The NMO ellipse from a horizontal reflector in HTI media can be inverted for the azimuth β of the symmetry axis, the vertical velocity [Formula: see text], and the Thomsen-type anisotropic parameter δ^(V). We describe a technique for obtaining the remaining (for P-waves) anisotropic parameter η^(V) (or ε^(V)) from the NMO ellipse corresponding to a dipping reflector of arbitrary azimuth. The interval parameters of vertically inhomogeneous HTI media are recovered using the generalized Dix equation that operates with NMO ellipses for horizontal and dipping events. High accuracy of our method is confirmed by inverting a synthetic multiazimuth P-wave data set generated by ray tracing for a layered HTI medium with depth-varying orientation of the symmetry axis. Although estimation of η^(V) can be carried out by the algorithm developed for orthorhombic media, for more stable results the HTI model has to be used from the outset of the inversion procedure. It should be emphasized that P-wave conventional-spread moveout data provide enough information to distinguish between HTI and lower-symmetry models. We show that if the medium has orthorhombic symmetry and is sufficiently different from HTI, the best-fit HTI model cannot match the NMO ellipses for both a horizontal and a dipping event. The anisotropic coefficients responsible for P-wave moveout can be combined to estimate the crack density and predict whether the cracks are fluid-filled or dry. A unique feature of the HTI model that distinguishes it from both vertical transverse isotropy and orthorhombic media is that moveout inversion provides not just zero-dip NMO velocities and anisotropic coefficients, but also the true vertical velocity. As a result, reflection P-wave data acquired over HTI formations can be used to build velocity models in depth and perform anisotropic depth processing.
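The azimuthal fit at the heart of this approach can be sketched generically. The snippet below is a minimal illustration, assuming the standard results that 1/V²nmo(α) traces an ellipse of the form W11 cos²α + 2 W12 sinα cosα + W22 sin²α, and that for a horizontal reflector in HTI the semi-axes are VP0 (isotropy plane) and VP0·sqrt(1 + 2δ^(V)) (symmetry-axis plane). It fits the three coefficients by linear least squares and recovers the ellipse axes and orientation from an eigendecomposition; it is not the Grechka and Tsvankin inversion code, and the model values are hypothetical.

```python
import numpy as np

def fit_nmo_ellipse(azimuths_deg, v_nmo):
    """Fit 1/Vnmo^2(a) = W11*cos^2(a) + 2*W12*sin(a)*cos(a) + W22*sin^2(a)
    (the generic NMO-ellipse form) by linear least squares, then recover the
    semi-axes (velocities) and their azimuths from the eigenvectors of W."""
    a = np.radians(azimuths_deg)
    G = np.column_stack([np.cos(a) ** 2, 2.0 * np.sin(a) * np.cos(a), np.sin(a) ** 2])
    w11, w12, w22 = np.linalg.lstsq(G, 1.0 / v_nmo ** 2, rcond=None)[0]
    W = np.array([[w11, w12], [w12, w22]])
    eigvals, eigvecs = np.linalg.eigh(W)
    semi_axes = 1.0 / np.sqrt(eigvals)                      # ellipse semi-axes as velocities
    axis_azimuths = np.degrees(np.arctan2(eigvecs[1], eigvecs[0]))
    return semi_axes, axis_azimuths

# Synthetic horizontal-reflector test with hypothetical values:
# semi-axes VP0 and VP0*sqrt(1 + 2*delta_v), symmetry axis at beta = 30 degrees
vp0, delta_v, beta = 3000.0, -0.10, 30.0
az = np.arange(0.0, 180.0, 15.0)
phi = np.radians(az - beta)
v_axis, v_iso = vp0 * np.sqrt(1.0 + 2.0 * delta_v), vp0     # along / normal to symmetry axis
v_nmo = 1.0 / np.sqrt(np.cos(phi) ** 2 / v_axis ** 2 + np.sin(phi) ** 2 / v_iso ** 2)
print(fit_nmo_ellipse(az, v_nmo))
```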


Geophysics ◽  
2001 ◽  
Vol 66 (3) ◽  
pp. 904-910 ◽  
Author(s):  
Vladimir Grechka ◽  
Andres Pech ◽  
Ilya Tsvankin ◽  
Baoniu Han

Transverse isotropy with a tilted symmetry axis (TTI media) has been recognized as a common feature of shale formations in overthrust areas, such as the Canadian Foothills. Since TTI layers cause serious problems in conventional imaging, it is important to be able to reconstruct the velocity model suitable for anisotropic depth migration. Here, we discuss the results of anisotropic parameter estimation on a physical-modeling data set. The model represents a simplified version of a typical overthrust section from the Alberta Foothills, with a horizontal reflector overlaid by a bending transversely isotropic layer. Assuming that the TTI layer is homogeneous and the symmetry axis stays perpendicular to its boundaries, we invert P-wave normal-moveout (NMO) velocities and zero-offset traveltimes for the symmetry-direction velocity V0 and the anisotropic parameters ε and δ. The coefficient ε is obtained using the traveltimes of a wave that crosses a dipping TTI block and reflects from the bottom of the model. The inversion for ε is based on analytic expressions for NMO velocity in media with intermediate dipping interfaces. Our estimates of both anisotropic coefficients are close to their actual values. The errors in the inversion, which are associated primarily with the uncertainties in picking the NMO velocities and traveltimes, can be reduced by a straightforward modification of the acquisition geometry. It should be emphasized that the moveout inversion also gives an accurate estimate of the thickness of the TTI layer, thus reconstructing the correct depth scale of the section. Although the physical model used here was relatively simple, our results demonstrate the principal feasibility of anisotropic velocity analysis and imaging in overthrust areas. The main problems in anisotropic processing for TTI models are likely to be caused by the lateral variation of the velocity field and overall structural complexity.
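Where the TI layer is locally flat, the symmetry axis is vertical and the moveout relations reduce to the familiar single-layer VTI forms. The fragment below uses that limit purely for intuition about the quantities being inverted (layer thickness from the zero-offset time along the symmetry axis, and δ from Vnmo = V0·sqrt(1 + 2δ)); it is a deliberately simplified illustration with hypothetical picks, not the dip-dependent inversion used for ε in the paper.

```python
import numpy as np

def delta_from_moveout(v0, v_nmo):
    """Invert the single-layer VTI relation Vnmo = V0*sqrt(1 + 2*delta) for delta.
    Illustrative only: over the bending part of the TI layer the full inversion
    relies on dip-dependent NMO expressions rather than this flat-layer case."""
    return 0.5 * ((v_nmo / v0) ** 2 - 1.0)

def layer_thickness(v0, t0):
    """Depth from two-way zero-offset time when the ray travels along the symmetry axis."""
    return 0.5 * v0 * t0

# Hypothetical picks over a flat part of the TI layer
print(delta_from_moveout(v0=2900.0, v_nmo=3050.0), layer_thickness(2900.0, 0.8))
```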


Geophysics ◽  
1995 ◽  
Vol 60 (1) ◽  
pp. 268-284 ◽  
Author(s):  
Ilya Tsvankin

Description of reflection moveout from dipping interfaces is important in developing seismic processing methods for anisotropic media, as well as in the inversion of reflection data. Here, I present a concise analytic expression for normal‐moveout (NMO) velocities valid for a wide range of homogeneous anisotropic models including transverse isotropy with a tilted in‐plane symmetry axis and symmetry planes in orthorhombic media. In transversely isotropic media, NMO velocity for quasi‐P‐waves may deviate substantially from the isotropic cosine‐of‐dip dependence used in conventional constant‐velocity dip‐moveout (DMO) algorithms. However, numerical studies of NMO velocities have revealed no apparent correlation between the conventional measures of anisotropy and errors in the cosine‐of‐dip DMO correction (“DMO errors”). The analytic treatment developed here shows that for transverse isotropy with a vertical symmetry axis, the magnitude of DMO errors is dependent primarily on the difference between Thomsen parameters ε and δ. For the most common case, ε − δ > 0, the cosine‐of‐dip–corrected moveout velocity remains significantly larger than the moveout velocity for a horizontal reflector. DMO errors at a dip of 45 degrees may exceed 20–25 percent, even for weak anisotropy. By comparing analytically derived NMO velocities with moveout velocities calculated on finite spreads, I analyze anisotropy‐induced deviations from hyperbolic moveout for dipping reflectors. For transversely isotropic media with a vertical velocity gradient and typical (positive) values of the difference ε − δ, inhomogeneity tends to reduce (sometimes significantly) the influence of anisotropy on the dip dependence of moveout velocity.
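The dip dependence discussed here can be explored numerically from the dip-plane NMO expression for a dipping reflector in a homogeneous anisotropic medium, Vnmo(φ) = [V(φ)/cos φ] · sqrt(1 + V''(φ)/V(φ)) / (1 − tan φ · V'(φ)/V(φ)), where V(θ) is the phase velocity and the derivatives are taken with respect to phase angle and evaluated at the dip φ. The sketch below evaluates this expression with the weak-anisotropy Thomsen phase-velocity approximation and finite-difference derivatives, and prints the ratio to the isotropic cosine-of-dip prediction Vnmo(0)/cos φ; the medium parameters are hypothetical, and the deviation of this ratio from unity is what drives the DMO errors analyzed in the paper.

```python
import numpy as np

V0, EPS, DELTA = 3000.0, 0.15, 0.05   # hypothetical VTI medium with eps - delta > 0

def vphase(theta):
    """Thomsen weak-anisotropy P-wave phase velocity; theta is the phase angle
    measured from the (vertical) symmetry axis."""
    s, c = np.sin(theta), np.cos(theta)
    return V0 * (1.0 + DELTA * s**2 * c**2 + EPS * s**4)

def v_nmo(dip_deg, h=1e-4):
    """Dip-plane NMO velocity, Vnmo = (V/cos(phi)) * sqrt(1 + V''/V) / (1 - tan(phi)*V'/V),
    with the phase-velocity derivatives evaluated at the dip angle phi by central
    finite differences."""
    phi = np.radians(dip_deg)
    v = vphase(phi)
    dv = (vphase(phi + h) - vphase(phi - h)) / (2.0 * h)
    d2v = (vphase(phi + h) - 2.0 * v + vphase(phi - h)) / h**2
    return (v / np.cos(phi)) * np.sqrt(1.0 + d2v / v) / (1.0 - np.tan(phi) * dv / v)

for dip in (0.0, 25.0, 45.0, 60.0):
    cos_of_dip = v_nmo(0.0) / np.cos(np.radians(dip))   # isotropic cosine-of-dip prediction
    print(f"dip {dip:4.0f} deg: Vnmo = {v_nmo(dip):7.1f} m/s, "
          f"ratio to cosine-of-dip = {v_nmo(dip) / cos_of_dip:.3f}")
```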


2020 ◽  
Vol 117 (40) ◽  
pp. 24893-24899
Author(s):  
Thomas Kiørboe ◽  
Mridul K. Thomas

Gleaners and exploiters (opportunists) are organisms adapted to feeding in nutritionally poor and rich environments, respectively. A trade-off between these two strategies—a negative relationship between the rate at which organisms can acquire food and ingest it—is a critical assumption in many ecological models. Here, we evaluate evidence for this trade-off across a wide range of heterotrophic eukaryotes from unicellular nanoflagellates to large mammals belonging to both aquatic and terrestrial realms. Using data on the resource acquisition and ingestion rates in >500 species, we find no evidence of a trade-off across species. Instead, there is a positive relationship between maximum clearance rate and maximum ingestion rate. The positive relationship is not a result of lumping together diverse taxa; it holds within all subgroups of organisms we examined as well. Correcting for differences in body mass weakens but does not reverse the positive relationship, so this is not an artifact of size scaling either. Instead, this positive relationship represents a slow–fast gradient in the “pace of life” that overrides the expected gleaner–exploiter trade-off. Other trade-offs must therefore shape ecological processes, and investigating them may provide deeper insights into coexistence, competitive dynamics, and biodiversity patterns in nature. A plausible target for study is the well-documented trade-off between growth rate and predation avoidance, which can also drive the slow–fast gradient we observe here.


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. D27-D36 ◽  
Author(s):  
Andrey Bakulin ◽  
Marta Woodward ◽  
Dave Nichols ◽  
Konstantin Osypov ◽  
Olga Zdraveva

Tilted transverse isotropy (TTI) is increasingly recognized as a more geologically plausible description of anisotropy in sedimentary formations than vertical transverse isotropy (VTI). Although model-building approaches for VTI media are well understood, similar approaches for TTI media are in their infancy, even when the symmetry-axis direction is assumed known. We describe a tomographic approach that builds localized anisotropic models by jointly inverting surface-seismic and well data. We present a synthetic data example of anisotropic tomography applied to a layered TTI model with a symmetry-axis tilt of 45 degrees. We demonstrate three scenarios for constraining the solution. In the first scenario, velocity along the symmetry axis is known and tomography inverts for Thomsen's [Formula: see text] and [Formula: see text] parameters. In the second scenario, tomography inverts for [Formula: see text], [Formula: see text], and velocity, using surface-seismic data and vertical check-shot traveltimes. In contrast to the VTI case, both these inversions are nonunique. To combat nonuniqueness, in the third scenario, we supplement check-shot and seismic data with the [Formula: see text] profile from an offset well. This allows recovery of the correct profiles for velocity along the symmetry axis and [Formula: see text]. We conclude that TTI is more ambiguous than VTI for model building. Additional well data or rock-physics assumptions may be required to constrain the tomography and arrive at geologically plausible TTI models. Furthermore, we demonstrate that VTI models with atypical Thomsen parameters can also fit the same joint seismic and check-shot data set. In this case, although imaging with VTI models can focus the TTI data and match vertical event depths, it leads to substantial lateral mispositioning of the reflections.


2015 ◽  
Vol 119 (1211) ◽  
pp. 67-90 ◽  
Author(s):  
F. Ali ◽  
I. Goulos ◽  
V. Pachidis

This paper presents an integrated multidisciplinary simulation framework, deployed for the comprehensive assessment of combined helicopter–powerplant systems at mission level. Analytical evaluations of existing and conceptual regenerative engine designs are carried out in terms of operational performance and environmental impact. The proposed methodology comprises a wide range of individual modeling theories applicable to helicopter flight dynamics and gas turbine engine performance, as well as a novel, physics-based, stirred reactor model for the rapid estimation of various helicopter emissions species. The overall methodology has been deployed to conduct a preliminary trade-off study for a reference simple-cycle and a conceptual regenerative twin-engine light helicopter, modeled after the Airbus Helicopters Bo105 configuration and simulated under representative mission scenarios. Extensive comparisons are carried out and presented for the aforementioned helicopters at both engine and mission level, along with general flight performance charts including the payload-range diagram. The results of the design trade-off study suggest that the conceptual regenerative helicopter can offer a significant improvement in payload-range capability while still meeting the applicable airworthiness requirements. Furthermore, a representative case study quantifies that, while the regenerative configuration can enhance the mission range and payload capabilities of the helicopter, it may have a detrimental effect on the mission emissions inventory, specifically for NOx (nitrogen oxides). This may impose a trade-off between the fuel economy and the environmental performance of the helicopter. The proposed methodology can be regarded as an enabling technology for the comprehensive assessment of conventional and conceptual helicopter–powerplant systems, in terms of operational performance and environmental impact, as well as for the quantification of their associated trade-offs at mission level.


2018 ◽  
Vol 3 ◽  
pp. 51-70 ◽  
Author(s):  
Thomas Lenormand ◽  
Noémie Harmand ◽  
Romain Gallet

The concept of “cost of resistance” has been very important for decades, both for fundamental reasons (the theory of adaptation) and for a wide range of applications in the genetics and genomics of resistance: resistance to antibiotics, insecticides, herbicides, and fungicides; resistance to chemotherapy in cancer research; and coevolution between all kinds of parasites and their hosts. This paper reviews that history, including the latest developments, shows the value of the idea, but also challenges the usefulness and limits of this widely used concept in light of the most recent developments in adaptation theory. It explains how the concept can be flawed and how this can impede research efforts in the field of resistance at large, including all applied aspects. In particular, it would be clearer to simply measure the fitness effects of mutations across environments and to better distinguish those effects from the ‘pleiotropic effects’ of those mutations. Overall, we show how to correct the concept, and how this correction helps to make sense of the wealth of data that has accumulated in recent years. The main points are: 1. The concept of “cost of resistance” needs to be used carefully to avoid misconceptions, false paradoxes, and flawed applications; the recent developments in adaptation theory are useful to clarify this. 2. “Cost of resistance” and pleiotropy have to be distinguished. More than one trait is required to discuss pleiotropy, whereas resistance evolution must at least involve the modification of one trait. If there is an irreducible trade-off on that trait between environments with and without drug, it creates a fitness effect that is not due to pleiotropy; pleiotropic effects can, but need not, occur in addition. 3. “Cost of resistance” depends on the pair of environments considered with and without drug, so there are as many measures of cost as there are environments without drug. If the focal genotype is not well adapted to a given drug-free environment, it is relatively easy to observe “negative” costs of resistance; there is nothing surprising about this, and it does not indicate an absence of trade-off. 4. Environments with drug can differ according to the dose, and it may be more informative to measure the possible trade-offs among all doses than to focus exclusively on the fitness contrast between the presence and the absence of drug.
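Point 3 can be made concrete with a toy calculation using hypothetical fitness values (not data from the paper): the same resistance mutation shows a positive apparent cost against a well-adapted wild type in one drug-free environment, and a negative apparent cost in another drug-free environment to which the wild type itself is poorly adapted, without implying an absence of trade-off.

```python
# Hypothetical relative fitness values illustrating point 3: the measured "cost"
# of a resistance mutation depends on which drug-free environment is the reference.
fitness = {
    "wild_type": {"lab_medium": 1.00, "harsh_medium": 0.70},
    "resistant": {"lab_medium": 0.95, "harsh_medium": 0.73},
}
for env in ("lab_medium", "harsh_medium"):
    cost = 1.0 - fitness["resistant"][env] / fitness["wild_type"][env]
    print(f"apparent cost of resistance in {env}: {cost:+.1%}")
# Expected output: +5.0% in lab_medium (a classical cost) and about -4.3% in
# harsh_medium (an apparent "negative" cost), with no contradiction between the two.
```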


2020 ◽  
Author(s):  
Vincent A. Keenan ◽  
Stephen J. Cornell

Dispersal polymorphism and mutation play significant roles during biological invasions, potentially leading to evolution and complex behaviour such as accelerating or decelerating invasion fronts. However, life history theory predicts that reproductive fitness, another key determinant of invasion dynamics, may be lower for more dispersive strains. Here, we use a mathematical model to show that unexpected invasion dynamics emerge from the combination of heritable dispersal polymorphism, dispersal-fitness trade-offs, and mutation between strains. We show that the invasion dynamics are determined by the trade-off relationship between the dispersal and population growth rates of the constituent strains. We find that invasion dynamics can be “anomalous” (i.e., faster than any of the strains in isolation), but that the ultimate invasion speed is determined by the traits of at most two strains. The model is simple but generic, so we expect the predictions to apply to a wide range of ecological, evolutionary, or epidemiological invasions.
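A minimal numerical sketch of this kind of model is given below, under assumptions of our own choosing: two strains on a 1-D domain obeying Fisher-KPP dynamics with a shared carrying capacity, a dispersal-fitness trade-off (the faster disperser grows more slowly), and symmetric mutation between strains. Comparing the simulated front speed with each strain's single-strain Fisher speed 2·sqrt(r·D) gives a feel for how the coupled system can spread differently from either strain alone. All parameter values are hypothetical and the discretization is deliberately coarse.

```python
import numpy as np

# Two strains u[0], u[1] on a 1-D line: logistic growth limited by the shared total
# density, different diffusivities D and growth rates r, and symmetric mutation mu.
L, dx, dt, T = 500.0, 0.25, 0.005, 100.0
nx = int(L / dx)
D = np.array([1.0, 4.0])       # strain 2 (index 1) disperses faster...
r = np.array([1.0, 0.4])       # ...but has the lower growth rate (the trade-off)
mu = 1e-3                      # per-capita mutation (switching) rate

x = np.arange(nx) * dx
u = np.zeros((2, nx))
u[0, x < 5.0] = 1.0            # seed the invasion with the slower, fitter strain

def laplacian(f):
    lap = np.zeros_like(f)
    lap[:, 1:-1] = (f[:, 2:] - 2.0 * f[:, 1:-1] + f[:, :-2]) / dx**2
    lap[:, 0], lap[:, -1] = lap[:, 1], lap[:, -2]   # crude zero-flux boundaries
    return lap

front = []
for step in range(int(T / dt)):
    total = u.sum(axis=0)
    du = (D[:, None] * laplacian(u)
          + r[:, None] * u * (1.0 - total)          # competition for the shared resource
          + mu * (u[::-1] - u))                     # symmetric mutation between strains
    u = u + dt * du
    if step % 400 == 0:
        ahead = np.where(total > 0.5)[0]
        front.append((step * dt, x[ahead[-1]] if ahead.size else 0.0))

(t0, x0), (t1, x1) = front[len(front) // 2], front[-1]
print("simulated front speed :", (x1 - x0) / (t1 - t0))
print("single-strain speeds  :", 2.0 * np.sqrt(r * D))   # classical Fisher speeds
```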


2021 ◽  
Author(s):  
Hae-Young Kim ◽  
Anna Bershteyn ◽  
Jessica B. McGillen ◽  
R. Scott Braithwaite

Introduction: New York City (NYC) was a global epicenter of COVID-19. Vaccines against COVID-19 became available in December 2020 with limited supply, resulting in the need for policies regarding prioritization. The next month, SARS-CoV-2 variants were detected that were more transmissible but still vaccine-susceptible, raising scrutiny of these policies. In particular, prioritization of higher-risk people could prevent more deaths per dose of vaccine administered but could also delay herd immunity if the prioritization introduced bottlenecks that lowered vaccination speed (the number of doses that could be delivered per day). We used mathematical modeling to examine the trade-off between prioritization and vaccination speed.
Methods: A stochastic, discrete-time susceptible-exposed-infected-recovered (SEIR) model with age- and comorbidity-adjusted COVID-19 outcomes (infections, hospitalizations, and deaths by July 1, 2021) was used to examine the trade-off between vaccination speed and whether or not vaccination was prioritized to individuals age 65+ and “essential workers,” defined as including first responders and healthcare, transit, education, and public safety workers. The model was calibrated to COVID-19 hospital admissions, hospital census, ICU census, and deaths in NYC. Vaccination speed was assumed to be 10,000 doses per day starting December 15th, 2020, targeting healthcare workers and nursing home populations, and to subsequently expand at alternative starting times and speeds. We compared COVID-19 outcomes across alternative expansion starting times (January 15th, January 21st, or February 1st) and speeds (20,000, 30,000, 50,000, 100,000, 150,000, or 200,000 doses per day for the first dose), as well as alternative prioritization options (“yes” versus “no” prioritization of essential workers and people age 65+). Model projections were produced with and without considering the emergence of a SARS-CoV-2 variant with 56% greater transmissibility over January and February 2021.
Results: In the absence of a COVID-19 vaccine, the emergence of the more transmissible variant would triple the peak in infections, hospitalizations, and deaths and more than double cumulative infections, hospitalizations, and deaths. Offsetting the harm from the more transmissible variant would require reaching a vaccination speed of at least 100,000 doses per day by January 15th or 150,000 per day by January 21st. Prioritizing people ages 65+ and essential workers increased the number of lives saved per vaccine dose delivered: with the emergence of a more transmissible variant, 8,000 deaths could be averted by delivering 115,000 doses per day without prioritization or 71,000 doses per day with prioritization. If prioritization were to cause a bottleneck in vaccination speed, more lives would be saved with prioritization only if the bottleneck reduced vaccination speed by less than one-third of the maximum vaccine delivery capacity. These trade-offs between vaccination speed and prioritization were robust over a wide range of delivery capacities.
Conclusions: The emergence of a more transmissible variant of SARS-CoV-2 has the potential to triple the 2021 epidemic peak and more than double the 2021 COVID-19 burden in NYC. Vaccination could only offset the harm of the more transmissible variant if high speed were achieved in mid- to late January. Prioritization of COVID-19 vaccines to higher-risk populations saves more lives only if it does not create an excessive vaccine delivery bottleneck.
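The central trade-off can be sketched with a deliberately small deterministic two-group SEIR model: vaccination is delivered either untargeted at full speed or preferentially to a higher-risk group at a reduced speed, and cumulative deaths are compared. This is not the calibrated NYC model described above; the population sizes, fatality ratios, and transmission parameters below are hypothetical placeholders.

```python
import numpy as np

def run_seir(doses_per_day, prioritize, days=180):
    """Minimal two-group deterministic SEIR sketch. Group 0 = lower risk,
    group 1 = higher risk. All parameter values are hypothetical placeholders."""
    N = np.array([7.0e6, 1.4e6])               # group population sizes
    ifr = np.array([0.002, 0.05])              # infection fatality ratios
    beta, sigma, gamma = 0.25, 1.0 / 3.0, 1.0 / 5.0
    E = np.array([1000.0, 200.0])
    S, I, R, deaths = N - E, np.zeros(2), np.zeros(2), 0.0
    for _ in range(days):
        # Allocate today's doses; assume one dose fully immunizes a susceptible.
        doses = np.zeros(2)
        if prioritize:                          # high-risk group first, spillover after
            doses[1] = min(doses_per_day, S[1])
            doses[0] = min(doses_per_day - doses[1], S[0])
        elif S.sum() > 0.0:                     # untargeted, proportional to susceptibles
            doses = np.minimum(doses_per_day * S / S.sum(), S)
        S, R = S - doses, R + doses
        force = beta * I.sum() / N.sum()        # homogeneous mixing across groups
        new_E, new_I, new_R = force * S, sigma * E, gamma * I
        S, E, I = S - new_E, E + new_E - new_I, I + new_I - new_R
        deaths += (ifr * new_R).sum()
        R = R + (1.0 - ifr) * new_R
    return deaths

# Full-speed untargeted rollout vs. a slower but prioritized rollout
print("no prioritization, 100k doses/day:", round(run_seir(100_000, prioritize=False)))
print("prioritized,        70k doses/day:", round(run_seir(70_000, prioritize=True)))
```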


2010 ◽  
Vol 67 (1) ◽  
pp. 209-216 ◽  
Author(s):  
Eli P. Fenichel ◽  
Gretchen J.A. Hansen

Fisheries managers must make trade-offs between competing management actions; however, the inherent trade-offs associated with information gathering are seldom explicitly considered. Incorporating economics into management decisions at the outset can aid managers in explicitly considering the trade-off between collecting more information to guide management and taking management actions. We use control of the invasive sea lamprey (Petromyzon marinus) in the Laurentian Great Lakes to illustrate how budget constraints shape this trade-off. Economic theory is used to frame previous empirical work showing that reducing the allocation of resources to conducting assessment, and thereby freeing resources for treatment, would result in a greater reduction of sea lamprey populations — the overarching management objective. The optimal allocation of resources between assessment and control depends on the total budget, the relative cost of each management activity, the marginal reduction in uncertainty associated with increased assessment, and the marginal effectiveness of increased treatment. Formal incorporation of prior information can change the optimal allocation of resources. The approach presented here is generally applicable to a wide range of fishery management and research questions.
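The marginal-analysis argument can be illustrated with a toy allocation problem: split a fixed budget between assessment and treatment, give both activities diminishing returns, let assessment improve the effectiveness (targeting) of treatment, and search for the split that maximizes the reduction in the pest population. The functional forms and numbers below are hypothetical, not estimates for the sea lamprey program.

```python
import numpy as np

# Hypothetical sketch of the budget-allocation trade-off: spend a fraction on
# assessment (which improves targeting of treatment) and the rest on treatment.
budget = 1.0e6                             # total annual budget ($)
cost_assess, cost_treat = 2000.0, 500.0    # $ per unit of each activity

def lamprey_reduction(frac_assess):
    a = frac_assess * budget / cost_assess            # units of assessment effort
    t = (1.0 - frac_assess) * budget / cost_treat     # units of treatment effort
    targeting = 0.4 + 0.6 * (1.0 - np.exp(-a / 100.0))  # assessment improves targeting
    return targeting * (1.0 - np.exp(-t / 800.0))       # fraction of population removed

fracs = np.linspace(0.0, 1.0, 1001)
best = fracs[np.argmax([lamprey_reduction(f) for f in fracs])]
print(f"optimal share of budget spent on assessment: {best:.1%}")
```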

