Linear aspects of tomographic velocity analysis

Geophysics ◽  
1991 ◽  
Vol 56 (4) ◽  
pp. 483-495 ◽  
Author(s):  
C. Stork ◽  
R. W. Clayton

Prestack velocity analysis in areas of complex structure is a coupled migration and transmission inversion problem that can be analyzed from a tomographic perspective. By making as few a priori assumptions about the solution as possible in parameterizing the inverse problem, generalized tomographic velocity analysis is applicable to a wide range of geologic cases. Constraints modify the method to the unique characteristics of each application. The ray trace/traveltime formulation for tomography, as proposed by Bishop et al. (1985), provides a conceptual tool for presenting features that are important to automated prestack velocity analysis in complex structure, such as (1) the coupling of the velocity field to the reflector positions, (2) the nonuniform coverage of the model by the data, (3) the ability to perform a controlled inversion of large matrices over a wide eigenvalue range, and (4) the implementation of constraints in the inversion. These features may impact other automated prestack velocity analysis methods for reflection seismology.
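The ray-trace/traveltime formulation reduces, in its simplest form, to a linear system t = Ls relating cell slownesses s to observed traveltimes t through a matrix L of ray path lengths. A minimal toy sketch (all sizes and values hypothetical, not from the paper) of the damped least-squares inversion that gives controlled behavior over a wide eigenvalue range:

```python
import numpy as np

# Toy straight-ray traveltime tomography: cells hold slowness
# (1/velocity); each ray's traveltime is the sum over cells of
# (path length in cell) * (cell slowness), i.e. t = L @ s.
rng = np.random.default_rng(0)
n_cells, n_rays = 5, 20
true_s = np.array([0.5, 0.4, 0.45, 0.6, 0.5])      # s/km (slowness)
L = rng.uniform(0.0, 2.0, size=(n_rays, n_cells))  # km path lengths
t_obs = L @ true_s                                 # noise-free traveltimes

# Damped least squares: the small diagonal term controls the
# inversion over a wide eigenvalue range.
eps = 1e-6
s_est = np.linalg.solve(L.T @ L + eps * np.eye(n_cells), L.T @ t_obs)
print(np.round(s_est, 3))
```

In practice the damping term is one of the constraints the abstract mentions; nonuniform ray coverage shows up as small eigenvalues of LᵀL.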

Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.


Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCA19-WCA34 ◽  
Author(s):  
Christiaan C. Stolk ◽  
Maarten V. de Hoop ◽  
William W. Symes

Recent analysis and synthetic examples have shown that many prestack depth migration methods produce nonflat image gathers containing spurious events, even when provided with a kinematically correct migration velocity field, if this velocity field is highly refractive. This pathology occurs in all migration methods that produce partial images as independent migrations of data bins. Shot-geophone prestack depth migration is an exception to this pattern: each point in the prestack image volume depends explicitly on all traces within the migration aperture. Using a ray-theoretical analysis, we have found that shot-geophone migration produces focused (subsurface-offset domain) or flat (scattering-angle domain) image gathers, provided there is a curvilinear coordinate system defining pseudodepth with respect to which the rays carrying significant energy do not turn, and that the acquisition coverage is sufficient to determine all such rays. Although the analysis is theoretical and idealized, a synthetic example suggests that its implications remain valid for practical implementations, and that shot-geophone prestack depth migration could be a particularly appropriate tool for velocity analysis in a complex structure.


2015 ◽  
Vol 33 (3) ◽  
pp. 503
Author(s):  
Danian Steinkirch De Oliveira ◽  
Milton José Porsani ◽  
Paulo Eduardo Miranda Cunha

ABSTRACT. We developed a strategy for automatic picking of semblance panels that uses a genetic algorithm optimization method. Combined with constraints and penalties set from a priori information, the result is a nonlinear fit of interval velocities in time that, when converted to root-mean-square (RMS) velocities, maximizes the stacked sum of the common-midpoint (CMP) gather after normal-moveout (NMO) correction. Good imaging of deep reflectors, especially below the salt layer in Brazilian basins, remains a major challenge. Obtaining a seismic velocity field that corresponds to the subsurface geology and yields a focused seismic image is the main target of seismic processing. In the last decade, reflection tomography has established itself as one of the main methods of velocity-model building for seismic data migration. Full-waveform inversion (FWI), in turn, has become feasible for inverting 2D and 3D velocity models thanks to recent advances in computing. Although stacking velocity analysis is the least accurate of these methods for generating velocity fields, it is still widely used by oil and seismic processing companies because of its low cost and because it can provide a good initial velocity field for tomography and FWI.
Keywords: Genetic Algorithm, velocity analysis, Semblance.
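The semblance objective that such an automatic picker maximizes can be sketched as follows. This is a minimal illustration on a synthetic single-reflector CMP gather (sample interval, offsets, and velocities are all hypothetical), not the authors' implementation:

```python
import numpy as np

# Synthetic CMP gather: one reflector at zero-offset time t0 with
# RMS velocity v_true puts a unit spike on the reflection hyperbola.
dt = 0.004                                # s, sample interval
t = np.arange(0.0, 2.0, dt)
offsets = np.linspace(100, 1000, 10)      # m
v_true, t0 = 2000.0, 1.0                  # m/s, s

gather = np.zeros((len(t), len(offsets)))
for j, x in enumerate(offsets):
    tx = np.sqrt(t0**2 + (x / v_true) ** 2)
    gather[int(round(tx / dt)), j] = 1.0

def semblance(v):
    """Semblance of the NMO-corrected gather for trial velocity v."""
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        tx = np.sqrt(t**2 + (x / v) ** 2)     # moveout for each t0 sample
        idx = np.round(tx / dt).astype(int)
        ok = idx < len(t)
        corrected[ok, j] = gather[idx[ok], j]
    num = (corrected.sum(axis=1) ** 2).sum()
    den = (corrected**2).sum() * len(offsets)
    return num / max(den, 1e-12)

# Scan trial velocities; the scan should peak at the true velocity.
vels = np.arange(1500, 2501, 50)
best = vels[np.argmax([semblance(float(v)) for v in vels])]
print(best)
```

A genetic algorithm replaces this brute-force scan with an evolutionary search over interval-velocity profiles, subject to the a priori constraints and penalties described above.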


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Spyridoula Vazou ◽  
Collin A. Webster ◽  
Gregory Stewart ◽  
Priscila Candal ◽  
Cate A. Egan ◽  
...  

Abstract Background/Objective Movement integration (MI) involves infusing physical activity into normal classroom time. A wide range of MI interventions have succeeded in increasing children's participation in physical activity. However, no previous research has attempted to unpack the various MI intervention approaches. Therefore, this study aimed to systematically review, qualitatively analyze, and develop a typology of MI interventions conducted in primary/elementary school settings. Subjects/Methods Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed to identify published MI interventions. Irrelevant records were removed first by title, then by abstract, and finally by full text, leaving 72 studies for qualitative analysis. A deductive approach, using previous MI research as an a priori analytic framework, was combined with inductive techniques to analyze the data. Results Four types of MI interventions were identified and labeled based on their design: student-driven, teacher-driven, researcher-teacher collaboration, and researcher-driven. Each type was further refined based on the MI strategies (movement breaks, active lessons, other: opening activity, transitions, reward, awareness), the level of intrapersonal and institutional support (training, resources), and the delivery (dose, intensity, type, fidelity). Nearly half of the interventions were researcher-driven, which may undermine the sustainability of MI as a routine practice by teachers in schools. An imbalance is evident among the MI strategies: transitions, opening and awareness activities, and rewards have received little study. Delivery should be further examined, with a strong focus on reporting fidelity. Conclusions Distinct approaches are most often employed to promote the use of MI, and these approaches may often lack a minimum standard for reporting MI intervention details. This typology may be useful for translating the evidence into practice in real-life settings and for better understanding and studying MI interventions.


2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Li Li ◽  
Yanping Zhou

Abstract In this work, we consider the density-dependent incompressible inviscid Boussinesq equations in $\mathbb{R}^{N}\ (N\geq 2)$. Using the basic energy method, we first give a priori estimates of smooth solutions and then derive a blow-up criterion. This shows that the maximum norm of the gradient of the velocity field controls the breakdown of smooth solutions of the density-dependent inviscid Boussinesq equations. Our result extends the known blow-up criteria.
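A blow-up criterion of this kind is of Beale-Kato-Majda type. An illustrative statement (the exact norms and function spaces follow the paper) is: if $T^{*}$ is the maximal existence time of the smooth solution, then

```latex
T^{*} < \infty
\quad\Longrightarrow\quad
\int_{0}^{T^{*}} \bigl\| \nabla u(\cdot, t) \bigr\|_{L^{\infty}} \, dt = \infty ,
```

so that boundedness of this integral rules out breakdown before $T^{*}$.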


2021 ◽  
pp. 0310057X2097665
Author(s):  
Natasha Abeysekera ◽  
Kirsty A Whitmore ◽  
Ashvini Abeysekera ◽  
George Pang ◽  
Kevin B Laupland

Although a wide range of medical applications for three-dimensional printing technology have been recognised, little has been described about its utility in critical care medicine. The aim of this review was to identify three-dimensional printing applications related to critical care practice. A scoping review of the literature was conducted via a systematic search of three databases. A priori specified themes included airway management, procedural support, and simulation and medical education. The search identified 1544 articles, of which 65 were included. Spanning many applications, most were published since 2016 in non-critical-care, discipline-specific journals. Most studies related to the use of three-dimensional printed models for simulation and reported good fidelity; however, several studies reported that the models poorly represented human tissue characteristics. Randomised controlled trials found some models were equivalent to commercial airway-related skills trainers. Several studies on the use of three-dimensional printed model simulations for spinal and neuraxial procedures reported a high degree of realism, including ultrasonography applications of three-dimensional printing technologies. This scoping review identified several novel applications for three-dimensional printing in critical care medicine. Three-dimensional printing technologies have been under-utilised in critical care and provide opportunities for future research.


1968 ◽  
Vol 90 (1) ◽  
pp. 45-50
Author(s):  
R. G. Fenton

The upper bound of the average ram pressure, based on an assumed radial flow velocity field, is derived for plane strain extrusion. Ram pressures are calculated for a complete range of reduction ratios and die angles, considering a wide range of frictional conditions. Results are compared with upper-bound ram pressures obtained by considering velocity fields other than the radial flow field, and it is shown that for a considerable range of reduction ratios and die angles, the radial flow field yields better upper bounds for the average ram pressure.


2016 ◽  
Vol 12 (S325) ◽  
pp. 145-155
Author(s):  
Fionn Murtagh

Abstract This work emphasizes that heterogeneity, diversity, discontinuity, and discreteness in data are to be exploited in classification and regression problems. A global a priori model may not be desirable. For data analytics in cosmology, this is motivated by the variety of cosmological objects, such as elliptical, spiral, active, and merging galaxies at a wide range of redshifts. Our aim is matching and similarity-based analytics that take account of discrete relationships in the data. The information structure of the data is represented by a hierarchy or tree, where the branch structure, rather than just the proximity, is important. The representation is related to p-adic number theory. The clustering or binning of the data values, related to the precision of the measurements, has a central role in this methodology. If used for regression, our approach is a method of cluster-wise regression, generalizing nearest-neighbour regression. Both to exemplify this analytics approach and to demonstrate computational benefits, we address the well-known photometric redshift or ‘photo-z’ problem, seeking to match Sloan Digital Sky Survey (SDSS) spectroscopic and photometric redshifts.
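The nearest-neighbour regression baseline that the cluster-wise approach generalizes can be sketched as follows. This is a hypothetical mock photo-z setup (the colours, weights, and k are invented for illustration), not the paper's hierarchical method:

```python
import numpy as np

# Mock photometry: each object has 4 "colour" features, and its
# redshift is a linear function of them (purely for illustration).
rng = np.random.default_rng(1)
colours = rng.uniform(0, 1, size=(500, 4))          # mock photometric data
z_spec = colours @ np.array([0.5, 0.3, 0.1, 0.1])   # mock spectroscopic z

def knn_photo_z(query, k=5):
    """Predict redshift as the mean z of the k nearest objects
    in colour space (plain nearest-neighbour regression)."""
    d = np.linalg.norm(colours - query, axis=1)
    nearest = np.argsort(d)[:k]
    return z_spec[nearest].mean()

q = np.array([0.5, 0.5, 0.5, 0.5])
print(round(knn_photo_z(q), 3))   # close to 0.5 for this linear mock
```

The paper's method replaces the flat neighbour search with a hierarchy (tree) over binned data values, so that matching respects branch structure rather than raw proximity alone.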


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Kelin Lu ◽  
K. C. Chang ◽  
Rui Zhou

This paper addresses the problem of distributed fusion when the conditional independence assumptions on sensor measurements or local estimates are not met. A new data fusion algorithm called Copula fusion is presented. The proposed method is grounded in Copula statistical modeling and Bayesian analysis. The primary advantage of the Copula-based methodology is that it can reveal the unknown correlation, allowing one to build joint probability distributions with potentially arbitrary underlying marginals and a desired intermodal dependence. The proposed fusion algorithm requires no a priori knowledge of communication patterns or network connectivity. The simulation results show that Copula fusion yields a consistent estimate for a wide range of process noises.
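The core Copula idea, building a joint distribution from arbitrary marginals plus a chosen dependence structure, can be sketched with a Gaussian copula. This is a generic illustration (the correlation value and exponential marginals are invented), not the paper's fusion algorithm:

```python
import math
import numpy as np

# Gaussian copula with correlation rho coupling two exponential
# marginals: sample correlated normals, push them through the normal
# CDF to get dependent uniforms, then apply any inverse marginal CDF.
rng = np.random.default_rng(2)
rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=20000)

phi = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
u = phi(z)              # marginally uniform, dependent via the copula
x = -np.log(1.0 - u)    # exponential(1) marginals via inverse CDF

# Marginals are exponential (mean ~1), yet the dependence survives.
print(round(float(np.corrcoef(x[:, 0], x[:, 1])[0, 1]), 2))
```

Because the dependence is specified separately from the marginals, the same construction works whatever the local estimators' distributions are, which is what lets a Copula-based fusion rule drop the conditional independence assumption.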


Author(s):  
Vu Tuan

Abstract We prove that, by taking suitable initial distributions, only finitely many measurements on the boundary are required to uniquely recover the diffusion coefficient of a one-dimensional fractional diffusion equation. If a lower bound on the diffusion coefficient is known a priori, then only two measurements are sufficient. The technique is based on the possibility of extracting the full boundary spectral data from special lateral measurements.
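The canonical form of such a problem (assumed here for illustration; the paper's exact formulation may differ) is a time-fractional diffusion equation with a Caputo derivative of order $\alpha$, where the coefficient $a(x)$ is to be recovered from boundary measurements:

```latex
\partial_{t}^{\alpha} u(x,t)
  = \partial_{x}\bigl( a(x)\, \partial_{x} u(x,t) \bigr),
\qquad 0 < x < 1,\ \ t > 0,\ \ 0 < \alpha < 1 .
```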

