Estimation of polarization and slowness in mixed wavefields

Geophysics ◽  
1992 ◽  
Vol 57 (6) ◽  
pp. 805-814 ◽  
Author(s):  
Woon Hyun Cho ◽  
Terry W. Spencer

A new algorithm is developed for estimating the moveout velocities and polarization states of mixed wavefields recorded by multicomponent arrays in the presence of random noise. The algorithm is applicable to a spatial and temporal data window in which more than two events are present. Three fundamental attributes of the waves are determined: polarization angle, apparent slowness, and the change in amplitude between adjacent detectors. In implementing the method, it is assumed that the data are recorded at equispaced geophones located in a spatial window in which the three parameters are constant. Robustness is achieved by averaging the transfer matrix over all combinations of the subarrays that have the same transfer matrix. Application of a least-squares criterion reduces the mathematics to an eigenvalue problem. The eigenvalues are complex, and their magnitudes determine the amplitude change factor. The phase is a linear function of frequency whose slope determines the vertical slowness. The eigenvectors are the polarizations. The input data consist of the cross-power spectra between subarrays that contain the same number of elements and are shifted by zero or one geophone separation. Examples illustrate the application of the algorithm to synthetic data. Numerical test results show that the performance of the method is sensitive neither to the time overlap between events nor to the degree of similarity between waveforms.
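
A rough illustration of the estimation step follows (a minimal Python sketch, not the authors' code): assuming the per-frequency transfer matrices between shifted subarrays have already been formed from the cross-power spectra, an eigendecomposition yields the three attributes. The eigenvalue magnitude gives the amplitude change factor, the slope of the unwrapped eigenvalue phase versus frequency gives the slowness, and the eigenvector gives the polarization. All names and array shapes are illustrative assumptions.

```python
import numpy as np

def wave_attributes(transfer_matrices, freqs, dx):
    """Extract amplitude factor, slowness, and polarization from
    frequency-dependent inter-subarray transfer matrices.

    transfer_matrices : (nf, 2, 2) complex array, one matrix per frequency
    freqs             : (nf,) frequencies in Hz
    dx                : geophone spacing in meters
    """
    amps, phases, pols = [], [], []
    for T in transfer_matrices:
        w, v = np.linalg.eig(T)
        k = np.argmax(np.abs(w))      # follow the dominant event
        amps.append(np.abs(w[k]))     # amplitude change between detectors
        phases.append(np.angle(w[k]))
        pols.append(v[:, k])          # polarization eigenvector
    # The phase is linear in frequency: angle = -2*pi*f*p*dx, so the
    # slope of the unwrapped phase determines the slowness p.
    slope = np.polyfit(freqs, np.unwrap(phases), 1)[0]
    slowness = -slope / (2.0 * np.pi * dx)
    return np.mean(amps), slowness, pols
```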

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ermanno Cordelli ◽  
Paolo Soda ◽  
Giulio Iannello

Abstract
Background: Biological phenomena usually evolve over time, and recent advances in high-throughput microscopy have made it possible to collect multiple 3D images over time, generating 3D+t (or 4D) datasets. Extracting useful information requires spatial and temporal data on the particles in the images, but particle tracking and feature extraction need some kind of assistance.
Results: This manuscript introduces our new freely downloadable toolbox, Visual4DTracker. It is a MATLAB package implementing several useful functionalities to navigate, analyse and proof-read the track of each particle detected in any 3D+t stack. Furthermore, it allows users to proof-read and evaluate the traces with respect to a given gold standard. The Visual4DTracker toolbox permits users to visualize and save all the generated results through a user-friendly graphical user interface. This tool has been successfully used in three applicative examples. The first processes synthetic data to show all the software functionalities. The second shows how to process a 4D image stack capturing the time-lapse growth of Drosophila cells in an embryo. The third presents a quantitative analysis of insulin granules in living beta-cells, showing that such particles have two main dynamics that coexist inside the cells.
Conclusions: Visual4DTracker is a MATLAB software package to visualize, handle and manually track 3D+t stacks of microscopy images containing objects such as cells and granules. With its unique set of functions, it permits the user to analyze and proof-read 4D data in a friendly 3D fashion. The tool is freely available at https://drive.google.com/drive/folders/19AEn0TqP-2B8Z10kOavEAopTUxsKUV73?usp=sharing


2021 ◽  
Vol 13 (15) ◽  
pp. 2967
Author(s):  
Nicola Acito ◽  
Marco Diani ◽  
Gregorio Procissi ◽  
Giovanni Corsini

Atmospheric compensation (AC) allows the retrieval of the reflectance from the measured at-sensor radiance and is a fundamental and critical task for the quantitative exploitation of hyperspectral data. Recently, a learning-based (LB) approach, named LBAC, has been proposed for the AC of airborne hyperspectral data in the visible and near-infrared (VNIR) spectral range. LBAC makes use of a parametric regression function whose parameters are learned by a strategy based on synthetic data that accounts for (1) a physics-based model for the radiative transfer, (2) the variability of the surface reflectance spectra, and (3) the effects of random noise and spectral miscalibration errors. In this work we extend LBAC with respect to two different aspects: (1) the platform for data acquisition and (2) the spectral range covered by the sensor. In particular, we propose the extension of LBAC to spaceborne hyperspectral sensors operating in the VNIR and short-wave infrared (SWIR) portions of the electromagnetic spectrum. We specifically refer to the sensor of the PRISMA (PRecursore IperSpettrale della Missione Applicativa) mission, the recent Earth Observation mission of the Italian Space Agency, which offers a great opportunity to improve knowledge of the scientific and commercial applications of spaceborne hyperspectral data. In addition, we introduce a curve fitting-based procedure for the estimation of the column water vapor content of the atmosphere that directly exploits the reflectance data provided by LBAC. Results obtained on four different PRISMA hyperspectral images are presented and discussed.
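
As a hedged sketch of the learning-based idea (not the authors' implementation), the Python fragment below fits a parametric regression from at-sensor radiance spectra to reflectance spectra. In LBAC the training pairs would come from a physics-based radiative-transfer model sampled over variable atmospheres, surface spectra, noise and miscalibration; random placeholders stand in for them here, and the water-vapor curve fitting is not shown.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training set (random numbers, illustrative shapes only).
rng = np.random.default_rng(0)
n_train, n_bands = 5000, 60
L_train = rng.random((n_train, n_bands))   # at-sensor radiance spectra
R_train = rng.random((n_train, n_bands))   # corresponding reflectance

# Parametric regression function whose parameters are learned offline
# from the synthetic pairs.
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=30)
model.fit(L_train, R_train)

# At application time, each measured pixel spectrum maps to reflectance.
R_hat = model.predict(L_train[:5])
```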


Author(s):  
Robin E Upham ◽  
Michael L Brown ◽  
Lee Whittaker

Abstract We investigate whether a Gaussian likelihood is sufficient to obtain accurate parameter constraints from a Euclid-like combined tomographic power spectrum analysis of weak lensing, galaxy clustering and their cross-correlation. Testing its performance on the full sky against the Wishart distribution, which is the exact likelihood under the assumption of Gaussian fields, we find that the Gaussian likelihood returns accurate parameter constraints. This accuracy is robust to the choices made in the likelihood analysis, including the choice of fiducial cosmology, the range of scales included, and the random noise level. We extend our results to the cut sky by evaluating the additional non-Gaussianity of the joint cut-sky likelihood in both its marginal distributions and dependence structure. We find that the cut-sky likelihood is more non-Gaussian than the full-sky likelihood, but at a level insufficient to introduce significant inaccuracy into parameter constraints obtained using the Gaussian likelihood. Our results should not be affected by the assumption of Gaussian fields, as this approximation only becomes inaccurate on small scales, which in turn corresponds to the limit in which any non-Gaussianity of the likelihood becomes negligible. We nevertheless compare against N-body weak lensing simulations and find no evidence of significant additional non-Gaussianity in the likelihood. Our results indicate that a Gaussian likelihood will be sufficient for robust parameter constraints with power spectra from Stage IV weak lensing surveys.
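
The full-sky comparison can be illustrated numerically. The sketch below (Python/SciPy, with an invented two-field spectrum) evaluates the exact Wishart log-likelihood of a sample power spectrum matrix against a Gaussian approximation whose covariance uses the standard full-sky formula Cov(C_ij, C_kl) = (C_ik C_jl + C_il C_jk)/(2l+1); all numbers are placeholders, not values from the paper.

```python
import numpy as np
from scipy import stats

ell = 50                            # example multipole
nu = 2 * ell + 1                    # full-sky degrees of freedom
C_true = np.array([[1.0, 0.3],
                   [0.3, 2.0]])     # invented 2-field fiducial spectrum

def log_like_wishart(Chat, C, nu):
    # Exact full-sky likelihood for Gaussian fields: nu * Chat ~ Wishart.
    return stats.wishart.logpdf(nu * Chat, df=nu, scale=C)

def log_like_gauss(Chat, C, nu):
    # Gaussian approximation for the distinct elements of Chat, with
    # covariance Cov(C_ij, C_kl) = (C_ik*C_jl + C_il*C_jk) / nu.
    idx = [(0, 0), (0, 1), (1, 1)]
    d = np.array([Chat[i, j] - C[i, j] for i, j in idx])
    cov = np.array([[(C[i, k] * C[j, l] + C[i, l] * C[j, k]) / nu
                     for k, l in idx] for i, j in idx])
    return stats.multivariate_normal.logpdf(d, mean=np.zeros(3), cov=cov)

# Draw one realization of the sample spectrum and compare the two.
Chat = stats.wishart.rvs(df=nu, scale=C_true, random_state=42) / nu
print(log_like_wishart(Chat, C_true, nu), log_like_gauss(Chat, C_true, nu))
```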


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V79-V86 ◽  
Author(s):  
Hakan Karsli ◽  
Derman Dondurur ◽  
Günay Çifçi

Time-dependent amplitude and phase information of stacked seismic data are processed independently using complex trace analysis in order to facilitate interpretation by improving resolution and decreasing random noise. We represent seismic traces using their envelopes and instantaneous phases obtained by the Hilbert transform. The proposed method reduces the amplitudes of the low-frequency components of the envelope, while preserving the phase information. Several tests are performed in order to investigate the behavior of the present method for resolution improvement and noise suppression. Applications on both 1D and 2D synthetic data show that the method is capable of reducing the amplitudes and temporal widths of the side lobes of the input wavelets, and hence, the spectral bandwidth of the input seismic data is enhanced, resulting in an improvement in the signal-to-noise ratio. The bright-spot anomalies observed on the stacked sections become clearer because the output seismic traces have a simplified appearance allowing an easier data interpretation. We recommend applying this simple signal processing for signal enhancement prior to interpretation, especially for single channel and low-fold seismic data.
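
A minimal sketch of the envelope/phase idea follows, assuming a simple Butterworth estimate of the envelope's low-frequency part and an attenuation factor that the abstract does not specify; it is an illustration of the principle, not the authors' exact processing.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def enhance_trace(trace, dt, f_cut=10.0, alpha=0.9):
    """Reduce the low-frequency part of the envelope, keep the phase.

    trace : 1-D seismic trace sampled every dt seconds
    f_cut : corner frequency (Hz) separating the low-frequency envelope
            components (an assumed value, not from the paper)
    alpha : attenuation factor applied to the low-frequency envelope
    """
    analytic = hilbert(trace)
    envelope = np.abs(analytic)      # instantaneous amplitude
    phase = np.angle(analytic)       # instantaneous phase

    # Estimate the low-frequency envelope with a Butterworth low-pass
    # (normalized cutoff = f_cut / Nyquist) and attenuate it.
    b, a = butter(4, f_cut * 2.0 * dt, btype="low")
    env_low = filtfilt(b, a, envelope)
    env_mod = np.clip(envelope - alpha * env_low, 0.0, None)

    # Recombine: modified amplitude, original phase information.
    return env_mod * np.cos(phase)
```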


2019 ◽  
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

SUMMARY We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase picking algorithm to the numerical wavefields produced from each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location. The mean or median value at the source location thus approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers and velocity model errors. It is found that the waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both a laboratory and field setting demonstrates that the technique performs at least as well as existing techniques.
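
The location step can be sketched compactly. Assuming the traveltime fields from each station have been precomputed by the waveform simulations, the following Python fragment forms the shifted, time-reversed fields, minimizes the dispersion across stations, and takes the median as the origin time; array names and shapes are illustrative.

```python
import numpy as np

def locate_event(traveltime_fields, arrival_times):
    """Grid-search location from stored traveltime fields.

    traveltime_fields : (n_sta, nz, ny, nx) traveltimes from each station
                        to every grid node (one waveform simulation per
                        station, using source-receiver reciprocity)
    arrival_times     : (n_sta,) picked arrival times of one event
    Returns the grid index of the source and the estimated origin time.
    """
    # Shifted, time-reversed fields: a candidate origin time per node.
    origin = arrival_times[:, None, None, None] - traveltime_fields

    # All fields agree at the true source, so a dispersion measure such
    # as the standard deviation across stations is minimized there.
    spread = np.std(origin, axis=0)
    idx = np.unravel_index(np.argmin(spread), spread.shape)

    # The median across stations is robust to outlier picks.
    t0 = np.median(origin[(slice(None),) + idx])
    return idx, t0
```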


2018 ◽  
Vol 4 (1) ◽  
pp. 43
Author(s):  
Heri Sugito ◽  
Ketut Sofjan Firdausi

This research evaluates the contamination of vegetable cooking oil by pig fat using the transmission polarization method. The samples are palm oil contaminated with chicken oil and with pork oil, at varying chicken-oil and pork-oil contents. The light source is a green laser with a wavelength of 532 ± 10 nm. Measurements are made by observing the change in the transmission polarization angle both without an external electric field and with an external field generated by two copper plates held at a voltage of 0-6 kV. Test results show that palm oil contaminated with pork oil exhibits the greatest change in polarization angle compared with pure palm oil and palm oil contaminated with chicken oil. This is because the saturated fatty acid content of pork oil is greater than that of pure palm oil and chicken oil. With these results, the transmission polarization method is expected to become a method for the halal evaluation of cooking oil.
Keywords: Transmission Polarization, Electrooptics, Cooking Oil, Impurities of Lard, Halal


2020 ◽  
Vol 10 (4) ◽  
pp. 339-348
Author(s):  
Mahmoud Saleh ◽  
Ádám Nagy ◽  
Endre Kovács

This paper is the second part of a paper series in which we create and examine new numerical methods for solving the heat conduction equation. Here we present numerical test results for the new algorithms, which were constructed in Part 1 using the known but non-conventional UPFD and odd-even hopscotch methods. All studied systems have one space dimension, and the physical properties of the heat-conducting media are uniform. We also examine different possibilities for treating heat sources.
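
For reference, here is a minimal Python sketch of the classic odd-even hopscotch update for the 1-D heat equation; it shows the generic scheme, not necessarily the authors' exact constructions from Part 1.

```python
import numpy as np

def hopscotch_step(u, r, n):
    """One odd-even hopscotch step for u_t = alpha * u_xx on a uniform
    1-D grid with fixed (Dirichlet) boundary values.

    u : (nx,) temperatures at time level n
    r : alpha * dt / dx**2
    n : time-step index (controls the odd/even role of each node)
    """
    u_new = u.copy()
    i = np.arange(1, len(u) - 1)

    # Pass 1: explicit update on nodes where (i + n) is even.
    e = i[(i + n) % 2 == 0]
    u_new[e] = u[e] + r * (u[e - 1] - 2.0 * u[e] + u[e + 1])

    # Pass 2: implicit update on the remaining nodes; since both
    # neighbours already hold new values, it reduces to an explicit
    # formula, which is what makes hopscotch cheap.
    o = i[(i + n) % 2 == 1]
    u_new[o] = (u[o] + r * (u_new[o - 1] + u_new[o + 1])) / (1.0 + 2.0 * r)
    return u_new
```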


1957 ◽  
Vol 1 (02) ◽  
pp. 27-55
Author(s):  
John P. Breslin

It is demonstrated in this paper that the deepwater wave drag of a hydrofoil of finite span can be found directly from the theory developed largely for ship hydrodynamics by Havelock and others. The wave drag is then studied at high Froude numbers, and from the observed behavior the induced drag of the hydrofoil can be deduced from existing aerodynamic formulas. Evaluation of the resulting formulas is effected for two arbitrary load distributions, and a comparison with some model test results is made. A practical approximation which gives the influence of gravity over a range of high Froude numbers is found, and from this one can deduce a Froude number beyond which the effects of gravity may be ignored. It is also shown that an expression for the waves at some distance aft of the hydrofoil can be deduced from the general formulas developed for ship hydrodynamics. A discussion of the wave pattern is given, with particular emphasis on the centerline profile at high Froude numbers, and a contrast is pointed out in regard to the results of the two-dimensional theory for the hydrofoil waves and wave resistance.


2019 ◽  
Vol 10 (1) ◽  
pp. 73 ◽  
Author(s):  
Einar Agletdinov ◽  
Dmitry Merson ◽  
Alexei Vinogradov

A novel methodology is proposed to enhance the reliability of detection of low-amplitude transients in a noisy time series. Such time series often arise in a wide range of practical situations where different sensors are used for condition monitoring of mechanical systems, integrity assessment of industrial facilities and/or microseismicity studies. In all these cases, the early and reliable detection of possible damage is of paramount importance and is practically limited by the detectability of transient signals on the background of random noise. The proposed triggering algorithm is based on a logarithmic derivative of the power spectral density function. It was tested on synthetic data mimicking an actual ultrasonic acoustic emission signal recorded continuously, at different signal-to-noise ratios (SNR). Considerable advantages of the proposed method over the established fixed amplitude threshold and STA/LTA (Short Time Average / Long Time Average) techniques are demonstrated in comparative tests.
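
For concreteness, the following Python sketch contrasts a classic STA/LTA statistic with one hedged reading of the proposed idea, a frame-to-frame derivative of the log power spectral density; the window lengths, band handling and exact form of the derivative are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def sta_lta(x, nsta, nlta):
    """Classic STA/LTA: ratio of short- to long-term average power;
    a trigger fires where the ratio exceeds a threshold."""
    p = x ** 2
    sta = np.convolve(p, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(p, np.ones(nlta) / nlta, mode="same")
    return sta / np.maximum(lta, 1e-20)

def log_psd_derivative(x, fs, win, hop):
    """Frame-to-frame change of the log power spectral density: an
    emerging transient alters the spectrum faster than stationary
    noise does, so peaks in this statistic mark candidate events."""
    logs = []
    for start in range(0, len(x) - win, hop):
        f, p = welch(x[start:start + win], fs=fs, nperseg=min(256, win))
        logs.append(np.log(p + 1e-20))
    return np.abs(np.diff(np.array(logs), axis=0)).sum(axis=1)
```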


Geophysics ◽  
1988 ◽  
Vol 53 (5) ◽  
pp. 707-720 ◽  
Author(s):  
Dave Deming ◽  
David S. Chapman

The present-day temperature field in a sedimentary basin is a constraint on the maturation of hydrocarbons; this temperature field may be estimated by inverting corrected bottom-hole temperature (BHT) data. Thirty-two BHTs from the Pineview oil field are corrected for drilling disturbances by a Horner plot and inverted for the geothermal gradient in nine formations. Both least-squares (L2) norm and uniform (L1) norm inversions are used; the L1 norm is found to be more robust for the Pineview data. The inversion removes random error from the corrected BHT data by partitioning scatter between noise associated with the BHT measurement and correction processes and local variations in the geothermal gradient. Three hundred thermal-conductivity and density measurements on drill cuttings are used, together with formation density logs, to estimate the in situ thermal conductivity of six of the nine formations. The thermal-conductivity estimates are used in a finite-element model to evaluate 2-D conductive heat refraction and, for a series of inversions of synthetic data, to assess the influence of systematic and random noise on the inversion results. A temperature-anomaly map illustrates that a temperature field calculated by a forward application of the inversion results has less error than any single corrected BHT. Mean background heat flow at Pineview is found to be [Formula: see text] (±13 percent), but is locally higher [Formula: see text] due to heat refraction. The BHT inversion (1) is limited by systematic noise or model error, (2) achieves excellent resolution of the temperature field although resolution of individual formation gradients may be poor, and (3) generally cannot detect lateral variations in heat flow unless the thermal-conductivity structure is constrained.
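
The Horner correction step can be sketched in a few lines. Assuming two or more BHTs measured at the same depth with different shut-in times, the equilibrium temperature is the intercept of a straight-line fit in Horner time; variable names are illustrative, not from the paper.

```python
import numpy as np

def horner_correct(bht, dt_shut, t_circ):
    """Horner-plot correction of bottom-hole temperatures.

    bht     : (n,) temperatures measured at one depth (two or more)
    dt_shut : (n,) shut-in times since mud circulation stopped (hours)
    t_circ  : duration of the circulation disturbance (hours)
    Returns the extrapolated equilibrium formation temperature.
    """
    # T(dt) = T_inf + m * ln((t_circ + dt) / dt). As dt grows, the log
    # term vanishes, so the intercept of a straight-line fit in Horner
    # time is the undisturbed temperature T_inf.
    x = np.log((t_circ + dt_shut) / dt_shut)
    m, t_inf = np.polyfit(x, bht, 1)
    return t_inf
```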

