Signal-to-noise estimates of time-reverse images

Geophysics ◽  
2011 ◽  
Vol 76 (2) ◽  
pp. MA1-MA10 ◽  
Author(s):  
Ben Witten ◽  
Brad Artman

Locating subsurface sources from passive seismic recordings is difficult when attempted with data that have no observable arrivals and/or a low signal-to-noise ratio (S/N). Energy can be focused at its source using time-reversal techniques. However, when a focus cannot be matched to a particular event, it can be difficult to distinguish true focusing from artifacts. Artificial focusing can arise from numerous causes, including noise contamination, acquisition geometry, and velocity model effects. We present a method that reduces the ambiguity of the results by creating an estimate of the S/N in the image domain and defining a statistical confidence threshold for features in the images. To do so, time-reverse imaging techniques are applied to both the recorded data and a noise model. In the data domain, the noise model approximates the energy of local noise sources; after imaging, it also captures the effects of acquisition geometry and the velocity model. The signal image is then divided by the noise image to produce an estimate of the image-domain S/N. The distribution of image S/N values due to purely stochastic noise provides a means by which to calculate a confidence threshold, which is used to set the minimum displayed value of the images to a statistically significant limit. Two-dimensional synthetic examples demonstrate the effectiveness of this technique under varying amounts of noise and despite challenging velocity models. Using this method, we collocate anomalous low-frequency energy, measured over oil reservoirs in Africa and Europe, with the subsurface location of the productive intervals through 2D and 3D implementations.
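The core of the method described above, dividing a time-reverse image by a noise-model image and thresholding on the stochastic S/N distribution, can be sketched numerically. Everything below is a toy stand-in (random arrays in place of real time-reverse images; the 99.9th-percentile threshold is an illustrative choice, not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for time-reverse images: a "noise" image built by
# imaging a stochastic noise model, and a "signal" image with one true focus.
noise_image = np.abs(rng.normal(size=(64, 64))) + 1e-6
signal_image = noise_image * rng.uniform(0.5, 1.5, size=(64, 64))
signal_image[32, 32] += 50.0       # the true source focus

# Image-domain S/N estimate: divide the signal image by the noise image.
snr_image = signal_image / noise_image

# Confidence threshold from the distribution of S/N values produced by
# purely stochastic noise (illustrated here with a high percentile).
threshold = np.percentile(snr_image, 99.9)
significant = snr_image >= threshold

print(significant[32, 32])  # True: the injected focus survives the threshold
```

Only pixels whose S/N exceeds the statistically derived threshold would be displayed, which is what suppresses artificial focusing in the final images.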


2018 ◽  
Author(s):  
Claudia Werner ◽  
Erik H. Saenger

Abstract. Time-reverse imaging (TRI) is evolving into a standard technique for localizing and characterizing seismic events. In recent years, TRI has been applied to a wide range of problems, from the laboratory scale through the field scale up to the global scale. TRI requires no identification of events or their onset times, so it is especially suited to localizing quasi-simultaneous events and events with a low signal-to-noise ratio. However, in contrast to more regularly applied localization methods, the prerequisites for applying TRI are not sufficiently known. To investigate how station distributions, complex velocity models, and signal-to-noise ratios affect localization quality, numerous simulations were performed using a finite-difference code to propagate elastic waves through three-dimensional models. Synthetic seismograms were reversed in time and re-inserted into the model. The time-reversed wavefield backpropagates through the model and, in theory, focuses at the source location. This focusing was visualized using imaging conditions, and artificial focusing spots were removed with an illumination map specific to the setup. Successful localizations were sorted into four categories depending on their reliability, so that individual simulation setups could be evaluated by their ability to produce reliable localizations. Optimal inter-station distances, minimum apertures, relations between array and source location, heterogeneity of inter-station distances, and the total number of stations were investigated for different source depths as well as source types. Additionally, localization quality was analysed for a complex velocity model and for a low signal-to-noise ratio. Finally, an array in Southern California was investigated for its ability to localize seismic events at specific target depths using the actual velocity model for that region, and the success rate with recorded data was estimated. Knowledge of the prerequisites for using TRI enables the estimation of success rates for a given problem. Furthermore, it reduces the time needed for adjusting stations to achieve more reliable localizations and provides a foundation for designing arrays for applying TRI.
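The backpropagation-and-focusing procedure that TRI relies on can be illustrated with a one-dimensional toy model (a scalar wave equation rather than the 3-D elastic code used in the study; grid size, wavelet, and receiver positions are arbitrary):

```python
import numpy as np

def step(u_prev, u_curr, r):
    # second-order finite-difference update for the 1-D scalar wave
    # equation with fixed (Dirichlet) ends; r = (c*dt/dx)**2
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    return u_next

n, nt, r = 401, 400, 0.25
src, rec = 200, [40, 360]          # source and receiver grid indices

# forward simulation: Ricker wavelet at the source, record at receivers
t = np.arange(nt)
arg = (np.pi * 0.05 * (t - 40)) ** 2
wavelet = (1 - 2 * arg) * np.exp(-arg)
u_prev, u_curr = np.zeros(n), np.zeros(n)
seis = np.zeros((nt, len(rec)))
for it in range(nt):
    u_next = step(u_prev, u_curr, r)
    u_next[src] += wavelet[it]
    u_prev, u_curr = u_curr, u_next
    seis[it] = u_curr[rec]

# time-reverse imaging: re-insert the reversed seismograms at the
# receivers and track the maximum absolute amplitude at every point
u_prev, u_curr = np.zeros(n), np.zeros(n)
image = np.zeros(n)
for it in range(nt):
    u_next = step(u_prev, u_curr, r)
    u_next[rec] += seis[nt - 1 - it]
    u_prev, u_curr = u_curr, u_next
    image = np.maximum(image, np.abs(u_curr))

# away from the injection points, the field focuses near the source
focus = int(np.argmax(image[60:341])) + 60
print(focus)
```

With receivers on both sides of the source, the reversed wavefields superpose constructively at the source index, which is what the imaging condition (here, the maximum absolute amplitude over time) picks out.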



Geophysics ◽  
1997 ◽  
Vol 62 (4) ◽  
pp. 1226-1237 ◽  
Author(s):  
Irina Apostoiu‐Marin ◽  
Andreas Ehinger

Prestack depth migration can be used in the velocity model estimation process if one succeeds in interpreting depth events obtained with erroneous velocity models. The interpretational difficulty arises from the fact that migration with an erroneous velocity does not yield the geologically correct reflector geometries and that individual migrated images suffer from a poor signal-to-noise ratio. Moreover, migrated events may be of considerable complexity and thus hard to identify. In this paper, we examine the influence of wrong velocity models on the output of prestack depth migration for straight-reflector and point-diffractor data in homogeneous media. To avoid obscuring migration results with artifacts (“smiles”), we use a geometrical technique for modeling and migration that yields a point-to-point map from time-domain data to depth-domain data. We find that strong deformation of migrated events may occur even for simple structures and small velocity errors. From a kinematic point of view, we compare the results of common-shot and common-offset migration, and we find that common-offset migration with erroneous velocity models yields less severe image distortion than common-shot migration. However, for any kind of migration, it is important to use the entire cube of migrated data to interpret consistently in the prestack depth-migrated domain.



Geophysics ◽  
2021 ◽  
pp. 1-54
Author(s):  
Milad Bader ◽  
Robert G. Clapp ◽  
Biondo Biondi

Low-frequency data below 5 Hz are essential to the convergence of full-waveform inversion towards a useful solution. They help build the low-wavenumber components of the velocity model and reduce the risk of cycle skipping. In marine environments, low-frequency data are characterized by a low signal-to-noise ratio and can lead to erroneous models when inverted, especially if the noise contains coherent components. Often, field data are high-pass filtered before any processing step, sacrificing weak but essential signal for full-waveform inversion. We propose to denoise the low-frequency data using prediction-error filters estimated from a high-frequency component with a high signal-to-noise ratio. The constructed filter captures the multi-dimensional spectrum of the high-frequency signal. We expand the filter's axes in the time-space domain to compress its spectrum towards low frequencies and wavenumbers. The expanded filter becomes a predictor of the target low-frequency signal, and we incorporate it in a minimization scheme to attenuate noise. To account for data non-stationarity while retaining the simplicity of stationary filters, we divide the data into non-overlapping patches and linearly interpolate stationary filters at each data sample. We apply our method to synthetic stationary and non-stationary data, and we show that it improves full-waveform inversion results initialized at 2.5 Hz using the Marmousi model. We also demonstrate that the denoising attenuates non-stationary shear energy recorded by the vertical component of ocean-bottom nodes.
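A one-dimensional sketch of the key idea, estimating a prediction-error filter (PEF) on a clean high-frequency component, stretching its lags by an integer factor, and using the stretched filter as a regularizer, might look as follows (signal frequencies, filter length, and the weight lam are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 400, 4                      # samples; frequency expansion factor
t = np.arange(n)
f_lo = 0.02                        # target low frequency (cycles/sample)
sig_hi = np.sin(2 * np.pi * k * f_lo * t)    # clean high-frequency band
sig_lo = np.sin(2 * np.pi * f_lo * t)        # weak low-frequency signal
data = sig_lo + 0.5 * rng.normal(size=n)     # noisy low-frequency record

# estimate a 3-tap prediction-error filter [1, -a1, -a2] by least squares
# from the clean high-frequency band
X = np.column_stack([sig_hi[1:-1], sig_hi[:-2]])
a = np.linalg.lstsq(X, sig_hi[2:], rcond=None)[0]

# expand the filter's time axis by k: its spectral notch moves to f_hi / k,
# so the expanded filter annihilates (predicts) the low-frequency signal
pef = np.zeros(2 * k + 1)
pef[0], pef[k], pef[2 * k] = 1.0, -a[0], -a[1]

# denoise: minimize ||m - data||^2 + lam * ||pef * m||^2 (Tikhonov form)
rows = n - len(pef) + 1
P = np.zeros((rows, n))
for i in range(rows):
    P[i, i:i + len(pef)] = pef[::-1]
lam = 10.0
m = np.linalg.solve(np.eye(n) + lam * P.T @ P, data)

err_before = np.linalg.norm(data - sig_lo)
err_after = np.linalg.norm(m - sig_lo)
print(err_after < err_before)   # True: the expanded filter attenuates noise
```

Because the penalty term vanishes only where the expanded filter's response is zero, the low-frequency signal passes through while broadband noise is suppressed; the paper's interpolated patch-wise filters extend this stationary 1-D picture to non-stationary, multi-dimensional data.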



2012 ◽  
Vol 108 (10) ◽  
pp. 2837-2845 ◽  
Author(s):  
Go Ashida ◽  
Kazuo Funabiki ◽  
Paula T. Kuokkanen ◽  
Richard Kempter ◽  
Catherine E. Carr

Owls use interaural time differences (ITDs) to locate a sound source. They compute ITD in a specialized neural circuit that consists of axonal delay lines from the cochlear nucleus magnocellularis (NM) and coincidence detectors in the nucleus laminaris (NL). Recent physiological recordings have shown that tonal stimuli induce oscillatory membrane potentials in NL neurons (Funabiki K, Ashida G, Konishi M. J Neurosci 31: 15245–15256, 2011). The amplitude of these oscillations varies with ITD and is strongly correlated to the firing rate. The oscillation, termed the sound analog potential, has the same frequency as the stimulus tone and is presumed to originate from phase-locked synaptic inputs from NM fibers. To investigate how these oscillatory membrane potentials are generated, we applied recently developed signal-to-noise ratio (SNR) analysis techniques (Kuokkanen PT, Wagner H, Ashida G, Carr CE, Kempter R. J Neurophysiol 104: 2274–2290, 2010) to the intracellular waveforms obtained in vivo. Our theoretical prediction of the band-limited SNRs agreed with experimental data for mid- to high-frequency (>2 kHz) NL neurons. For low-frequency (≤2 kHz) NL neurons, however, measured SNRs were lower than theoretical predictions. These results suggest that the number of independent NM fibers converging onto each NL neuron and/or the population-averaged degree of phase-locking of the NM fibers could be significantly smaller in the low-frequency NL region than estimated for higher best-frequency NL.
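The notion of an SNR for a stimulus-locked oscillation can be illustrated with a toy trace: cycle-averaging separates the phase-locked component from the residual, and the ratio of their powers estimates the SNR (tone frequency, amplitudes, and cycle counts below are arbitrary, not physiological values):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f0, n_cycles = 50_000, 2_000, 400   # sample rate (Hz), tone (Hz), cycles
spc = fs // f0                          # samples per stimulus cycle
t = np.arange(n_cycles * spc) / fs

# hypothetical intracellular trace: stimulus-locked oscillation + noise
oscillation = 0.8 * np.sin(2 * np.pi * f0 * t)
trace = oscillation + 1.0 * rng.normal(size=t.size)

# cycle-average to estimate the phase-locked ("sound analog") component
cycles = trace.reshape(n_cycles, spc)
mean_cycle = cycles.mean(axis=0)

signal_power = np.var(mean_cycle)        # power of the locked component
noise_power = np.var(cycles - mean_cycle)  # power of the residual

snr = signal_power / noise_power
print(round(snr, 2))  # close to the true power ratio 0.8**2 / 2 = 0.32
```

A lower measured SNR than predicted, as reported for the low-frequency NL neurons, would correspond here to a weaker or less coherent phase-locked component than the input model assumes.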



2018 ◽  
Author(s):  
Meyer Gabriel ◽  
Caponcy Julien ◽  
Paul A. Salin ◽  
Comte Jean-Christophe

Abstract. Local field potential (LFP) recording is a very useful electrophysiological method for studying brain processes. However, the method is criticized for recording low-frequency activity over a large area of extracellular space, potentially contaminated by distal activity. Here, we theoretically and experimentally compare ground-referenced recordings (RR) with differential recordings (DR), analyzing electrical activity in the rat cortex with the two methods. Compared with RR, DR reveals the importance of local phasic oscillatory activities and their coherence between cortical areas. Finally, we show that DR provides a more faithful assessment of functional connectivity, owing to an increased signal-to-noise ratio, and of the delay in the propagation of information between two cortical structures.
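The advantage of differential over referenced recordings can be seen in a toy simulation: subtracting two nearby channels cancels the shared distal component while preserving activity local to one site (all amplitudes and frequencies below are illustrative, not measured values):

```python
import numpy as np

rng = np.random.default_rng(5)
fs, dur = 1000, 10                     # sample rate (Hz), duration (s)
t = np.arange(fs * dur) / fs

local = 5.0 * np.sin(2 * np.pi * 8 * t)     # oscillation local to site A
distal = 20.0 * np.sin(2 * np.pi * 2 * t)   # distal activity, common mode
noise_a = rng.normal(size=t.size)
noise_b = rng.normal(size=t.size)

rr_a = local + distal + noise_a        # ground-referenced recording at A
rr_b = distal + noise_b                # ground-referenced recording at B
dr = rr_a - rr_b                       # differential recording A - B

def snr(x):
    # ratio of local-signal power to residual (everything else) power
    return np.var(local) / np.var(x - local)

print(snr(dr) > snr(rr_a))   # True: DR removes the shared distal component
```

The differential channel trades two independent noise terms for the removal of the much larger common-mode contamination, which is the mechanism behind the SNR gain reported above.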



2016 ◽  
Vol 7 (3) ◽  
pp. 1-9 ◽  
Author(s):  
Sahar A. El-Rahman Ismail ◽  
Dalal Al Makhdhub ◽  
Amal A. Al Qahtani ◽  
Ghadah A. Al Shabanat ◽  
Nouf M. Omair ◽  
...  

We live in an information era where sensitive information extracted by data mining systems is vulnerable to exploitation. Privacy-preserving data mining aims to prevent the discovery of such sensitive information. Information hiding systems provide excellent privacy and confidentiality: confidential communication over public channels can be secured using steganography, whose techniques hide the payload's existence within an appropriate multimedia carrier. This paper studies steganography techniques in the spatial and frequency domains, and then analyzes the performance of Discrete Cosine Transform (DCT)-based steganography using low-frequency and middle-frequency coefficients, comparing them in terms of Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE). The experimental results show that embedding in the middle frequencies yields the larger message capacity and the better performance.
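The two evaluation metrics are standard; a minimal implementation for 8-bit images, applied here to a synthetic cover/stego pair rather than to DCT-embedded images, is:

```python
import numpy as np

def mse(cover, stego):
    # mean squared error between cover and stego images
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(cover, stego, max_val=255.0):
    # peak signal-to-noise ratio in dB for 8-bit images
    m = mse(cover, stego)
    return float("inf") if m == 0 else 10 * np.log10(max_val ** 2 / m)

# toy 8-bit "cover" and a stego version with +/-1 LSB-style perturbations
rng = np.random.default_rng(3)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = cover.astype(np.int16) + rng.integers(-1, 2, size=cover.shape)
stego = np.clip(stego, 0, 255).astype(np.uint8)

print(round(psnr(cover, stego), 1))
```

Higher PSNR (lower MSE) means the stego image is closer to the cover, i.e. the embedding is less perceptible; capacity comparisons then ask how much payload can be hidden before PSNR degrades.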



Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. R63-R75 ◽  
Author(s):  
Gregory Ely ◽  
Alison Malcolm ◽  
Oleg V. Poliannikov

Seismic imaging is conventionally performed using noisy data and a presumably inexact velocity model. Uncertainties in the input parameters propagate directly into the final image and therefore into any quantity of interest, or qualitative interpretation, obtained from the image. We considered the problem of uncertainty quantification in velocity building and seismic imaging using Bayesian inference. Using a reduced velocity model, a fast field expansion method for simulating recorded wavefields, and the adaptive Metropolis-Hastings algorithm, we efficiently quantify velocity model uncertainty by generating multiple models consistent with low-frequency full-waveform data. A second application of Bayesian inversion to any seismic reflections present in the recorded data reconstructs the corresponding structures' positions along with their associated uncertainties. Our analysis complements rather than replaces traditional imaging because it allows us to assess the reliability of visible image features and to take that reliability into account in subsequent interpretations.
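A stripped-down version of the sampling machinery, random-walk Metropolis-Hastings over a single velocity parameter with a toy traveltime forward model (not the reduced model or field expansion simulator of the paper; all values are hypothetical), is:

```python
import numpy as np

rng = np.random.default_rng(4)

# toy forward model: two-way traveltime through a 1 km layer
depth = 1000.0                      # m
def traveltime(v):
    return 2 * depth / v            # s

v_true, sigma = 2500.0, 0.01        # true velocity (m/s), noise std (s)
observed = traveltime(v_true) + sigma * rng.normal()

def log_post(v):
    # flat prior on [1500, 4000] m/s, Gaussian data likelihood
    if not 1500.0 <= v <= 4000.0:
        return -np.inf
    return -0.5 * ((observed - traveltime(v)) / sigma) ** 2

# random-walk Metropolis-Hastings over the velocity parameter
samples, v = [], 2000.0
lp = log_post(v)
for _ in range(20_000):
    prop = v + 50.0 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        v, lp = prop, lp_prop
    samples.append(v)
post = np.array(samples[5000:])     # discard burn-in

print(round(post.mean(), -1))       # posterior mean near the true velocity
```

The ensemble of retained samples plays the role of the paper's multiple velocity models: the spread of the samples, not just their mean, is what quantifies the uncertainty carried into the image.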



Wave Motion ◽  
2018 ◽  
Vol 79 ◽  
pp. 23-43 ◽  
Author(s):  
I. Petromichelakis ◽  
C. Tsogka ◽  
C.G. Panagiotopoulos


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5704
Author(s):  
Zhenhu Jin ◽  
Yupeng Wang ◽  
Kosuke Fujiwara ◽  
Mikihiko Oogane ◽  
Yasuo Ando

Thanks to their high magnetoresistance and integration capability, magnetic tunnel junction (MTJ)-based magnetoresistive sensors are widely utilized to detect weak, low-frequency magnetic fields in a variety of applications. Low detectivity in MTJs is necessary to obtain a high signal-to-noise ratio when detecting small variations in magnetic fields. We fabricated serial MTJ-based sensors with various junction areas and free-layer electrode aspect ratios. Our investigation showed that their sensitivity and noise power are affected by the MTJ geometry through the variation in magnetic shape anisotropy. Their MR curves demonstrated a decrease in sensitivity with an increase in the aspect ratio of the free-layer electrode, and their noise properties showed that MTJs with larger junction areas exhibit lower noise spectral density in the low-frequency region. All of the sensors were able to detect a small AC magnetic field (Hrms = 0.3 Oe at 23 Hz). Among the MTJ sensors we examined, the sensor with a square free layer and a large junction area exhibited a high signal-to-noise ratio (4792 ± 646). These results suggest that MTJ geometrical characteristics play a critical role in enhancing the detectivity of MTJ-based sensors.


