High‐resolution velocity and attenuation logs from long‐spaced sonic data

Geophysics ◽  
1991 ◽  
Vol 56 (7) ◽  
pp. 1071-1080 ◽  
Author(s):  
Mark Sams

A long-spaced sonic survey may be thought of as a special case of ray-theoretical tomographic imaging. With such an approach, estimates of borehole properties at a resolution of 6 inches (0.15 m) have been obtained by inversion, compared with a resolution of 2 ft (0.6 m) from standard borehole-compensated (BHC) techniques. The inversion scheme employs the conjugate gradient technique, which is fast and efficient. Unlike BHC, the method compensates for variable refraction angles and provides estimates of errors in the measurements. Results from synthetic data show that these factors greatly improve the imaging of the properties of a finely layered medium, though amplitude decay and coupling are less well defined than velocity and mud traveltime. Results from real data confirm the superior quality of logs from inversion. Furthermore, they indicate that measured amplitudes can be dominated by errors that cause deterioration of BHC estimates of amplitude decay and coupling.
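
A minimal sketch of the kind of conjugate-gradient least-squares inversion the abstract refers to, applied to a toy traveltime problem; the forward operator, layer geometry, and noise level below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def cgls(G, d, n_iter=50, tol=1e-12):
    """Conjugate-gradient solution of the least-squares problem min ||G m - d||^2."""
    m = np.zeros(G.shape[1])
    s = G.T @ (d - G @ m)          # gradient of the data misfit
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = G @ p
        alpha = gamma / (q @ q)
        m += alpha * p
        s -= alpha * (G.T @ q)
        gamma_new = s @ s
        if gamma_new < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# Illustrative forward model: each observed traveltime sums slowness times the
# ray-path length in every 6-inch (0.15 m) layer between transmitter and receivers.
rng = np.random.default_rng(0)
n_obs, n_layers = 200, 40
G = rng.uniform(0.0, 0.15, size=(n_obs, n_layers))            # path lengths (m)
true_slowness = 1.0 / rng.uniform(2000.0, 4000.0, n_layers)   # s/m
d = G @ true_slowness + rng.normal(0.0, 1e-6, n_obs)          # noisy traveltimes (s)
estimate = cgls(G, d)
print("max slowness error (s/m):", np.abs(estimate - true_slowness).max())
```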

2021 ◽  
Vol 15 (4) ◽  
pp. 1-20
Author(s):  
Georg Steinbuss ◽  
Klemens Böhm

Benchmarking unsupervised outlier detection is difficult. Outliers are rare, and existing benchmark data contains outliers with varied and unknown characteristics. Fully synthetic data usually consists of outliers and regular instances with clear characteristics and thus, in principle, allows for a more meaningful evaluation of detection methods. Nonetheless, there have been only a few attempts to include synthetic data in benchmarks for outlier detection. This might be due to the imprecise notion of outliers or to the difficulty of achieving good coverage of different domains with synthetic data. In this work, we propose a generic process for the generation of datasets for such benchmarking. The core idea is to reconstruct regular instances from existing real-world benchmark data while generating outliers so that they exhibit insightful characteristics. We propose and describe a generic process for the benchmarking of unsupervised outlier detection along these lines. We then describe three instantiations of this generic process that generate outliers with specific characteristics, such as local outliers. To validate our process, we perform a benchmark with state-of-the-art detection methods and carry out experiments to study the quality of data reconstructed in this way. Next to showcasing the workflow, this confirms the usefulness of our proposed process. In particular, our process yields regular instances close to the ones from real data. Summing up, we propose and validate a new and practical process for the benchmarking of unsupervised outlier detection.
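
As a rough, hedged illustration of the core idea (reconstruct regular instances from real data, then generate outliers with controlled characteristics), the sketch below fits a mixture model to real data, samples reconstructed regulars from it, and samples "local" outliers from an inflated-covariance copy; the model choice and all parameters are assumptions, not one of the authors' instantiations.

```python
import copy
import numpy as np
from sklearn.mixture import GaussianMixture

def make_benchmark(real_X, n_regular=1000, n_outlier=50, spread=5.0, seed=0):
    """Reconstruct regular instances from real data and add generated local outliers."""
    gmm = GaussianMixture(n_components=3, random_state=seed).fit(real_X)
    regular, _ = gmm.sample(n_regular)                 # reconstructed regular instances
    out_gmm = copy.deepcopy(gmm)
    out_gmm.covariances_ = gmm.covariances_ * spread   # inflate spread around the regular structure
    outliers, _ = out_gmm.sample(n_outlier)            # "local" outliers near, but off, the regulars
    X = np.vstack([regular, outliers])
    y = np.r_[np.zeros(n_regular), np.ones(n_outlier)]  # 1 marks an outlier
    return X, y

# Stand-in for a real benchmark dataset; any (n_samples, n_features) array works.
rng = np.random.default_rng(1)
real_X = np.vstack([rng.normal(0, 1, (250, 4)), rng.normal(4, 1, (250, 4))])
X, y = make_benchmark(real_X)
```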


Author(s):  
Hoon Kim ◽  
Kangwook Lee ◽  
Gyeongjo Hwang ◽  
Changho Suh

Developing a computer vision-based algorithm for identifying dangerous vehicles requires a large amount of labeled accident data, which is difficult to collect in the real world. To tackle this challenge, we first develop a synthetic data generator built on top of a driving simulator. We then observe that the synthetic labels generated from simulation results are very noisy, resulting in poor classification performance. In order to improve the quality of synthetic labels, we propose a new label adaptation technique that first extracts internal states of vehicles from the underlying driving simulator, and then refines labels by predicting future paths of vehicles based on a well-studied motion model. Via real-data experiments, we show that our dangerous vehicle classifier can reduce the missed detection rate by at least 18.5% compared with classifiers trained with real data when the time-to-collision is between 1.6 s and 1.8 s.
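
A hedged sketch of the label-refinement idea described above: roll vehicle states forward with a simple constant-velocity motion model (a stand-in for whatever motion model the paper actually uses) and relabel a pair as dangerous if the predicted paths come within a collision radius inside the prediction horizon; the state format, thresholds, and horizon are illustrative assumptions.

```python
import numpy as np

def refine_label(ego_state, other_state, horizon=2.0, dt=0.1, radius=2.0):
    """States are (x, y, vx, vy) tuples extracted from the simulator."""
    ego = np.asarray(ego_state, float)
    oth = np.asarray(other_state, float)
    for t in np.arange(0.0, horizon, dt):
        ego_pos = ego[:2] + ego[2:] * t        # constant-velocity prediction
        oth_pos = oth[:2] + oth[2:] * t
        if np.linalg.norm(ego_pos - oth_pos) < radius:
            return 1, t                        # dangerous, with predicted time-to-collision
    return 0, None                             # not dangerous within the horizon

# Oncoming vehicle 33 m ahead closing at 20 m/s: flagged dangerous at roughly t = 1.6 s.
print(refine_label((0.0, 0.0, 15.0, 0.0), (33.0, 0.0, -5.0, 0.0)))
```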


Author(s):  
E. Colin-Koeniguer ◽  
N. Trouve ◽  
Y. Yamaguchi ◽  
Y. Huang ◽  
L. Ferro-Famil ◽  
...  

Abstract. The experimental results reported in this chapter review the application of (high-resolution) Synthetic Aperture Radar (SAR) data to extract valuable information for monitoring urban environments in space and time. Full polarimetry is particularly useful for classification, as it allows the detection of built-up areas and the discrimination among their different types by exploiting the variation of the polarimetric backscatter with the orientation, shape, and distribution of buildings and houses, and with street patterns. On the other hand, polarimetric SAR data acquired in an interferometric configuration can be combined for 3-D rendering through coherence optimization techniques. If multiple baselines are available, direct tomographic imaging can be employed, and polarimetry both increases separation performance and characterizes the response of each scatterer. Finally, polarimetry also finds application in differential interferometry for subsidence monitoring, for instance by improving both the number of resolution cells in which the estimate is reliable and the quality of these estimates.
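
As a small, hedged illustration of what full-polarimetric data offers for urban classification, the sketch below computes a standard Pauli decomposition of the scattering-matrix channels (odd-bounce, even-bounce, and cross-pol components); it is generic polarimetric preprocessing, not code from the chapter, and the toy data are placeholders.

```python
import numpy as np

def pauli_components(S_hh, S_hv, S_vv):
    """Return |odd-bounce|, |even-bounce|, |cross-pol| channels from the S-matrix terms."""
    surface = np.abs(S_hh + S_vv) / np.sqrt(2)      # odd-bounce (e.g., ground, roofs)
    double  = np.abs(S_hh - S_vv) / np.sqrt(2)      # even-bounce (wall-ground corners)
    volume  = np.sqrt(2) * np.abs(S_hv)             # cross-pol (vegetation, oriented buildings)
    return surface, double, volume

# Toy complex scattering coefficients for a 2x2 pixel patch.
rng = np.random.default_rng(0)
S_hh, S_hv, S_vv = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
                    for _ in range(3))
print(pauli_components(S_hh, S_hv, S_vv))
```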


1989 ◽  
Vol 11 (1) ◽  
pp. 22-41
Author(s):  
A. Herment ◽  
J.P. Guglielmi ◽  
P. Péronneau ◽  
Ph. Dumée

Principles of high-resolution ultrasonic imaging using data acquired by compound scanning with a sector echograph are presented. The signal processing is based on both deconvolution and reflection-mode tomography. Three of the methods that can be derived from these principles are selected for their lower computation costs. Applications of these methods to synthetic data and test targets demonstrate that, with respect to 2D deconvolution, they offer a gain in computation time of more than a factor of 8, an improvement in resolution of the order of 10, and an increase in S/N ratio of the order of 4. Finally, the effects of both a limited angular acquisition window and a variable propagation speed are illustrated.
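
To make the deconvolution ingredient mentioned above concrete, here is a hedged sketch of a basic 1-D Wiener deconvolution applied to a toy pulse-echo trace; the compound-scan methods in the paper are more elaborate, and the pulse, trace, and noise level below are assumptions for illustration.

```python
import numpy as np

def wiener_deconvolve(trace, pulse, noise_to_signal=0.01):
    """Frequency-domain Wiener deconvolution of a known pulse from a recorded trace."""
    n = len(trace)
    P = np.fft.fft(pulse, n)
    T = np.fft.fft(trace, n)
    H = np.conj(P) / (np.abs(P) ** 2 + noise_to_signal)   # regularized inverse filter
    return np.real(np.fft.ifft(H * T))

# Toy trace: two closely spaced reflectors blurred by a Gaussian pulse plus noise.
rng = np.random.default_rng(0)
reflectivity = np.zeros(256); reflectivity[[80, 90]] = [1.0, -0.6]
pulse = np.exp(-0.5 * (np.arange(-16, 17) / 4.0) ** 2)
trace = np.convolve(reflectivity, pulse)[:256] + 0.01 * rng.normal(size=256)
estimate = wiener_deconvolve(trace, pulse)                 # spikes reappear near 80 and 90
```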


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. S11-S18 ◽  
Author(s):  
Juefu Wang ◽  
Mauricio D. Sacchi

We propose a new scheme for high-resolution amplitude-variation-with-ray-parameter (AVP) imaging that uses nonquadratic regularization. We pose migration as an inverse problem and propose a cost function that uses a priori information about common-image gathers (CIGs). In particular, we introduce two regularization constraints: smoothness along the offset-ray-parameter axis and sparseness in depth. The two-step regularization yields high-resolution CIGs with robust estimates of AVP. We use an iterative reweighted least-squares conjugate gradient algorithm to minimize the cost function of the problem. We test the algorithm with synthetic data (a wedge model and the Marmousi data set) and a real data set (Erskine area, Alberta). Tests show our method helps to enhance the vertical resolution of CIGs and improves amplitude accuracy along the ray-parameter direction.
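
A minimal, hedged sketch of the iteratively reweighted least-squares conjugate-gradient style of solver named above, applied to a generic sparse inverse problem; the operator, the data, and the restriction to the sparseness term only (the smoothness constraint along the ray-parameter axis is omitted) are simplifying assumptions, not the paper's migration operator.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def irls_sparse(A, d, lam=0.1, n_outer=5, eps=1e-3):
    """Approximately minimize ||A m - d||^2 + lam * ||m||_1 via IRLS with a CG inner solver."""
    m = np.zeros(A.shape[1])
    for _ in range(n_outer):
        w2 = 1.0 / (np.abs(m) + eps)          # reweighting that promotes sparseness
        normal = LinearOperator(
            (A.shape[1], A.shape[1]),
            matvec=lambda x, w2=w2: A.T @ (A @ x) + lam * w2 * x)
        m, _ = cg(normal, A.T @ d, x0=m, maxiter=50)   # re-solve the reweighted normal equations
    return m

# Toy sparse model recovered from noisy data through an illustrative operator A.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 60))
true_m = np.zeros(60); true_m[[5, 30]] = [1.0, -2.0]
d = A @ true_m + 0.01 * rng.normal(size=100)
m_hat = irls_sparse(A, d)
print(np.round(m_hat[[5, 30]], 2))
```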


Author(s):  
Cheng-Han (Lance) Tsai ◽  
Jen-Yuan (James) Chang

Abstract. Artificial Intelligence (AI) has been widely used in different domains such as self-driving, automated optical inspection, and detection of object locations for robotic pick-and-place operations. Although the current results of using AI in these fields are good, the biggest bottleneck for AI is the need for a vast amount of data and labeling of the corresponding answers for sufficient training. Evidently, these efforts still require significant manpower. If the quality of the labeling is unstable, the trained AI model becomes unstable and, as a consequence, so do the results. To resolve this issue, an auto-annotation system is proposed in this paper with methods including (1) highly realistic model generation with real texture, (2) a domain randomization algorithm in the simulator to automatically generate abundant and diverse images, and (3) a visibility tracking algorithm to calculate the occlusion effect objects cause on each other for different picking-strategy labels. From our experiments, we show that 10,000 images can be generated per hour, each containing multiple objects and each object being labeled in different classes based on its visibility. Instance segmentation AI models can also be trained with these methods to assess the performance gap between training on synthetic data and testing on real data, indicating that a mean average precision (mAP) of 70% can be reached.
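
A hedged sketch of one way a visibility-tracking measure like the one described in item (3) could be computed: compare each object's unoccluded mask with the final rendered instance map to obtain the visible fraction per object; the mask format and the toy scene are assumptions, not the paper's simulator output.

```python
import numpy as np

def visibility_ratios(full_masks, rendered_ids):
    """full_masks: dict id -> bool array of where the object would appear if alone.
    rendered_ids: int array giving the front-most object id per pixel (0 = background)."""
    ratios = {}
    for obj_id, mask in full_masks.items():
        total = mask.sum()
        visible = np.logical_and(mask, rendered_ids == obj_id).sum()
        ratios[obj_id] = visible / total if total else 0.0
    return ratios

# Toy scene: object 2 is drawn on top of object 1 and partially covers it.
rendered = np.zeros((4, 4), int)
rendered[1:3, 0:3] = 1
rendered[1:3, 2:4] = 2
full = {1: np.zeros((4, 4), bool), 2: np.zeros((4, 4), bool)}
full[1][1:3, 0:3] = True
full[2][1:3, 2:4] = True
print(visibility_ratios(full, rendered))   # object 1 ~0.67 visible, object 2 fully visible
```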


2019 ◽  
Vol 37 (2) ◽  
Author(s):  
Misael Possidonio de Souza ◽  
Michelangelo Gomes da Silva ◽  
Milton J. Porsani

ABSTRACT. The Solimões Basin, Brazil, will remain the subject of many discussions in the future due to the success of oil exploration in the 1970s, with the discovery of oil and gas fields. The geology of this basin is characterized by significantly thick layers of igneous rock, the diabase sills, which can be seen in any stacked section as reflectors with strong amplitude but low frequency. The high contrast of seismic impedance between the sedimentary rock layers and the diabase sills generates multiple reflections and reverberations that can lead to wrong seismic interpretation of stacked sections. In this work, to improve the quality of the stacked sections, we propose a seismic processing flow for land data that adds multiple-filtering steps through Multichannel Predictive Deconvolution and the Parabolic Radon Transform. This study was first performed on synthetic data to test the methodology, and then on real data provided by the Agência Nacional de Petróleo, Gás Natural e Biocombustíveis (ANP). The conventional processing flowchart was applied using commercial processing software, such as SeisSpace/ProMAX, together with Fortran 90 codes available at the Centro de Pesquisa em Geofísica e Geologia, Universidade Federal da Bahia (CPGG/UFBA). The results obtained with this methodology were satisfactory, with visible improvements in the quality of the stacked seismic sections after attenuation of the unwanted noise. Keywords: multiple attenuation, seismic processing, seismic reflection.
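
As a hedged illustration of one of the filtering steps named above, the sketch below implements a basic single-channel predictive deconvolution: design a prediction filter from the trace autocorrelation and subtract the predictable, multiple-related energy. The paper uses a multichannel variant, and the gap, filter length, and toy trace here are illustrative assumptions.

```python
import numpy as np

def predictive_decon(trace, gap, length, prewhite=0.001):
    """Remove energy that is predictable `gap` samples ahead (e.g., short-period multiples)."""
    n = len(trace)
    ac = np.correlate(trace, trace, mode="full")[n - 1:]          # autocorrelation, lags 0..n-1
    R = np.array([[ac[abs(i - j)] for j in range(length)] for i in range(length)])
    R += prewhite * ac[0] * np.eye(length)                        # prewhitening for stability
    g = ac[gap:gap + length]
    f = np.linalg.solve(R, g)                                     # prediction filter
    pred = np.convolve(trace, f)[:n]
    out = trace.copy()
    out[gap:] -= pred[:n - gap]                                   # keep the prediction error
    return out

# Toy trace: a primary at sample 50 followed by period-50 "multiples".
rng = np.random.default_rng(0)
spikes = np.zeros(500); spikes[[50, 100, 150]] = [1.0, -0.5, 0.25]
trace = spikes + 0.01 * rng.normal(size=500)
clean = predictive_decon(trace, gap=45, length=20)
```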


Author(s):  
Sumeet Katariya ◽  
Branislav Kveton ◽  
Csaba Szepesvári ◽  
Claire Vernade ◽  
Zheng Wen

The probability that a user will click a search result depends both on its relevance and on its position on the results page. The position-based model explains this behavior by ascribing to every item an attraction probability and to every position an examination probability. To be clicked, a result must be both attractive and examined. The probabilities of item-position pairs being clicked thus form the entries of a rank-1 matrix. We propose the learning problem of a Bernoulli rank-1 bandit where, at each step, the learning agent chooses a pair of row and column arms and receives the product of their Bernoulli-distributed values as a reward. This is a special case of the stochastic rank-1 bandit problem considered in recent work that proposed an elimination-based algorithm, Rank1Elim, and showed that Rank1Elim's regret scales linearly with the number of rows and columns on "benign" instances. These are the instances where the minimum of the average row and column rewards, mu, is bounded away from zero. The issue with Rank1Elim is that it fails to be competitive with straightforward bandit strategies as mu tends to 0. In this paper we propose Rank1ElimKL, which replaces the crude confidence intervals of Rank1Elim with confidence intervals based on Kullback-Leibler (KL) divergences. With the help of a novel result concerning the scaling of KL divergences, we prove that with this change our algorithm is competitive no matter the value of mu. Experiments with synthetic data confirm that on benign instances the performance of Rank1ElimKL is significantly better than that of Rank1Elim. Similarly, experiments with models derived from real data confirm that the improvements are significant across the board, regardless of whether the data is benign or not.
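
A hedged sketch of the KL-divergence-based confidence bound that distinguishes Rank1ElimKL from Rank1Elim: the upper confidence bound for a Bernoulli arm is the largest mean whose KL divergence from the empirical mean stays within an exploration budget, found here by bisection. The budget used in the example is an illustrative choice, not the one from the paper's analysis.

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q), clipped for numerical safety."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(mean, pulls, budget, iters=30):
    """Largest q >= mean with pulls * KL(mean, q) <= budget, via bisection."""
    lo, hi = mean, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if pulls * kl_bernoulli(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# Example: 100 pulls with empirical mean 0.05 and a logarithmic exploration budget.
print(kl_ucb(0.05, 100, math.log(1000)))
```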


Author(s):  
P.L. Nikolaev

This article deals with a method for the binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees, so that the image must be rotated to read it. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary to first determine the orientation of the text before recognizing it directly. The article proposes a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
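
Since the abstract does not give the network architecture, the sketch below is only a plausible minimal convolutional classifier for the two-class (upright vs. rotated 180°) task; the layer sizes and input shape are assumptions for illustration.

```python
import tensorflow as tf

# Small CNN: grayscale text crop in, probability that the text is rotated 180° out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 256, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Training would use synthetic text crops with orientation labels, e.g.:
# model.fit(x_synthetic, y_synthetic, epochs=10, validation_split=0.1)
```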


1996 ◽  
Vol 33 (9) ◽  
pp. 101-108 ◽  
Author(s):  
Agnès Saget ◽  
Ghassan Chebbo ◽  
Jean-Luc Bertrand-Krajewski

The first flush phenomenon of urban wet weather discharges is currently a controversial subject. Scientists do not agree on its reality, nor on its influence on the sizing of treatment works. These disagreements mainly result from the unclear definition of the phenomenon. The objective of this article is first to provide a simple and clear definition of the first flush, and then to apply it to real data in order to obtain results about its frequency of occurrence. The data originate from the French database on the quality of urban wet weather discharges. We use 80 events from 7 separately sewered basins and 117 events from 7 combined sewered basins. The main result is that the first flush phenomenon is very rare, in any case too rare to be used to elaborate a treatment strategy against pollution generated by urban wet weather discharges.
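
Since the abstract does not state the definition it adopts, the sketch below only illustrates the usual starting point for first-flush criteria: the cumulative pollutant-mass versus cumulative-volume curve for a single event, tested here against the common "80% of the mass in the first 30% of the volume" convention, which is an illustrative assumption rather than the authors' definition.

```python
import numpy as np

def first_flush(flow, concentration, mass_frac=0.8, vol_frac=0.3):
    """flow and concentration are event time series sampled at equal intervals."""
    mass = flow * concentration
    cum_vol = np.cumsum(flow) / np.sum(flow)
    cum_mass = np.cumsum(mass) / np.sum(mass)
    mass_at_vol = np.interp(vol_frac, cum_vol, cum_mass)   # mass fraction delivered by 30% of volume
    return mass_at_vol >= mass_frac, mass_at_vol

# Event with pollutant concentrated early in the hydrograph.
flow = np.array([2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0, 1.0])
conc = np.array([50.0, 40.0, 20.0, 10.0, 5.0, 5.0, 2.0, 2.0])
print(first_flush(flow, conc))
```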

