Investigating the resolution of IP arrays using inverse theory

Geophysics ◽  
1995 ◽  
Vol 60 (5) ◽  
pp. 1326-1341 ◽  
Author(s):  
Les P. Beard ◽  
Alan C. Tripp

Using a fast 2-D inverse solution, we examined the resolution of different resistivity/IP arrays using noisy synthetic data subject to minimum structure inversion. We compared estimated models from inversions of data from the dipole‐dipole, pole‐dipole, and pole‐pole arrays over (1) a dipping, polarizable conductor, (2) two proximate conductive, polarizable bodies, (3) a polarizable conductor beneath conductive overburden, and (4) a thin, resistive, polarizable dike. The estimated resistivity and polarizability models obtained from inversion of the dipole‐dipole data were usually similar to the pole‐dipole estimated models. In the cases examined, the estimated models from the pole‐pole data were more poorly resolved than the models from the other arrays. If pole‐pole resistivity data contain even a fraction of a percent of Gaussian noise, the transformation of such data through superposition to equivalent data of other array types may be considerably distorted, and significant information can be lost using the pole‐pole array. Though the gradient array is reputed to be more sensitive to dip than other arrays, it evidently contains little information on dip that does not also appear in dipole‐dipole data, for joint inversion of dipole‐dipole and gradient array data yields models virtually identical to those obtained from inversion of dipole‐dipole data alone.
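The noise-amplification claim can be made concrete with a small numerical sketch. The example below is hypothetical (not from the paper): it assumes a homogeneous half-space, an invented electrode geometry, and 0.5% Gaussian noise on each pole-pole potential reading, and shows how superposing four such readings into an equivalent dipole-dipole voltage, a small difference of large numbers, inflates the relative noise.

```python
# Hypothetical sketch: noise amplification when noisy pole-pole readings are
# superposed into an equivalent dipole-dipole voltage (homogeneous half-space).
import numpy as np

rho, I = 100.0, 1.0                                  # assumed resistivity, current
U = lambda r: rho * I / (2 * np.pi * r)              # half-space pole-pole potential

A, B, M, N = 0.0, 10.0, 50.0, 60.0                   # electrode positions (m), invented

rng = np.random.default_rng(0)
noisy = lambda u: u * (1 + 0.005 * rng.standard_normal(10000))  # 0.5% Gaussian noise

# Dipole-dipole voltage synthesized by superposition of four pole-pole readings
u_dd = (noisy(U(M - A)) - noisy(U(N - A))
        - noisy(U(M - B)) + noisy(U(N - B)))
u_true = U(M - A) - U(N - A) - U(M - B) + U(N - B)

print(f"relative noise after superposition: {100 * u_dd.std() / abs(u_true):.1f}%")
```

With these invented spacings, 0.5% noise on the individual readings becomes noise on the order of 10% on the synthesized dipole-dipole value, consistent with the loss of information the abstract describes.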

Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 540
Author(s):  
Soodabeh Asadi ◽  
Janez Povh

This article applies the projected gradient (PG) method to the non-negative matrix factorization (NMF) problem in which one or both matrix factors must have orthonormal columns or rows. We penalize the orthonormality constraints and apply the PG method within a block coordinate descent approach: at any given time, one matrix factor is fixed while the other is updated by moving along the steepest descent direction of the penalized objective function and projecting onto the space of non-negative matrices. Our method is tested on two sets of synthetic data for various values of the penalty parameters. Its performance is compared with the well-known multiplicative update (MU) method of Ding (2006) and with a modified, globally convergent variant of the MU algorithm recently proposed by Mirzal (2014). We provide extensive numerical results, coupled with appropriate visualizations, which demonstrate that our method is very competitive and usually outperforms the other two methods.
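As a rough illustration of the scheme just described, here is a minimal sketch of penalized projected-gradient NMF with block coordinate descent. It is my own simplified variant: the step size is fixed rather than chosen by the line search a practical PG method would use, and the paper's stopping criteria are not reproduced.

```python
# Sketch: min ||X - WH||_F^2 + lam * ||W'W - I||_F^2  s.t.  W, H >= 0,
# alternating projected-gradient steps on W and H (block coordinate descent).
import numpy as np

def onmf_pg(X, rank, lam=1.0, step=1e-3, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank))
    H = rng.random((rank, X.shape[1]))
    I = np.eye(rank)
    for _ in range(iters):
        R = X - W @ H
        gW = -2 * R @ H.T + 4 * lam * W @ (W.T @ W - I)  # gradient incl. penalty
        W = np.maximum(W - step * gW, 0.0)               # project onto W >= 0
        R = X - W @ H
        gH = -2 * W.T @ R                                # plain NMF gradient in H
        H = np.maximum(H - step * gH, 0.0)               # project onto H >= 0
    return W, H

X = np.random.default_rng(1).random((50, 30))
W, H = onmf_pg(X, rank=5)
print(np.linalg.norm(X - W @ H), np.linalg.norm(W.T @ W - np.eye(5)))
```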


Geophysics ◽  
2011 ◽  
Vol 76 (4) ◽  
pp. F239-F250 ◽  
Author(s):  
Fernando A. Monteiro Santos ◽  
Hesham M. El-Kaliouby

Joint or sequential inversion of direct current resistivity (DCR) and time-domain electromagnetic (TDEM) data is commonly performed for individual soundings assuming layered earth models. DCR and TDEM have different and complementary sensitivities to resistive and conductive structures, which makes them well suited to joint inversion. Joint inversion of DCR and TDEM data has accordingly been used by several authors to reduce the ambiguities of the models calculated from each method separately. A new approach for joint inversion of these data sets, based on a laterally constrained algorithm, is presented. The method was developed for the interpretation of soundings collected along a line over a 1D or 2D geology. The inversion algorithm was tested on two synthetic data sets, as well as on field data from Saudi Arabia. The results show that the algorithm is efficient and stable in producing quasi-2D models from DCR and TDEM data acquired in relatively complex environments.
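The laterally constrained idea can be sketched as one regularized least-squares system in which a first-difference operator ties each layer parameter of neighbouring 1D models together, and the DCR and TDEM misfits enter jointly. The skeleton below is illustrative only: the Jacobians and residuals would come from the respective 1D forward-modelling codes, which are not shown, and all dimensions are invented.

```python
# Illustrative skeleton of a laterally constrained joint inversion step.
import numpy as np

n_soundings, n_layers = 10, 4
n_param = n_soundings * n_layers

# Lateral roughness operator: penalizes jumps of each layer parameter
# between neighbouring soundings along the profile.
rows = []
for s in range(n_soundings - 1):
    for l in range(n_layers):
        r = np.zeros(n_param)
        r[s * n_layers + l], r[(s + 1) * n_layers + l] = 1.0, -1.0
        rows.append(r)
L = np.array(rows)

def gauss_newton_step(J_dcr, r_dcr, J_tdem, r_tdem, m, alpha=1.0):
    """One linearized update; J_* are forward-model Jacobians, r_* residuals."""
    A = J_dcr.T @ J_dcr + J_tdem.T @ J_tdem + alpha * L.T @ L
    b = J_dcr.T @ r_dcr + J_tdem.T @ r_tdem - alpha * L.T @ (L @ m)
    return m + np.linalg.solve(A, b)
```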


Author(s):  
Bancha Luaphol ◽  
Jantima Polpinij ◽  
Manasawee Kaenampornpan

Most studies of bug reports aim to automatically identify the information needed for software bug fixing. Unfortunately, such studies typically focus on only one issue, whereas more complete and comprehensive bug fixing would be facilitated by assessing multiple issues concurrently. This is the challenge taken up in this study, which presents a method for identifying severe bug reports in a bug report repository and assembling their related bug reports to visualize the overall picture of a software problem domain. The proposed method is called “mining bug report repositories”. Two text mining techniques serve as its main mechanisms. First, classification is applied to identify severe bug reports (“bug severity classification”); then “threshold-based similarity analysis” is applied to assemble the bug reports related to each severe report. Our datasets come from three open-source projects, namely SeaMonkey, Firefox, and Core:Layout, downloaded from Bugzilla. Finally, the best models from the proposed method are selected and compared with two baseline methods. For identifying severe bug reports with the classification technique, the results show that our method improved accuracy, F1, and AUC scores over the baseline by 11.39%, 11.63%, and 19%, respectively. Meanwhile, for assembling related bug reports with the threshold-based similarity technique, the results show that our method improved precision and likelihood scores over the other baseline by 15.76% and 9.14%, respectively. This demonstrates that our proposed method may help increase the chance of fixing bugs completely.
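As a minimal illustration of the threshold-based similarity step, the sketch below uses TF-IDF vectors and cosine similarity with an assumed threshold of 0.30; the paper's actual features, preprocessing, and tuned threshold differ, and the three reports are invented.

```python
# Sketch: assemble reports related to a severe bug report by thresholded
# cosine similarity over TF-IDF vectors (features and threshold assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "Crash when opening a large HTML page",          # hypothetical severe report
    "Browser crashes while loading a big web page",
    "Typo in the preferences dialog label",
]

X = TfidfVectorizer(stop_words="english").fit_transform(reports)

severe_idx, threshold = 0, 0.30
sims = cosine_similarity(X[severe_idx], X).ravel()
related = [i for i, s in enumerate(sims) if i != severe_idx and s >= threshold]
print("reports related to the severe report:", related)
```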


2021 ◽  
Vol 503 (4) ◽  
pp. 5100-5114
Author(s):  
Sebastian Marino

ABSTRACT The dust production in debris discs by grinding collisions of planetesimals requires their orbits to be stirred. However, stirring levels remain largely unconstrained, and consequently so do the stirring mechanisms. This work shows how the sharpness of the outer edge of a disc can be used to constrain the stirring level: the sharper the edge, the lower the eccentricity dispersion must be. For a Rayleigh distribution of eccentricities (e), I find that the disc surface density near the outer edge can be parametrized as tanh[(r_max − r)/l_out], where r_max approximates the maximum semimajor axis and l_out defines the edge smoothness. If the semimajor-axis distribution has sharp edges, e_rms ≈ 1.2 l_out/r_max; if semimajor axes have diffused due to self-stirring, e_rms = 0.77 l_out/r_max. This model is fitted to Atacama Large Millimeter/submillimeter Array data of five wide discs: HD 107146, HD 92945, HD 206893, AU Mic, and HR 8799. The results show that HD 107146, HD 92945, and AU Mic have the sharpest outer edges, corresponding to e_rms values of 0.121 ± 0.05, 0.15 (+0.07, −0.05), and 0.10 ± 0.02 if their discs are self-stirred, suggesting the presence of Pluto-sized objects embedded in the discs. Although these stirring values are larger than typically assumed, the radial stirring of HD 92945 is in good agreement with its vertical stirring constrained by the disc height. HD 206893 and HR 8799, on the other hand, have smooth outer edges that are indicative of scattered discs, since both systems have massive inner companions.
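For reference, the quoted edge model and the two stirring relations can be written compactly as below; the normalization of the surface density is not given in the abstract and is left as a proportionality.

```latex
\Sigma(r) \propto \tanh\!\left(\frac{r_{\max} - r}{l_{\mathrm{out}}}\right),
\qquad
e_{\mathrm{rms}} \approx
\begin{cases}
1.2\, l_{\mathrm{out}}/r_{\max} & \text{(sharp-edged semimajor-axis distribution)}\\
0.77\, l_{\mathrm{out}}/r_{\max} & \text{(semimajor axes diffused by self-stirring)}
\end{cases}
```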


2020 ◽  
Vol 6 (10) ◽  
pp. 103
Author(s):  
Ali S. Awad

In this paper, a new method for the removal of Gaussian noise based on two types of prior information is described. The first type of prior information is internal, based on the similarities between the pixels in the noisy image; the other is external, based on the index, i.e., the pixel location in the image. The proposed method focuses on leveraging these two types of prior information to obtain tangible results. To this end, very similar patches are collected from the noisy image by sorting the image pixels in ascending order and then placing them in consecutive rows of a new two-dimensional image. A principal component analysis is then applied to the patch matrix to help remove the small noisy components. Since the restored pixels are similar or close in value to those in the clean image, it is preferable to arrange them using indices similar to those of the clean pixels. Simulation experiments show that outstanding results are achieved compared to other known methods, in terms of both visual image quality and peak signal-to-noise ratio (PSNR). Specifically, once the proper indices are used, the proposed method achieves a PSNR more than 1.5 dB higher than the other well-known methods in all the simulation experiments.
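A rough sketch of the sorting-plus-PCA pipeline might look as follows. This is a simplified reconstruction, not the authors' code: the patch length and the number of retained principal components are assumed, and the handling of leftover pixels is naive.

```python
# Sketch: sort pixels so similar values share patch rows, truncate the PCA,
# then undo the sort to restore the original pixel indices.
import numpy as np

def denoise_sorted_pca(img, patch_len=64, keep=4):
    flat = img.ravel().astype(float)
    order = np.argsort(flat)                     # index prior: sort pixels
    n = (flat.size // patch_len) * patch_len     # drop the ragged remainder
    P = flat[order[:n]].reshape(-1, patch_len)   # rows of similar pixel values

    mean = P.mean(axis=0)
    U, S, Vt = np.linalg.svd(P - mean, full_matrices=False)
    S[keep:] = 0.0                               # discard small, noisy components
    P_hat = U @ np.diag(S) @ Vt + mean

    out = flat.copy()
    out[order[:n]] = P_hat.ravel()               # undo the sort
    return out.reshape(img.shape)
```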


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 584
Author(s):  
Riccardo Rossi ◽  
Andrea Murari ◽  
Pasquale Gaudio

Determining the coupling between systems remains a topic of active research in the field of complex science. Identifying the proper causal influences in time series can already be very challenging in the trivariate case, particularly when the interactions are non-linear. In this paper, the coupling between three Lorenz systems is investigated with the help of specifically designed artificial neural networks, called time delay neural networks (TDNNs). TDNNs can learn from their previous inputs and are therefore well suited to extracting the causal relationships between time series. The performance of the TDNNs tested has consistently been very good, showing an excellent capability to identify the correct causal relationships in the absence of significant noise. The first tests on the time localization of the mutual influences and on the effects of Gaussian noise have also provided very encouraging results. Even if further assessments are necessary, networks of the proposed architecture have the potential to be a good complement to the other available techniques for the investigation of mutual influences between time series.
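The TDNN idea of learning from delayed inputs can be caricatured with lagged features and a small feed-forward network. The toy below is not the paper's architecture: it declares a coupling x → y when delayed samples of x improve the prediction of y beyond what y's own history achieves.

```python
# Toy causality probe with lagged inputs and a small neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged(a, lags):
    # column k holds a(t - k) for t = lags .. len(a) - 1
    return np.column_stack([a[lags - k: len(a) - k] for k in range(1, lags + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                    # hypothetical driver series
y = np.convolve(x, [0.0, 0.8, 0.3], mode="full")[: len(x)] \
    + 0.1 * rng.standard_normal(len(x))          # y depends on delayed x

lags = 5
Xy = lagged(y, lags)                             # y's own history
Xxy = np.hstack([Xy, lagged(x, lags)])           # plus delayed inputs from x
target = y[lags:]

for name, X in [("y history only", Xy), ("y history + delayed x", Xxy)]:
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    net.fit(X[:1500], target[:1500])
    print(name, "-> test R^2:", round(net.score(X[1500:], target[1500:]), 3))
```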


1980 ◽  
Vol 60 (3) ◽  
pp. 511-516 ◽  
Author(s):  
K. W. G. VALENTINE ◽  
D. CHANG

The map index linkages (from the CanSIS cartographic file) of seven soil maps were analyzed to find out how many map delineations represented each map unit and what proportion of the map they covered. Many map units were represented by only one or two delineations. This was more often the case for uncontrolled than for controlled legends (51–85% of map units in uncontrolled legends versus 27–37% in controlled legends). In both types of map, the map units that had only one or two delineations covered only a small proportion of the land area. Conversely, only a small proportion of the map units (between 14 and 31%) was needed to cover 75% of the land area in both types of map. It proved possible to reduce the number of map units in one map with an uncontrolled legend from 193 to 91. This was done, firstly, by combining map units that represented only very small areas (or were represented by only one delineation) with larger map units that were very similar for the purpose of the survey. Secondly, map units were combined when more than 85% of the soils within them were the same. Controlled legends need not be very long and need not omit significant information.
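The two merging rules can be expressed as a short sketch. The toy below uses invented data and reads “similarity” as soil-set overlap, which is only one plausible interpretation of the criteria used in the survey.

```python
# Toy sketch of the two legend-reduction rules (data and similarity measure invented).
units = {
    "U1": {"delins": 1,  "soils": {"A", "B"}},        # rare: one delineation
    "U2": {"delins": 12, "soils": {"A", "B", "C"}},
    "U3": {"delins": 7,  "soils": {"A", "B", "C", "D"}},
}

def soil_overlap(a, b):
    return len(a & b) / len(a | b)

# Rule 1: fold units with <= 2 delineations into the most similar larger unit.
for u in [k for k, v in units.items() if v["delins"] <= 2]:
    pool = [k for k, v in units.items() if k != u and v["delins"] > 2]
    target = max(pool, key=lambda k: soil_overlap(units[u]["soils"], units[k]["soils"]))
    units[target]["soils"] |= units.pop(u)["soils"]

# Rule 2: merge remaining units whose soils overlap by more than 85%.
names = list(units)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if a in units and b in units \
                and soil_overlap(units[a]["soils"], units[b]["soils"]) > 0.85:
            units[a]["soils"] |= units.pop(b)["soils"]

print(sorted(units))   # legend after reduction
```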


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. B259-B273 ◽  
Author(s):  
A. Revil ◽  
M. Karaoulis ◽  
S. Srivastava ◽  
S. Byrdina

Self-potential signals and resistivity data can be jointly inverted or analyzed to track the position of the burning front of an underground coal-seam fire. We first investigate the magnitude of the thermoelectric coupling associated with the presence of a thermal anomaly (a thermoelectric current associated with a thermal gradient). A sandbox experiment is developed and modeled to show that, in the presence of a heat source, a negative self-potential anomaly is expected at the ground surface. The expected sensitivity coefficient is typically on the order of [Formula: see text] in a silica sand saturated by demineralized water. Geophysical field measurements gathered at Marshall (near Boulder, CO) clearly show the position of the burning front in the electrical resistivity tomogram and in the self-potential data gathered at the ground surface, with a negative self-potential anomaly of about [Formula: see text]. To localize the position of the burning front more accurately, we developed a two-step strategy: (1) we first jointly invert the resistivity and self-potential data using a cross-gradient approach, and (2) we then jointly interpret the resistivity and self-potential data using a normalized burning front index (NBI). The value of the NBI ranges from 0 to 1, with 1 indicating a high probability of finding the burning front (strictly speaking, however, the NBI is not a probability density). We first validate this strategy using synthetic data and then apply it to the field data. A clear source is localized at the expected position of the burning front of the coal-seam fire. The NBI determined from the joint inversion is only slightly better than the value determined from independent inversion of the two geophysical data sets.
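The abstract does not give the NBI formula, so the sketch below assumes one plausible construction: rescale each image to [0, 1] and combine the two anomalies multiplicatively, so the index peaks where a conductive zone coincides with a negative self-potential anomaly.

```python
# Hypothetical NBI-style index (the published definition is not reproduced here).
import numpy as np

def normalize(a):
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

def burning_front_index(resistivity, self_potential):
    # resistivity in ohm-m (> 0); self_potential in volts
    heat = normalize(-np.log(resistivity))   # burning zone assumed conductive
    sp = normalize(-self_potential)          # burning zone: negative SP anomaly
    return normalize(heat * sp)              # in [0, 1]; 1 = likely front location
```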


2017 ◽  
Vol 48 (3-4) ◽  
pp. 44-51 ◽  
Author(s):  
Gert Herold ◽  
Ennes Sarradj

The open-source Python library Acoular is aimed at the processing of microphone array data. It features a number of algorithms for acoustic source characterization in time domain and frequency domain. The modular, object-oriented architecture allows for flexible programming and a multitude of applications. This includes the processing of measured array data, the mapping of sources, the filtering of subcomponent noise, and the generation of synthetic data for test purposes. Several examples illustrating its versatility are given, as well as one example for implementing a new algorithm into the package.
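A typical processing chain follows the pattern of Acoular's published examples. The sketch below uses placeholder file names, and parameter names can differ between Acoular versions.

```python
# Frequency-domain beamforming sketch in the style of Acoular's examples.
import acoular

mg = acoular.MicGeom(from_file='array_geometry.xml')       # microphone layout
ts = acoular.TimeSamples(name='measurement.h5')            # recorded array data
ps = acoular.PowerSpectra(time_data=ts, block_size=128, window='Hanning')
grid = acoular.RectGrid(x_min=-0.2, x_max=0.2, y_min=-0.2, y_max=0.2,
                        z=0.3, increment=0.01)             # source map plane
st = acoular.SteeringVector(grid=grid, mics=mg)
bf = acoular.BeamformerBase(freq_data=ps, steer=st)
level_map = acoular.L_p(bf.synthetic(8000, 3))             # dB map, 8 kHz third octave
```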


2013 ◽  
Vol 6 (4) ◽  
pp. 7593-7631 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. EPA PMF version 5.0 and the underlying multilinear engine executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
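The classical BS idea, resampling the input data and refitting the factor model, can be illustrated with a generic nonnegative factorization. EPA PMF is a standalone program, so scikit-learn's NMF merely stands in for the factor model here, and the replicate-to-base factor matching is simplified to a best-correlation assignment.

```python
# Sketch of classical bootstrap (BS) uncertainty estimation for a factor model.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((200, 12))                    # synthetic samples x species matrix

base = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0).fit(X)
F0 = base.components_                        # base-case factor profiles

profiles = []
for b in range(50):                          # bootstrap replicates
    idx = rng.integers(0, len(X), len(X))    # resample rows with replacement
    F = NMF(n_components=3, init="nndsvda", max_iter=500,
            random_state=b).fit(X[idx]).components_
    # align each replicate factor with its best-correlated base factor
    # (assumes a one-to-one match, which BS diagnostics would check)
    match = [int(np.argmax([np.corrcoef(f, f0)[0, 1] for f0 in F0])) for f in F]
    profiles.append(F[np.argsort(match)])

uncertainty = np.std(profiles, axis=0)       # spread across replicates
print(uncertainty.shape)                     # one value per factor element
```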

