Efficient 3D inversion of magnetic data via octree-mesh discretization, space-filling curves, and wavelets

Geophysics, 2013, Vol. 78(5), pp. J61-J73
Author(s): Kristofer Davis, Yaoguo Li

Airborne magnetic survey data sets can contain from hundreds of thousands to millions of observations and typically cover large areas. The large number of measurements combined with a model mesh to accommodate the survey extent can render an inversion of these data intractable. Faced with this challenge, we have developed a three-step procedure to locally optimize the degree of model discretization and to compress the corresponding sensitivity matrix for the inversion of magnetic data. The mesh optimization is achieved through the use of adaptive octree discretization. The compression is achieved by first reordering the model cells using the Hilbert space filling curve and then applying the one-dimensional wavelet transform to the corresponding sensitivities. The fractal property of the Hilbert curve groups the spatially adjacent cells into algebraically adjacent positions in the reordered model mesh and thereby maximizes the number of zero or near-zero coefficients in the one-dimensional wavelet transform. Winnowing these insignificant coefficients finally leads to a highly sparse representation of the sensitivity matrix, which dramatically reduces the required memory and CPU time in the inversion. As a result, the proposed algorithm is capable of inverting huge data sets ([Formula: see text] measurements) with commensurate model sizes in a short time on a single desktop computer. As a test, we inverted an entire magnetic data set with 170,000 observations from a large uranium exploration program and achieved a reduction in computational cost exceeding 10,000 times.
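To make the compression step concrete, the following minimal Python sketch (not the authors' code) reorders one synthetic sensitivity row of a 64 x 64 cell slice along a Hilbert curve, applies a 1-D wavelet transform with PyWavelets, and hard-thresholds the coefficients. The grid size, the wavelet ('db4'), and the 1% relative threshold are assumptions for illustration only.

```python
import numpy as np
import pywt

def hilbert_index(n, x, y):
    """Distance of cell (x, y) along the Hilbert curve filling an n x n grid (n a power of two)."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                                  # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

n = 64                                               # assumed 64 x 64 cells in one horizontal slice
xv, yv = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
perm = np.argsort([hilbert_index(n, x, y) for x, y in zip(xv.ravel(), yv.ravel())])

# One synthetic sensitivity row: smooth decay away from an observation point at (20, 40).
sens = 1.0 / (1.0 + (xv - 20.0) ** 2 + (yv - 40.0) ** 2) ** 1.5
row = sens.ravel()[perm]                             # spatially adjacent cells become algebraically adjacent

coeffs = pywt.wavedec(row, "db4", level=5)           # 1-D wavelet transform of the reordered row
thr = 0.01 * max(np.abs(c).max() for c in coeffs)    # assumed 1% relative threshold
sparse = [pywt.threshold(c, thr, mode="hard") for c in coeffs]
print("kept", sum(int(np.count_nonzero(c)) for c in sparse), "of", row.size, "coefficients")
```

Winnowing the thresholded coefficients row by row is what yields the sparse sensitivity representation described in the abstract.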

2020, Vol. 223(2), pp. 1378-1397
Author(s): Rosemary A. Renaut, Jarom D. Hogue, Saeed Vatankhah, Shuang Liu

SUMMARY We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block-Toeplitz-Toeplitz-block (BTTB) structure for each block of columns related to a fixed depth layer of the subsurface. Then, all forward operations with the sensitivity matrix, or its transpose, are performed using the 2-D fast Fourier transform. Simulations are provided to show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10^6. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use the Golub–Kahan bidiagonalization, and that it is again sufficient to use projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
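As a rough illustration of the FFT-based forward operation described above, the sketch below applies one BTTB sensitivity block (a single depth layer) to a gridded model via zero-padded 2-D FFTs. The kernel layout, the array names, and the use of NumPy's real FFTs are assumptions, not the authors' implementation.

```python
import numpy as np

def bttb_forward_layer(kernel, layer):
    """
    Forward operation of one BTTB sensitivity block (a single depth layer) via the 2-D FFT.

    `layer` is the (n1, n2) grid of model values at that depth; `kernel` is the
    (2*n1 - 1, 2*n2 - 1) array of unique sensitivity values generating the Toeplitz
    blocks, indexed by the data-model offset with the zero offset at kernel[n1-1, n2-1].
    Zero-padded FFTs turn the BTTB matrix-vector product into a linear 2-D convolution.
    """
    n1, n2 = layer.shape
    p1, p2 = kernel.shape
    shape = (n1 + p1 - 1, n2 + p2 - 1)                  # padding that avoids wrap-around
    spec = np.fft.rfft2(kernel, shape) * np.fft.rfft2(layer, shape)
    full = np.fft.irfft2(spec, shape)                   # full linear convolution
    return full[n1 - 1:2 * n1 - 1, n2 - 1:2 * n2 - 1]   # data on the (n1, n2) observation grid

# The total predicted data are the sum of such products over depth layers, e.g.
# d = sum(bttb_forward_layer(K[z], m[z]) for z in range(n_layers))
```

Only the generating kernels need to be stored, which is what keeps the memory footprint small enough for desktop-scale volumes.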


Author(s): Tu Renwei, Zhu Zhongjie, Bai Yongqiang, Gao Ming, Ge Zhifeng

Unmanned Aerial Vehicle (UAV) inspection has become one of the main methods for transmission line inspection, but it still suffers from shortcomings such as slow detection speed, low efficiency, and poor performance in low-light environments. To address these issues, this paper proposes a deep learning detection model based on You Only Look Once (YOLO) v3. On the one hand, the neural network structure is simplified: the three feature maps of YOLO v3 are pruned to two to meet the specific detection requirements. Meanwhile, the K-means++ clustering method is used to calculate the anchor values of the data set to improve detection accuracy. On the other hand, 1000 sets of power tower and insulator data are collected, which are inverted and scaled to expand the data set and further augmented with different illumination conditions and viewing angles. The experimental results show that the improved YOLO v3 model increases detection accuracy by 6.0%, reduces FLOPs by 8.4%, and increases detection speed by about 6.0%.
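The anchor computation can be sketched as follows, using scikit-learn's k-means with k-means++ initialization on (width, height) pairs. The number of anchors (six, i.e. three per retained feature map), the Euclidean distance, and the synthetic box sizes are assumptions; YOLO implementations often substitute an IoU-based distance instead.

```python
import numpy as np
from sklearn.cluster import KMeans

def compute_anchors(box_wh, n_anchors=6, seed=0):
    """
    Cluster labelled box (width, height) pairs with k-means++ initialization to obtain
    anchor sizes. box_wh is an (N, 2) array of widths and heights in pixels; returns
    the cluster centers sorted by box area.
    """
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=seed)
    km.fit(box_wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

# Example with invented tower/insulator box sizes (pixels):
rng = np.random.default_rng(0)
boxes = np.vstack([rng.normal([40, 120], 10, (500, 2)),   # tall, narrow tower boxes
                   rng.normal([60, 30], 8, (500, 2))])    # small insulator boxes
print(compute_anchors(np.abs(boxes)))
```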


Author(s): James B. Elsner, Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a workflow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. Code in this chapter is more complicated and can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and Caribbean Sea. The data set is called HURDAT, for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees), the intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m s⁻¹), and the minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided in 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive. Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone contains a header record, a series of data records, and a trailer record.
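The book's own workflow is written in R and uses spline interpolation; as a language-neutral illustration of the same idea, the hedged Python/pandas sketch below interpolates invented 6-hourly best-track records to hourly values and converts the winds from knots to m s⁻¹ using the factor quoted above.

```python
import pandas as pd

KT_TO_MS = 0.5144   # 1 kt = 0.5144 m/s, as stated in the text

# Hypothetical 6-hourly best-track records for one cyclone (invented values).
track = pd.DataFrame({
    "datetime": pd.to_datetime(["2010-08-30 00:00", "2010-08-30 06:00", "2010-08-30 12:00"]),
    "lat": [25.1, 25.9, 26.8],          # degrees north
    "lon": [-76.2, -77.0, -77.9],       # degrees east
    "wind_kt": [90, 95, 100],           # one-minute sustained wind (knots)
    "pressure_mb": [970, 965, 961],     # minimum central pressure (mb = hPa)
}).set_index("datetime")

# Interpolate the 6-hourly records to hourly values and convert the winds.
hourly = track.resample("1H").asfreq().interpolate(method="time")
hourly["wind_ms"] = hourly["wind_kt"] * KT_TO_MS
print(hourly.head(7))
```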


Fractals, 1996, Vol. 4(4), pp. 469-475
Author(s): Zbigniew R. Struzik

The methodology of the solution to the inverse fractal problem with the wavelet transform [1, 2] is extended to two-dimensional self-affine functions. As in the one-dimensional case, the two-dimensional wavelet maxima bifurcation representation used here is derived from the continuous wavelet decomposition. It possesses the translational and scale invariance necessary to reveal the invariance of the self-affine fractal. As many fractals are naturally defined in two dimensions, this extension constitutes an important step towards solving the related inverse fractal problem for a variety of fractal types.
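The paper's maxima bifurcation representation is specific to its inverse-problem formulation; as a loose illustration of tracking wavelet modulus maxima of a two-dimensional function across scales, the following SciPy sketch uses a Mexican-hat (Laplacian-of-Gaussian) response with assumed scales and an assumed significance cut.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def modulus_maxima(image, scales=(1, 2, 4, 8)):
    """Local maxima of a 2-D Mexican-hat (LoG) wavelet response at several scales."""
    image = np.asarray(image, dtype=float)
    maxima = {}
    for s in scales:
        # Scale-normalized response; the LoG kernel is the 2-D Mexican-hat wavelet (up to sign).
        response = (s ** 2) * np.abs(gaussian_laplace(image, sigma=s))
        # Keep pixels that are local maxima and above a crude (assumed) significance level.
        peaks = (response == maximum_filter(response, size=3)) & (response > response.mean())
        maxima[s] = np.argwhere(peaks)   # (row, col) positions of maxima at this scale
    return maxima
```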


Geophysics, 2020, pp. 1-41
Author(s): Jens Tronicke, Niklas Allroggen, Felix Biermann, Florian Fanselow, Julien Guillemoteau, ...

In near-surface geophysics, ground-based mapping surveys are routinely employed in a variety of applications including those from archaeology, civil engineering, hydrology, and soil science. The resulting geophysical anomaly maps of, for example, magnetic or electrical parameters are usually interpreted to laterally delineate subsurface structures such as those related to the remains of past human activities, subsurface utilities and other installations, hydrological properties, or different soil types. To ease the interpretation of such data sets, we propose a multi-scale processing, analysis, and visualization strategy. Our approach relies on a discrete redundant wavelet transform (RWT) implemented using cubic-spline filters and the à trous algorithm, which allows a multi-scale decomposition of 2D data to be computed efficiently using a series of 1D convolutions. The basic idea of the approach is presented using a synthetic test image, while our archaeo-geophysical case study from North-East Germany demonstrates its potential to analyze and process rather typical geophysical anomaly maps including magnetic and topographic data. Our vertical-gradient magnetic data show amplitude variations over several orders of magnitude, complex anomaly patterns at various spatial scales, and typical noise patterns, while our topographic data show a distinct hill structure superimposed by a microtopographic stripe pattern and random noise. Our results demonstrate that the RWT approach is capable of successfully separating these components and that selected wavelet planes can be scaled and combined so that the reconstructed images allow for a detailed, multi-scale structural interpretation, including integrated visualizations of magnetic and topographic data. Because our analysis approach is straightforward to implement without laborious parameter testing and tuning, computationally efficient, and easily adaptable to other geophysical data sets, we believe that it can help to rapidly analyze and interpret different geophysical mapping data collected to address a variety of near-surface applications from engineering practice and research.
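A minimal sketch of the à trous decomposition described above, using the cubic B3-spline scaling filter and a series of 1-D convolutions, is given below. The number of scales, the filter normalization, and the 'reflect' boundary handling are assumptions rather than the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import convolve1d

B3_SPLINE = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # cubic B-spline scaling filter

def atrous_rwt(data, n_scales=4):
    """
    Redundant (à trous) wavelet decomposition of a 2-D array. Returns the wavelet
    planes w_1..w_J and the final smooth approximation c_J; sum(planes) + smooth
    reproduces the input exactly.
    """
    smooth = np.asarray(data, dtype=float)
    planes = []
    for j in range(n_scales):
        step = 2 ** j
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = B3_SPLINE          # insert 2**j - 1 zeros ("holes") between taps
        # Separable smoothing as a series of 1-D convolutions (rows, then columns).
        nxt = convolve1d(smooth, kernel, axis=0, mode="reflect")
        nxt = convolve1d(nxt, kernel, axis=1, mode="reflect")
        planes.append(smooth - nxt)         # wavelet plane at scale 2**j
        smooth = nxt
    return planes, smooth
```

Selected planes can then be rescaled and recombined for the kind of integrated multi-scale visualization discussed in the abstract.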


Geophysics, 1993, Vol. 58(9), pp. 1281-1296
Author(s): V. J. S. Grauch

The magnetic data set compiled for the Decade of North American Geology (DNAG) project presents an important digital data base that can be used to examine the North American crust. The data represent a patchwork from many individual airborne and marine magnetic surveys. However, the portion of data for the conterminous U.S. has problems that limit the resolution and use of the data. Now that the data are available in digital form, it is important to describe the data limitations more specifically than before. The primary problem is caused by datum shifts between individual survey boundaries. In the western U.S., the DNAG data are generally shifted less than 100 nT. In the eastern U.S., the DNAG data may be shifted by as much as 300 nT and contain regionally shifted areas with wavelengths on the order of 800 to 1400 km. The worst case is the artificial low centered over Kentucky and Tennessee produced by a series of datum shifts. A second significant problem is lack of anomaly resolution that arises primarily from using survey data that are too widely spaced relative to the flight heights above magnetic sources. Unfortunately, these are the only data available for much of the U.S. Another problem is produced by the lack of a common observation surface between individual pieces of the U.S. DNAG data. The height disparities introduce variations in spatial frequency content that are unrelated to the magnetization of rocks. The spectral effects of datum shifts and the variation of spatial frequency content due to height disparities were estimated for the DNAG data for the conterminous U.S. As a general guideline for digital filtering, the most reliable features in the U.S. DNAG data have wavelengths roughly between 170 and 500 km, or anomaly half-widths between 85 and 250 km. High-quality, large-region magnetic data sets have become increasingly important to meet exploration and scientific objectives. The acquisition of a new national magnetic data set with higher quality at a greater range of wavelengths is clearly in order. The best approach is to refly much of the U.S. with common specifications and reduction procedures. At the very least, magnetic data sets should be remerged digitally using available or newly flown long-distance flight-line data to adjust survey levels. In any case, national coordination is required to produce a consistent, high-quality national magnetic map.
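As a hedged illustration of the filtering guideline quoted above, the sketch below applies a hard radial wavelength band-pass (170-500 km) to a regularly gridded anomaly map in the wavenumber domain. The grid-spacing arguments and the abrupt cutoff are assumptions; in practice tapered filters are preferred.

```python
import numpy as np

def wavelength_bandpass(grid, dx_km, dy_km, min_wl_km=170.0, max_wl_km=500.0):
    """Keep gridded anomalies with wavelengths between min_wl_km and max_wl_km."""
    ny, nx = grid.shape
    ky = np.fft.fftfreq(ny, d=dy_km)[:, None]          # cycles per km
    kx = np.fft.fftfreq(nx, d=dx_km)[None, :]
    kr = np.hypot(kx, ky)                              # radial wavenumber
    wavelength = np.divide(1.0, kr, out=np.full_like(kr, np.inf), where=kr > 0)
    mask = (wavelength >= min_wl_km) & (wavelength <= max_wl_km)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * mask))
```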


2015, Vol. 5(3), pp. 350-380
Author(s): Abdifatah Ahmed Haji, Sanni Mubaraq

Purpose – The purpose of this paper is to examine the impact of corporate governance and ownership structure attributes on firm performance following the revised code on corporate governance in Malaysia. The study presents a longitudinal assessment of the compliance and implications of the revised code on firm performance. Design/methodology/approach – Two data sets, covering the periods before (2006) and after (2008-2010) the revised code, are examined. Drawing from the largest companies listed on Bursa Malaysia (BM), the first data set contains 92 observations in the year 2006 while the second data set comprises 282 observations drawn from the largest companies listed on BM over a three-year period, from 2008 to 2010. Both accounting (return on assets and return on equity) and market performance (Tobin’s Q) measures were used to measure firm performance. Multiple and panel data regression analyses were adopted to analyze the data. Findings – The study shows that there were still cases of non-compliance with the basic requirements of the code, such as the one-third independent non-executive director (INDs) requirement, even after the revised code. While the regression models indicate marginal significance of board size and independent directors before the revised code, the results indicate that all corporate governance variables have a significant negative relationship with at least one of the measures of corporate performance. Independent chairperson, however, showed a consistent positive impact on firm performance both before and after the revised code. In addition, ownership structure elements were found to have a negative relationship with either accounting or market performance measures, with institutional ownership showing a consistent negative impact on firm performance. Firm size and leverage, as control variables, were significant in determining corporate performance. Research limitations/implications – One limitation is the use of separate measures of corporate governance attributes, as opposed to a corporate governance index (CGI). As a result, the study constructs a CGI based on the recommendations of the revised code and proposes it for future research use. Practical implications – Some of the largest companies did not even comply with basic requirements such as the “one-third INDs” mandatory requirement. Hence, the regulators may want to reinforce the requirements of the code and also detail examples of good governance practices. The results, which show a consistent positive relationship between the presence of an independent chairperson and firm performance in both data sets, suggest that listed companies consider appointing an independent chairperson to the corporate leadership. The regulatory authorities may also wish to note this phenomenon when drafting any future corporate governance codes. Originality/value – This study offers new insights into the implications of regulatory changes on the relationship between corporate governance attributes and firm performance from the perspective of a developing country. The development of a CGI for future research is a novel approach of this study.


2014, Vol. 2(4), pp. SJ35-SJ45
Author(s): Juarez Lourenço, Paulo T. L. Menezes, Valeria C. F. Barbosa

We interpreted northwest-trending transfer faults whose extensions are not entirely mapped in the Precambrian basement of the onshore and offshore Campos Basin. To enhance the subtle northwest–southeast lineaments not clearly seen in the total-field data, we reprocessed and merged two airborne magnetic data sets to produce a single merged magnetic data set. Next, we applied a directional filter to these integrated magnetic data. Finally, we applied a multiscale edge detection method to these filtered data. This combination allowed the detection of edges and ridges that were used to produce several northwest–southeast lineations. We interpreted these northwest-trending lineations as magnetic expressions of transfer faults that cut across the onshore adjacent basement of the Campos Basin to the shallow and deep Campos Basin waters. These interpreted northwest-trending faults suggested the continuity of the known northwest-striking transfer faults in the deep Campos Basin waters toward the shallow Campos Basin waters and the adjacent continent. Moreover, our interpreted northwest-trending faults revealed the control of several known oilfields in the Campos Basin. This result supported the hypothesis of the influence of the northwest–southeast-trending transfer faults on the petroleum system of the Campos Basin, which were reactivated in the Tertiary, providing a pathway for turbidite sedimentation and for the reworking and redistribution of several deepwater reservoirs. In addition, it was hypothesized that this faulting system controlled the hydrocarbon migration paths from the presalt source rocks through salt windows into basal suprasalt layers.
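One generic way to emphasize a chosen trend before edge detection is a directional cosine filter in the wavenumber domain, sketched below; this is not the authors' filter or parameterization, and the pass angle and exponent are assumptions. Because a lineament's spectral energy concentrates perpendicular to its strike, enhancing northwest-southeast lineaments means passing the northeast-southwest wavenumber direction.

```python
import numpy as np

def directional_cosine_filter(grid, pass_angle_deg, degree=2):
    """
    Weight each wavenumber by |cos(theta - pass_angle)|**degree, where theta is the
    angle of the wavenumber vector (counterclockwise from the x-axis). The pass
    angle should point along the wavenumber direction perpendicular to the strike
    of the lineaments to be enhanced.
    """
    ny, nx = grid.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    theta = np.arctan2(ky, kx)
    weight = np.abs(np.cos(theta - np.deg2rad(pass_angle_deg))) ** degree
    weight[0, 0] = 1.0                      # preserve the regional mean
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * weight))
```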


2016
Author(s): Brecht Martens, Diego G. Miralles, Hans Lievens, Robin van der Schalie, Richard A. M. de Jeu, ...

Abstract. The Global Land Evaporation Amsterdam Model (GLEAM) is a set of algorithms dedicated to the estimation of terrestrial evaporation and root-zone soil moisture from satellite data. Ever since its development in 2011, the model has been regularly revised, aiming at the optimal incorporation of new satellite-observed geophysical variables and an improved representation of physical processes. In this study, the next version of this model (v3) is presented. Key changes relative to the previous version include: (1) a revised formulation of the evaporative stress, (2) an optimized drainage algorithm, and (3) a new soil moisture data assimilation system. GLEAM v3 is used to produce three new data sets of terrestrial evaporation and root-zone soil moisture, including a 35-year data set spanning the period 1980–2014 (v3.0a, based on satellite-observed soil moisture, vegetation optical depth and snow water equivalents, reanalysis air temperature and radiation, and a multi-source precipitation product), and two fully satellite-based data sets. The latter two share most of their forcing, except for the vegetation optical depth and soil moisture products, which are based on observations from different passive and active C- and L-band microwave sensors (European Space Agency Climate Change Initiative data sets) for the first data set (v3.0b, spanning the period 2003–2015) and observations from the Soil Moisture and Ocean Salinity satellite in the second data set (v3.0c, spanning the period 2011–2015). These three data sets are described in detail, compared against analogous data sets generated using the previous version of GLEAM (v2), and validated against measurements from 64 eddy-covariance towers and 2338 soil moisture sensors across a broad range of ecosystems. Results indicate that the quality of the v3 soil moisture is consistently better than that of v2: average correlations against in situ surface soil moisture measurements increase from 0.61 to 0.64 in the case of the v3.0a data set, and the representation of soil moisture in the second layer improves as well, with correlations increasing from 0.47 to 0.53. Similar improvements are observed for the two fully satellite-based data sets. Despite regional differences, the quality of the evaporation fluxes remains overall similar to that obtained using the previous version of GLEAM, with average correlations against eddy-covariance measurements between 0.78 and 0.80 for the three different data sets. These global data sets of terrestrial evaporation and root-zone soil moisture are now openly available at http://GLEAM.eu and may be used for large-scale hydrological applications, climate studies, and research on land-atmosphere feedbacks.
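The validation statistics quoted above are per-site correlations averaged over stations; a minimal sketch of that computation, with assumed data structures (dictionaries of site series on a common time axis), is given below.

```python
import numpy as np

def mean_site_correlation(modeled, observed):
    """
    Average Pearson correlation between modeled and observed series over many sites.
    `modeled` and `observed` map a site id to 1-D arrays on a common time axis;
    time steps with missing values are dropped before correlating.
    """
    r_values = []
    for site, obs in observed.items():
        mod = np.asarray(modeled[site], dtype=float)
        obs = np.asarray(obs, dtype=float)
        ok = np.isfinite(mod) & np.isfinite(obs)
        if ok.sum() > 2:
            r_values.append(np.corrcoef(mod[ok], obs[ok])[0, 1])
    return float(np.mean(r_values))
```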


Geophysics, 2000, Vol. 65(3), pp. 791-803
Author(s): Weerachai Siripunvaraporn, Gary Egbert

There are currently three types of algorithms in use for regularized 2-D inversion of magnetotelluric (MT) data. All seek to minimize some functional which penalizes data misfit and model structure. With the most straightforward approach (exemplified by OCCAM), the minimization is accomplished using some variant on a linearized Gauss-Newton approach. A second approach is to use a descent method [e.g., nonlinear conjugate gradients (NLCG)] to avoid the expense of constructing large matrices (e.g., the sensitivity matrix). Finally, approximate methods [e.g., rapid relaxation inversion (RRI)] have been developed which use cheaply computed approximations to the sensitivity matrix to search for a minimum of the penalty functional. Approximate approaches can be very fast, but in practice often fail to converge without significant expert user intervention. On the other hand, the more straightforward methods can be prohibitively expensive to use for even moderate-size data sets. Here, we present a new and much more efficient variant on the OCCAM scheme. By expressing the solution as a linear combination of rows of the sensitivity matrix smoothed by the model covariance (the “representers”), we transform the linearized inverse problem from the M-dimensional model space to the N-dimensional data space. This method is referred to as DASOCC, the data space OCCAM’s inversion. Since generally N ≪ M, this transformation by itself can result in significant computational savings. More importantly, the data space formulation suggests a simple approximate method for constructing the inverse solution. Since MT data are smooth and “redundant,” a subset of the representers is typically sufficient to form the model without significant loss of detail. Computations required for constructing sensitivities and the size of matrices to be inverted can be significantly reduced by this approximation. We refer to this inversion as REBOCC, the reduced basis OCCAM’s inversion. Numerical experiments on synthetic and real data sets with REBOCC, DASOCC, NLCG, RRI, and OCCAM show that REBOCC is faster than both DASOCC and NLCG, which are comparable in speed. All of these methods are significantly faster than OCCAM, but are not competitive with RRI. However, even with a simple synthetic data set, we could not always get RRI to converge to a reasonable solution. The basic idea behind REBOCC should be more broadly applicable, in particular to 3-D MT inversion.
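The data-space idea can be summarized in a few lines of linear algebra: for a single smoothed least-squares step, the M x M model-space system is replaced by an equivalent N x N system built from the covariance-smoothed sensitivity rows (the representers). The sketch below is a minimal Tikhonov-type illustration with assumed variable names, not the full OCCAM iteration.

```python
import numpy as np

def data_space_step(J, Cm, d, lam):
    """
    Solve one smoothed least-squares step in the N-dimensional data space.
    Model-space form:  m = (J^T J + lam * Cm^{-1})^{-1} J^T d   (an M x M system)
    Data-space form:   m = Cm J^T (J Cm J^T + lam * I)^{-1} d   (an N x N system, N << M)
    """
    R = J @ Cm                                   # rows = representers: covariance-smoothed sensitivities
    A = R @ J.T + lam * np.eye(J.shape[0])       # N x N coefficient matrix
    beta = np.linalg.solve(A, d)                 # data-space expansion coefficients
    return R.T @ beta                            # model as a linear combination of representers

# REBOCC further approximates the solution using only a subset of the rows of J
# (a reduced basis of representers), shrinking both A and the sensitivity computation.
```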

