Calculation of the Inverse Data Space via Sparse Inversion

Author(s):  
C. Saragiotis ◽  
P. Doulgeris ◽  
D. J. Verschuur


Geophysics ◽ 
2009 ◽  
Vol 74 (1) ◽  
pp. L7-L15 ◽  
Author(s):  
Mark Pilkington

I have developed an inversion approach that determines a 3D susceptibility distribution that produces a given magnetic anomaly. The subsurface model consists of a 3D, equally spaced array of dipoles. The inversion incorporates a model norm that enforces sparseness and depth weighting of the solution. Sparseness is imposed by using the Cauchy norm on the model parameters. The inverse problem is posed in the data space, leading to a linear system of equations with dimensions based on the number of data, N. This contrasts with the standard least-squares solution, derived through operations within the M-dimensional model space (M being the number of model parameters). Hence, the data-space method combined with a conjugate gradient algorithm leads to computational efficiency by dealing with an N × N system versus an M × M one, where N ≪ M. Tests on synthetic data show that sparse inversion produces a much more focused solution compared with a standard model-space, least-squares inversion. The inversion of aeromagnetic data collected over a Precambrian Shield area again shows that including the sparseness constraint leads to a simpler and better resolved solution. The degree of improvement in model resolution for the sparse case is quantified using the resolution matrix.
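
A minimal numerical sketch of the data-space formulation described above, assuming a random stand-in for the dipole forward operator G and illustrative values for the damping λ and the Cauchy scale β; this is not Pilkington's code. The regularized normal equations (GᵀG + λR)m = Gᵀd are rearranged into the equivalent N × N system (G R⁻¹ Gᵀ + λI)α = d with m = R⁻¹Gᵀα, solved by conjugate gradients, and one reweighted pass supplies the Cauchy-norm sparseness:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
N, M = 50, 2000                     # number of data vs. model parameters, N << M
G = rng.standard_normal((N, M))     # stand-in for the dipole forward operator
m_true = np.zeros(M)
m_true[rng.choice(M, 5, replace=False)] = 1.0   # sparse susceptibility model
d = G @ m_true + 0.01 * rng.standard_normal(N)
lam, beta = 1e-2, 0.1               # illustrative damping and Cauchy scale

def data_space_solve(s):
    """Solve (G diag(s) G^T + lam I) alpha = d, return m = diag(s) G^T alpha."""
    A = LinearOperator((N, N),
                       matvec=lambda x: G @ (s * (G.T @ x)) + lam * x,
                       dtype=float)
    alpha, _ = cg(A, d)             # conjugate gradients on the N x N system
    return s * (G.T @ alpha)

m = data_space_solve(np.ones(M))    # first pass: plain minimum-norm solution
# One reweighting pass for the Cauchy norm sum(log(1 + m_j^2/beta^2)):
# the diagonal weights are r_j = 1/(beta^2 + m_j^2), so s_j = beta^2 + m_j^2.
m = data_space_solve(beta**2 + m**2)
print("largest-magnitude model cells:", np.sort(np.argsort(np.abs(m))[-5:]))
```

For N = 50 and M = 2000, every CG iteration works with N-length vectors on the left-hand side, which is where the claimed efficiency of the N × N system over the M × M one comes from.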


2012 ◽  
Vol 192 (2) ◽  
pp. 666-670 ◽  
Author(s):  
Joost van der Neut ◽  
Felix J. Herrmann

Abstract Assuming that transmission responses are known between the surface and a particular depth level in the subsurface, seismic sources can be effectively mapped to this level by a process called interferometric redatuming. After redatuming, the obtained wavefields can be used for imaging below this depth level. Interferometric redatuming consists of two steps: (i) decomposition of the observed wavefields into downgoing and upgoing constituents and (ii) multidimensional deconvolution of the upgoing constituents with the downgoing constituents. While the method works in theory, its sensitivity to noise and to artefacts caused by incomplete acquisition requires a different formulation. In this letter, we demonstrate the benefits of formulating the two steps that undergird interferometric redatuming in terms of a transform-domain sparsity-promoting program. By exploiting the compressibility of seismic wavefields in the curvelet domain, the method not only becomes robust with respect to noise but also removes certain artefacts while preserving the frequency content. Although we observe improvements when we promote sparsity in the redatumed data space, we expect better results if interferometric redatuming were combined or integrated with least-squares migration with sparsity promotion in the image space.
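
A hedged sketch of a transform-domain sparsity-promoting program of the kind described: since a curvelet transform needs a dedicated library, an orthonormal DCT stands in for the curvelet synthesis S, and plain ISTA solves min_x ½‖A S x − b‖² + λ‖x‖₁. The operator A, the problem sizes, λ, and the iteration count are all illustrative assumptions, not the authors' setup:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
n, m = 256, 128
x_true = np.zeros(n)
x_true[[10, 50, 120]] = [1.0, -0.7, 0.5]      # transform-sparse coefficients
signal = idct(x_true, norm='ortho')           # the "wavefield" to be recovered
A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in acquisition operator
b = A @ signal + 0.01 * rng.standard_normal(m)

lam = 0.01                                    # sparsity weight (assumed)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(300):                          # ISTA: gradient step + soft threshold
    grad = dct(A.T @ (A @ idct(x, norm='ortho') - b), norm='ortho')
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
recovered = idct(x, norm='ortho')             # noise-robust estimate of the signal
```

A production version in the authors' setting would replace the DCT with a real curvelet transform and use a dedicated one-norm solver in the spirit of SPGL1 rather than fixed-step ISTA.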


Author(s):  
Michael Goul ◽  
T. S. Raghu ◽  
Ziru Li

As procurement organizations increasingly move from a cost-and-efficiency emphasis to a profit-and-growth emphasis, flexible data architecture will become an integral part of a procurement analytics strategy. It is therefore imperative for procurement leaders to understand and address digitization trends in supply chains and to develop robust data-architecture and analytics strategies for the future. This chapter assesses and examines the ways companies can organize their procurement data architectures in the big data space to mitigate current limitations and to lay foundations for the discovery of new insights. It sets out to understand and define the levels of maturity in procurement organizations as they pertain to the capture, curation, exploitation, and management of procurement data. The chapter then develops a framework for articulating the value proposition of moving between maturity levels and examines what the future entails for companies with mature data architectures. In addition to surveying the practitioner and academic research literature on procurement data analytics, the chapter presents detailed, structured interviews with more than fifteen procurement experts from companies around the globe. The chapter identifies several strategies that have helped procurement organizations design strategic roadmaps for developing robust data architectures, and it distinguishes four archetypal procurement data-architecture contexts. For each archetype, the chapter details an exemplary high-level mature data architecture and examines the critical assumptions underlying it. Data architectures built for the future need a design approach that supports both descriptive and real-time, prescriptive analytics.


2009 ◽  
Vol 41 (1) ◽  
pp. 499-503 ◽  
Author(s):  
Amruth N. Kumar

2021 ◽  
Vol 10 (4) ◽  
pp. 246
Author(s):  
Vagan Terziyan ◽  
Anton Nikulin

Operating with ignorance is an important concern of geographical information science when the objective is to discover knowledge from imperfect spatial data. Data mining (driven by knowledge discovery tools) is about processing available (observed, known, and understood) samples of data in order to build a model (e.g., a classifier) that can handle data samples that are not yet observed, known, or understood. These tools traditionally take semantically labeled samples of the available data (known facts) as input for learning. We challenge the indispensability of this approach and suggest considering things the other way around: what if the task were to build a model based on the semantics of our ignorance, i.e., by processing the shape of the “voids” within the available data space? Can we improve traditional classification by also modeling the ignorance? In this paper, we provide algorithms for discovering and visualizing ignorance zones in two-dimensional data spaces and design two ignorance-aware smart prototype selection techniques (incremental and adversarial) to improve the performance of nearest neighbor classifiers. We present experiments with artificial and real datasets to test the usefulness of ignorance-semantics discovery.
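
A toy sketch of the idea, assuming one simple notion of an ignorance zone (grid points farther than a radius r from every labeled sample) and a hypothetical incremental rule that adds a single prototype at the zone's centroid; the paper's actual algorithms, thresholds, and labeling rules may differ:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 2))     # observed, labeled 2-D samples
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # toy labels

# Ignorance zone: grid points farther from every sample than a radius r.
gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
grid = np.column_stack([gx.ravel(), gy.ravel()])
dist, _ = cKDTree(X).query(grid)
voids = grid[dist > 0.08]                # r = 0.08 is an assumed threshold

# Incremental ignorance-aware step: pin the 1-NN decision boundary inside
# the void by adding a prototype at its centroid, labeled like its current
# nearest neighbour (one plausible reading of the incremental technique).
if len(voids):
    centre = voids.mean(axis=0)
    _, j = cKDTree(X).query(centre)
    X, y = np.vstack([X, centre]), np.append(y, y[j])
```

The adversarial variant of the paper would choose such points differently; here only the void-discovery step and one incremental insertion are illustrated.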


2012 ◽  
Author(s):  
Kun Wang ◽  
Richard Su ◽  
Alexander A. Oraevsky ◽  
Mark A. Anastasio
