Scalar magnetic difference inversion applied to UAV-based UXO detection

2020 ◽  
Vol 224 (1) ◽  
pp. 468-486
Author(s):  
Mick Emil Kolster ◽  
Arne Døssing

SUMMARY During scalar magnetic surveys, where the amplitude of the magnetic field is measured, small changes in towed sensor positions can produce complex, noise-like signals in the data. For well-constructed measurement systems, these signals often contain valuable information rather than noise, but it can be difficult to realize their potential. We present a simple, general approach that can be used to directly invert data from scalar magnetic surveys, regardless of dynamic or unexpected sensor position variations. The approach generalizes classic along-track gradients to an iterative, or recursive, difference that can be applied irrespective of the number of magnetic sensors and their positions within a dynamic measurement system, as long as these are known. The computed difference can be inverted directly, providing a versatile method with minimal data pre-processing requirements, which we denote recursive difference inversion. We explain the approach in a general setting and expand it into a complete framework for unexploded ordnance (UXO) detection using a point-dipole model. Being an extension of classic along-track gradients, the method retains many of the same properties, including added robustness to external time-dependent disturbances and the ability to produce aesthetic visual data representations. In addition, the framework requires neither tie lines, data levelling, nor diurnal corrections; only light pre-processing, namely initial survey trimming and data position calculation, is required. The method is demonstrated on data from a dual-sensor system, conventionally referred to as a vertical gradiometer, towed from an unmanned aerial vehicle. The system enables collection of high-quality magnetic data in adverse settings while reducing the risk of inadvertent UXO detonation. To enable qualitative testing, we established a UXO detection test facility with several buried UXO, typical of World War II, in a magnetically complex inland area. Data from the test facility were mainly used to evaluate the inversion robustness and depth accuracy of the point-dipole model. Subsequently, we apply the method to real UXO survey data collected for the Hornsea II offshore wind farm project in the United Kingdom. This data set was collected in a coastal setting and was subject to significant sensor position changes during flight due to varying wind conditions over multiple survey days. This makes the raw data set challenging to interpret directly, but it can still be easily and reliably inverted for source locations through recursive difference inversion. In each of the two data sets, we attempt to recover UXO positions using recursive difference inversion on data from a single sensor as well as from two synchronized sensors, in each case inverting the difference directly for point-dipole model parameters. To seed the inversion, we propose a simple routine for picking out potential targets, based on a chosen significant peak prominence in the time series of computed differences. Higher-order difference inversion was found to provide robust results in the magnetically complex setting, and the recovered equivalent dipole depths were found to approximate the actual UXO depths well.
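To make two of the building blocks above concrete, the hedged sketch below forward-models the total-field anomaly of a point dipole at two synchronized sensor heights, forms their first-order difference, and picks candidate targets by peak prominence. All geometry, dipole moment, inclination, and threshold values are assumed for illustration only; the actual survey geometry, higher-order recursion, and inversion step are described in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

MU0 = 4e-7 * np.pi

def dipole_field(r_obs, r_src, m):
    """Magnetic field (T) of a point dipole with moment m (A m^2) located at r_src,
    observed at positions r_obs; both are (..., 3) arrays in metres."""
    d = r_obs - r_src
    dist = np.linalg.norm(d, axis=-1, keepdims=True)
    dhat = d / dist
    return MU0 / (4 * np.pi) * (3 * dhat * np.sum(dhat * m, axis=-1, keepdims=True) - m) / dist**3

# Illustrative survey geometry (assumed, not from the paper): two scalar sensors
# separated vertically by 1 m, flown 3 m and 4 m above ground, over a dipole at 1.5 m depth.
x = np.linspace(-50, 50, 2001)                       # along-track coordinate (m)
low  = np.stack([x, np.zeros_like(x), np.full_like(x, 3.0)], axis=-1)
high = np.stack([x, np.zeros_like(x), np.full_like(x, 4.0)], axis=-1)
src = np.array([0.0, 0.5, -1.5])                     # hypothetical UXO location (z up)
m = np.array([0.0, 0.0, 5.0])                        # hypothetical dipole moment

# Total-field anomaly approximated as the projection of the dipole field onto the
# ambient-field direction (here: 70 deg inclination, zero declination, assumed values).
inc = np.deg2rad(70.0)
bhat = np.array([np.cos(inc), 0.0, -np.sin(inc)])
anom_low  = dipole_field(low, src, m) @ bhat
anom_high = dipole_field(high, src, m) @ bhat

# First-order difference between the synchronized sensors (a "vertical gradient" up to
# the 1 m separation); higher-order differences would recurse on this quantity.
diff = anom_low - anom_high

# Simple target picking: keep peaks whose prominence exceeds a chosen fraction of the maximum.
peaks, props = find_peaks(np.abs(diff), prominence=0.2 * np.abs(diff).max())
print("candidate target positions (m):", x[peaks])
```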

Geophysics ◽  
2002 ◽  
Vol 67 (6) ◽  
pp. 1753-1768 ◽  
Author(s):  
Yuji Mitsuhata ◽  
Toshihiro Uchida ◽  
Hiroshi Amano

Interpretation of controlled‐source electromagnetic (CSEM) data is usually based on 1‐D inversions, whereas direct current (dc) resistivity and magnetotelluric (MT) measurements are commonly interpreted with 2‐D inversions. We have developed an algorithm to invert frequency‐domain vertical magnetic data generated by a grounded‐wire source for a 2‐D model of the earth—a so‐called 2.5‐D inversion. To stabilize the inversion, we adopt a smoothness constraint on the model parameters and adjust the regularization parameter objectively using a statistical criterion. A test using synthetic data from a realistic model reveals that a single source is insufficient to recover an acceptable result. In contrast, the joint use of data generated by a left‐side source and a right‐side source dramatically improves the inversion result. We applied our inversion algorithm to a field data set transformed from long‐offset transient electromagnetic (LOTEM) data acquired in a Japanese oil and gas field. As with the synthetic data set, the inversion of the joint data set converged automatically and provided a better model than that obtained from either source alone. In addition, our 2.5‐D inversion accounted for the reversals in the LOTEM measurements, which is impossible with 1‐D inversions. The shallow parts (above about 1 km depth) of the final model obtained by our 2.5‐D inversion agree well with those of a 2‐D inversion of MT data.
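The stabilization strategy described above, a smoothness constraint with an objectively tuned regularization parameter, can be illustrated with a generic regularized least-squares update. The sketch below is not the paper's 2.5‐D EM algorithm: the forward operator is a random matrix standing in for the linearized EM Jacobian, and the regularization parameter is simply scanned rather than selected by the paper's statistical criterion.

```python
import numpy as np

def smooth_ls_update(G, d, lam):
    """One Gauss-Newton-style update for min ||d - G m||^2 + lam ||L m||^2, where L is a
    first-difference (smoothness) operator and G is the Jacobian of the forward model
    linearized about the current model (not computed here)."""
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)            # first-difference roughening matrix
    A = G.T @ G + lam * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

# Toy example with a random linear "forward operator"; in the paper the operator comes
# from 2.5-D EM modelling and the update is iterated to convergence.
rng = np.random.default_rng(0)
G = rng.normal(size=(80, 40))
m_true = np.sin(np.linspace(0, 3 * np.pi, 40))
d = G @ m_true + 0.05 * rng.normal(size=80)

# Scan the regularization parameter; the paper chooses it with a statistical criterion,
# here we simply report misfit and roughness so such a criterion could be evaluated.
for lam in (0.1, 1.0, 10.0):
    m = smooth_ls_update(G, d, lam)
    misfit = np.linalg.norm(d - G @ m)
    rough = np.linalg.norm(np.diff(m))
    print(f"lam={lam:5.1f}  misfit={misfit:6.3f}  roughness={rough:6.3f}")
```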


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, the Lorenz, Bonferroni, and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by maximum likelihood, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
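A hedged sketch of the maximum likelihood step is given below. It assumes one common exponentiated half-logistic-G construction, F(x) = [(1 − S(x)) / (1 + S(x))]^α with Lomax survival S(x) = (1 + x/β)^(−θ); this form, the parameter names, and the synthetic data are assumptions for illustration and should be checked against the paper's definitions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    """Negative log-likelihood under an assumed 'exponentiated half-logistic-G' construction
    with a Lomax baseline: F(x) = [(1 - S(x)) / (1 + S(x))]^alpha,
    S(x) = (1 + x/beta)^(-theta). Differentiating this CDF gives the density used below."""
    alpha, theta, beta = params
    if min(alpha, theta, beta) <= 0:
        return np.inf
    S = (1.0 + x / beta) ** (-theta)
    # f(x) = 2*alpha*theta/beta * (1 + x/beta)^(-theta-1) * (1-S)^(alpha-1) / (1+S)^(alpha+1)
    logf = (np.log(2 * alpha * theta / beta)
            - (theta + 1) * np.log1p(x / beta)
            + (alpha - 1) * np.log1p(-S)
            - (alpha + 1) * np.log1p(S))
    return -np.sum(logf)

# Illustrative fit to synthetic positive data (not the paper's real data set).
rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.5, size=500)
fit = minimize(neg_log_lik, x0=np.array([1.0, 1.0, 1.0]), args=(x,), method="Nelder-Mead")
print("MLEs (alpha, theta, beta):", fit.x)
```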


2017 ◽  
Vol 37 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Haluk Ay ◽  
Anthony Luscher ◽  
Carolyn Sommerich

Purpose The purpose of this study is to design and develop a testing device to simulate the interaction between human hand–arm dynamics, right-angle (RA) computer-controlled power torque tools and joint-tightening task-related variables. Design/methodology/approach The testing rig can simulate a variety of tools, tasks and operator conditions. The device includes custom data-acquisition electronics and graphical user interface-based software. The simulation of the human hand–arm dynamics is based on the rig’s four-bar-mechanism design and mechanical components that provide adjustable stiffness (via a pneumatic cylinder) and mass (via plates) and non-adjustable damping. The stiffness and mass values used are based on an experimentally validated hand–arm model that includes a database of model parameters organized by gender and working posture, corresponding to experienced tool operators from a prior study. Findings The rig measures tool handle force and displacement responses simultaneously. Coefficients of determination (R²) between rig estimations and human testing measurements were 0.98 for peak force and 0.85 for peak displacement, for the same set of tools, tasks and operator conditions. The rig also provides predicted tool operator acceptability ratings, using a data set from a prior study of discomfort in experienced operators during torque tool use. Research limitations/implications Deviations from linearity may influence handle force and displacement measurements. Stiction (Coulomb friction) in the overall rig, as well as in the air cylinder piston, is neglected. The rig’s mechanical damping is not adjustable, even though human hand–arm damping varies with gender and working posture. Deviations from these assumptions may affect the correlation of the handle force and displacement measurements with those of human testing for the same tool, task and operator conditions. Practical implications This test rig will allow rapid assessment of the ergonomic performance of DC torque tools, saving considerable time in lineside applications and reducing the risk of worker injury. DC torque tools are an extremely effective way of increasing production rate and improving torque accuracy. Being complex dynamic systems, however, their performance varies in each application. Changes in worker mass, damping and stiffness, as well as joint stiffness and tool program, make each application unique. This test rig models all of these factors and allows quick assessment. Social implications The use of this tool test rig will help to identify and understand risk factors that contribute to musculoskeletal disorders (MSDs) associated with the use of torque tools. Tool operators are subjected to large impulsive handle reaction forces as joint torque builds up while tightening a fastener. Repeated exposure to such forces is associated with muscle soreness, fatigue and physical stress, which are also risk factors for upper extremity injuries (MSDs; e.g. tendinosis, myofascial pain). Eccentric exercise exertions are known to cause damage to muscle tissue in untrained individuals and to affect subsequent performance. Originality/value The rig provides a novel means for quantitative, repeatable dynamic evaluation of RA powered torque tools and objective selection of tightening programs. Compared to current static tool assessment methods, dynamic testing provides a more realistic tool assessment relative to the tool operator’s experience. This may lead to improvements in tool or controller design and a reduction in associated musculoskeletal discomfort in operators.
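The hand–arm representation described above, an adjustable mass and stiffness with fixed damping driven by an impulsive handle reaction force, can be sketched as a single-degree-of-freedom simulation. The mass, damping, stiffness, and force-pulse values below are assumed for illustration only; the rig’s calibrated parameter database and four-bar kinematics are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative single-degree-of-freedom hand-arm model (assumed values, not the rig's
# calibrated database): mass m (kg), damping c (N s/m), stiffness k (N/m).
m, c, k = 5.0, 50.0, 2.0e4

def handle_force(t):
    """Idealized torque-reaction pulse at the tool handle (N): a short ramp that drops to
    zero when the fastener reaches its target torque."""
    return 200.0 * (t / 0.15) if t < 0.15 else 0.0

def rhs(t, y):
    x, v = y
    return [v, (handle_force(t) - c * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], max_step=1e-3)
x = sol.y[0]
print(f"peak handle displacement: {1e3 * np.abs(x).max():.1f} mm")
print(f"peak spring force on the hand: {k * np.abs(x).max():.0f} N")
```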


2020 ◽  
Vol 70 (1) ◽  
pp. 145-161 ◽  
Author(s):  
Marnus Stoltz ◽  
Boris Baeumer ◽  
Remco Bouckaert ◽  
Colin Fox ◽  
Gordon Hiscott ◽  
...  

Abstract We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with novel numerical algorithms. The diffusion approach allows for analysis of data sets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the BEAST2 package. We conducted simulation experiments to assess numerical error, computational requirements, and accuracy in recovering known model parameters. A reanalysis of soybean SNP data demonstrates that the models implemented in Snapp and Snapper can be difficult to distinguish in practice, a characteristic which we tested with further simulations. We demonstrate the scale of analysis possible using a SNP data set sampled from 399 freshwater turtles in 41 populations. [Bayesian inference; diffusion models; multi-species coalescent; SNP data; species trees; spectral methods.]
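To make the "diffusion models of allele frequency dynamics" concrete, the sketch below simulates the neutral Wright–Fisher diffusion with Euler–Maruyama steps. This is only an illustration of the process the likelihood integrates over; Snapper itself evaluates the diffusion with spectral methods inside BEAST2 rather than by simulation, and the population size and starting frequency below are arbitrary.

```python
import numpy as np

def wright_fisher_diffusion(x0, N, n_gen, n_rep, rng):
    """Euler-Maruyama paths of the neutral Wright-Fisher diffusion
    dx = sqrt(x(1 - x) / (2N)) dW, with one time step per generation and absorbing
    boundaries at loss (0) and fixation (1)."""
    x = np.full(n_rep, x0, dtype=float)
    for _ in range(n_gen):
        x += np.sqrt(np.clip(x * (1.0 - x), 0.0, None) / (2.0 * N)) * rng.normal(size=n_rep)
        x = np.clip(x, 0.0, 1.0)
    return x

rng = np.random.default_rng(42)
freqs = wright_fisher_diffusion(x0=0.2, N=1000, n_gen=500, n_rep=10000, rng=rng)
print("fixed:", np.mean(freqs == 1.0), " lost:", np.mean(freqs == 0.0),
      " mean frequency:", freqs.mean())
```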


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, as most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using only 1%–5% of the data.
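A hedged sketch of adaptive data-domain downsampling is given below: points are kept densely where the profile departs from a running median (anomalies) and only sparsely in smooth or quiet regions. The selection rule, window length, and thresholds are generic illustrations, not the published algorithm's criterion.

```python
import numpy as np

def adaptive_downsample(x, d, window=11, keep_every=10, thresh_factor=2.0):
    """Keep dense sampling where the data vary rapidly (anomalies) and only every
    `keep_every`-th point in smooth regions. A generic data-domain compression sketch."""
    pad = window // 2
    padded = np.pad(d, pad, mode="edge")
    # local roughness: absolute deviation from a running median
    local_med = np.array([np.median(padded[i:i + window]) for i in range(d.size)])
    rough = np.abs(d - local_med)
    keep = rough > thresh_factor * np.median(rough)
    keep[::keep_every] = True                 # coarse background sampling everywhere
    return x[keep], d[keep]

# Synthetic profile: smooth regional field plus one compact anomaly plus noise.
x = np.linspace(0, 1000, 5001)
d = (2e-3 * x + 50 * np.exp(-((x - 600) / 15) ** 2)
     + np.random.default_rng(0).normal(0, 0.5, x.size))
xs, ds = adaptive_downsample(x, d)
print(f"kept {xs.size} of {x.size} points ({100 * xs.size / x.size:.1f}%)")
```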


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Helena Mouriño ◽  
Maria Isabel Barão

Missing-data problems are extremely common in practice. To achieve reliable inferential results, we need to take this feature of the data into account. Suppose that the univariate data set under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete data set—whose underlying stochastic process is to some extent interdependent with the former—to improve the efficiency of the estimators of the relevant model parameters. The Vector AutoRegressive (VAR) Model has proven to be an extremely useful tool for capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) Model based on a monotone missing-data pattern, and we also derive the estimators’ precision. Afterwards, we compare the bivariate modelling scheme with its univariate counterpart. More precisely, the univariate data set with missing observations is modelled by an AutoRegressive Moving Average (ARMA(2,1)) Model. We also analyse the behaviour of the AutoRegressive Model of order one, AR(1), owing to its practical importance. We focus on the mean value of the main stochastic process. Through simulation studies, we conclude that the estimator based on the VAR(1) Model is preferable to those derived in the univariate context.
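The simulation sketch below conveys, under simplifying assumptions, why the auxiliary complete series helps: a bivariate VAR(1) is simulated, the first series is truncated to create a monotone missing pattern, and a regression-adjusted mean estimator that borrows the complete series is compared with the naive observed-data mean. The adjustment shown is the classic monotone-missing correction and ignores serial correlation, which the paper's VAR(1)-based maximum likelihood estimators handle explicitly; all coefficient values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
A = np.array([[0.5, 0.3],
              [0.2, 0.4]])              # VAR(1) coefficient matrix (illustrative)
mu = np.array([1.0, -0.5])              # process means
n, n_obs, n_rep = 400, 250, 500         # series length, observed prefix length, replications

naive, adjusted = [], []
for _ in range(n_rep):
    # simulate the bivariate VAR(1): z_t - mu = A (z_{t-1} - mu) + e_t
    z = np.zeros((n, 2))
    z[0] = mu
    for t in range(1, n):
        z[t] = mu + A @ (z[t - 1] - mu) + rng.normal(0, 1, 2)
    x, y = z[:, 0], z[:, 1]             # x has a monotone missing tail; y is complete
    x_obs, y_obs = x[:n_obs], y[:n_obs]
    # naive estimator: mean of the observed part of x only
    naive.append(x_obs.mean())
    # regression-adjusted estimator: borrow the complete auxiliary series y
    C = np.cov(x_obs, y_obs)
    beta = C[0, 1] / C[1, 1]
    adjusted.append(x_obs.mean() + beta * (y.mean() - y_obs.mean()))

print("naive    bias / std:", np.mean(naive) - mu[0], np.std(naive))
print("adjusted bias / std:", np.mean(adjusted) - mu[0], np.std(adjusted))
```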


Geophysics ◽  
2016 ◽  
Vol 81 (4) ◽  
pp. U25-U38 ◽  
Author(s):  
Nuno V. da Silva ◽  
Andrew Ratcliffe ◽  
Vetle Vinje ◽  
Graham Conroy

Parameterization lies at the center of anisotropic full-waveform inversion (FWI) with multiparameter updates, because FWI aims to update both the long and short wavelengths of the perturbations and the parameterization must accommodate this. Recently, there has been an intensive effort to determine the optimal parameterization, centering the discussion mainly on the analysis of radiation patterns for each parameterization and aiming to determine which is best suited for multiparameter inversion. We have developed a new parameterization in the scope of FWI, based on the concept of kinematically equivalent media, as originally proposed in other areas of seismic data analysis. Our analysis is also based on radiation patterns, as well as on the relation between perturbations of this set of parameters and perturbations in traveltime. The radiation pattern reveals that this parameterization combines some of the characteristics of parameterizations with one velocity and two Thomsen’s parameters and of parameterizations using two velocities and one Thomsen’s parameter. The study of traveltime perturbation with respect to model parameter perturbation shows that the new parameterization is less ambiguous in relating these quantities than other, more commonly used parameterizations. We conclude that the new parameterization is well suited for inverting diving waves, which are of paramount importance for carrying out practical FWI successfully. We demonstrate that the new parameterization produces good inversion results with synthetic and real data examples. In the real data example from the Central North Sea, the inverted models show good agreement with the geologic structures, leading to an improvement of the seismic image and of the flatness of the common image gathers.
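For readers less familiar with kinematically meaningful VTI parameter combinations, the short sketch below converts a vertical P-wave velocity and Thomsen’s parameters (ε, δ) into NMO velocity, horizontal velocity, and the anellipticity parameter η using standard relations. These are textbook quantities offered for context only; they are not the specific new parameterization proposed in the paper.

```python
import numpy as np

def kinematic_parameters(vp0, epsilon, delta):
    """Standard kinematic combinations for a VTI medium described by the vertical
    P-wave velocity vp0 and Thomsen's parameters (epsilon, delta): NMO velocity,
    horizontal velocity, and the anellipticity parameter eta."""
    v_nmo = vp0 * np.sqrt(1.0 + 2.0 * delta)
    v_hor = vp0 * np.sqrt(1.0 + 2.0 * epsilon)
    eta = (epsilon - delta) / (1.0 + 2.0 * delta)
    return v_nmo, v_hor, eta

# Illustrative values (assumed): vp0 = 2000 m/s, epsilon = 0.15, delta = 0.05.
print(kinematic_parameters(vp0=2000.0, epsilon=0.15, delta=0.05))
```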


2020 ◽  
pp. 1-22
Author(s):  
Luis E. Nieto-Barajas ◽  
Rodrigo S. Targino

ABSTRACT We propose a stochastic model for claims reserving that captures dependence along development years within a single triangle. This dependence is based on a gamma process with a moving average form of order $p \ge 0$, which is achieved through the use of Poisson latent variables. We carry out Bayesian inference on the model parameters and borrow strength across several triangles, coming from different lines of business or companies, through the use of hierarchical priors. We carry out a simulation study as well as a real data analysis. The results show that, for the real data set studied, reserve estimates are more accurate with our gamma dependence model than with the benchmark over-dispersed Poisson model, which assumes independence.
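The latent-Poisson device for building a dependent gamma sequence can be sketched in its simplest (order-1, Markov) form, which preserves a Ga(a, b) marginal while inducing positive serial correlation. The construction below is the standard gamma–Poisson latent-variable recursion and is offered as an assumed illustration; the paper's moving-average form of order p and the hierarchical reserving model are not reproduced here.

```python
import numpy as np

def gamma_poisson_sequence(a, b, c, n, rng):
    """Dependent gamma sequence built through Poisson latent variables:
    theta_1 ~ Ga(a, b); u_t | theta_t ~ Po(c * theta_t); theta_{t+1} | u_t ~ Ga(a + u_t, b + c).
    Each theta_t keeps a Ga(a, b) marginal, and c controls the strength of dependence."""
    theta = np.empty(n)
    theta[0] = rng.gamma(a, 1.0 / b)
    for t in range(n - 1):
        u = rng.poisson(c * theta[t])
        theta[t + 1] = rng.gamma(a + u, 1.0 / (b + c))
    return theta

rng = np.random.default_rng(3)
th = gamma_poisson_sequence(a=2.0, b=1.0, c=5.0, n=20000, rng=rng)
print("marginal mean (should be close to a/b = 2):", th.mean())
print("lag-1 autocorrelation:", np.corrcoef(th[:-1], th[1:])[0, 1])
```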


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
K. S. Sultan ◽  
A. S. Al-Moisheer

We discuss the two-component mixture of the inverse Weibull and lognormal distributions (MIWLND) as a lifetime model. First, we discuss the properties of the proposed model, including the reliability and hazard functions. Next, we discuss estimation of the model parameters by the method of maximum likelihood (MLEs) and derive expressions for the elements of the Fisher information matrix. We then demonstrate the usefulness of the proposed model by fitting it to a real data set. Finally, we draw some concluding remarks.
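A hedged sketch of maximum likelihood fitting for such a two-component mixture is given below, using SciPy's invweibull (Fréchet) and lognorm densities. The parameterization follows SciPy's shape/scale conventions and may differ from the paper's notation, the data are synthetic, and the Fisher information matrix derived in the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import invweibull, lognorm

def neg_log_lik(params, x):
    """Negative log-likelihood of a two-component mixture of an inverse Weibull (Frechet)
    and a lognormal density, with mixing weight p."""
    p, c, s_iw, sigma, s_ln = params
    if not (0 < p < 1) or min(c, s_iw, sigma, s_ln) <= 0:
        return np.inf
    pdf = (p * invweibull.pdf(x, c, scale=s_iw)
           + (1 - p) * lognorm.pdf(x, sigma, scale=s_ln))
    return -np.sum(np.log(pdf + 1e-300))

# Synthetic lifetimes drawn from such a mixture (illustrative, not the paper's real data).
rng = np.random.default_rng(5)
x = np.where(rng.random(800) < 0.4,
             invweibull.rvs(2.0, scale=1.0, size=800, random_state=rng),
             lognorm.rvs(0.5, scale=3.0, size=800, random_state=rng))
fit = minimize(neg_log_lik, x0=np.array([0.5, 1.5, 1.0, 0.7, 2.0]), args=(x,),
               method="Nelder-Mead", options={"maxiter": 5000})
print("MLEs (p, c, scale_iw, sigma, scale_ln):", fit.x)
```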


Processes ◽  
2018 ◽  
Vol 6 (8) ◽  
pp. 126 ◽  
Author(s):  
Lina Aboulmouna ◽  
Shakti Gupta ◽  
Mano Maurya ◽  
Frank DeVilbiss ◽  
Shankar Subramaniam ◽  
...  

The goal-oriented control policies of cybernetic models have been used to predict metabolic phenomena such as the behavior of gene knockout strains, complex substrate uptake patterns, and dynamic metabolic flux distributions. Cybernetic theory builds on the principle that metabolic regulation is driven towards attaining goals that correspond to an organism’s survival or to displaying a specific phenotype in response to a stimulus. Here, we have modeled prostaglandin (PG) metabolism in mouse bone-marrow-derived macrophage (BMDM) cells stimulated by Kdo2-Lipid A (KLA) and adenosine triphosphate (ATP), using cybernetic control variables. Prostaglandins are a well-characterized set of inflammatory lipids derived from arachidonic acid. The transcriptomic and lipidomic data for prostaglandin biosynthesis and conversion were obtained from the LIPID MAPS database. The model parameters were estimated using a two-step hybrid optimization approach: a genetic algorithm was used to determine a population of near-optimal parameter values, and a generalized constrained nonlinear optimization employing a gradient search method was used to further refine the parameters. We validated our model by predicting an independent data set, the prostaglandin response of KLA-primed, ATP-stimulated BMDM cells. We show that the cybernetic model captures the complex regulation of PG metabolism and provides a reliable description of PG formation.
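The two-step hybrid optimization described above can be sketched as a global evolutionary search followed by gradient-based refinement. In the sketch below, SciPy's differential_evolution stands in for the genetic algorithm and L-BFGS-B for the constrained gradient search; the placeholder model and synthetic data are illustrative only, whereas the actual objective is the misfit of the cybernetic ODE model against the LIPID MAPS measurements.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def model_misfit(params, t, observed):
    """Sum-of-squares misfit between measurements and a model prediction. The simple
    two-parameter exponential response below is a placeholder for the cybernetic ODE model."""
    k1, k2 = params
    predicted = k1 * (1.0 - np.exp(-k2 * t))
    return np.sum((observed - predicted) ** 2)

# Synthetic "data" (illustrative, not the LIPID MAPS measurements).
t = np.linspace(0, 10, 30)
rng = np.random.default_rng(11)
observed = 3.0 * (1.0 - np.exp(-0.8 * t)) + rng.normal(0, 0.05, t.size)

bounds = [(0.1, 10.0), (0.01, 5.0)]

# Step 1: population-based global search (the paper uses a genetic algorithm;
# differential evolution plays the same role here).
coarse = differential_evolution(model_misfit, bounds, args=(t, observed), seed=0)

# Step 2: gradient-based constrained refinement starting from the coarse solution.
refined = minimize(model_misfit, coarse.x, args=(t, observed),
                   method="L-BFGS-B", bounds=bounds)
print("coarse:", coarse.x, " refined:", refined.x)
```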

