Sensor Placement via Optimal Experiment Design in EMI Sensing of Metallic Objects

2016 ◽  
Vol 2016 ◽  
pp. 1-14
Author(s):  
Lin-Ping Song ◽  
Leonard R. Pasion ◽  
Nicolas Lhomme ◽  
Douglas W. Oldenburg

This work, under the optimal experimental design framework, investigates the sensor placement problem with the aim of guiding electromagnetic induction (EMI) sensing of multiple objects. We use the linearized model covariance matrix as a measure of estimation error to develop a sequential experimental design (SED) technique. The technique recursively minimizes data misfit to update model parameters and maximizes an information gain function for a future survey relative to previous surveys. The fundamental process of the SED is to increase weighted sensitivities to targets when placing sensors. Synthetic and field experiments demonstrate that SED can guide the sensing process toward an effective interrogation. It can also serve as a theoretical basis for improving empirical survey operations. We further study the sensitivity of the SED to the number of objects within the sensing range. The tests suggest that a model that appropriately overrepresents the expected anomalies may be a feasible choice.
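The core of such a sequential design step, choosing the next sensor location so as to maximize information gain under a linearized model, can be pictured as a greedy D-optimal update. The following is an illustrative reconstruction, not the authors' code; the sensitivity matrix `J`, the noise level and the candidate list are assumptions.

```python
import numpy as np

def greedy_sed(J, candidates, n_pick, sigma2=1.0, prior_prec=1e-6):
    """Greedy sequential design: repeatedly choose the candidate whose
    sensitivity row most increases the log-determinant of the accumulated
    Fisher information (a D-optimality measure of information gain)."""
    F = prior_prec * np.eye(J.shape[1])   # weak prior precision regularizes early steps
    chosen, remaining = [], list(candidates)
    for _ in range(n_pick):
        base = np.linalg.slogdet(F)[1]
        # information gain of each remaining candidate location
        gains = [np.linalg.slogdet(F + np.outer(J[c], J[c]) / sigma2)[1] - base
                 for c in remaining]
        best = remaining[int(np.argmax(gains))]
        F += np.outer(J[best], J[best]) / sigma2
        remaining.remove(best)
        chosen.append(best)
    return chosen
```

Each iteration adds the location whose weighted sensitivities contribute most to the accumulated information, which is one common surrogate for the information-gain function described above.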

2020 ◽  
Vol 53 (3) ◽  
pp. 800-810
Author(s):  
Frank Heinrich ◽  
Paul A. Kienzle ◽  
David P. Hoogerheide ◽  
Mathias Lösche

A framework is applied to quantify information gain from neutron or X-ray reflectometry experiments [Treece, Kienzle, Hoogerheide, Majkrzak, Lösche & Heinrich (2019). J. Appl. Cryst. 52, 47–59], in an in-depth investigation into the design of scattering contrast in biological and soft-matter surface architectures. To focus the experimental design on regions of interest, the marginalization of the information gain with respect to a subset of model parameters describing the structure is implemented. Surface architectures of increasing complexity from a simple model system to a protein–lipid membrane complex are simulated. The information gain from virtual surface scattering experiments is quantified as a function of the scattering length density of molecular components of the architecture and the surrounding aqueous bulk solvent. It is concluded that the information gain is mostly determined by the local scattering contrast of a feature of interest with its immediate molecular environment, and experimental design should primarily focus on this region. The overall signal-to-noise ratio of the measured reflectivity modulates the information gain globally and is a second factor to be taken into consideration.
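Under a Gaussian approximation, marginalizing the information gain over nuisance parameters reduces to comparing prior and posterior covariance sub-blocks for the parameters of interest. A minimal sketch (the Gaussian form and the index set `idx` are assumptions, not the paper's implementation):

```python
import numpy as np

def marginal_info_gain(prior_cov, post_cov, idx):
    """Information gain (nats) restricted to the parameters of interest:
    compare the entropies of the prior and posterior covariance sub-blocks
    indexed by idx, marginalizing out the remaining (nuisance) parameters."""
    Sp = prior_cov[np.ix_(idx, idx)]
    So = post_cov[np.ix_(idx, idx)]
    return 0.5 * (np.linalg.slogdet(Sp)[1] - np.linalg.slogdet(So)[1])
```

The sub-block comparison is what focuses the gain on a region of interest: parameters whose marginal uncertainty is unchanged contribute nothing.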


Author(s):  
Numa J. Bertola ◽  
Sai G. S. Pai ◽  
Ian F. C. Smith

Abstract. The management of existing civil infrastructure is challenging due to evolving functional requirements, aging and climate change. Civil infrastructure often has hidden reserve capacity because of conservative approaches used in design and during construction. Information collected through sensor measurements has the potential to improve knowledge of structural behavior, leading to better decisions related to asset management. In this situation, the design of the monitoring system is an important task, since it directly affects the quality of the information that is collected. Design of optimal measurement systems depends on the choice of behavior-model parameters to identify using monitoring data and on non-parametric uncertainty sources. A model that contains a representation of these parameters as variables is called a model class. Selection of the most appropriate model class is often difficult prior to acquisition of information regarding the structural behavior, and this leads to suboptimal sensor placement. This study presents strategies to efficiently design measurement systems when multiple model classes are plausible. The methodology supports the selection of a sensor configuration that provides significant information gain for each model class using a minimum number of sensors. A full-scale bridge, the Powder Mill Bridge (USA), and an illustrative beam example are used to compare methodologies. A modification of the hierarchical algorithm for sensor placement has led to configurations with fewer sensors than previously proposed strategies, without compromising information gain.
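One way to picture the multiple-model-class requirement is a greedy search on the worst-case cumulative gain over classes, so that no plausible class is left uninformed. This is a toy sketch, assuming per-sensor gains are additive, which the hierarchical algorithm in the study does not require:

```python
import numpy as np

def robust_greedy(gains, n_pick):
    """gains[k, c]: information gain of candidate sensor c under model class k
    (treated as additive across sensors in this toy). Greedily add the sensor
    that maximizes the worst-case cumulative gain over all plausible classes."""
    chosen, total = [], np.zeros(gains.shape[0])
    remaining = list(range(gains.shape[1]))
    for _ in range(n_pick):
        scores = [np.min(total + gains[:, c]) for c in remaining]
        best = remaining[int(np.argmax(scores))]
        total += gains[:, best]
        remaining.remove(best)
        chosen.append(best)
    return chosen
```

A sensor that is informative under only one class scores poorly here, which mirrors the goal of serving every plausible model class with few sensors.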


Author(s):  
Scott N. Walsh ◽  
Tim M. Wildey ◽  
John D. Jakeman

We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative partial differential equations (PDE)-based models.
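The push-forward matching at the heart of the consistent Bayesian approach can be illustrated with importance reweighting: weight prior samples by the ratio of the observed density to the push-forward density of the prior. A self-contained toy, in which the model `Q`, both densities and the bandwidth rule are illustrative assumptions:

```python
import numpy as np

def npdf(x, mu=0.0, s=1.0):
    # normal density, used both as KDE kernel and as the observed density
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
m = rng.normal(0.0, 1.0, 2000)              # samples from the prior on the parameter
Q = lambda x: x ** 3 + x                    # toy computational model (monotone)
q = Q(m)                                    # push-forward samples of the prior

h = 1.06 * q.std() * len(q) ** -0.2         # Silverman bandwidth for the KDE
push = lambda x: npdf((x[:, None] - q[None, :]) / h).mean(axis=1) / h

# consistent-Bayes weights: observed density over push-forward density
w = npdf(q, mu=2.0, s=0.5) / push(q)
w /= w.sum()
post = rng.choice(m, size=2000, p=w)        # posterior samples by resampling
```

By construction, pushing the posterior samples back through `Q` approximately reproduces the observed density centred at 2, which is the consistency property the abstract describes.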


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3400
Author(s):  
Tulay Ercan ◽  
Costas Papadimitriou

A framework for optimal sensor placement (OSP) for virtual sensing using the modal expansion technique and taking into account uncertainties is presented based on information and utility theory. The framework is developed to handle virtual sensing under output-only vibration measurements. The OSP maximizes a utility function that quantifies the expected information gained from the data for reducing the uncertainty of quantities of interest (QoI) predicted at the virtual sensing locations. The utility function is extended to make the OSP design robust to uncertainties in structural model and modeling error parameters, resulting in a multidimensional integral of the expected information gain over all possible values of the uncertain parameters and weighted by their assigned probability distributions. Approximate methods are used to compute the multidimensional integral and solve the optimization problem that arises. The Gaussian nature of the response QoI is exploited to derive useful and informative analytical expressions for the utility function. A thorough study of the effect of model, prediction and measurement errors and their uncertainties, as well as the prior uncertainties in the modal coordinates on the selection of the optimal sensor configuration is presented, highlighting the importance of accounting for robustness to errors and other uncertainties.
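The robust utility, i.e. the expected information gain averaged over samples of the uncertain model parameters, takes a closed log-determinant form when the response QoI are Gaussian. A schematic version, in which the standard-normal prior on the modal coordinates and the mode-shape samples `Phi_samples` are assumptions for illustration:

```python
import numpy as np

def robust_utility(sensor_set, Phi_samples, sigma2=0.01):
    """Expected information gain of a candidate sensor set, averaged over
    samples of the uncertain structural parameters; each sample supplies a
    mode-shape matrix Phi. A standard-normal prior on the modal coordinates
    gives the gain a closed log-determinant form."""
    u = 0.0
    for Phi in Phi_samples:
        Ps = Phi[sensor_set, :]                   # rows observed by this sensor set
        M = np.eye(Ps.shape[1]) + Ps.T @ Ps / sigma2
        u += 0.5 * np.linalg.slogdet(M)[1]        # gain for this parameter sample
    return u / len(Phi_samples)
```

Averaging over `Phi_samples` is the weighting over uncertain parameters that makes the design robust, at the price of the multidimensional integral mentioned in the abstract (here a plain Monte Carlo sum).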


2020 ◽  
pp. 136943322094719
Author(s):  
Xianrong Qin ◽  
Pengming Zhan ◽  
Chuanqiang Yu ◽  
Qing Zhang ◽  
Yuantao Sun

Optimal sensor placement is an important component of a reliable structural health monitoring system for a large-scale complex structure. However, current research mainly focuses on optimizing sensor placement for structures without any initial sensor layout. In some cases, experienced engineers will first determine key positions on the whole structure where sensors must be placed, that is, an initial sensor layout. Moreover, current genetic algorithms and partheno-genetic algorithms change the positions of the initial sensors during the iterative process, so they are not suited to optimal sensor placement problems based on an initial sensor layout. In this article, an optimal sensor placement method based on an initial sensor layout using an improved partheno-genetic algorithm is proposed. First, improved genetic operations of the partheno-genetic algorithm for sensor placement optimization with an initial sensor layout are presented, such as segmented swap, reverse and insert operators that avoid changing the initial sensor locations. Then, the objective function for the optimal sensor placement problem is formulated based on the modal assurance criterion, a modal energy criterion, and sensor placement cost. Finally, the effectiveness and reliability of the proposed method are validated with a numerical example of a quayside container crane. The sensor placement result obtained with the proposed method is better than that of the effective independence method without an initial sensor layout and of the traditional partheno-genetic algorithm.
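The key constraint, that genetic operators must never move the initial sensors, can be illustrated with a mutation that touches only the freely assignable genes. This is a hypothetical sketch, not the authors' segmented operators:

```python
import numpy as np

def mutate_free(layout, fixed, candidates, rng):
    """Mutation that respects an initial sensor layout: only the freely
    assignable positions may be swapped out; sensors in `fixed` never move."""
    free = [s for s in layout if s not in fixed]
    pool = [c for c in candidates if c not in layout]
    if free and pool:
        i = layout.index(free[rng.integers(len(free))])
        layout = layout.copy()
        layout[i] = pool[rng.integers(len(pool))]
    return layout
```

However many generations are run, the engineer-specified sensors survive every mutation, which is exactly the property the improved operators are designed to guarantee.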


2018 ◽  
Author(s):  
Adel Albaba ◽  
Massimiliano Schwarz ◽  
Corinna Wendeler ◽  
Bernard Loup ◽  
Luuk Dorren

Abstract. This paper presents a Discrete Element-based elasto-plastic-adhesive model which is adapted and tested for reproducing hillslope debris flows. The numerical model produces three phases of particle contact: elastic, plastic and adhesive. The model's ability to simulate different types of cohesive granular flows was tested over different ranges of flow velocities and heights. The basic model parameters, the basal friction (ϕb) and normal restitution coefficient (ϵn), were calibrated using field experiments of hillslope debris flows impacting two sensors. Simulations of 50 m3 of material were carried out on a channelized surface 41 m long and 8 m wide. The calibration process was based on measurements of flow height, flow velocity and the pressure applied to a sensor. Results of the numerical model matched the field data well in terms of pressure and flow velocity, while less agreement was observed for flow height. These discrepancies were due in part to the deposition of material in the field test, which is not reproduced in the model. A parametric study was conducted to further investigate the effect of model parameters and inclination angle on flow height, velocity and pressure. Best-fit model parameters for selected experimental tests suggested that a link might exist between the model parameters ϕb and ϵn and the initial conditions of the tested granular material (bulk density and water and fine contents). The good performance of the model against the full-scale field experiments encourages further investigation through lab-scale experiments with detailed variation of water and fine content to better understand their link to the model's parameters.


2000 ◽  
Vol 4 (3) ◽  
pp. 483-498 ◽  
Author(s):  
M. Franchini ◽  
A. M. Hashemi ◽  
P. E. O’Connell

Abstract. The sensitivity analysis described in Hashemi et al. (2000) is based on one-at-a-time perturbations to the model parameters. This type of analysis cannot highlight the presence of parameter interactions, which might indeed affect the characteristics of the flood frequency curve (ffc) even more than the individual parameters. For this reason, the effects of the parameters of the rainfall and rainfall-runoff models and of the potential evapotranspiration demand on the ffc are investigated here through an analysis of the results obtained from a factorial experimental design, where all the parameters are allowed to vary simultaneously. This latter, more complex, analysis confirms the results obtained in Hashemi et al. (2000), thus making the conclusions drawn there of wider validity and not related strictly to the reference set selected. However, it is shown that two-factor interactions are present not only between different pairs of parameters of an individual model, but also between pairs of parameters of different models, such as the rainfall and rainfall-runoff models, thus demonstrating the complex interaction between climate and basin characteristics affecting the ffc and in particular its curvature. Furthermore, the wider range of climatic regime behaviour produced within the factorial experimental design shows that the probability distribution of soil moisture content at the storm arrival time is no longer sufficient to explain the link between the perturbations to the parameters and their effects on the ffc, as was suggested in Hashemi et al. (2000). Other factors have to be considered, such as the probability distribution of the soil moisture capacity, and the rainfall regime, expressed through the annual maximum rainfalls over different durations.

Keywords: Monte Carlo simulation; factorial experimental design; analysis of variance (ANOVA)
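The factorial machinery invoked here is standard: in a two-level full factorial design, main effects and two-factor interactions are estimated by orthogonal contrasts. A compact illustration with a synthetic response (the coefficients are invented for the example):

```python
import numpy as np
from itertools import product

# Full 2^3 factorial design: every factor varied simultaneously at two
# coded levels (-1, +1), so interactions are estimable by orthogonal contrasts.
runs = np.array(list(product([-1, 1], repeat=3)), dtype=float)   # 8 runs x 3 factors
A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]
y = 2.0 * A + 1.0 * B + 1.5 * A * B        # synthetic response with an A x B interaction

main = runs.T @ y / len(y)                 # main-effect contrasts for A, B, C
ab = (A * B) @ y / len(y)                  # two-factor interaction contrast A x B
```

Because the contrast columns are mutually orthogonal, the interaction term is recovered exactly and independently of the main effects, which is what a one-at-a-time perturbation analysis cannot provide.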


2019 ◽  
Vol 102 ◽  
pp. 03006
Author(s):  
Oksana A. Grebneva

Both in Russia and abroad, there are works devoted to the problem of optimal placement of measuring devices, as evidenced by the current literature. The proposed methods are not universal, so they cannot be directly used for different types of pipeline systems. In addition, the developed algorithms do not guarantee a global solution. In this regard, there is a demand for solving the problem of optimal placement of measuring devices for pipeline systems. Not only the number and accuracy of measuring devices are important, but also their composition and placement locations. In this paper, a mathematical formulation of the problem of optimal placement of measuring devices is given and methods for its solution are proposed. A numerical example shows the effectiveness of the proposed method, which obtains a global solution in a previously known finite number of steps.


Energies ◽  
2019 ◽  
Vol 12 (7) ◽  
pp. 1242
Author(s):  
Jiangyi Lv ◽  
Hongwen He ◽  
Wei Liu ◽  
Yong Chen ◽  
Fengchun Sun

Accurate and reliable vehicle velocity estimation is greatly motivated by the increasing demands of high-precision motion control for autonomous vehicles and the decreasing cost of the required multi-axis IMU sensors. A practical estimation method for the longitudinal and lateral velocities of electric vehicles is proposed. Two reliable empirical driving judgements about the velocities are extracted from the signals of the ordinary onboard vehicle sensors, which correct the integration errors of the corresponding kinematic equations on a long timescale. Meanwhile, the additive biases of the measured accelerations are estimated recursively by comparing the integral of the measured accelerations with the difference of the estimated velocities between adjacent strong empirical-correction instants, which further compensates for the kinematic integration error on a short timescale. The algorithm is verified by both CarSim-Simulink co-simulation and a controller-in-the-loop test in the CarMaker-RoadBox environment. The results show that the velocities can be accurately and reliably estimated under a wide range of driving conditions without prior knowledge of the tire model, otherwise unavailable signals, or frequently changing model parameters. The relative estimation error of the longitudinal velocity and the absolute estimation error of the lateral velocity are kept within 2% and 0.5 km/h, respectively.
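The bias-compensation idea, comparing the integral of the measured acceleration with the velocity at sparse correction instants, can be sketched in a few lines. Here a synthetic signal and a known true velocity stand in for the paper's driving-based empirical judgements; everything below is an illustrative assumption:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
a_true = 0.5 * np.cos(t)                   # true longitudinal acceleration
v_true = 0.5 * np.sin(t)                   # true velocity, v(0) = 0
a_meas = a_true + 0.2                      # measurement with an additive bias of 0.2

b_hat, v_hat, t_anchor = 0.0, 0.0, 0.0
out = []
for k in range(len(t)):
    v_hat += (a_meas[k] - b_hat) * dt      # kinematic integration, bias removed
    if k % 100 == 0 and k > 0:             # sparse empirical-correction instants
        # drift accumulated since the last correction reveals the residual bias
        b_hat += (v_hat - v_true[k]) / (t[k] - t_anchor)
        v_hat, t_anchor = v_true[k], t[k]  # strong correction resets the velocity
    out.append(v_hat)
```

Between corrections the estimate is pure dead reckoning; each correction both resets the velocity and refines the bias estimate, so the integration drift stays bounded on the short timescale.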


1997 ◽  
Vol 43 (143) ◽  
pp. 180-191 ◽  
Author(s):  
E. M. Morris ◽  
H. -P. Bader ◽  
P. Weilenmann

Abstract. A physics-based snow model has been calibrated using data collected at Halley Bay, Antarctica, during the International Geophysical Year. Variations in snow temperature and density are well-simulated using values for the model parameters within the range reported from other polar field experiments. The effect of uncertainty in the parameter values on the accuracy of the predictions is no greater than the effect of instrumental error in the input data. Thus, this model can be used with parameters determined a priori rather than by optimization. The model has been validated using an independent data set from Halley Bay and then used to estimate 10 m temperatures on the Antarctic Peninsula plateau over the last half-century.

