Balancing aggregation and smoothing errors in inverse models

2015 ◽  
Vol 15 (12) ◽  
pp. 7039-7048 ◽  
Author(s):  
A. J. Turner ◽  
D. J. Jacob

Abstract. Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
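As a rough illustration of the third reduction method, the sketch below clusters native-resolution grid cells with a GMM and uses the posterior responsibilities as an RBF-like projection matrix. This is a hedged toy (synthetic grid, scikit-learn's `GaussianMixture` standing in for the paper's construction), not the authors' implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_native, n_reduced = 200, 8

# Synthetic native-resolution grid-cell coordinates (lon, lat).
coords = rng.uniform(0.0, 10.0, size=(n_native, 2))

# Fit a Gaussian mixture whose components act as reduced state vector elements.
gmm = GaussianMixture(n_components=n_reduced, random_state=0).fit(coords)

# Posterior responsibilities: probability that each native cell belongs to
# each Gaussian. Rows sum to one, so W acts as a smooth, RBF-like
# projection from native to reduced space.
W = gmm.predict_proba(coords)                 # shape (n_native, n_reduced)

# Project a native-resolution state vector (e.g. emissions) onto the
# reduced space as responsibility-weighted means per component.
x_native = rng.normal(size=n_native)
x_reduced = (W.T @ x_native) / W.sum(axis=0)  # shape (n_reduced,)

print(W.shape, x_reduced.shape)
```

The soft assignment is what distinguishes this from plain grid coarsening: a native cell can contribute to several reduced elements, which is how the GMM/RBF approach keeps resolution on strong local features.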



Author(s):  
V. V. Pichkur ◽  
D. A. Mazur ◽  
V. V. Sobchuk

The paper analyses the controllability of a linear discrete system whose state vector changes dimension. We give necessary and sufficient conditions for controllability and design a control that guarantees steering such a system to an arbitrary final state. This provides functional stability of technological processes described by linear discrete systems with change of state vector dimension.
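The paper's changing-dimension setting is not reproduced here, but the fixed-dimension baseline it generalises can be sketched with the Kalman rank condition (a minimal numpy example, not the authors' construction):

```python
import numpy as np

# For a constant-dimension linear discrete system x[k+1] = A x[k] + B u[k],
# controllability holds iff rank([B, AB, ..., A^{n-1} B]) = n.
def controllability_matrix(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # discrete double integrator
B = np.array([[0.0],
              [1.0]])

C = controllability_matrix(A, B)
controllable = np.linalg.matrix_rank(C) == A.shape[0]
print(controllable)  # True
```

In the paper's setting the matrices (and the state dimension itself) change between steps, so the rank test is applied to a more general reachability construction rather than this fixed matrix.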


Author(s):  
Arpan Mukherjee ◽  
Rahul Rai ◽  
Puneet Singla ◽  
Tarunraj Singh ◽  
Abani Patra

Engineering systems are often modeled as large-dimensional random processes with additive noise. The analysis of such systems involves solving a simultaneous system of Stochastic Differential Equations (SDEs). The exact solution to an SDE is given by the evolution of the probability density function (pdf) of the state vector through the application of stochastic calculus. The Fokker-Planck-Kolmogorov Equation (FPKE) provides an approximate solution to the SDE by giving the time evolution equation for the non-Gaussian pdf of the state vector. In this paper, we outline a computational framework that combines linearization, a clustering technique, and the Adaptive Gaussian Mixture Model (AGMM) methodology for solving the FPKE for a high-dimensional system. The linearization and clustering facilitate decomposition of the overall high-dimensional FPKE system into a finite number of much lower-dimensional FPKE systems, which makes the solution method faster. Numerical simulations test the efficacy of the developed framework.
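The underlying problem, tracking how a state pdf evolves under an SDE, can be sketched with plain Monte Carlo propagation. This is an illustration only, not the AGMM framework; the Ornstein-Uhlenbeck process and all parameters are assumptions:

```python
import numpy as np

# Propagate an ensemble through the scalar Ornstein-Uhlenbeck SDE
#   dx = -theta * x dt + sigma dW
# with Euler-Maruyama, and compare the empirical variance with the
# analytic steady-state variance sigma^2 / (2 * theta) that the state
# pdf approaches (the FPKE describes this pdf evolution exactly).
rng = np.random.default_rng(1)
theta, sigma = 1.0, 0.5
dt, n_steps, n_samples = 0.01, 1000, 20000

x = np.zeros(n_samples)          # all probability mass at x = 0 initially
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_samples)
    x = x - theta * x * dt + sigma * dW

var_empirical = float(x.var())
var_analytic = sigma**2 / (2 * theta)   # = 0.125
print(var_empirical, var_analytic)
```

For high-dimensional systems this brute-force ensemble becomes expensive, which is the motivation for the decomposition into lower-dimensional FPKE subproblems described above.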


2016 ◽  
pp. 4039-4042
Author(s):  
Viliam Malcher

The interpretation problems of quantum theory are considered. In the formalism of quantum theory, the possible states of a system are described by a state vector. The state vector, written |ψ> in Dirac notation, is the most general form of the quantum mechanical description. The central problem of the interpretation of quantum theory is to explain the physical significance of |ψ>. In this paper we show that one of the best ways to interpret the wave function is to treat it as an operator.


2019 ◽  
Vol 63 (1) ◽  
pp. 25-37
Author(s):  
Lidia Mierzejewska ◽  
Jerzy Parysek

Abstract The complexity of the reality studied by geographical research requires methods that describe both the current state of affairs and ongoing changes as well as possible. This study presents a model of research on selected aspects of the dynamics and structure of socio-economic development, asking whether the differences between cities in individual features are narrowing or widening. The article primarily pursues a methodological goal, and to a lesser extent an empirical one: it proposes and verifies a multi-aspect approach to the study of development processes. The analyses reveal that, for the features taken into account in the set of the 24 largest Polish cities, the dominant processes are those increasing differences between cities, which is unfavourable in the context of the adopted development policies aiming at reducing the existing disparities. In relation to the methodological objective, the results confirm the rationale of applying measures of dynamics and feature variance to determine the character (dynamics and structure) of the socio-economic development process of cities. Comparatively less effective, especially for interpretation, is the application of principal component analysis and multivariate classification, mainly because of differences in the variance of particular features.
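The variance-based convergence test behind "reducing or widening the differences" can be sketched with the coefficient of variation (the values below are synthetic, not the study's data on the 24 cities):

```python
import numpy as np

# If the relative dispersion (coefficient of variation) of a feature across
# cities grows between two dates, inter-city differences are widening; if it
# shrinks, they are narrowing (sigma-convergence).
values_t0 = np.array([10.0, 12.0, 11.0, 9.0, 13.0])   # feature at time t0
values_t1 = np.array([10.0, 15.0, 11.0, 7.0, 16.0])   # same feature at t1

def coeff_var(x):
    return float(x.std() / x.mean())

widening = coeff_var(values_t1) > coeff_var(values_t0)
print(widening)  # True: dispersion across the units increased
```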


2018 ◽  
Vol 15 (1) ◽  
pp. 12-22
Author(s):  
V. M. Artyushenko ◽  
D. Y. Vinogradov

The article reviews and analyses the class of geometrically stable orbits (GUO). Stability conditions are obtained in a geopotential model that takes the zonal harmonics into account. A sequence is given for calculating the state vector of a GUO as a function of the osculating argument of latitude, with known longitude of the ascending node, inclination, and semimajor axis. Simulation yields the altitude profiles of GUO relative to the all-Earth ellipsoid for models of the gravitational field of the Earth with 7 and 32 zonal harmonics.


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Chaochen Wang ◽  
Yuming Bo ◽  
Changhui Jiang

Global Positioning System (GPS) and strap-down inertial navigation systems (SINS) are recognized as highly complementary and are widely employed in the community. GPS has the advantage of providing precise navigation solutions without divergence, but GPS signals can be blocked and attenuated. SINS is a totally self-contained navigation system that is hardly disturbed. A GPS/SINS integration system can utilize the advantages of both GPS and SINS and provide more reliable navigation solutions. According to the data fusion strategy, GPS/SINS integrated systems can be divided into three modes: loose, tight, and ultratight integration (LI, TI, and UTC). In the loose integration mode, position and velocity differences from the GPS and SINS compose the measurement vector, whose dimension is independent of the number of available satellites. In the tight and ultratight modes, however, differences of pseudoranges and pseudorange rates from the GPS and SINS compose the measurement vector, whose dimension increases with the number of available satellites. In addition, compared with the loose integration mode, clock bias and drift are included in the integration state model. These two characteristics magnify the computation load of the tight and ultratight modes. In this paper, a new efficient filter model is proposed and evaluated, with two schemes for reducing the computation load. Firstly, differences between pseudorange measurements are formed, which excludes clock bias and drift from the integration state model and reduces the dimension of the state vector. Secondly, the integration filter is divided into two subfilters, a pseudorange subfilter and a pseudorange rate subfilter, and a federated filter is utilized to estimate the state errors optimally. In this second step, the two subfilters can run in parallel and the measurement vector is divided into two subvectors of lower dimension. A simulation implemented in MATLAB was conducted to evaluate the performance of the new efficient integration method in UTC. The simulation results show that the method reduces the computation load while leaving the navigation solutions almost unchanged.
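The clock-bias cancellation behind the first scheme can be sketched numerically. This is the generic single-difference idea with synthetic ranges, not the paper's full filter design:

```python
import numpy as np

# The receiver clock bias adds the same term c*dt to every pseudorange, so
# the difference between two satellites' pseudoranges is free of it, and the
# clock states can be dropped from the integration state vector.
c_dt = 150.0                                   # clock bias in metres (c * dt)
true_ranges = np.array([21e6, 23e6, 22.5e6])   # geometric ranges to 3 satellites
pseudoranges = true_ranges + c_dt              # measured pseudoranges

# Single-difference with respect to a reference satellite (index 0):
diff_meas = pseudoranges[1:] - pseudoranges[0]
diff_true = true_ranges[1:] - true_ranges[0]

print(np.allclose(diff_meas, diff_true))  # True: clock bias cancelled
```

The price of differencing is one fewer independent measurement and correlated measurement noise between the differences, which the filter's measurement covariance has to account for.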


2021 ◽  
Vol 13 (2) ◽  
pp. 223
Author(s):  
Zhenyang Hui ◽  
Shuanggen Jin ◽  
Dajun Li ◽  
Yao Yevenyo Ziggah ◽  
Bo Liu

Individual tree extraction is an important process for forest resource surveying and monitoring. To obtain more accurate individual tree extraction results, this paper proposes an individual tree extraction method based on transfer learning and Gaussian mixture model separation. Transfer learning is first adopted to classify trunk points, which can be used as clustering centers for initial tree segmentation. Subsequently, principal component analysis (PCA) transformation and kernel density estimation are used to determine the number of mixed components in the initial segmentation. Based on this number, Gaussian mixture model separation is applied to separate the canopy of each individual tree. Finally, the trunk stems corresponding to each canopy are extracted based on the vertical continuity principle. Six tree plots with different forest environments were used to test the performance of the proposed method. Experimental results show that the proposed method achieves 87.68% average correctness, much higher than that of the other two classical methods tested. In terms of completeness and mean accuracy, the proposed method also outperforms the other two methods.
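The canopy-separation step can be sketched with scikit-learn's `GaussianMixture` on synthetic 2-D points standing in for two overlapping canopies (a toy under stated assumptions, not the paper's LiDAR pipeline):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two synthetic "canopies" whose point clouds overlap in plan view.
rng = np.random.default_rng(2)
canopy_a = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(300, 2))
canopy_b = rng.normal(loc=[4.0, 0.5], scale=0.8, size=(300, 2))
points = np.vstack([canopy_a, canopy_b])

# Fit a two-component mixture (in the paper, the component count comes from
# PCA + kernel density estimation) and assign each point to a component.
gmm = GaussianMixture(n_components=2, random_state=0).fit(points)
labels = gmm.predict(points)

# The recovered means should sit near the true canopy centres.
means = gmm.means_[np.argsort(gmm.means_[:, 0])]
print(means.round(1))
```

Each recovered component then plays the role of one tree's canopy, to which a trunk is attached by the vertical continuity check.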


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4811
Author(s):  
Siavash Doshvarpassand ◽  
Xiangyu Wang

Utilising cooling stimulation as a thermal excitation means has demonstrated profound capabilities for detecting sub-surface metal loss using thermography. Previously, a prototype mechanism was introduced which accommodates a thermal camera and a cooling source and operates in a reciprocating motion, scanning the test piece while cold stimulation is in operation; immediately afterwards, the camera registers the thermal evolution. However, thermal reflections, non-uniform stimulation and lateral heat diffusion remain undesirable phenomena preventing effective observation of sub-surface defects. This becomes more challenging when there is no prior knowledge of the non-defective area with which to distinguish defective from non-defective areas. In this work, the previously automated acquisition and processing pipeline is re-designed and optimised for two purposes. (1) In the previous work, the pipeline analysed a specific area of the test piece surface in order to reconstruct the reference area and identify defects. To expand the application of the device to the entire test area, regardless of its extent, the pipeline is improved so that the final surface image is reconstructed from multiple segments of the test surface. The previously introduced pre-processing method of Dynamic Reference Reconstruction (DRR) is enhanced by a more rigorous thresholding procedure, and Principal Component Analysis (PCA) is then used for feature (DRR image) reduction. (2) The results of PCA on multiple segment images of the test surface revealed different ranges of intensities across each segment image, which could cause mistaken interpretation of the defective and non-defective areas. An automated segmentation method based on a Gaussian Mixture Model (GMM) is therefore used to assist the expert user in more effective detection of the defective areas, with the non-defective areas uniformly characterised as background. The final results of the GMM show not only the capability of accurately detecting subsurface metal loss as low as 37.5% but also the successful detection of defects that were unidentifiable or invisible in both the original thermal images and their PCA-transformed results.


2018 ◽  
Vol 34 (3) ◽  
pp. 33
Author(s):  
Francisco Dos Santos Panero ◽  
Maria de Fátima Pereira Vieira ◽  
Ângela Maria Paiva Cruz ◽  
Maria de Fátima Vitória De Moura ◽  
Henrique Eduardo Bezerra Da Silva

Samples of okra from Caruaru and Vitória de Santo Antão, in the State of Pernambuco, and from Ceará-Mirim, Macaíba and Extremoz, in the State of Rio Grande do Norte, were analysed. Two different methods were applied in the data treatment, allowing samples from different origins to be discriminated geographically: Principal Component Analysis - PCA and Hierarchical Cluster Analysis - HCA.
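A minimal sketch of the PCA + HCA workflow on synthetic stand-in data (numpy SVD for the PCA step, SciPy Ward linkage for the HCA step; the values and group structure are invented, not the okra measurements):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two synthetic "origins" with different mean composition levels.
rng = np.random.default_rng(4)
origin_a = rng.normal([1.0, 2.0, 0.5], 0.1, size=(6, 3))
origin_b = rng.normal([2.0, 1.0, 1.5], 0.1, size=(6, 3))
X = np.vstack([origin_a, origin_b])

# PCA via SVD on the mean-centred data; keep the first two components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                 # sample coordinates in PC space

# Hierarchical cluster analysis (Ward linkage) on the PCA scores,
# cut into two clusters that should recover the two origins.
groups = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(groups)
```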

