observation vector
Recently Published Documents


TOTAL DOCUMENTS: 39 (FIVE YEARS 12)

H-INDEX: 7 (FIVE YEARS 1)

2021 ◽  
Author(s):  
Ross Martyn Renner

Large compositional datasets of the kind assembled in the geosciences are often of remarkably low approximate rank. That is, within a tolerable error, data points representing the rows of such an array can be located approximately in a relatively low-dimensional subspace of the row space. A physical mixing process which would account for this phenomenon implies that each observation vector of an array can be estimated by a convex combination of a small number of fixed source or 'endmember' vectors. In practice, neither the compositions of the endmembers nor the coefficients of the convex combinations are known. Traditional methods for attempting to estimate some or all of these quantities have included Q-mode 'factor' analysis and linear programming; in general, neither method is successful. Some of the more important mathematical properties of a convex representation of compositional data are examined in this thesis, as well as the background to the development of algorithms for assessing the number of endmembers statistically, locating endmembers, and partitioning geological samples into specified endmembers.
Keywords and phrases: compositional data, convex sets, endmembers, partitioning by least squares, iteration, logratios.
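The keywords mention partitioning by least squares. As a rough illustration of what estimating the convex-combination coefficients of a single sample can look like once candidate endmembers are available, the following Python sketch solves a nonnegative least-squares problem with a softly enforced sum-to-one constraint; the function name, the constraint weight, and the use of SciPy's nnls are illustrative assumptions, not the algorithms developed in the thesis.

```python
import numpy as np
from scipy.optimize import nnls

def partition_sample(x, endmembers, w=1e3):
    """Estimate convex-combination coefficients of one observation vector
    given candidate endmember compositions (illustrative sketch only).

    x          : observation vector (composition of one sample), shape (p,)
    endmembers : matrix whose columns are the k endmember compositions, shape (p, k)
    w          : weight on the sum-to-one constraint row (assumed value)
    """
    p, k = endmembers.shape
    # Augment the least-squares system with a heavily weighted row that pushes
    # the nonnegative coefficients towards summing to one (convexity).
    A = np.vstack([endmembers, w * np.ones((1, k))])
    b = np.concatenate([x, [w]])
    coeffs, _ = nnls(A, b)
    return coeffs
```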




2021 ◽  
Vol 13 (21) ◽  
pp. 4317
Author(s):  
Peihui Yan ◽  
Jinguang Jiang ◽  
Fangning Zhang ◽  
Dongpeng Xie ◽  
Jiaji Wu ◽  
...  

To address GNSS receiver vulnerability in challenging urban environments and the low-power-consumption requirements of integrated navigation systems, an improved robust adaptive Kalman filter (IRAKF) algorithm with real-time performance and low computational complexity is proposed for a single-frequency GNSS/MEMS-IMU/odometer integrated navigation module. The algorithm obtains a scale factor from the prediction residual and uses it to adjust the preset covariance matrix of the observation vector under different GNSS solution states, so that the observation covariance matrix varies continuously with the complexity of the scene. An adaptive factor, computed from the Mahalanobis distance, then inflates the state prediction covariance matrix. In addition, a one-step-prediction Kalman filter is introduced to reduce the computational complexity of the algorithm. The performance of the algorithm is verified by vehicle experiments in challenging urban environments. The experiments show that the algorithm effectively weakens the effects of abnormal model deviations and outliers in the measurements and improves the positioning accuracy of real-time integrated navigation, meeting the requirements of low-power, real-time vehicle navigation applications in complex urban environments.
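As a rough illustration of the kind of robust adaptive measurement update the abstract describes, scaling the observation covariance from the prediction residual and inflating the state prediction covariance via a Mahalanobis-distance-based factor, the following Python sketch shows one generic update step. The function name, threshold choice, and factor formulas are assumptions for illustration, not the IRAKF equations from the paper.

```python
import numpy as np
from scipy.stats import chi2

def robust_adaptive_update(x_pred, P_pred, z, H, R0, alpha=0.05):
    """One robust adaptive Kalman measurement update (generic sketch).

    x_pred, P_pred : predicted state vector and its covariance
    z, H, R0       : observation vector, design matrix, nominal observation covariance
    alpha          : significance level of the Mahalanobis-distance test (assumed)
    """
    v = z - H @ x_pred                        # prediction residual (innovation)
    S = H @ P_pred @ H.T + R0                 # nominal innovation covariance
    d2 = float(v @ np.linalg.inv(S) @ v)      # squared Mahalanobis distance
    thresh = chi2.ppf(1.0 - alpha, df=len(v))

    # Robust step: a residual-based scale factor inflates the observation
    # covariance when the innovation is inconsistent with its covariance.
    s = max(1.0, d2 / thresh)
    R = s * R0

    # Adaptive step: inflate the state prediction covariance as well, so the
    # filter relies less on a possibly biased dynamic model. (In the paper the
    # two factors have their own, different forms.)
    P_infl = s * P_pred if d2 > thresh else P_pred

    # Standard Kalman update with the adjusted covariances.
    S_adj = H @ P_infl @ H.T + R
    K = P_infl @ H.T @ np.linalg.inv(S_adj)
    x_upd = x_pred + K @ v
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_infl
    return x_upd, P_upd
```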


Author(s):  
Edward K. Ngailo ◽  
Dietrich Von Rosen ◽  
Martin Singull

We propose asymptotic approximations for the probabilities of misclassification in linear discriminant analysis when the group means follow a growth curve structure. The discriminant function can classify a new observation vector of p repeated measurements into one of several multivariate normal populations with a common covariance matrix. We derive certain relations between the statistics under consideration in order to obtain asymptotic approximations of the misclassification errors for the two-group case. Finally, we perform Monte Carlo simulations to evaluate the reliability of the proposed results.
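For readers who want to see what such a Monte Carlo check of misclassification probabilities can look like in the two-group case, the following Python sketch uses a generic plug-in linear discriminant with a common covariance matrix; the growth curve structure of the means and the paper's asymptotic approximations are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Monte Carlo estimate of a two-group misclassification probability
# with a plug-in linear discriminant function and a common covariance matrix.
p, n1, n2, n_test = 4, 50, 50, 10_000
mu1, mu2 = np.zeros(p), np.full(p, 0.8)
Sigma = 0.5 * np.eye(p) + 0.5            # common covariance, equal correlations

def estimate_error():
    # Training samples from the two populations
    X1 = rng.multivariate_normal(mu1, Sigma, n1)
    X2 = rng.multivariate_normal(mu2, Sigma, n2)
    m1, m2 = X1.mean(0), X2.mean(0)
    S = ((n1 - 1) * np.cov(X1.T) + (n2 - 1) * np.cov(X2.T)) / (n1 + n2 - 2)
    S_inv = np.linalg.inv(S)

    # Plug-in linear discriminant: classify x to group 1 if W(x) > 0
    def W(x):
        return (m1 - m2) @ S_inv @ (x - 0.5 * (m1 + m2))

    # Misclassification probability for new observations truly from group 1
    X_new = rng.multivariate_normal(mu1, Sigma, n_test)
    return np.mean([W(x) <= 0 for x in X_new])

print("estimated P(misclassify group 1):", estimate_error())
```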


Measurement ◽  
2021 ◽  
pp. 109250
Author(s):  
Hongliang Zhang ◽  
Yilan Zhou ◽  
Tengchao Huang ◽  
Lei Wang

2021 ◽  
Vol 8 (55) ◽  
pp. 352-377
Author(s):  
Aneta Dzik-Walczak ◽  
Maciej Odziemczyk

Abstract The paper deals with modelling the probability of bankruptcy of Polish enterprises using convolutional neural networks. Convolutional networks take images as input, so it was necessary to apply a method of converting the observation vector into a matrix. The benchmarks for the convolutional networks were logit models, random forests, XGBoost, and dense neural networks. Hyperparameters and model architecture were selected based on a random search, analysis of learning curves, and experiments in folded, stratified cross-validation. In addition, the sensitivity of the results to data preprocessing was investigated. It was found that convolutional neural networks can be used to analyse cross-sectional tabular data, especially for the problem of modelling the probability of corporate bankruptcy. In order to achieve good results with models whose parameters are updated by a gradient (neural networks and logit), it is necessary to use appropriate preprocessing techniques. Models based on decision trees were shown to be insensitive to the data transformations used.
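The abstract does not specify the exact vector-to-matrix conversion, but a minimal sketch of the general idea, padding a tabular observation vector and reshaping it into a square single-channel "image" for a convolutional network, could look like the following; the helper name and the zero-padding scheme are assumptions.

```python
import numpy as np

def vector_to_image(x, side=None):
    """Convert a tabular observation vector into a square 2-D 'image' so it
    can be fed to a convolutional network (illustrative sketch; the paper's
    own conversion is not reproduced)."""
    x = np.asarray(x, dtype=float)
    side = side or int(np.ceil(np.sqrt(x.size)))   # next square that fits the vector
    padded = np.zeros(side * side)                 # zero-pad the remaining cells
    padded[: x.size] = x
    return padded.reshape(side, side)

# Example: 14 financial ratios become a 4x4 single-channel image
features = np.random.rand(14)
img = vector_to_image(features)
print(img.shape)   # (4, 4)
```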


Author(s):  
J. O. A. Limaverde Filho ◽  
E. L. F. Fortaleza ◽  
J. G. Silva ◽  
M. C. M. M. de Campos

Author(s):  
Haiyun Yao ◽  
Hong Shu ◽  
Hongxing Sun ◽  
B. G. Mousa ◽  
Zhenghang Jiao ◽  
...  

Abstract Indoor positioning navigation technologies have developed rapidly, but little effort has been expended on integrity monitoring in Pedestrian Dead Reckoning (PDR) and WiFi indoor positioning navigation systems. PDR accuracy drifts over time, while WiFi positioning accuracy decreases in complex indoor environments due to severe multipath propagation and interference with signals when people move about. In our research, we aimed to improve positioning quality with an integrity monitoring algorithm for a WiFi/PDR-integrated indoor positioning system based on the unscented Kalman filter (UKF). The integrity monitoring is divided into three phases. A test statistic based on the innovation of the UKF determines whether the positioning system is abnormal. Once a positioning system abnormality is detected, a robust UKF (RUKF) is triggered to achieve higher positioning accuracy. The innovation of the RUKF is then used to judge the outliers in the observations and identify positioning system faults. In the last integrity monitoring phase, users are alerted in time to reduce the risk from a positioning fault. We conducted a simulation to analyse the computational complexity of the integrity monitoring. The results showed that it does not substantially increase the overall computational complexity when the number of dimensions of the state vector and observation vector in the system is small (< 20); in practice, the dimensions of the state vector and observation vector in an indoor positioning system rarely exceed 20. The proposed integrity monitoring algorithm was tested in two field experiments, which showed that it is quite robust and yields higher positioning accuracy than the traditional method using only the UKF.
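A minimal sketch of an innovation-based consistency test of the kind described, flagging the positioning system as abnormal when the squared Mahalanobis distance of the filter innovation exceeds a chi-square threshold, is shown below; the function name, significance level, and exact test form are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import chi2

def innovation_test(z, z_pred, S, alpha=0.01):
    """Chi-square consistency test on the filter innovation (sketch).

    z, z_pred : observation vector and its filter-predicted value
    S         : innovation covariance from the UKF update
    alpha     : significance level of the test (assumed value)

    Returns True if the positioning system should be flagged as abnormal.
    """
    v = z - z_pred                                  # innovation
    stat = float(v @ np.linalg.inv(S) @ v)          # squared Mahalanobis distance
    threshold = chi2.ppf(1.0 - alpha, df=len(v))    # chi-square quantile
    return stat > threshold
```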


2020 ◽  
Author(s):  
Petro Abrykosov ◽  
Roland Pail ◽  
Thomas Gruber

The GRACE-FO mission's primary goal is the precise mapping of variations within the Earth's hydrology by observing changes in gravitational attraction. This particular signal, however, is widely superimposed by other signals of higher amplitude and shorter time scale, such as mass redistributions within the atmosphere and ocean (AO). The temporal aliasing induced in this way is treated by reducing tidal background models during the data processing. The residual signal caused by imperfections of the background models has so far been left untreated.

We present the DMD approach for further treatment of the residual AO signal, which is somewhat similar to the so-called Wiese parametrization proposed for multi-pair missions (Wiese et al., 2011). Contrary to Wiese, however, the DMD directly affects the right-hand side of the least-squares adjustment. The method consists of (a) estimating short-term low-degree fields based on the full observation vector in a first step, (b) reducing the observation vector by the observations computed from these short-term solutions (thus effectively applying an entirely data-based de-aliasing model), and finally (c) estimating a high-degree field over the entire observation period (e.g. a month). Resubstituting the mean of (a) into (c) then yields a gravity solution for the observation interval. The functionality of the DMD is verified by simulations; it allows for an improved retrieval of the residual signal above the maximum degree of the short-term fields over the observation interval. In order to counteract potential over-estimation within the low-degree part of the solution, additional conditions are introduced and discussed.
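Stripped of the spherical-harmonic parametrization and the additional constraints, the two-step logic of steps (a) to (c) can be sketched as a generic least-squares procedure; the following Python fragment only illustrates that structure, and its names and interfaces are assumptions rather than the actual GRACE-FO processing software.

```python
import numpy as np

def dmd_two_step(A_low, A_high, y, segments):
    """Generic sketch of the two-step estimation described above.

    A_low, A_high : design matrices for the low- and high-degree parameters
    y             : full observation vector over the whole interval
    segments      : list of index arrays, one per short-term sub-interval
    """
    y_reduced = np.array(y, dtype=float)
    low_solutions = []
    # (a) short-term low-degree solutions, (b) reduce the observations by them
    for idx in segments:
        x_low, *_ = np.linalg.lstsq(A_low[idx], y_reduced[idx], rcond=None)
        low_solutions.append(x_low)
        y_reduced[idx] = y_reduced[idx] - A_low[idx] @ x_low
    # (c) one high-degree solution over the entire observation period
    x_high, *_ = np.linalg.lstsq(A_high, y_reduced, rcond=None)
    # Resubstitute the mean of the short-term solutions for the final product
    x_low_mean = np.mean(low_solutions, axis=0)
    return x_low_mean, x_high
```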


Author(s):  
Marcelo Tomio Matsuoka ◽  
Vinicius Francisco Rofatto ◽  
Ivandro Klein ◽  
Mauricio Roberto Veronez ◽  
Luiz Gonzaga Da Silveira ◽  
...  

Geodetic networks are essential for most geodetic, geodynamic and civil projects, such as monitoring the position and deformation of man-made structures, monitoring the crustal deformation of the Earth, establishing and maintaining a geospatial reference frame, mapping, and civil engineering projects. Before the installation of geodetic marks and the gathering of survey data, geodetic networks need to be designed according to pre-established quality criteria. In this study, we present a method for designing geodetic networks based on the concept of reliability. We highlight that the method discards the use of the observation vector of the Gauss-Markov model; the only inputs needed are the geometrical network configuration and the uncertainties of the observations. The aim of the proposed method is to find the optimum configuration of the geodetic control points so that the maximum influence of an outlier on the coordinates of the network is minimized. Here, the Minimal Detectable Bias defines the size of the outlier, and its propagation onto the parameters is used to describe the external reliability. The proposed method is demonstrated by practical application to a simulated levelling network. We highlight that the method can be applied not only to geodetic network problems, but also in any branch of modern science.
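As background for the reliability quantities the abstract refers to, the standard redundancy numbers, Minimal Detectable Biases, and their propagation onto the parameters (external reliability) can be sketched as follows; the sketch assumes a full-rank design matrix (datum defined) and a commonly used non-centrality parameter, and it does not reproduce the paper's optimization of the network configuration.

```python
import numpy as np

def reliability_measures(A, sigma, lambda0=17.07):
    """Internal and external reliability measures of a network design (sketch).

    A       : design matrix of the (outlier-free) Gauss-Markov model, shape (m, n)
    sigma   : standard deviations of the m observations
    lambda0 : non-centrality parameter (17.07 roughly corresponds to
              alpha = 0.1% and 80% power; assumed here)
    """
    W = np.diag(1.0 / sigma**2)                     # weight matrix
    N_inv = np.linalg.inv(A.T @ W @ A)              # cofactor matrix of parameters
    Qv = np.linalg.inv(W) - A @ N_inv @ A.T         # cofactor matrix of residuals
    r = np.diag(Qv @ W)                             # redundancy numbers
    mdb = sigma * np.sqrt(lambda0 / r)              # Minimal Detectable Bias per observation

    # External reliability: influence of an MDB-sized outlier in observation i
    # on the estimated parameters.
    m = len(sigma)
    ext = np.array([N_inv @ A.T @ W @ (mdb[i] * np.eye(m)[:, i]) for i in range(m)])
    return r, mdb, ext
```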

