Elimination of some unknown parameters and its effect on outlier detection

2012 ◽  
Vol 18 (3) ◽  
pp. 347-362 ◽  
Author(s):  
Serif Hekimoglu ◽  
Bahattin Erdogan ◽  
Nursu Tunalioglu

Outliers in an observation set adversely affect all the estimated unknown parameters and residuals, which is why outlier detection is of great importance for reliable estimation results. Tests for outliers (e.g., Baarda's and Pope's tests) are frequently used to detect outliers in geodetic applications. To reduce computational time, some unknown parameters that are not of interest are sometimes eliminated. In this case, although the estimated unknown parameters and residuals do not change, the cofactor matrix of the residuals and the redundancies of the observations do change. In this study, the effects of eliminating unknown parameters on tests for outliers have been investigated. We have proved that the redundancies in the initial functional model (IFM) are smaller than those in the reduced functional model (RFM) where elimination is performed. To demonstrate this, a horizontal control network was simulated and many experiments were performed. According to the simulation results, tests for outliers in the IFM are more reliable than those in the RFM.
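The redundancies discussed in this abstract are the redundancy numbers of the Gauss-Markov model, computable from the design matrix, the weight matrix and the residual cofactor matrix. A minimal numerical sketch (toy matrices, not the paper's network) of how they are obtained:

```python
import numpy as np

def redundancy_numbers(A, P):
    """Redundancy numbers r_i = (Q_vv P)_ii for a Gauss-Markov model.

    A : n x u design matrix, P : n x n weight matrix.
    The r_i sum to the total redundancy n - u; a small r_i means an
    outlier on observation i is hard to detect with Baarda/Pope tests.
    """
    N = A.T @ P @ A                                        # normal matrix
    Q_vv = np.linalg.inv(P) - A @ np.linalg.solve(N, A.T)  # residual cofactors
    return np.diag(Q_vv @ P)

# toy example: 4 observations, 2 unknowns, unit weights
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [1.0, 1.0]])
P = np.eye(4)
r = redundancy_numbers(A, P)
print(r, r.sum())  # the r_i sum to n - u = 2
```

Eliminating nuisance parameters changes A (and hence Q_vv and the r_i) even though the estimated parameters and residuals are unchanged, which is exactly the effect the paper investigates.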

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4566
Author(s):  
Dominik Prochniewicz ◽  
Kinga Wezka ◽  
Joanna Kozuchowska

The stochastic model, together with the functional model, form the mathematical model of observation that enables the estimation of the unknown parameters. In Global Navigation Satellite Systems (GNSS), the stochastic model is an especially important element as it affects not only the accuracy of the positioning model solution, but also the reliability of the carrier-phase ambiguity resolution (AR). In this paper, we study in detail the stochastic modeling problem for Multi-GNSS positioning models, for which the standard approach used so far was to adopt stochastic parameters from the Global Positioning System (GPS). The aim of this work is to develop an individual, empirical stochastic model for each signal and each satellite block for the GPS, GLONASS, Galileo and BeiDou systems. The realistic stochastic model is created in the form of a fully populated variance-covariance (VC) matrix that takes into account, in addition to the Carrier-to-Noise density Ratio (C/N0)-dependent variance function, also the cross- and time-correlations between the observations. Weekly measurements from a zero-length and a very short baseline are utilized to derive the stochastic parameters. The impact on the AR and solution accuracy is analyzed for different positioning scenarios using a modified Kalman Filter. Comparing the positioning results obtained for the created model with the results for the standard elevation-dependent model allows us to conclude that the individual empirical stochastic model increases the accuracy of the positioning solution and the efficiency of AR.
The optimal solution is achieved for the four-system Multi-GNSS solution using a fully populated empirical model individual to each satellite block, which provides a 2% increase in the effectiveness of the AR (up to 100%), a 37% increase in the number of solutions with errors below 5 mm and a 6 mm reduction in the maximum error compared to the Multi-GNSS solution using the elevation-dependent model with neglected measurement correlations.
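To make the two competing weighting schemes concrete, here is a minimal sketch of a standard elevation-dependent variance function, a generic C/N0-dependent one, and a fully populated VC matrix with exponential time correlation. The coefficients and the correlation form are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def elevation_variance(elev_deg, a=0.003, b=0.003):
    """Standard elevation-dependent model: sigma^2 = a^2 + b^2 / sin^2(E)."""
    e = np.radians(elev_deg)
    return a**2 + b**2 / np.sin(e)**2

def cn0_variance(cn0_dbhz, C=0.00224):
    """Generic C/N0-dependent model: sigma^2 = C * 10^(-C/N0 / 10).
    The coefficient C is a placeholder, not an empirically derived value."""
    return C * 10.0 ** (-cn0_dbhz / 10.0)

def populated_vc(sigma2, epochs, T=30.0):
    """Fully populated VC matrix: variances on the diagonal plus an
    assumed exponential time correlation exp(-|dt| / T) off the diagonal."""
    s = np.sqrt(np.asarray(sigma2, float))
    lag = np.abs(np.subtract.outer(epochs, epochs))
    return np.outer(s, s) * np.exp(-lag / T)

sig = elevation_variance(np.array([30.0, 60.0, 90.0]))
Q = populated_vc(sig, np.array([0.0, 1.0, 2.0]))
```

Neglecting the off-diagonal terms of `Q` recovers the standard diagonal (correlation-free) model that the paper uses as its baseline.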


2016 ◽  
Vol 680 ◽  
pp. 82-85
Author(s):  
Jian Cai ◽  
Lan Chen ◽  
Umezuruike Linus Opara

OBJECTIVE To investigate the influence of mesh type on the numerical simulation of the dispersion performance of micro-powders through a home-made tube. METHODS Using the computational fluid dynamics (CFD) method, a powder dispersion tube was meshed with three different mesh types, namely tetrahedral, unstructured hexahedral and prismatic-tetrahedral hybrid meshes. The inner flow field and the kinetic characteristics of the particles were investigated, and the numerical simulation results were compared with literature evidence. RESULTS The tetrahedral mesh had the highest computational efficiency, while the unstructured hexahedral mesh yielded a more accurate outlet velocity. The simulated inner flow field and particle kinetic characteristics differed slightly among the three mesh types. The particle velocity calculated with the tetrahedral mesh correlated best with the trend of the fine particle mass in the first four stages of the next generation impactor (NGI) (R2 = 0.91 and 0.89 for powders A and B, respectively). CONCLUSIONS Mesh type affected the computational time, the accuracy of the simulation results and the prediction of fine particle deposition.
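The R2 values quoted above are coefficients of determination between simulated particle velocity and measured fine-particle mass. A minimal sketch of that computation (the sample values below are illustrative, not the paper's data):

```python
import numpy as np

def r_squared(y_obs, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    y_obs = np.asarray(y_obs, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# illustrative stage-by-stage values (4 NGI stages)
measured_mass = [3.1, 2.4, 1.6, 0.9]
simulated_trend = [3.0, 2.5, 1.5, 1.0]
print(r_squared(measured_mass, simulated_trend))
```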


Author(s):  
Pooja Verma

Integration procedures are employed to extend computing networks and their application domains. Extensive studies on integrating MANETs with the Internet have addressed the various challenges of such integration. Even well-designed mechanisms can fail in the presence of a malicious node or other problems such as data alteration and eavesdropping. The focus of this chapter is the design and discovery of a secure gateway scheme in MANETs employing trust-based security factors such as route trust and load ability. On top of these, elliptic curve cryptography is applied to achieve confidentiality, integrity and authentication while selecting an optimum gateway node that requires less bandwidth and key storage space and offers faster computation. Simulation of the security protocol through SPAN for the AVISPA tool has shown encouraging results over two model checkers, namely OFMC and CL-AtSe.
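The gateway-selection idea — rank candidate gateways by trust-based factors such as route trust and load — can be sketched as a simple scoring rule. The field names and weights below are hypothetical illustrations, not the chapter's actual scheme:

```python
def select_gateway(candidates, w_trust=0.6, w_load=0.4):
    """Pick the gateway node with the best combined trust/load score.
    Weights and fields are illustrative, not taken from the chapter."""
    def score(node):
        # higher route trust is better, higher load is worse
        return w_trust * node["route_trust"] - w_load * node["load"]
    return max(candidates, key=score)

nodes = [
    {"id": "n1", "route_trust": 0.9, "load": 0.7},
    {"id": "n2", "route_trust": 0.8, "load": 0.2},
]
best = select_gateway(nodes)
print(best["id"])  # n2: slightly lower trust, but much lower load
```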


Energies ◽  
2019 ◽  
Vol 12 (5) ◽  
pp. 866 ◽  
Author(s):  
Aqdas Naz ◽  
Muhammad Javed ◽  
Nadeem Javaid ◽  
Tanzila Saba ◽  
Musaed Alhussein ◽  
...  

A Smart Grid (SG) is a modernized grid that provides efficient, reliable and economic energy to consumers. Energy is the most important resource in the world, and efficient energy distribution is required as smart devices increase dramatically. The forecasting of electricity consumption is a major constituent in enhancing the performance of an SG. Various learning algorithms have been proposed to solve the forecasting problem. The sole purpose of this work is to predict price and load efficiently. The first technique is Enhanced Logistic Regression (ELR) and the second is Enhanced Recurrent Extreme Learning Machine (ERELM). ELR is an enhanced form of Logistic Regression (LR), whereas ERELM optimizes weights and biases using a Grey Wolf Optimizer (GWO). Classification and Regression Tree (CART), Relief-F and Recursive Feature Elimination (RFE) are used for feature selection and extraction. On the basis of the selected features, classification is performed using ELR. Cross validation is done for ERELM using the Monte Carlo and K-Fold methods. The simulations are performed on two different datasets. The first dataset, the UMass Electric Dataset, is multi-variate, while the second, the UCI Dataset, is uni-variate. The first proposed model performed better with the UMass Electric Dataset than with the UCI Dataset, while the accuracy of the second model is better with UCI than with UMass. The prediction accuracy is analyzed on the basis of four performance metrics: Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), Mean Square Error (MSE) and Root Mean Square Error (RMSE). The proposed techniques are then compared with four benchmark schemes to verify their adaptivity. The simulation results show that the proposed techniques outperformed the benchmark schemes and efficiently increased the prediction accuracy of load and price.
However, the computational time increased in both scenarios. ELR achieved almost 5% better results than a Convolutional Neural Network (CNN) and almost 3% better than LR, while ERELM achieved almost 6% better results than ELM and almost 5% better than RELM. The computational time increased by almost 20% with ELR and 50% with ERELM. Scalability is also addressed for the proposed techniques using half-yearly and yearly datasets. Simulation results show that ELR gives 5% better results and ERELM 6% better results when used on the yearly dataset.
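The four error metrics used to evaluate the forecasts are standard and can be sketched directly (the sample values below are illustrative, not the paper's results):

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """The four metrics named in the abstract: MAPE, MAE, MSE, RMSE."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / y_true))  # assumes y_true != 0
    return {"MAPE": mape, "MAE": mae, "MSE": mse, "RMSE": rmse}

m = forecast_metrics([100.0, 200.0], [110.0, 190.0])
print(m)  # MAE 10.0, MSE 100.0, RMSE 10.0, MAPE 7.5
```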


2008 ◽  
Vol 22 (24) ◽  
pp. 4175-4188 ◽  
Author(s):  
YANG TANG ◽  
JIAN-AN FANG ◽  
LIANG CHEN

In this paper, a simple and systematic adaptive feedback method is presented for achieving lag projective synchronization of a new stochastically perturbed four-wing chaotic system with unknown parameters. Moreover, a secure communication scheme based on this adaptive feedback lag projective synchronization is presented. The simulation results show the feasibility of the proposed method.
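Lag projective synchronization means driving a response system y(t) to track a scaled, delayed copy alpha * x(t - tau) of the drive. The sketch below is a generic active-control scheme with an adaptive feedback gain, using a standard Lorenz system as a stand-in; the paper's four-wing system, stochastic perturbation and exact control law are not reproduced here:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # standard chaotic drive system standing in for the four-wing system
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps, tau_steps = 1e-3, 20000, 200   # tau = 0.2 s lag
alpha, gamma = 2.0, 5.0                   # projective factor, adaptation rate
drive = np.array([1.0, 1.0, 1.0])
resp = np.array([-3.0, 2.0, 5.0])
k = 0.0                                   # adaptive feedback gain
hist = [drive.copy()] * (tau_steps + 1)   # delay buffer holding x(t - tau)

for _ in range(steps):
    delayed = hist[0]
    e = resp - alpha * delayed                            # sync error
    u = alpha * lorenz(delayed) - lorenz(resp) - k * e    # active control
    drive = drive + dt * lorenz(drive)                    # Euler steps
    resp = resp + dt * (lorenz(resp) + u)
    k += dt * gamma * float(e @ e)                        # adaptive gain law
    hist.append(drive.copy())
    hist.pop(0)

final_err = float(np.linalg.norm(resp - alpha * hist[0]))
print(final_err)  # error e(t) decays toward zero
```

The gain k grows while the error is large and stops adapting once synchronization is reached, which is the usual behavior of such adaptive feedback laws.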


Author(s):  
Fu Zhang ◽  
Ehsan Keikha ◽  
Behrooz Shahsavari ◽  
Roberto Horowitz

This paper presents an online adaptive algorithm to compensate damping and stiffness frequency mismatches in rate integrating Coriolis Vibratory Gyroscopes (CVGs). The proposed adaptive compensator consists of a least-squares estimator that estimates the damping and frequency mismatches, and an online compensator that corrects the mismatches. In order to improve the adaptive compensator's convergence rate, we introduce a calibration phase in which we identify relations between the unknown parameters (i.e., the mismatches, rotation rate and rotation angle). Calibration results show that the unknown parameters lie on a hyperplane. When the gyro is in operation, we project the parameters estimated by the least-squares estimator onto this hyperplane. The projection reduces the degrees of freedom in the parameter estimates, thus guaranteeing persistence of excitation and improving the convergence rate. Simulation results show that the projection method drastically improves the convergence rate of the least-squares estimator and improves gyro performance.
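The key step — projecting a least-squares estimate onto a calibrated hyperplane — has a closed form. A minimal sketch with a synthetic linear model (the matrices and the hyperplane below are illustrative, not the gyro's):

```python
import numpy as np

def project_onto_hyperplane(theta, n, c):
    """Orthogonal projection of theta onto the hyperplane {x : n.x = c}."""
    n = np.asarray(n, float)
    return theta - (n @ theta - c) / (n @ n) * n

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))                      # synthetic regressors
theta_true = np.array([1.0, 2.0, -1.0])
y = A @ theta_true + 0.01 * rng.normal(size=50)   # noisy observations

theta_ls = np.linalg.lstsq(A, y, rcond=None)[0]   # least-squares estimate

# "calibration" says the true parameters satisfy n.theta = c
n, c = np.array([1.0, 1.0, 1.0]), 2.0             # theta_true lies on it
theta_proj = project_onto_hyperplane(theta_ls, n, c)
```

Because the true parameter vector lies on the hyperplane, the projected estimate is never farther from it than the raw least-squares estimate, which is the intuition behind the improved convergence the paper reports.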


2020 ◽  
pp. 002029402091986
Author(s):  
Xiaocui Yuan ◽  
Huawei Chen ◽  
Baoling Liu

Clustering analysis is one of the most important techniques in point cloud processing, such as registration, segmentation, and outlier detection. However, most existing clustering algorithms exhibit low computational efficiency and a high demand for computational resources, especially for large data. Moreover, clusters and outliers are sometimes inseparable, especially in point clouds contaminated with outliers, and most cluster-based algorithms can identify cluster outliers well, but not sparse outliers. We develop a novel clustering method, called spatial neighborhood connected region labeling. The method defines a spatial connectivity criterion, finds point connections satisfying that criterion within the k-nearest neighborhood region, and assigns connected points to the same cluster. Our method can accurately and quickly classify datasets using only one parameter, k. Compared with k-means, hierarchical clustering and density-based spatial clustering of applications with noise (DBSCAN), our method provides better accuracy with less computational time. For outlier detection in point clouds, our method can identify not only cluster outliers but also sparse outliers, achieving more accurate detection results than state-of-the-art outlier detection methods such as local outlier factor and DBSCAN.
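The core idea — connect each point to its k nearest neighbors under a spatial criterion and label connected regions — can be sketched with a breadth-first search. The distance threshold used as the connectivity criterion here is an illustrative stand-in for the paper's criterion:

```python
import numpy as np
from collections import deque

def knn_connected_labeling(points, k=3, d_max=1.0):
    """Label connected regions: i and j are connected when j is among the
    k nearest neighbours of i and within d_max (illustrative criterion)."""
    pts = np.asarray(points, float)
    n = len(pts)
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:     # skip the point itself
            if D[i, j] <= d_max:                # spatial connectivity criterion
                adj[i].add(int(j))
                adj[int(j)].add(i)              # connections are symmetric
    labels, cur = [-1] * n, 0
    for s in range(n):                          # BFS over each unlabeled region
        if labels[s] != -1:
            continue
        q = deque([s])
        labels[s] = cur
        while q:
            u = q.popleft()
            for v in adj[u]:
                if labels[v] == -1:
                    labels[v] = cur
                    q.append(v)
        cur += 1
    return labels

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (10, 0)]
labels = knn_connected_labeling(pts, k=2, d_max=1.0)
print(labels)  # two clusters plus a singleton label for the sparse outlier
```

A point with no connections ends up as its own singleton region, which is how sparse outliers fall out of the labeling for free.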


2019 ◽  
Vol 16 (07) ◽  
pp. 1950050
Author(s):  
Adarsh Anand ◽  
Richie Aggarwal ◽  
Ompal Singh

With the purpose of understanding the differing shapes of sales curves (unimodal and bimodal), this paper discusses a naive way of viewing the diffusion process for consumer durables. A step functional model involving a two-step Weibull distribution with four unknown parameters is characterized, wherein the shape of the model's density function depends upon the shape and scale parameters of the Weibull distribution. Empirical analysis on real-life sales datasets indicates that the Weibull step function model is more flexible and fits better than the other models.
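One way such a two-step Weibull model can produce a bimodal sales curve is to superimpose a second Weibull phase that switches on at a later step time. The functional form below is an illustrative sketch, not the paper's exact parameterization:

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull CDF: F(t) = 1 - exp(-(t / scale)^shape), t >= 0."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def two_step_sales(t, m1, m2, s1, sc1, s2, sc2, t_step):
    """Cumulative sales: a first Weibull adoption phase of size m1, plus a
    second phase of size m2 that switches on at t_step (illustrative form)."""
    S = m1 * weibull_cdf(t, s1, sc1)
    if t > t_step:
        S += m2 * weibull_cdf(t - t_step, s2, sc2)
    return S

# cumulative sales rise through two waves and saturate at m1 + m2
for t in (1.0, 3.0, 6.0, 12.0):
    print(t, two_step_sales(t, 100.0, 80.0, 2.0, 2.0, 2.0, 3.0, 4.0))
```

Because each phase contributes its own peak to the density, the curve is unimodal when the phases overlap heavily and bimodal when `t_step` separates them.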


Water ◽  
2018 ◽  
Vol 10 (9) ◽  
pp. 1269 ◽  
Author(s):  
Yun Choi ◽  
Mun-Ju Shin ◽  
Kyung Kim

The choice of the computational time step (dt) value and the method for setting dt can have a bearing on the accuracy and performance of a simulation, and this effect has not been comprehensively researched across different simulation conditions. In this study, the effects of the fixed time step (FTS) method and the automatic time step (ATS) method on the simulated runoff of a distributed rainfall–runoff model were compared. The results revealed that the ATS method had less peak flow variability than the FTS method for the virtual catchment. In the FTS method, the difference in time step had more impact on the runoff simulation results than other factors such as differences in the amount of rainfall, the density of the stream network, or the spatial resolution of the input data. Different optimal parameter values according to the computational time step were found when FTS and ATS were used in a real catchment, and the changes in the optimal parameter values were smaller in ATS than in FTS. The results of our analyses can help researchers obtain reliable runoff simulations.
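An automatic time step method typically shrinks dt when the flow is fast and relaxes it toward a maximum otherwise. A minimal sketch of such a rule based on the Courant condition (a generic scheme, not the paper's exact algorithm):

```python
def next_time_step(velocity, dx, dt_max, courant=0.5):
    """ATS-style step selection: cap dt by the Courant condition
    dt <= C * dx / v, falling back to dt_max for stagnant flow."""
    if velocity <= 0.0:
        return dt_max
    return min(dt_max, courant * dx / velocity)

# fast flow forces a small step; slow flow keeps the maximum step
print(next_time_step(2.0, 1.0, 10.0))   # 0.25
print(next_time_step(0.01, 1.0, 10.0))  # 10.0
```

An FTS run simply uses `dt_max` everywhere, which is why its results are more sensitive to the chosen step size, as the study reports.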

