How to account for temporal correlations with a diagonal correlation model in a nonlinear functional model: a plane fitting with simulated and real TLS measurements

2020 ◽  
Vol 95 (1) ◽  
Author(s):  
Gaël Kermarrec ◽  
Michael Lösler

Abstract. To avoid computational burden, diagonal variance covariance matrices (VCM) are preferred to describe the stochasticity of terrestrial laser scanner (TLS) measurements. This simplification neglects correlations and affects least-squares (LS) estimates, which are only trustworthy and of minimal variance if the correct stochastic model is used. When the LS functional model is linearized, a bias of the estimated parameters and of their dispersions occurs, which can be investigated using a second-order Taylor expansion. Both computing the second-order solution and accounting for correlations entail additional computational burden. In this contribution, we study the impact of an enhanced stochastic model on that bias in order to weigh the corresponding computational effort against the improvements. To that aim, we model the temporal correlations of TLS measurements using the Matérn covariance function, combined with an intensity model for the variance. We further study how the scanning configuration influences the solution. Because it may be tempting to neglect correlations in order to avoid VCM inversions and multiplications, we quantify the impact of such a reduction and propose an innovative yet simple way to account for correlations with a “diagonal VCM.” Originally developed for GPS measurements and linear LS, this model is extended and validated for TLS range measurements and called the diagonal correlation model (DCM).
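As a rough illustration of the two building blocks named in the abstract, the sketch below constructs a Matérn temporal covariance matrix for a sequence of range measurements and a diagonal surrogate obtained by inflating the variances. The smoothness, decay and noise parameters, the sampling rate, and the row-sum inflation heuristic are assumptions for illustration, not the fitted values or the exact DCM formula of the paper.

```python
import numpy as np
from scipy.special import kv, gamma

def matern_covariance(lags, sigma2=1.0, nu=1.5, alpha=20.0):
    """Matern covariance for temporal lags (s); sigma2, nu and alpha are
    illustrative placeholders, not fitted TLS values."""
    lags = np.asarray(lags, dtype=float)
    cov = np.empty_like(lags)
    cov[lags == 0] = sigma2
    h = alpha * lags[lags > 0]
    cov[lags > 0] = sigma2 * (2 ** (1 - nu) / gamma(nu)) * h ** nu * kv(nu, h)
    return cov

# Fully populated VCM for n consecutive range measurements taken at rate 1/dt
n, dt = 200, 0.001
t = np.arange(n) * dt
full_vcm = matern_covariance(np.abs(t[:, None] - t[None, :]))

# "Diagonal" surrogate: keep a diagonal matrix but inflate the variances so
# that the LS weights mimic the effect of correlations. The row sum of the
# correlation matrix is used as inflation factor; this is a simple heuristic
# sketch, not the authors' exact DCM formula.
corr = full_vcm / full_vcm[0, 0]
inflation = corr.sum(axis=1)
dcm_vcm = np.diag(np.diag(full_vcm) * inflation)
```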

2017 ◽  
Vol 11 (3) ◽  
Author(s):  
Tobias Jurek ◽  
Heiner Kuhlmann ◽  
Christoph Holst

Abstract. For deformation analyses with high precision requirements, evaluating laser scan data requires exact knowledge of the functional and stochastic models. If this knowledge is not available, the parameter estimation leads to insufficient results. Simulating a laser scanning scene provides knowledge of the exact functional model of the surface. Thus, it is possible to investigate the impact of neglecting spatial correlations in the stochastic model. Here, this impact is quantified through statistical analysis. The correlation function, the number of scanning points and the ratio of colored noise in the measurements determine the covariances of the simulated observations. It is shown that, even for short correlation lengths of less than 10 cm and a low ratio of colored noise, both the global test and the parameter test are rejected. This indicates a bias and inconsistency in the parameter estimation. These results are transferable to similar tasks of laser-scanner-based surface approximation.
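The closed-loop idea described above can be sketched as follows: correlated (colored) noise is generated on an exactly known surface, a fit with a purely diagonal stochastic model is computed, and the a posteriori variance factor entering the global test is inspected. The exponential correlation function, the 10 cm correlation length, the 30% colored-noise ratio and the straight-line surface are illustrative assumptions, not the simulation setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def exp_correlation(dist, corr_len=0.1):
    """Exponential spatial correlation; the 10 cm correlation length is only
    an illustrative value of the order of magnitude quoted in the abstract."""
    return np.exp(-dist / corr_len)

# Simulated profile scan with an exactly known functional model (z = 0 line)
n = 500
x = np.linspace(0.0, 5.0, n)
dist = np.abs(x[:, None] - x[None, :])

sigma, colored_ratio = 0.002, 0.3            # 2 mm noise, 30 % colored part
vcm = sigma ** 2 * ((1 - colored_ratio) * np.eye(n)
                    + colored_ratio * exp_correlation(dist))

# Draw correlated noise, then fit a line assuming a *diagonal* stochastic model
z = rng.multivariate_normal(np.zeros(n), vcm)
A = np.column_stack([np.ones(n), x])
params, *_ = np.linalg.lstsq(A, z, rcond=None)
residuals = z - A @ params

# Global test statistic: a posteriori variance factor computed under the
# (wrong) assumption of uncorrelated observations
s0_sq = residuals @ residuals / (sigma ** 2 * (n - 2))
print(f"a posteriori variance factor: {s0_sq:.2f} (should be close to 1)")
```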


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4566
Author(s):  
Dominik Prochniewicz ◽  
Kinga Wezka ◽  
Joanna Kozuchowska

The stochastic model, together with the functional model, forms the mathematical model of observations that enables the estimation of the unknown parameters. In Global Navigation Satellite Systems (GNSS), the stochastic model is an especially important element, as it affects not only the accuracy of the positioning solution but also the reliability of carrier-phase ambiguity resolution (AR). In this paper, we study in detail the stochastic modeling problem for Multi-GNSS positioning models, for which the standard approach so far has been to adopt the stochastic parameters of the Global Positioning System (GPS). The aim of this work is to develop an individual, empirical stochastic model for each signal and each satellite block of the GPS, GLONASS, Galileo and BeiDou systems. The realistic stochastic model is created in the form of a fully populated variance-covariance (VC) matrix that takes into account, in addition to a Carrier-to-Noise density Ratio (C/N0)-dependent variance function, the cross- and time-correlations between the observations. One week of measurements from a zero-length and a very short baseline is utilized to derive the stochastic parameters. The impact on AR and solution accuracy is analyzed for different positioning scenarios using a modified Kalman filter. Comparing the positioning results obtained with the created model against the results for the standard elevation-dependent model leads to the conclusion that the individual empirical stochastic model increases both the accuracy of the positioning solution and the efficiency of AR. The optimal solution is achieved for the four-system Multi-GNSS case using the fully populated empirical model, individual for each satellite block, which provides a 2% increase in the effectiveness of AR (up to 100%), a 37% increase in the number of solutions with errors below 5 mm, and a 6 mm reduction in the maximum error compared with the Multi-GNSS solution using the elevation-dependent model with neglected measurement correlations.
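A compact sketch of how a C/N0-dependent variance function and time correlations can be combined into a fully populated VC matrix, contrasted with the elevation-dependent reference model, is shown below. The variance formula of the common form sigma^2 = a + b*10^(-C/N0/10), its coefficients, and the exponential two-epoch correlation time are assumptions used only for illustration; the paper derives individual parameters per signal and satellite block from the baseline data.

```python
import numpy as np

def cn0_variance(cn0_dbhz, a=1e-6, b=1e-4):
    """C/N0-dependent variance, sigma^2 = a + b * 10^(-C/N0 / 10);
    a and b are placeholder values, not the per-signal, per-block
    parameters estimated in the paper."""
    return a + b * 10.0 ** (-np.asarray(cn0_dbhz) / 10.0)

def elevation_variance(elev_deg, sigma0=0.003):
    """Standard elevation-dependent model used as the reference."""
    return (sigma0 / np.sin(np.radians(elev_deg))) ** 2

# Fully populated VC matrix: C/N0-based variances on the diagonal combined
# with an exponential time correlation between consecutive epochs (sketch).
cn0 = np.array([45.0, 47.0, 50.0, 38.0])          # dB-Hz, one value per epoch
lag = np.arange(4)[:, None] - np.arange(4)[None, :]
time_corr = np.exp(-np.abs(lag) / 2.0)            # assumed 2-epoch correlation time
sig = np.sqrt(cn0_variance(cn0))
vc_matrix = np.outer(sig, sig) * time_corr

print(vc_matrix)
print(elevation_variance(np.array([90.0, 30.0, 10.0])))
```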


2014 ◽  
Vol 8 (4) ◽  
Author(s):  
Christoph Holst ◽  
Heiner Kuhlmann

Abstract. When using terrestrial laser scanners for high quality analyses, calibrating the laser scanner is crucial, since unavoidable misconstructions of the instrument lead to systematic errors. Consequently, the development of calibration fields for laser scanner self-calibration is widespread in the literature. However, these calibration fields all suffer from the fact that the calibration parameters are estimated by analyzing the parameter differences of a limited number of substitute objects (targets or planes) scanned from different stations. This study investigates the potential of self-calibrating a laser scanner by scanning one single object with one single scan. This concept is new since it uses the deviation of each sampling point from the scanned object for calibration. Its applicability rests upon the integration of model knowledge that is used to parameterize the scanned object. Results show that this calibration approach is feasible, leading to improved surface approximations. However, it makes great demands on the functional model of the calibration parameters, the stochastic model of the adjustment, the scanned object and the scanning geometry. Hence, to obtain constant and physically interpretable calibration parameters, further improvement, especially regarding the functional and stochastic models, is required.
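A minimal sketch of the single-scan idea, estimating the parameters of the scanned object jointly with one instrumental parameter from the point-wise deviations, is given below. The plane as object, the single constant range offset, the noise level and the scan geometry are assumptions made for illustration; the paper's functional calibration model is considerably richer. Whether the offset and the object distance are separable depends on the angular spread of the scan, which echoes the demands on the scanning geometry noted above.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

def plane_normal(az, el):
    """Unit normal of the plane, parameterized by two angles."""
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

# Simulate one scan of a single plane n.x = d observed with a constant range
# offset c; the offset stands in for a scanner calibration parameter.
true_az, true_el, true_d, true_c = 0.02, 0.05, 10.0, 0.005
h = rng.uniform(-0.4, 0.4, 2000)                  # horizontal angles (rad)
v = rng.uniform(-0.3, 0.3, 2000)                  # vertical angles (rad)
u = np.column_stack([np.cos(v) * np.cos(h), np.cos(v) * np.sin(h), np.sin(v)])
r_obs = (true_d / (u @ plane_normal(true_az, true_el)) + true_c
         + rng.normal(0.0, 0.002, h.size))

def residuals(p):
    az, el, d, c = p
    return r_obs - (d / (u @ plane_normal(az, el)) + c)

# Joint estimation of object and calibration parameters from a single scan
fit = least_squares(residuals, x0=[0.0, 0.0, 9.0, 0.0])
print("estimated [azimuth, elevation, distance, range offset]:", fit.x)
```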


2020 ◽  
pp. 003151252098308
Author(s):  
Bianca G. Martins ◽  
Wanderson R. da Silva ◽  
João Marôco ◽  
Juliana A. D. B. Campos

In this study we proposed to estimate the impact of lifestyle, negative affectivity, and college students’ personal characteristics on eating behavior. We aimed to verify whether negative affectivity mediates the relationship between lifestyle and eating behavior. We assessed the eating behaviors of cognitive restraint (CR), uncontrolled eating (UE), and emotional eating (EE) with the Three-Factor Eating Questionnaire-18. We assessed lifestyle with the Individual Lifestyle Profile, and we assessed negative affectivity with the Depression, Anxiety and Stress Scale-21. We constructed and tested (at p < .05) a hypothetical causal structural model that considered global (second-order) and specific (first-order) lifestyle components, negative affectivity and sample characteristics for each eating behavior dimension. Participants were 1,109 college students (M age = 20.9 years, SD = 2.7; 65.7% female). We found significant impacts of the lifestyle second-order components on negative affectivity (β = −0.57 to 0.19; p < 0.001 to 0.01) in all models. Physical and psychological lifestyle components had a direct impact only on CR (β = −0.32 to 0.81; p < 0.001). Negative affectivity impacted UE and EE (β = 0.23 to 0.30; p < 0.001). For the global models, we found no mediation pathways between lifestyle and CR or UE. For the specific models, negative affectivity was a mediator between stress management and UE (β = −0.07; p < 0.001). Negative affectivity also mediated the relationship between thoughts of dropping an undergraduate course and UE and EE (β = 0.06 to 0.08; p < 0.001). Participant sex and weight impacted all eating behavior dimensions (β = 0.08 to 0.34; p < 0.001 to 0.01). Age was significant for UE and EE (β = −0.14 to −0.09; p < 0.001 to 0.01). Economic stratum influenced only CR (β = 0.08; p = 0.01). In sum, participants’ lifestyle, negative emotions and personal characteristics were all relevant for eating behavior assessment.


Energies ◽  
2019 ◽  
Vol 12 (8) ◽  
pp. 1486 ◽  
Author(s):  
Nicolas Tobin ◽  
Adam Lavely ◽  
Sven Schmitz ◽  
Leonardo P. Chamorro

The dependence of temporal correlations in the power output of wind-turbine pairs on atmospheric stability is explored using theoretical arguments and wind-farm large-eddy simulations. For this purpose, five distinct stability regimes, ranging from weakly stable to moderately convective, were investigated, with the same aligned wind-farm layout used in all simulations. The coherence spectrum between turbine pairs in each simulation was compared to theoretical predictions. We found with high statistical significance (p < 0.01) that higher levels of atmospheric instability lead to higher coherence between turbines, with wake motions reducing correlations by up to 40%. This is attributed to the greater dominance of atmospheric motions over wakes in strongly unstable flows. Good agreement was obtained with the use of an empirical model for wake-added turbulence to predict the variation of turbine power coherence with ambient turbulence intensity (R² = 0.82), though other empirical relations may be applicable. It was shown that improperly accounting for turbine–turbine correlations can affect power variance estimates by as much as a factor of 4.
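To make the coherence analysis concrete, the sketch below estimates the magnitude-squared coherence between two synthetic turbine power signals that share a common atmospheric forcing, and compares the variance of their combined output with the value obtained if the pair were assumed uncorrelated. The signal construction, sampling rate and segment length are assumptions, not the LES output used in the paper.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1.0                                        # samples per second
t = np.arange(0, 3600, 1 / fs)

ambient = rng.normal(size=t.size)               # large-scale atmospheric forcing
wake = rng.normal(size=t.size)                  # wake-added fluctuations
p_up = ambient + 0.2 * rng.normal(size=t.size)  # upstream turbine power proxy
p_down = 0.7 * ambient + wake                   # shared forcing couples the pair

# Magnitude-squared coherence spectrum between the turbine pair
f, coh = coherence(p_up, p_down, fs=fs, nperseg=1024)

# Variance of the combined output with and without the turbine-turbine
# correlation; ignoring the correlation underestimates the true variance.
var_uncorrelated = p_up.var() + p_down.var()
var_actual = np.var(p_up + p_down)
print(var_uncorrelated, var_actual)
```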


2014 ◽  
Vol 986-987 ◽  
pp. 377-382 ◽  
Author(s):  
Hui Min Gao ◽  
Jian Min Zhang ◽  
Chen Xi Wu

Heuristic methods based on first-order sensitivity analysis are often used to determine the locations of capacitors in distribution power systems. However, the nodes selected by first-order sensitivity analysis often have artificially high first-order sensitivities, so the optimal result may not be obtained. This paper presents an effective method to optimally determine the locations and capacities of capacitors in distribution systems, based on an innovative approach combining second-order sensitivity analysis and hierarchical clustering. The approach determines the locations by second-order sensitivity analysis. Compared with the traditional method, the new method considers the nonlinearity of the power flow equations and the impact of later selected compensation nodes on the previously selected compensation locations. The method is tested on a 28-bus distribution system. Digital simulation results show that the reactive power optimization plan obtained with the proposed method is more economical while maintaining the same level of effectiveness.
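The distinction between first- and second-order sensitivities can be sketched numerically as below, where a toy quadratic loss function stands in for the power-flow based loss evaluation; the function, its coefficients and the finite-difference step are assumptions for illustration only.

```python
import numpy as np

def loss(q):
    """Toy network loss as a function of reactive injections at two candidate
    nodes; placeholder for a full power-flow based loss evaluation."""
    H = np.array([[2.0, 0.8], [0.8, 1.5]])      # coupling between the nodes
    g = np.array([-1.0, -0.9])
    return 10.0 + g @ q + 0.5 * q @ H @ q

def sensitivities(f, q0, i, step=1e-4):
    """First- and second-order sensitivity of f to injection at node i,
    estimated by central finite differences."""
    e = np.zeros_like(q0)
    e[i] = step
    first = (f(q0 + e) - f(q0 - e)) / (2 * step)
    second = (f(q0 + e) - 2 * f(q0) + f(q0 - e)) / step ** 2
    return first, second

q0 = np.zeros(2)
for i in range(2):
    g_i, h_i = sensitivities(loss, q0, i)
    # Second-order ranking criterion: achievable loss reduction at the
    # individually optimal injection, -0.5 * g_i^2 / h_i
    print(i, g_i, h_i, -0.5 * g_i ** 2 / h_i)
```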


2020 ◽  
Vol 10 (1) ◽  
pp. 57-82
Author(s):  
Katarzyna Guczalska

Wolfgang Merkel’s concept of rooted democracy in the context of contemporary populism and the crisis of democracy: The article presents the concept of “rooted democracy” by Wolfgang Merkel, which was formulated in the context of the crisis of democracy. The German political scientist specifies what democracy is by describing the proper functioning of the rules of the democratic system (its regimes). To speak of the weakness or strength of a democracy, we need a well-described set of systemic principles that determine the degree of strength of democracy or its erosion. This set of principles of the democratic system is thoroughly discussed in the article. In particular, the functional model of civil society is analysed. The text also explores how the crisis of democracy is understood, as well as Merkel’s view of the impact of global capitalism on democratic institutions, which contributes to the transformation of democracy into an oligarchy. The topics discussed in the article also concern alternative, non-liberal forms of democracy and populism. The question is whether Merkel’s concept is useful in explaining populism and its political consequences.


2013 ◽  
Vol 9 (2) ◽  
pp. 871-886 ◽  
Author(s):  
M. Casado ◽  
P. Ortega ◽  
V. Masson-Delmotte ◽  
C. Risi ◽  
D. Swingedouw ◽  
...  

Abstract. In mid and high latitudes, the stable isotope ratio in precipitation is driven by changes in temperature, which control atmospheric distillation. This relationship forms the basis for many continental paleoclimatic reconstructions using direct (e.g. ice cores) or indirect (e.g. tree ring cellulose, speleothem calcite) archives of past precipitation. However, the archiving process is inherently biased by intermittency of precipitation. Here, we use two sets of atmospheric reanalyses (NCEP (National Centers for Environmental Prediction) and ERA-interim) to quantify this precipitation intermittency bias, by comparing seasonal (winter and summer) temperatures estimated with and without precipitation weighting. We show that this bias reaches up to 10 °C and has large interannual variability. We then assess the impact of precipitation intermittency on the strength and stability of temporal correlations between seasonal temperatures and the North Atlantic Oscillation (NAO). Precipitation weighting reduces the correlation between winter NAO and temperature in some areas (e.g. Québec, South-East USA, East Greenland, East Siberia, Mediterranean sector) but does not alter the main patterns of correlation. The correlations between NAO, δ18O in precipitation, temperature and precipitation weighted temperature are investigated using outputs of an atmospheric general circulation model enabled with stable isotopes and nudged using reanalyses (LMDZiso (Laboratoire de Météorologie Dynamique Zoom)). In winter, LMDZiso shows similar correlation values between the NAO and both the precipitation weighted temperature and δ18O in precipitation, thus suggesting limited impacts of moisture origin. Correlations of comparable magnitude are obtained for the available observational evidence (GNIP (Global Network of Isotopes in Precipitation) and Greenland ice core data). Our findings support the use of archives of past δ18O for NAO reconstructions.
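The precipitation weighting at the core of the intermittency bias is simple to state: a seasonal temperature is averaged with the monthly precipitation amounts as weights and compared with the unweighted mean. A small sketch with invented monthly values (not reanalysis data) is given below.

```python
import numpy as np

def precipitation_weighted_mean(temperature, precipitation):
    """Seasonal temperature weighted by precipitation amounts, the quantity
    compared with the unweighted mean to quantify the intermittency bias."""
    temperature = np.asarray(temperature, dtype=float)
    precipitation = np.asarray(precipitation, dtype=float)
    return np.sum(precipitation * temperature) / np.sum(precipitation)

t_winter = np.array([-22.0, -30.0, -18.0])      # DJF monthly means (degC), invented
p_winter = np.array([5.0, 1.0, 20.0])           # monthly precipitation (mm), invented
bias = precipitation_weighted_mean(t_winter, p_winter) - t_winter.mean()
print(f"intermittency bias: {bias:+.1f} degC")
```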


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2964 ◽  
Author(s):  
Gaël Kermarrec ◽  
Hamza Alkhatib ◽  
Ingo Neumann

For a trustworthy least-squares (LS) solution, a good description of the stochastic properties of the measurements is indispensable. For a terrestrial laser scanner (TLS), the range variance can be described by a power law function of the intensity of the reflected signal. The power and scaling factors depend on the laser scanner under consideration and can be determined accurately by means of calibrations in 1D mode or a residual analysis of the LS adjustment. However, such procedures significantly complicate the use of empirical intensity models (IM). Moreover, the extent to which a point-wise weighting is suitable when the derived variance covariance matrix (VCM) is further used in an LS adjustment remains questionable. Thanks to closed-loop simulations, where both the true geometry and the stochastic model are under control, we investigate how variations of the parameters of the IM affect the results of an LS adjustment. As a case study, we consider the determination of the Cartesian coordinates of the control points (CP) of a B-spline curve. We show that a constant variance can be assigned to all the points of an object having homogeneous properties, without affecting the a posteriori variance factor or the efficiency of the LS solution. The results from a real case scenario highlight that the conclusions of the simulations remain valid even for more challenging geometries. A procedure to determine the range variance is proposed to simplify the computation of the VCM.
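As a sketch of the intensity model discussed above, the range variance is written as a power law of the reflected intensity and placed on the diagonal of the VCM; the simplification tested in the paper then replaces these point-wise variances by a single constant value. The coefficients and intensity values below are placeholders, not calibrated parameters of a real scanner.

```python
import numpy as np

def range_variance(intensity, a=1e-4, b=1.0):
    """Power-law intensity model for the TLS range variance,
    sigma_range^2 = a * I^(-b); a and b are scanner-specific and the
    defaults here are placeholders only."""
    return a * np.asarray(intensity, dtype=float) ** (-b)

intensity = np.array([0.2, 0.5, 0.9])           # normalized reflected intensities
point_wise_vcm = np.diag(range_variance(intensity))

# Simplification tested in the paper: one constant variance for all points of
# a surface with homogeneous reflective properties.
constant_vcm = np.mean(range_variance(intensity)) * np.eye(intensity.size)
```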

