orthogonal distance regression
Recently Published Documents

TOTAL DOCUMENTS: 28 (five years: 2)
H-INDEX: 11 (five years: 0)

Author(s):  
B Posselt ◽  
A Karastergiou ◽  
S Johnston ◽  
A Parthasarathy ◽  
M J Keith ◽  
...  

Abstract We present pulse width measurements for a sample of radio pulsars observed with the MeerKAT telescope as part of the Thousand-Pulsar-Array (TPA) programme in the MeerTime project. For a centre frequency of 1284 MHz, we obtain 762 W10 measurements across the total bandwidth of 775 MHz, where W10 is the width at the 10 per cent level of the pulse peak. We also measure about 400 W10 values in each of the four or eight frequency sub-bands. Assuming the width is a function of the rotation period P, this relationship can be described by a power law with index μ = −0.29 ± 0.03. However, using orthogonal distance regression, we determine a steeper power law with μ = −0.63 ± 0.06. A density plot of the period-width data reveals that the latter fit aligns well with the contours of highest density. Building on a previous population synthesis model, we obtain population-based estimates of the obliquity of the magnetic axis with respect to the rotation axis for our pulsars. Investigating the width changes over frequency, we unambiguously identify a group of pulsars whose widths broaden at higher frequencies. The measured width changes show a monotonic behaviour with frequency for the whole TPA pulsar population, whether the pulses become narrower or broader with increasing frequency. We exclude a sensitivity bias, scattering, and noticeable differences in the number of pulse components as explanations for these width changes, and attempt an explanation using a qualitative model of five contributing Gaussian pulse components whose flux density spectra depend on their rotational phase.
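The OLS-versus-ODR contrast described in this abstract can be sketched with SciPy's `scipy.odr` package. The data below are synthetic stand-ins for the period-width measurements, with illustrative scatter levels; only the qualitative effect (ODR yielding a steeper slope than OLS when the abscissa is noisy) is reproduced:

```python
import numpy as np
from scipy import odr
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Synthetic period-width data in log-log space, following an assumed
# power law W10 = A * P^mu with mu = -0.3, scattered in BOTH coordinates.
log_P = rng.uniform(-1.0, 1.0, 300)
log_W = 0.7 - 0.3 * log_P
log_P_obs = log_P + rng.normal(0.0, 0.3, log_P.size)
log_W_obs = log_W + rng.normal(0.0, 0.1, log_W.size)

# Ordinary least squares minimises vertical offsets only; scatter in the
# abscissa biases its slope toward zero (attenuation).
ols = linregress(log_P_obs, log_W_obs)

# Orthogonal distance regression weighs the scatter in both coordinates.
linear = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
data = odr.RealData(log_P_obs, log_W_obs, sx=0.3, sy=0.1)
fit = odr.ODR(data, linear, beta0=[0.7, -0.3]).run()

print(f"OLS slope: {ols.slope:.3f}")
print(f"ODR slope: {fit.beta[1]:.3f}")
```

With scatter in the independent variable, the OLS slope is attenuated toward zero, while ODR, given the error scales via `sx` and `sy`, recovers a steeper slope.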


Author(s):  
N. Amiri ◽  
P. Polewski ◽  
W. Yao ◽  
P. Krzystek ◽  
A. K. Skidmore

Airborne Laser Scanning (ALS) is a widespread method for forest mapping and management purposes. While common ALS techniques provide valuable information about the forest canopy and intermediate layers, the point density near the ground may be poor due to dense overstory conditions. The current study highlights a new method for detecting stems of single trees in 3D point clouds obtained from high-density ALS at 300 points/m². Compared to standard ALS data, this elevated point density, a result of the lower flight height (150–200 m), leads to more laser reflections from tree stems. In this work, we propose a three-tiered method which works on the point, segment and object levels. First, for each point we calculate the likelihood that it belongs to a tree stem, derived from the radiometric and geometric features of its neighboring points. In the next step, we construct short stem segments based on high-probability stem points, and classify the segments by considering the distribution of points around them as well as their spatial orientation, which encodes the prior knowledge that trees are mainly vertically aligned due to gravity. Finally, we apply hierarchical clustering on the positively classified segments to obtain point sets corresponding to single stems, and perform ℓ₁-based orthogonal distance regression to robustly fit lines through each stem point set. The ℓ₁-based method is less sensitive to outliers than least-squares approaches. From the fitted lines, the planimetric tree positions can then be derived. Experiments were performed on two plots from the Hochficht forest in the Oberösterreich region of Austria. We marked a total of 196 reference stems in the point clouds of both plots by visual interpretation. The evaluation of the automatically detected stems showed a classification precision of 0.86 and 0.85 for Plots 1 and 2, respectively, with recall values of 0.70 and 0.67.
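The ℓ₁-based orthogonal distance line fit can be illustrated with a minimal sketch (not the authors' implementation): a near-vertical synthetic "stem" with one gross outlier, fitted by minimizing the sum of orthogonal point-to-line distances with a generic optimizer; the planimetric position is read off where the fitted line crosses the ground plane:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic stem: points along a near-vertical 3-D line with small noise,
# plus one gross outlier (e.g. a branch return).
t = rng.uniform(0.0, 10.0, 60)
pts = np.column_stack([0.02 * t, 0.01 * t, t]) + rng.normal(0.0, 0.02, (60, 3))
pts = np.vstack([pts, [3.0, 3.0, 5.0]])

def l1_cost(params):
    # Line through (px, py, 0) with direction (dx, dy, 1), normalised below.
    # Cost = sum of orthogonal point-to-line distances (l1 of the distances),
    # which is far less sensitive to the outlier than a squared-distance sum.
    px, py, dx, dy = params
    p = np.array([px, py, 0.0])
    d = np.array([dx, dy, 1.0])
    d = d / np.linalg.norm(d)
    r = pts - p
    perp = r - np.outer(r @ d, d)      # components orthogonal to the line
    return np.linalg.norm(perp, axis=1).sum()

fit = minimize(l1_cost, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead")
px, py, dx, dy = fit.x
# Planimetric stem position = intersection of the fitted line with z = 0.
print(f"planimetric stem position: ({px:.3f}, {py:.3f})")
```

The single outlier contributes at most a constant-magnitude pull to the ℓ₁ objective, so the fitted line stays on the 60 inlier points.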


Nukleonika ◽  
2016 ◽  
Vol 61 (4) ◽  
pp. 443-451
Author(s):  
Matěj Tomeš ◽  
Vladimír Weinzettl ◽  
Tiago Pereira ◽  
Martin Imríšek ◽  
Jakub Seidl

Abstract A high-resolution spectroscopic system for measurements of the CIII triplet at 465 nm was installed at the COMPASS tokamak. The Doppler broadening and shift of the measured spectral lines are used to calculate the edge ion temperature and poloidal plasma rotation. First, the spectroscopic system, based on a two-grating spectrometer, and its calibration procedure are described. The signal processing is then explained, including the detection and removal of spiky features caused by hard X-rays, which exploits the difference in behaviour between Savitzky-Golay and median filters. The detection and position estimation of individual spectral lines based on the continuous wavelet transform is shown. The method of fitting Gaussians using orthogonal distance regression, and of estimating the errors of the rotation velocity and ion temperature, is described. Finally, conclusions about the performance of the spectroscopic system and its shortcomings are drawn from a summary of results calculated from 2033 processed spectral lines measured in 61 shots, and possible enhancements are suggested.
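The spike-removal idea — a narrow hard-X-ray spike is tracked by a Savitzky-Golay filter but rejected by a median filter, so a large difference between the two flags spike samples — can be sketched as follows. Window lengths and the threshold are illustrative choices, not the paper's values:

```python
import numpy as np
from scipy.signal import savgol_filter, medfilt

rng = np.random.default_rng(2)

# Synthetic spectrum: one Gaussian line, Gaussian noise, two 1-pixel spikes.
x = np.arange(200)
spectrum = 100.0 * np.exp(-0.5 * ((x - 90) / 6.0) ** 2)
spectrum += rng.normal(0.0, 1.0, x.size)
spectrum[40] += 80.0     # hard-X-ray spike
spectrum[150] += 120.0   # hard-X-ray spike

sg = savgol_filter(spectrum, window_length=7, polyorder=3)  # follows spikes
med = medfilt(spectrum, kernel_size=7)                      # rejects spikes

# Where the two filters disagree strongly, flag the sample as a spike and
# replace it by the median-filtered value; the broad spectral line itself
# is smooth enough that both filters agree on it.
spikes = np.abs(sg - med) > 10.0
cleaned = np.where(spikes, med, spectrum)

print(f"samples flagged as spikes: {np.count_nonzero(spikes)}")
```

Note that samples adjacent to a spike may also be flagged, since the Savitzky-Golay response to an impulse spans its window; replacing them by the median-filtered values is harmless.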


Author(s):  
C. Li ◽  
X. J. Liu ◽  
T. Deng

Over-parameterization and over-correction are two of the major problems in the rational function model (RFM). A new approach, the optimized RFM (ORFM), is proposed in this paper. By synthesizing stepwise selection, orthogonal distance regression, and a residual systematic error correction model, the proposed ORFM can solve the ill-posed problem and the over-correction problem caused by the constant term. The least-squares, orthogonal-distance, and ORFM approaches are evaluated with control and check grids generated from Satellite Pour l'Observation de la Terre (SPOT-5) high-resolution satellite data. Experimental results show that the proposed ORFM, with 37 essential RFM parameters, is more accurate in the cross-track and along-track planes than the other two methods, which use 78 parameters. Moreover, the over-parameterization and over-correction problems are efficiently alleviated by the proposed ORFM, so the stability of the estimated RFM parameters and their accuracy are significantly improved.



Author(s):  
JoseLuis Olazagoitia ◽  
Alberto López

Determining the parameters of existing tire models (e.g. the Magic Formula (MF)) for calculating longitudinal and lateral forces as functions of tire slip is often based on standard least-squares techniques. This type of optimization minimizes the vertical differences, along the ordinate axis, between the test data and the chosen tire model. Although it is common practice to adjust model parameters this way, the approach disregards the errors committed in the measurement of tire slip. These inaccuracies in the measured data affect the optimal parameters of the model, producing non-optimal fits. This paper presents a methodology to improve the fitting of mathematical tire models to available test data by taking into account the vertical errors together with the errors in the independent variable.
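A hedged sketch of this idea using SciPy's ODR: the Magic Formula F(s) = D·sin(C·arctan(B·s − E·(B·s − arctan(B·s)))) is fitted to synthetic data whose slip values are themselves noisy, so errors in the independent variable are weighted alongside the vertical errors. Parameter values and noise levels are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(3)

def magic_formula(beta, s):
    # Pacejka Magic Formula with stiffness B, shape C, peak D, curvature E.
    B, C, D, E = beta
    return D * np.sin(C * np.arctan(B * s - E * (B * s - np.arctan(B * s))))

true = [10.0, 1.9, 1.0, 0.97]          # illustrative parameter set
slip = np.linspace(0.0, 0.3, 40)
force = magic_formula(true, slip)

# Measurement noise on BOTH slip and normalized force.
slip_obs = slip + rng.normal(0.0, 0.005, slip.size)
force_obs = force + rng.normal(0.0, 0.02, slip.size)

# ODR weighs the slip errors (sx) together with the force errors (sy),
# instead of attributing all misfit to the ordinate as least squares does.
data = odr.RealData(slip_obs, force_obs, sx=0.005, sy=0.02)
fit = odr.ODR(data, odr.Model(magic_formula), beta0=[8.0, 1.5, 1.2, 1.0]).run()

print("fitted B, C, D, E:", np.round(fit.beta, 2))
```

The Magic Formula parameters are strongly correlated, so a reasonable starting guess (`beta0`) matters in practice regardless of the regression criterion.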


2015 ◽  
Vol 23 (11) ◽  
pp. 3192-3199
Author(s):  
林虎 LIN Hu ◽  
石照耀 SHI Zhao-yao ◽  
薛梓 XUE Zi ◽  
杨国梁 YANG Guo-liang

2014 ◽  
Vol 2014 ◽  
pp. 1-17 ◽  
Author(s):  
Dana D. Marković ◽  
Branislava M. Lekić ◽  
Vladana N. Rajaković-Ognjanović ◽  
Antonije E. Onjia ◽  
Ljubinka V. Rajaković

Numerous regression approaches to isotherm parameter estimation appear in the literature. Real insight into the proper modeling pattern can be achieved only by testing methods on a very large number of cases; experimentally, this cannot be done in a reasonable time, so the Monte Carlo simulation method was applied. The objective of this paper is to introduce and compare numerical approaches that involve different levels of knowledge about the noise structure of the analytical method used for initial and equilibrium concentration determination. Six levels of homoscedastic noise and five types of heteroscedastic noise precision models were considered. The performance of the methods was statistically evaluated based on the median percentage error and the mean absolute relative error in parameter estimates. The present study showed a clear distinction between two cases. When equilibrium experiments are performed only once, the winning error function for the homoscedastic case is ordinary least squares, while for heteroscedastic noise the use of orthogonal distance regression or Marquardt's percent standard deviation is suggested. When experiments are repeated three times, the simple weighted least squares method was found to perform as well as the more complicated orthogonal distance regression method.
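The homoscedastic-versus-heteroscedastic distinction can be illustrated on a Langmuir isotherm with proportional (heteroscedastic) noise, comparing ordinary and weighted least squares over Monte Carlo replicates. The parameter values and noise model below are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def langmuir(c, qm, KL):
    # Langmuir isotherm: q = qm * KL * c / (1 + KL * c)
    return qm * KL * c / (1.0 + KL * c)

qm_true, KL_true = 2.0, 0.5
c = np.linspace(0.2, 20.0, 12)
q_true = langmuir(c, qm_true, KL_true)

ols_err, wls_err = [], []
for _ in range(200):
    # Proportional noise: standard deviation is 5 per cent of the signal.
    q_obs = q_true * (1.0 + rng.normal(0.0, 0.05, c.size))
    # Ordinary least squares ignores the noise structure.
    p_ols, _ = curve_fit(langmuir, c, q_obs, p0=[1.0, 1.0])
    # Weighted least squares uses the (known) proportional error model.
    p_wls, _ = curve_fit(langmuir, c, q_obs, p0=[1.0, 1.0],
                         sigma=0.05 * q_obs, absolute_sigma=True)
    ols_err.append(abs(p_ols[1] - KL_true) / KL_true)
    wls_err.append(abs(p_wls[1] - KL_true) / KL_true)

print(f"mean |rel. error| in KL, OLS: {np.mean(ols_err):.3f}")
print(f"mean |rel. error| in KL, WLS: {np.mean(wls_err):.3f}")
```

Under proportional noise the low-concentration points, which carry most of the information about KL, are the least noisy, so weighting typically tightens the KL estimate relative to unweighted fitting.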

