Accuracy Assessment of TanDEM-X 90 and CartoDEM Using ICESat-2 Datasets for Plain Regions of Ratlam City and Surroundings

2021 ◽  
Vol 10 (1) ◽  
pp. 59
Author(s):  
Unnati Yadav ◽  
Ashutosh Bhardwaj

The spaceborne LiDAR dataset from the Ice, Cloud, and Land Elevation Satellite (ICESat-2) provides highly accurate height measurements of the Earth’s surface, which support terrain analysis, visualization, and decision making for many applications. TanDEM-X 90 (90 m) and CartoDEM V3R1 (30 m) are among the high-quality, openly accessible DEM datasets for the plain regions of India. These two DEMs were validated against ICESat-2 elevation datasets for the relatively plain areas of Ratlam City and its surroundings. The mean error (ME), mean absolute error (MAE), and root mean square error (RMSE) of the TanDEM-X 90 DEM are 1.35 m, 1.48 m, and 2.19 m, respectively. The computed ME, MAE, and RMSE for CartoDEM V3R1 are 3.05 m, 3.18 m, and 3.82 m, respectively. The statistical results reveal that TanDEM-X 90 performs better in plain areas than CartoDEM V3R1. The study further indicates that these DEMs and spaceborne LiDAR datasets can be useful for planning works in which height is an important parameter, such as the layout of pipelines or cut-and-fill calculations for construction activities. TanDEM-X 90 can assist planners in quick terrain assessments for infrastructure development, which would otherwise require time-consuming traditional surveys with a theodolite or total station.
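The three accuracy metrics reported above can be computed directly from paired DEM and reference elevations. A minimal sketch (the sample values are hypothetical, not the study's data):

```python
import math

def error_metrics(dem, ref):
    """Mean error (bias), mean absolute error, and RMSE between
    DEM elevations and reference (e.g. ICESat-2) elevations."""
    diffs = [d - r for d, r in zip(dem, ref)]
    n = len(diffs)
    me = sum(diffs) / n
    mae = sum(abs(d) for d in diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return me, mae, rmse

# Hypothetical sample points (metres above MSL).
dem = [452.1, 448.9, 450.4, 455.0]
ref = [451.0, 448.0, 449.5, 453.2]
me, mae, rmse = error_metrics(dem, ref)
```

Note that ME keeps the sign of the differences (so it reveals systematic bias), while MAE and RMSE do not; RMSE penalizes large outliers more heavily than MAE.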

2021 ◽  
pp. 875697282199994
Author(s):  
Joseph F. Hair ◽  
Marko Sarstedt

Most project management research focuses almost exclusively on explanatory analyses. Evaluation of the explanatory power of statistical models is generally based on F-type statistics and the R² metric, followed by an assessment of the model parameters (e.g., beta coefficients) in terms of their significance, size, and direction. However, these measures are not indicative of a model’s predictive power, which is central for deriving managerial recommendations. We recommend that project management researchers routinely use additional metrics, such as the mean absolute error or the root mean square error, to accurately quantify their statistical models’ predictive power.
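The distinction drawn here is between in-sample fit and out-of-sample error. A minimal sketch of the recommended practice, fitting a simple one-predictor OLS model on a training split and scoring MAE/RMSE on a holdout split (all data are toy values invented for illustration):

```python
import math

def fit_ols(x, y):
    """Ordinary least squares for one predictor: y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def mae(y, yhat):
    return sum(abs(p - t) for p, t in zip(yhat, y)) / len(y)

def rmse(y, yhat):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(yhat, y)) / len(y))

# Fit on a training split, then judge predictive power on a
# holdout split rather than reporting in-sample R² alone.
x_train, y_train = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
x_test,  y_test  = [5, 6], [10.1, 12.2]
a, b = fit_ols(x_train, y_train)
preds = [a + b * xi for xi in x_test]
test_mae, test_rmse = mae(y_test, preds), rmse(y_test, preds)
```

A model can have a high in-sample R² yet predict poorly on new data; the holdout MAE/RMSE expose that gap.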


2013 ◽  
Vol 30 (8) ◽  
pp. 1757-1765 ◽  
Author(s):  
Sayed-Hossein Sadeghi ◽  
Troy R. Peters ◽  
Douglas R. Cobos ◽  
Henry W. Loescher ◽  
Colin S. Campbell

Abstract A simple analytical method was developed for directly calculating the thermodynamic wet-bulb temperature from air temperature and vapor pressure (or relative humidity) at elevations up to 4500 m above MSL. The methodology is based on the fact that the wet-bulb temperature can be closely approximated by a second-order polynomial in ambient air temperature, in both the positive and negative ranges. The method in this study builds upon this understanding and provides results for the negative range of air temperatures (−17° to 0°C), so that the maximum observed error in this range is equal to or smaller in magnitude than −0.17°C. For temperatures ≥0°C, wet-bulb temperature accuracy was ±0.65°C, with larger errors corresponding to very high temperatures (Ta ≥ 39°C) and/or very high or low relative humidities (5% < RH < 10% or RH > 98%). The mean absolute error and the root-mean-square error were 0.15° and 0.2°C, respectively.
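The paper's polynomial coefficients are not reproduced here, but the quantity being approximated can be illustrated by solving the standard implicit psychrometric relation numerically. This sketch uses the Tetens saturation-vapour-pressure approximation and the conventional psychrometric constant γ ≈ 0.000665·P, which are assumptions of this example rather than the authors' method:

```python
import math

def sat_vp(t_c):
    """Saturation vapour pressure (kPa), Tetens approximation."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def wet_bulb(t_air, rh_pct, pressure_kpa=101.325):
    """Solve the implicit psychrometric relation
    e = es(Tw) - gamma * (Ta - Tw) for Tw by bisection."""
    e = sat_vp(t_air) * rh_pct / 100.0
    gamma = 0.000665 * pressure_kpa  # psychrometric constant, kPa/degC
    lo, hi = -60.0, t_air  # Tw never exceeds Ta
    for _ in range(100):
        mid = (lo + hi) / 2.0
        # Left side increases monotonically with Tw, so bisection works.
        if sat_vp(mid) - gamma * (t_air - mid) > e:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

tw = wet_bulb(25.0, 50.0)  # roughly 17.9 degC at sea level
```

Lowering `pressure_kpa` (i.e., higher elevation) reduces γ and raises the computed wet-bulb temperature slightly, which is why the elevation range matters in the abstract.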


2021 ◽  
Vol 2 (5) ◽  
pp. 8-13
Author(s):  
Proenza Y. Roger ◽  
Camejo C. José Emilio ◽  
Ramos H. Rubén

The results obtained from the validation of the procedure “Quantification of the degradation index of Photovoltaic Grid Connection Systems” are presented. The statistical parameters corroborate its accuracy: a coefficient of determination of 0.9896, a root mean square percentage error (RMSPE) of 1.498%, and a mean absolute percentage error (MAPE) of 1.15%, evidencing the precision of the procedure.
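Both percentage metrics cited above normalize each residual by the measured value. A minimal sketch (the measured/modelled values are hypothetical):

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def rmspe(actual, predicted):
    """Root mean square percentage error (%)."""
    return 100.0 * math.sqrt(
        sum(((a - p) / a) ** 2
            for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical measured vs. modelled degradation indices.
measured = [10.0, 12.0, 8.0, 11.0]
modelled = [10.2, 11.8, 8.1, 10.7]
m, r = mape(measured, modelled), rmspe(measured, modelled)
```

Because both divide by the actual value, they are undefined when any measurement is zero; for degradation indices well away from zero that is not an issue.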


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ferréol Berendt ◽  
Felipe de Miguel-Diez ◽  
Evelyn Wallor ◽  
Lubomir Blasko ◽  
Tobias Cremer

Abstract Within the wood supply chain, the measurement of roundwood plays a key role due to its high economic impact. While the wood industry mainly processes the solid wood, the bark mostly remains as an industrial by-product. In Central Europe, it is common for wood to be sold over bark while the price is calculated on the timber volume under bark. However, logs are often measured as stacks, so the volume includes not only the solid wood content but also the bark portion. Mostly, the deduction factors used to estimate the solid wood content are based on bark thickness. The aim of this study was to compare the bark volume estimated from scaling formulae with the real bark volume obtained by the xylometric technique. The measurements were performed on logs under practice conditions and on discs under laboratory conditions. The mean bark volume was 6.9 dm3 for the Norway spruce logs and 26.4 cm3 for the Scots pine discs. Whereas the results showed good performance in terms of the root mean square error, the coefficient of determination (R2), and the mean absolute error for estimating the total volume of discs and logs (over bark), performance was much lower for the bark volume estimates alone.
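One common scaling-formula approach, used here only as an illustration of the over-bark/under-bark difference the study evaluates (Huber's mid-diameter formula; the dimensions are hypothetical, not the study's data):

```python
import math

def huber_volume(mid_diameter_cm, length_m):
    """Huber's formula: V = (pi/4) * d_mid^2 * L, in m^3."""
    d_m = mid_diameter_cm / 100.0
    return math.pi / 4.0 * d_m ** 2 * length_m

def bark_volume(d_over_bark_cm, double_bark_thickness_cm, length_m):
    """Bark volume as the difference between the over-bark and
    under-bark Huber volumes (one scaling-formula approach)."""
    v_ob = huber_volume(d_over_bark_cm, length_m)
    v_ub = huber_volume(d_over_bark_cm - double_bark_thickness_cm,
                        length_m)
    return v_ob - v_ub

# Hypothetical spruce log: 20 cm mid-diameter over bark,
# 1.0 cm double bark thickness, 4 m length.
v_bark_dm3 = bark_volume(20.0, 1.0, 4.0) * 1000.0  # m^3 -> dm^3
```

Because the bark volume is a small difference of two much larger volumes, a modest error in bark thickness propagates into a large relative error in the bark estimate, which is consistent with the study's finding of much weaker performance for bark volume than for total volume.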


Algorithms ◽  
2020 ◽  
Vol 13 (7) ◽  
pp. 174
Author(s):  
Dionisis Margaris ◽  
Dimitris Spiliotopoulos ◽  
Gregory Karagiorgos ◽  
Costas Vassilakis

Collaborative filtering algorithms formulate personalized recommendations for a user, first by analysing already entered ratings to identify other users with similar tastes to the target user (termed near neighbours), and then using the opinions of the near neighbours to predict which items the target user would like. However, in sparse datasets, too few near neighbours can be identified, resulting in low-accuracy predictions and even a total inability to formulate personalized predictions. This paper addresses the sparsity problem by presenting an algorithm that uses robust predictions, that is, predictions deemed highly probable to be accurate, as derived ratings. Thus, the density of sparse datasets increases, and improved rating prediction coverage and accuracy are achieved. The proposed algorithm, termed CFDR, is extensively evaluated using (1) seven widely used collaborative filtering datasets, (2) the two most widely used correlation metrics in collaborative filtering research, namely the Pearson correlation coefficient and the cosine similarity, and (3) the two most widely used error metrics in collaborative filtering, namely the mean absolute error and the root mean square error. The evaluation results show that, by successfully increasing the density of the datasets, the capacity of collaborative filtering systems to formulate personalized and accurate recommendations is considerably improved.


2011 ◽  
Vol 18 (01) ◽  
pp. 71-85
Author(s):  
Fabrizio Cacciafesta

We provide a simple way to visualize the variance and the mean absolute error of a random variable with finite mean. Some applications to options theory and to second-order stochastic dominance are given; we show, among other results, that the "call-put parity" may be seen as a Taylor formula.
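The link between option payoffs and the mean absolute deviation can be checked numerically: for any threshold k, E|X − k| is the sum of the expected call and put payoffs, while their difference is the call-put parity E[X] − k. A minimal sketch on a discrete toy distribution (values invented for illustration):

```python
def call_payoff(xs, probs, k):
    """E[(X - k)^+]: expected call payoff at strike k."""
    return sum(p * max(x - k, 0.0) for x, p in zip(xs, probs))

def put_payoff(xs, probs, k):
    """E[(k - X)^+]: expected put payoff at strike k."""
    return sum(p * max(k - x, 0.0) for x, p in zip(xs, probs))

# Discrete toy distribution.
xs    = [80.0, 100.0, 120.0]
probs = [0.25, 0.5, 0.25]
k = 95.0
mean = sum(p * x for x, p in zip(xs, probs))

# Call-put parity: E[(X-k)^+] - E[(k-X)^+] = E[X] - k
parity = call_payoff(xs, probs, k) - put_payoff(xs, probs, k)
# Mean absolute deviation about k: E|X-k| = call + put
mad_k = call_payoff(xs, probs, k) + put_payoff(xs, probs, k)
```

These two identities hold for any k and any distribution with finite mean, since |x − k| = (x − k)⁺ + (k − x)⁺ and x − k = (x − k)⁺ − (k − x)⁺ pointwise.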


2021 ◽  
Author(s):  
FNU SRINIDHI

Research on dye solubility modeling in supercritical carbon dioxide has gained prominence over the past few decades. A simple and ubiquitous model capable of accurately predicting solubility in supercritical carbon dioxide would be invaluable for industrial and research applications. In this study, we present such a model for predicting dye solubility in supercritical carbon dioxide with ethanol as the co-solvent, for a qualitatively diverse sample of eight dyes. A feedforward backpropagation artificial neural network model based on the Levenberg-Marquardt algorithm was constructed with seven input parameters for solubility prediction; the network architecture was optimized to [7-7-1], with a mean absolute error, mean square error, root mean square error, and Nash-Sutcliffe coefficient of 0.026, 0.0016, 0.04, and 0.9588, respectively. Further, Pearson product-moment correlation analysis was performed to assess the relative importance of the parameters considered in the ANN model. A total of twelve prevalent semiempirical equations were also studied to analyze their efficiency in correlating the solubility of the prepared sample. The Mendez-Teja model was found to be relatively efficient, with a root mean square error of 0.094 and a mean absolute error of 0.0088. Furthermore, Grey relational analysis was performed, and the optimum regime of temperature and pressure was identified with dye solubility as the higher-the-better performance characteristic. Finally, the dye-specific crossover ranges were identified by analysis of isotherms, and a strategy for class-specific selective dye extraction using the supercritical CO2 extraction process is proposed.
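Of the metrics above, the Nash-Sutcliffe coefficient is the least commonly seen outside hydrology, so a brief definition helps: it compares the model's squared error against the error of simply predicting the observed mean. A minimal sketch (the observed/predicted values are hypothetical, not the study's data):

```python
def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe model efficiency: 1 - SSE/SST.
    1.0 is a perfect fit; 0.0 means no better than predicting
    the observed mean; negative values are worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

# Hypothetical solubility values (arbitrary units).
obs  = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.1, 1.9, 3.2, 3.8, 5.1]
nse = nash_sutcliffe(obs, pred)
```

On this toy data the coefficient is about 0.989, in the same "close to 1" regime the abstract reports for the optimized network (0.9588).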


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Ludi Wang ◽  
Wei Zhou ◽  
Ying Xing ◽  
Xiaoguang Zhou

The prevention, evaluation, and treatment of hypertension have attracted increasing attention in recent years. As photoplethysmography (PPG) technology has been widely applied to wearable sensors, the noninvasive estimation of blood pressure (BP) using the PPG method has received considerable interest. In this paper, a method for estimating systolic and diastolic BP based only on a PPG signal is developed. The multitaper method (MTM) is used for feature extraction, and an artificial neural network (ANN) is used for estimation. Compared with previous approaches, the proposed method obtains better accuracy; the mean absolute error is 4.02 ± 2.79 mmHg for systolic BP and 2.27 ± 1.82 mmHg for diastolic BP.
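The "MAE ± SD" convention used in the accuracy figures above reports the mean of the absolute errors together with the standard deviation of those absolute errors. A minimal sketch (the BP readings are hypothetical; whether the paper uses the population or sample SD is not stated, and this sketch assumes the population form):

```python
import math

def mae_with_sd(true_bp, est_bp):
    """MAE and the SD of the absolute errors, i.e. the
    'MAE +/- SD' convention common in BP-estimation papers."""
    abs_err = [abs(t - e) for t, e in zip(true_bp, est_bp)]
    n = len(abs_err)
    m = sum(abs_err) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in abs_err) / n)
    return m, sd

# Hypothetical systolic BP readings (mmHg).
true_sbp = [120.0, 135.0, 110.0, 128.0]
est_sbp  = [124.0, 131.0, 112.0, 127.0]
m, sd = mae_with_sd(true_sbp, est_sbp)
```

Reporting the SD alongside the MAE matters clinically: two estimators with the same MAE can differ greatly in how often they produce large individual errors.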

