An Improved Progressive TIN Densification Filtering Method Considering the Density and Standard Variance of Point Clouds

2018 ◽  
Vol 7 (10) ◽  
pp. 409 ◽  
Author(s):  
Youqiang Dong ◽  
Ximin Cui ◽  
Li Zhang ◽  
Haibin Ai

The progressive TIN (triangular irregular network) densification (PTD) filter algorithm is widely used for filtering point clouds. In the PTD algorithm, the iterative densification parameters become smaller over the course of filtering. As a result, the performance of the PTD algorithm, especially its type I errors, is poor for point clouds with high density and standard variance. Hence, an improved PTD filtering algorithm for such point clouds is proposed in this paper. The improved method divides the iterative densification process into two stages. In the first stage, the iterative densification process of the classical PTD algorithm is used, and the two densification parameters become smaller. When the density of points belonging to the TIN exceeds a certain value (defined here as the standard variance intervention density), the iterative densification process moves into the second stage. In the second stage, a new multi-scale iterative densification strategy is applied, and the angle threshold becomes larger. The experimental results show that, compared with the PTD algorithm, the improved algorithm reduces the type I errors and total errors of the DIM (dense image matching) point clouds by 7.53% and 4.09%, respectively. Although the type II errors increase slightly in our improved method, the wrongly added object points have little effect on the accuracy of the generated DSM. In short, our improved PTD method refines the classical PTD method and offers a better solution for filtering point clouds with high density and standard variance.
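As a concrete illustration of the mechanics involved, the sketch below shows a candidate-point acceptance test of the kind PTD uses, together with the two-stage threshold schedule described above. The function names, update factors, and thresholds are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of a PTD-style densification test with a two-stage
# threshold schedule. All parameter values are illustrative.
import numpy as np

def point_to_triangle_metrics(p, tri):
    """Distance from point p to the plane of triangle tri (3x3 array),
    and the largest angle between that plane and the lines joining p
    to the triangle's vertices (a conservative variant)."""
    a, b, c = tri
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    dist = abs(np.dot(p - a, n))
    angles = [np.degrees(np.arcsin(min(1.0, dist / np.linalg.norm(p - v))))
              for v in tri]
    return dist, max(angles)

def accept(p, tri, angle_thr_deg, dist_thr):
    dist, angle = point_to_triangle_metrics(p, tri)
    return dist <= dist_thr and angle <= angle_thr_deg

def update_thresholds(angle_thr, dist_thr, tin_density, intervention_density,
                      shrink=0.5, grow=1.2):
    """Stage 1 (classical PTD): both thresholds shrink each iteration.
    Stage 2 (improved PTD): once the TIN density exceeds the standard
    variance intervention density, the angle threshold grows again."""
    if tin_density < intervention_density:   # stage 1
        return angle_thr * shrink, dist_thr * shrink
    return angle_thr * grow, dist_thr        # stage 2

tri = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
print(accept(np.array([0.3, 0.3, 0.05]), tri, angle_thr_deg=8.0, dist_thr=0.1))
```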

2019 ◽  
Vol 8 (4) ◽  
pp. 1849-1853

Nowadays, many people apply for bank loans to meet their needs, but banks cannot extend credit to everyone, so they use screening measures to identify eligible customers. Sensitivity and specificity are widely used to measure the performance of categorical classifiers in medicine and, tangentially, in econometrics. Even with such measures, granting loans to customers who cannot repay, or denying loans to customers who could, leads to type I and type II errors. To minimize these errors, this study explains, first, how to determine whether sensitivity is large or small and, second, how to benchmark the forecasting model using fuzzy analysis based on fuzzy weights, which is then compared with the sensitivity analysis.
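For readers unfamiliar with these measures, the following minimal sketch computes sensitivity, specificity, and the corresponding type I and type II error rates for a hypothetical loan classifier. The labels and data are invented for illustration, and which error counts as "type I" depends on how the null hypothesis is framed:

```python
# Sensitivity/specificity and error rates for a toy loan classifier.
import numpy as np

def confusion_rates(y_true, y_pred):
    """y_true/y_pred: 1 = creditworthy, 0 = not creditworthy."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))  # loan granted, not repaid
    fn = np.sum((y_true == 1) & (y_pred == 0))  # good customer denied
    return {"sensitivity": tp / (tp + fn),       # true positive rate
            "specificity": tn / (tn + fp),       # true negative rate
            "type_I_rate": fp / (tn + fp),       # 1 - specificity
            "type_II_rate": fn / (tp + fn)}      # 1 - sensitivity

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
print(confusion_rates(y_true, y_pred))
```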


2018 ◽  
Vol 10 (12) ◽  
pp. 1996 ◽  
Author(s):  
Linfu Xie ◽  
Qing Zhu ◽  
Han Hu ◽  
Bo Wu ◽  
Yuan Li ◽  
...  

Aerial laser scanning or photogrammetric point clouds are often noisy at building boundaries. In order to produce regularized polygons from such noisy point clouds, this study proposes a hierarchical regularization method for the boundary points. Beginning with planar structures detected in the raw point clouds, two stages of regularization are employed. In the first stage, the boundary points of an individual plane are consolidated locally by shifting them along their refined normal vectors to resist noise, and then grouped into piecewise smooth segments. In the second stage, global regularities among segments from different planes are softly enforced through a labeling process in which the same label represents parallel or orthogonal segments. This is formulated as a Markov random field and solved efficiently via graph cut. The performance of the proposed method is evaluated for extracting 2D footprints and 3D polygons of buildings in a metropolitan area. The results reveal that the proposed method is superior to state-of-the-art methods both qualitatively and quantitatively in compactness. The simplified polygons fit the original boundary points with an average residual of 0.2 m while reducing edge complexity by up to 90%. The satisfactory performance of the proposed method shows promising potential for 3D reconstruction of polygonal models from noisy point clouds.
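The second-stage labeling can be made concrete with a small sketch of an MRF energy of the kind described: a data term measuring how well a segment's direction agrees with a label direction (folded at 90° so that parallel and orthogonal both count as agreement), plus a Potts smoothness term over neighbouring segments. The terms and weights below are illustrative assumptions; the graph-cut solver itself is not reproduced here:

```python
# Illustrative MRF energy for direction labeling of boundary segments.
import numpy as np

def data_term(seg_dir, label_dir):
    """Angular deviation between a segment direction and a label
    direction, folded with a 90-degree period so parallel and
    orthogonal segments can share one label."""
    d = abs(np.degrees(np.arctan2(seg_dir[1], seg_dir[0])) -
            np.degrees(np.arctan2(label_dir[1], label_dir[0]))) % 90.0
    return min(d, 90.0 - d)

def smoothness_term(label_i, label_j, weight=1.0):
    """Potts-style pairwise term: neighbouring segments are
    encouraged to share a label."""
    return 0.0 if label_i == label_j else weight

def total_energy(seg_dirs, labels, label_dirs, neighbours, lam=1.0):
    e = sum(data_term(seg_dirs[i], label_dirs[labels[i]])
            for i in range(len(seg_dirs)))
    e += lam * sum(smoothness_term(labels[i], labels[j])
                   for i, j in neighbours)
    return e
```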


2019 ◽  
Vol 100 (10) ◽  
pp. 1987-2007 ◽  
Author(s):  
Thomas Knutson ◽  
Suzana J. Camargo ◽  
Johnny C. L. Chan ◽  
Kerry Emanuel ◽  
Chang-Hoi Ho ◽  
...  

Abstract: An assessment was made of whether detectable changes in tropical cyclone (TC) activity are identifiable in observations and whether any changes can be attributed to anthropogenic climate change. Overall, historical data suggest detectable TC activity changes in some regions associated with TC track changes, while data quality and quantity issues create greater challenges for analyses based on TC intensity and frequency. A number of specific published conclusions (case studies) about possible detectable anthropogenic influence on TCs were assessed using the conventional approach of preferentially avoiding type I errors (i.e., overstating anthropogenic influence or detection). We conclude there is at least low to medium confidence that the observed poleward migration of the latitude of maximum intensity in the western North Pacific is detectable, or highly unusual compared to expected natural variability. Opinion among the author team was divided on whether any observed TC changes demonstrate discernible anthropogenic influence, or whether any other observed changes represent detectable changes. The issue was then reframed by assessing evidence for detectable anthropogenic influence while seeking to reduce the chance of type II errors (i.e., missing or understating anthropogenic influence or detection). For this purpose, we used a much weaker “balance of evidence” criterion for assessment. This leads to a number of more speculative TC detection and/or attribution statements, which we recognize have substantial potential for being false alarms (i.e., overstating anthropogenic influence or detection) but which may be useful for risk assessment. Several examples of these alternative statements, derived using this approach, are presented in the report.


1990 ◽  
Vol 15 (3) ◽  
pp. 237-247 ◽  
Author(s):  
Rand R. Wilcox

Let X and Y be dependent random variables with variances σ²x and σ²y. Recently, McCulloch (1987) suggested a modification of the Morgan-Pitman test of H₀: σ²x = σ²y. But, as this paper describes, there are situations where McCulloch’s procedure is not robust. A subsample approach, similar to the Box-Scheffé test, is also considered and found to give conservative results, in terms of Type I errors, for all situations considered, but it yields relatively low power. New results on the Sandvik-Olsson (1982) procedure are also described; the procedure is found to be nonrobust in situations not previously considered, and its power can be low relative to the other two techniques considered here. A modification of the Morgan-Pitman test based on the modified maximum likelihood estimate of a correlation is also considered. This last procedure appears to be robust in situations where the Sandvik-Olsson and McCulloch procedures are robust, and it can have more power than the Sandvik-Olsson procedure. But it, too, gives unsatisfactory results in certain situations. Thus, in terms of power, McCulloch’s procedure is found to be best, with the advantage of being simple to use. But it is concluded that, in terms of controlling both Type I and Type II errors, a satisfactory solution does not yet exist.
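For reference, the classical Morgan-Pitman test that these procedures modify can be sketched in a few lines: it exploits the identity Cov(X+Y, X−Y) = σ²x − σ²y, so equality of variances is equivalent to zero correlation between the paired sums and differences. The data below are simulated purely for illustration:

```python
# Classical Morgan-Pitman test of equal variances for paired data.
import numpy as np
from scipy import stats

def morgan_pitman(x, y):
    """Test H0: var(X) = var(Y) by testing for zero correlation
    between X+Y and X-Y, since Cov(X+Y, X-Y) = var(X) - var(Y)."""
    u, v = x + y, x - y
    return stats.pearsonr(u, v)   # (correlation, two-sided p value)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 50)
y = 0.5 * x + rng.normal(0.0, 1.5, 50)   # dependent on x, larger variance
r, p = morgan_pitman(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")
```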


1993 ◽  
Vol 76 (2) ◽  
pp. 407-412 ◽  
Author(s):  
Donald W. Zimmerman

This study investigated violations of random sampling and random assignment in data analyzed by nonparametric significance tests. A computer program induced correlations within groups, as well as between groups, and performed one-sample and two-sample versions of the Mann-Whitney-Wilcoxon test on the resulting scores. Nonindependence of observations within groups spuriously inflated the probability of Type I errors and depressed the probability of Type II errors, and nonindependence between groups had the reverse effect. This outcome, which parallels the influence of nonindependence on parametric tests, can be explained by the equivalence of the Mann-Whitney-Wilcoxon test and the Student t test performed on ranks replacing the initial scores.
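The kind of simulation reported here is easy to reproduce in outline. The sketch below induces within-group correlation through a shared random component and tracks the Mann-Whitney U test's rejection rate under a true null; all parameters are illustrative assumptions, not the original program:

```python
# Type I error rate of the Mann-Whitney U test with correlated
# observations within groups (shared-component construction).
import numpy as np
from scipy import stats

def rejection_rate(rho_within, n=20, reps=2000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        # A common component within each group induces correlation
        # rho_within between any two observations of that group.
        g1 = np.sqrt(rho_within) * rng.normal() + \
             np.sqrt(1 - rho_within) * rng.normal(size=n)
        g2 = np.sqrt(rho_within) * rng.normal() + \
             np.sqrt(1 - rho_within) * rng.normal(size=n)
        if stats.mannwhitneyu(g1, g2).pvalue < alpha:
            rejections += 1
    return rejections / reps

print(rejection_rate(0.0))   # ~0.05 with independent observations
print(rejection_rate(0.3))   # inflated under within-group correlation
```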


2021 ◽  
Author(s):  
Antonia Vehlen ◽  
William Standard ◽  
Gregor Domes

Advances in eye tracking technology have enabled the development of interactive experimental setups to study social attention. Since these setups differ substantially from the eye tracker manufacturer’s test conditions, validation is essential with regard to data quality and other factors potentially threatening data validity. In this study, we evaluated the impact of data accuracy and areas of interest (AOIs) size on the classification of simulated gaze data. We defined AOIs of different sizes using the Limited-Radius Voronoi-Tessellation (LRVT) method, and simulated gaze data for facial target points with varying data accuracy. As hypothesized, we found that data accuracy and AOI size had strong effects on gaze classification. In addition, these effects were not independent and differed for falsely classified gaze inside AOIs (Type I errors) and falsely classified gaze outside the predefined AOIs (Type II errors). The results indicate that smaller AOIs generally minimize false classifications as long as data accuracy is good enough. For studies with lower data accuracy, Type II errors can still be compensated to some extent by using larger AOIs, but at the cost of an increased probability of Type I errors. Proper estimation of data accuracy is therefore essential for making informed decisions regarding the size of AOIs.
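A compact sketch of LRVT-style gaze classification as described, assuming a gaze sample is assigned to the nearest AOI centre only when it falls inside the radius limit; the coordinates and radius below are toy values:

```python
# Limited-Radius Voronoi-Tessellation (LRVT) style gaze classification.
import numpy as np

def classify_gaze(gaze, aoi_centres, radius):
    """gaze: (N, 2) array; aoi_centres: (K, 2) array.
    Returns the assigned AOI index per sample, or -1 if outside all AOIs."""
    d = np.linalg.norm(gaze[:, None, :] - aoi_centres[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return np.where(d[np.arange(len(gaze)), nearest] <= radius, nearest, -1)

aois = np.array([[0.4, 0.6], [0.6, 0.6], [0.5, 0.4]])  # toy facial targets
gaze = np.array([[0.41, 0.61], [0.9, 0.9]])
print(classify_gaze(gaze, aois, radius=0.05))   # -> [0, -1]
```

Shrinking `radius` reduces Type I errors (gaze wrongly counted as inside an AOI) but, under poor accuracy, increases Type II errors, matching the trade-off reported above.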


2017 ◽  
Author(s):  
Torrin Liddell ◽  
John K. Kruschke

We surveyed all articles in the Journal of Personality and Social Psychology (JPSP), Psychological Science (PS), and the Journal of Experimental Psychology: General (JEP:G) that mentioned the term "Likert," and found that 100% of the articles that analyzed ordinal data did so using a metric model. We present novel evidence that analyzing ordinal data as if they were metric can systematically lead to errors. We demonstrate false alarms (i.e., detecting an effect where none exists, Type I errors) and failures to detect effects (i.e., loss of power, Type II errors). We demonstrate systematic inversions of effects, for which treating ordinal data as metric indicates the opposite ordering of means than the true ordering of means. We show the same problems (false alarms, misses, and inversions) for interactions in factorial designs and for trend analyses in regression. We demonstrate that averaging across multiple ordinal measurements does not solve or even ameliorate these problems. We provide simple graphical explanations of why these mistakes occur. Moreover, we point out that there is no sure-fire way to detect these problems by treating the ordinal values as metric, and instead we advocate use of ordered-probit models (or similar) because they will better describe the data. Finally, although frequentist approaches to some ordered-probit models are available, we use Bayesian methods because of their flexibility in specifying models and their richness and accuracy in providing parameter estimates.
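A toy simulation in the spirit of these demonstrations appears below: two groups share the same latent mean but differ in latent spread, and a t test on the Likert-coded responses tends to flag a spurious difference (a Type I error) because the fixed cutpoints are asymmetric. The cutpoints and sample size are illustrative assumptions:

```python
# False alarm from treating ordinal (Likert) data as metric.
import numpy as np
from scipy import stats

def likert(latent, thresholds=(-1.0, 0.0, 1.0, 2.0)):
    """Map latent values to ordinal categories 1..5 via fixed cutpoints."""
    return np.digitize(latent, thresholds) + 1

rng = np.random.default_rng(42)
n = 1000
group_a = likert(rng.normal(0.0, 1.0, n))   # same latent mean...
group_b = likert(rng.normal(0.0, 3.0, n))   # ...different latent spread

t, p = stats.ttest_ind(group_a, group_b)
print(f"ordinal means: {group_a.mean():.2f} vs {group_b.mean():.2f}, "
      f"p = {p:.4f}")   # typically "significant" despite equal latent means
```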


2020 ◽  
Vol 34 (07) ◽  
pp. 11596-11603 ◽  
Author(s):  
Minghua Liu ◽  
Lu Sheng ◽  
Sheng Yang ◽  
Jing Shao ◽  
Shi-Min Hu

3D point cloud completion, the task of inferring the complete geometric shape from a partial point cloud, has been attracting attention in the community. To acquire high-fidelity dense point clouds while avoiding the uneven distribution, blurred details, and structural loss that affect the results of existing methods, we propose a novel approach that completes the partial point cloud in two stages. In the first stage, the approach predicts a complete but coarse-grained point cloud with a collection of parametric surface elements. In the second stage, it merges the coarse-grained prediction with the input point cloud using a novel sampling algorithm. Our method utilizes a joint loss function to guide the distribution of the points. Extensive experiments verify the effectiveness of our method and demonstrate that it outperforms existing methods in both the Earth Mover's Distance (EMD) and the Chamfer Distance (CD).
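Of the two evaluation metrics, the Chamfer Distance is simple enough to sketch directly (the EMD requires solving an optimal matching and is omitted). The point sets below are toy data:

```python
# Symmetric Chamfer Distance between two point sets.
import numpy as np

def chamfer_distance(p, q):
    """p: (N, 3) and q: (M, 3) arrays. Mean squared distance from each
    point to its nearest neighbour in the other set, summed both ways."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2) ** 2
    return d.min(axis=1).mean() + d.min(axis=0).mean()

p = np.random.default_rng(0).normal(size=(128, 3))
q = p + 0.01 * np.random.default_rng(1).normal(size=(128, 3))
print(chamfer_distance(p, q))   # small for nearly identical sets
```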


1992 ◽  
Vol 75 (3) ◽  
pp. 1011-1020 ◽  
Author(s):  
Donald W. Zimmerman ◽  
Richard H. Williams ◽  
Bruno D. Zumbo

A computer-simulation study examined the one-sample Student t test under violation of the assumption of independent sample observations. The probability of Type I errors increased, and the probability of Type II errors decreased, spuriously elevating the entire power function. The magnitude of the change depended on the correlation between pairs of sample values as well as the number of sample values that were pairwise correlated. A modified t statistic, derived from an unbiased estimate of the population variance that assumed only exchangeable random variables instead of independent, identically distributed random variables, effectively corrected for nonindependence for all degrees of correlation and restored the probability of Type I and Type II errors to their usual values.
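The effect is straightforward to reproduce in outline. The sketch below draws equicorrelated (exchangeable) samples, shows the inflation of the one-sample t test's Type I error rate, and applies a variance correction of the kind described, simplified here by assuming the equicorrelation ρ is known rather than estimated:

```python
# Type I error rate of the one-sample t test under equicorrelated
# observations, with a simplified known-rho correction.
import numpy as np
from scipy import stats

def type_I_rate(rho, n=20, reps=5000, alpha=0.05, correct=False, seed=3):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        shared = rng.normal()   # shared component induces correlation rho
        x = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=n)
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        if correct:
            # Under equicorrelation, Var(mean) = (1+(n-1)*rho)/n * var,
            # while s^2 estimates (1-rho) * var, so rescale t accordingly.
            t *= np.sqrt((1 - rho) / (1 + (n - 1) * rho))
        if 2 * stats.t.sf(abs(t), df=n - 1) < alpha:
            rejections += 1
    return rejections / reps

print(type_I_rate(0.0))                 # ~0.05 under independence
print(type_I_rate(0.3))                 # inflated by nonindependence
print(type_I_rate(0.3, correct=True))   # restored toward 0.05
```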


2009 ◽  
Vol 84 (5) ◽  
pp. 1395-1428 ◽  
Author(s):  
Joseph V. Carcello ◽  
Ann Vanstraelen ◽  
Michael Willenborg

ABSTRACT: We study going-concern (GC) reporting in Belgium to examine the effects associated with a shift toward rules-based audit standards. Beginning in 2000, a major revision in Belgian GC audit standards took effect. Among its changes, auditors must ascertain whether their clients are in compliance with two “financial-juridical criteria” for board of directors' GC disclosures. In a study of a sample of private Belgian companies, we report two major findings. First, there is a decrease in auditor Type II errors, particularly by non-Big 6/5 auditors for their clients that fail both criteria. Second, there is an increase in Type I errors, again particularly for companies that fail both criteria. We also conduct an ex post analysis of the decrease in Type II errors and the increase in Type I errors. Our findings suggest the standard engenders both favorable and unfavorable effects, the net of which depends on the priorities assigned to the affected parties (creditors, auditors, companies, and employees).

