Calculating Point Cloud Object Volume Using Co-Opposite-Direction Slicing Method

Author(s):  
Bin Li ◽  
Xiaowei Bi ◽  
Cheng Peng ◽  
Yong Chen ◽  
Xiaofa Zhao ◽  
Chengsheng Yang

Although the Slicing Method (SM) is effective for calculating the volume of point cloud objects (PCOs), its applicability and practicability are limited by a degree of contingency and by directional defects. The Co-Opposite-Direction Slicing Method (CODSM) proposed in this paper improves PCO volume calculation by adding a parallel observation in the opposite (co-opposite) direction and taking the two-way mean as the result. The method takes full advantage of the mutual offsetting of random errors and the compensation of systematic directional errors, and can therefore effectively overcome (or mitigate) the random errors and reduce the systematic errors of SM. Two typical objects, a cone model and a stone lion base, are used as examples for calculating PCO volume with CODSM. The results show that CODSM retains all the inherent advantages of SM while effectively weakening the volatility of its random errors and the directionality of its systematic errors. CODSM is therefore a robust configuration upgrade of SM.
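To make the two-way idea concrete, here is a minimal sketch (not the authors' code; the function names, the uniform-slab scheme, and the convex-hull cross-sections are all assumptions) of slicing-based volume estimation run once in each direction along the same axis, with the two-way mean as the result. Taking each slab's area at the cutting plane first met in the slicing direction makes the forward pass behave like a left Riemann sum and the backward pass like a right one, so averaging them cancels much of the directional bias, which is the effect CODSM exploits.

```python
# Minimal sketch of one slicing variant, run in co-opposite directions and averaged.
import numpy as np
from scipy.spatial import ConvexHull

def _section_area(points, axis, z0, band):
    """Area of the cross-section near the cutting plane z = z0 (2D hull of nearby points)."""
    near = points[np.abs(points[:, axis] - z0) <= band / 2.0]
    if len(near) < 3:
        return 0.0
    try:
        return ConvexHull(np.delete(near, axis, axis=1)).volume  # for a 2D hull, .volume is the area
    except Exception:
        return 0.0  # degenerate (e.g. collinear) cross-section

def sliced_volume(points, axis=2, n_slices=50, direction=+1):
    """Sum of (cutting-plane area) * (slab thickness), planes taken from one end of the axis."""
    z = points[:, axis]
    z_min, z_max = z.min(), z.max()
    t = (z_max - z_min) / n_slices
    if direction > 0:
        planes = z_min + t * np.arange(n_slices)   # slice upward from the bottom
    else:
        planes = z_max - t * np.arange(n_slices)   # slice downward from the top
    return sum(_section_area(points, axis, p, t) * t for p in planes)

def codsm_volume(points, axis=2, n_slices=50):
    """Two-way (co-opposite-direction) estimate: mean of the forward and backward passes."""
    return 0.5 * (sliced_volume(points, axis, n_slices, +1) +
                  sliced_volume(points, axis, n_slices, -1))
```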


1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate the systematic and random errors associated with trigonometric parallaxes. First, the systematic errors that arise when trigonometric parallaxes are used to calibrate luminosity systems are discussed. Next, the determination of the external errors of parallax measurement is reviewed, and observatory corrections are discussed. Schilt's point is emphasized: because the causes of these systematic differences between observatories are not known, the computed corrections cannot be applied appropriately. However, modern parallax work is sufficiently accurate that observatory corrections must be determined if full use is to be made of the potential precision of the data. To this end, it is suggested that the experimental design be specified in advance; past experience has shown that accidental overlap of observing programs does not suffice to determine meaningful observatory corrections.


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1819
Author(s):  
Tiandong Shi ◽  
Deyun Zhong ◽  
Liguan Wang

The quality of geological modeling largely depends on the normals estimated at the geological sampling points. However, because geological sampling points are sparse and unevenly distributed, the normal estimation results carry great uncertainty. This paper proposes a geological modeling method based on dynamic normal estimation of sparse point clouds. The improved method consists of three stages: (1) estimating the normals of the point cloud with an improved local plane fitting method; (2) reorienting the normals with an improved minimum spanning tree method; (3) constructing the geological model with an implicit function. The innovation of the method is the iterative estimation of the point cloud normals: a geological engineer adjusts the normal directions of some points according to geological laws, and the method then uses these corrected normals as a reference to estimate the normals of all points. By repeating this iterative process, the normal estimation becomes progressively more accurate. Experimental results show that, compared with the original method, the improved method is better suited to normal estimation for sparse point clouds because it dynamically adjusts normals according to prior knowledge.
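As a concrete illustration of stage (1), the sketch below (assumed details, not the paper's implementation; the function name and the choice of k are mine) estimates each point's normal by PCA plane fitting over its k nearest neighbours: the eigenvector of the local covariance with the smallest eigenvalue is taken as the normal. The paper's improvements, the MST reorientation of stage (2), and the implicit-function modelling of stage (3) are not shown.

```python
# Baseline normal estimation by local plane fitting (PCA over k nearest neighbours).
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    tree = cKDTree(points)
    normals = np.empty_like(points, dtype=float)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)  # 3x3 local covariance
        eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]                  # smallest-variance direction = plane normal
    return normals
```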


2017 ◽  
Vol 16 (3) ◽  
pp. 258-264
Author(s):  
Upendra Kumar Giri ◽  
Anirudh Pradhan

Abstract
Objective: This study was conducted to establish the inherent uncertainty in shift determination by X-ray volumetric imaging (XVI) and to calculate the margins due to this inherent uncertainty using the van Herk formula.
Materials and methods: The study was performed on the XVI cone-beam computed tomography system integrated with an Elekta Axesse™ linear accelerator equipped with a six-degree-of-freedom HexaPOD couch. A Penta-Guide phantom was used to determine the inherent translational and rotational shifts by repeated imaging. The process was repeated 20 times a day, without moving the phantom, for 30 consecutive working days. The measured shifts were used to calculate margins with the van Herk formula.
Results: The mean standard deviations were 0·05, 0·05 and 0·06 mm along the three translational axes (x, y and z) and 0·05°, 0·05° and 0·05° about the three rotational axes. A paired-sample t-test was performed between the mean translational shifts (x, y, z) and the rotational shifts. The systematic errors were 0·03, 0·04 and 0·03 mm and the random errors 0·05, 0·06 and 0·06 mm in the lateral, cranio-caudal and anterio-posterior directions, respectively. For the rotational shifts, the systematic errors were 0·02°, 0·03° and 0·03° and the random errors 0·06°, 0·05° and 0·05° in pitch, roll and yaw, respectively.
Conclusion: There is an inherent uncertainty associated with the XVI tools. On the basis of these six-dimensional shifts, margins were calculated and recorded as a baseline for the quality assurance (QA) programme of the XVI imaging tools, whose reproducibility should be checked once a year or after any major hardware maintenance or software upgrade. Although the determined shifts were of submillimetre order, they are highly significant for the image quality control of the XVI tools. Every department practising quality radiotherapy with such imaging tools should establish its own baseline values of inherent shifts and margins during commissioning and use them as an important QA protocol for the tools.
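For reference, the margin recipe usually meant by "the van Herk formula" is M = 2.5Σ + 0.7σ, built from the systematic (Σ) and random (σ) error components; whether the authors used exactly this variant is an assumption, since the abstract does not spell it out. Applied to the reported translational values it gives submillimetre margins:

```python
# Hedged worked example: the common van Herk recipe M = 2.5*Sigma + 0.7*sigma,
# applied to the systematic and random errors quoted in the abstract.
def van_herk_margin(systematic, random):
    return 2.5 * systematic + 0.7 * random

print(van_herk_margin(0.03, 0.05))   # lateral: ~0.11 mm
print(van_herk_margin(0.04, 0.06))   # cranio-caudal: ~0.14 mm
print(van_herk_margin(0.03, 0.06))   # anterio-posterior: ~0.12 mm
```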


2019 ◽  
Vol 13 (1) ◽  
pp. 14
Author(s):  
Hendro Supratikno ◽  
David Premana

Parking is the state of a vehicle that is temporarily stationary because its driver has left it. The definition of parking includes every vehicle that stops at certain places, whether indicated by traffic signs or not, and not solely to pick up and/or drop off people and/or goods. Campus 3 of the Lumajang State Community Academy has facilities and infrastructure prepared by the Lumajang Regency government. However, the parking lots provided cannot accommodate vehicles optimally, because the ratio of the number of vehicles to the area of the parking lot is inadequate and because the parking-lot area was determined without analysing the measurement errors. Every measurement is assumed to contain errors, whether systematic errors, random errors or blunders, so the parking-lot measurements certainly contain errors as well. The authors therefore conducted this study to determine how systematic errors propagate and how large the systematic error of the parking-lot area of Campus 3 of the Lumajang Community Academy is. The methods used in this study include preparing materials and tools, making land sketches, decomposing them, measuring distances with a theodolite, deriving the land-area equations, and determining the propagation of systematic errors. The final goal of this study is thus to find the magnitude of the systematic error of the parking-lot area of Campus 3 of the Lumajang State Community Academy.
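Since the abstract does not state the propagation formulas, here is a minimal sketch of the standard variance-propagation law such a study relies on, applied to a hypothetical rectangular lot (the dimensions, standard errors, and function name are illustrative, not the study's data): for A = L·W with independent errors, σ_A² = W²σ_L² + L²σ_W².

```python
# Standard error propagation for an area computed from two measured distances.
import math

def area_sigma(L, W, sigma_L, sigma_W):
    # sigma_A = sqrt((dA/dL)^2 * sigma_L^2 + (dA/dW)^2 * sigma_W^2) with A = L * W
    return math.sqrt((W * sigma_L) ** 2 + (L * sigma_W) ** 2)

# Hypothetical 40 m x 25 m lot measured with 5 mm standard errors per distance.
print(area_sigma(40.0, 25.0, 0.005, 0.005))   # ~0.236 m^2 standard error in the area
```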


Author(s):  
T. O. Chan ◽  
D. D. Lichti

Lamp poles are among the most abundant highway and community components in modern cities. Their supporting parts are primarily tapered octagonal cones specifically designed for wind resistance. The geometry and positions of lamp poles are important information for various applications: for example, monitoring the deformation of aged lamp poles, maintaining an efficient highway GIS, and facilitating feature-based calibration of mobile LiDAR systems. In this paper, we present a novel geometric model for octagonal lamp poles. The model consists of seven parameters, including a rotation about the z-axis, and the points are constrained by the trigonometric properties of 2D octagons after the rotations are applied. For geometric fitting of a lamp pole point cloud captured by a terrestrial LiDAR, accurate initial parameter values are essential; they can be estimated by first fitting the points to a circular cone model, followed by some basic point cloud processing techniques. The model was verified by fitting both simulated and real data. The real data include several lamp pole point clouds captured by (1) a Faro Focus 3D and (2) a Velodyne HDL-32E. The fitting results using the proposed model are promising, with up to 2.9 mm improvement in fitting accuracy for the real lamp pole point clouds compared with the conventional circular cone model. The overall result suggests that the proposed model is appropriate and rigorous.
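As an illustration of the initialization step mentioned above (not the paper's full seven-parameter octagonal model), the sketch below fits a simplified circular cone with a vertical axis by non-linear least squares; the parameterization, initial guesses, and function names are assumptions of this sketch.

```python
# Simplified vertical-axis circular cone fit, as a stand-in for the initialization stage.
import numpy as np
from scipy.optimize import least_squares

def cone_residuals(params, pts):
    x0, y0, z_apex, alpha = params
    r = np.hypot(pts[:, 0] - x0, pts[:, 1] - y0)      # radial distance from the assumed axis
    r_model = (z_apex - pts[:, 2]) * np.tan(alpha)     # cone radius at each point's height
    return (r - r_model) * np.cos(alpha)               # approximate orthogonal distance to the cone

def fit_vertical_cone(pts):
    x0, y0 = pts[:, :2].mean(axis=0)                   # crude initial guesses
    guess = [x0, y0, pts[:, 2].max() + 1.0, 0.05]
    return least_squares(cone_residuals, guess, args=(pts,)).x
```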


2000 ◽  
Author(s):  
Matthew R. Jones ◽  
Jeffery T. Farmer ◽  
Shawn P. Breeding

Abstract An optical fiber thermometer (OFT) consists of an optical fiber whose sensing tip is given a metallic coating. The sensing tip of the fiber forms an isothermal cavity, and the emission from this cavity is approximately equal to the emission from a blackbody. Temperature readings are obtained by measuring the spectral radiative flux at the end of the fiber at two wavelengths; the ratio of these measurements is used to infer the temperature at the sensing tip. However, readings from optical fiber thermometers are corrupted by emission from the fiber itself when extended portions of the probe are exposed to elevated temperatures. This paper describes several ways in which the reading from a second fiber can be used to correct the corrupted temperature measurements. It is shown that two of the correction methods yield significant reductions in the systematic errors. However, these methods are sensitive to random errors, so a single-fiber OFT is preferable when the uncertainties in the measurements are large.
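For context, here is a hedged sketch of the underlying two-colour (ratio) pyrometry under the Wien approximation, assuming equal emissivity at the two wavelengths; the paper's correction schemes using a second fiber are not modelled, and the function name and unit choices are mine.

```python
# Two-colour (ratio) pyrometry under the Wien approximation.
import numpy as np

C2 = 14388.0  # second radiation constant, um*K

def ratio_temperature(flux1, flux2, lam1, lam2):
    """Temperature from the ratio of spectral fluxes at wavelengths lam1, lam2 (micrometres),
    assuming the same emissivity at both wavelengths (greybody cavity)."""
    R = flux1 / flux2
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - np.log(R))

# Example: with lam1 = 1.3 um, lam2 = 1.55 um and a measured ratio of about 0.40,
# the function returns roughly 1000 K.
```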


1964 ◽  
Vol 47 (2) ◽  
pp. 395-399
Author(s):  
W A Landmann ◽  
M C Worland

Abstract Results of collaborative studies on three nitrate and two nitrite methods were examined by the statistical procedures of Youden. The FeCl2 procedure was again found to be subject to extreme bias. A modified procedure employing m-xylenol gave results that were somewhat improved in precision over previous tests; however, the procedure was still subject to systematic errors and rather large random errors, resulting in only fair precision. A direct nitrate method based on color development with brucine was highly variable and unsatisfactory. The colorimetric procedure for nitrite, using Griess reagent, appeared to be relatively free of bias but had only fair precision, and its usefulness is limited by the high standard deviation. An iodometric procedure, based on liberation of iodine from KI solution by the nitrite and titration with thiosulfate, proved to be quite precise and subject only to small bias, within acceptable limits. This method was far superior to the colorimetric method and should be adopted as official first action for dry cure mix and pickle mix.
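For readers unfamiliar with the iodometric procedure, the standard stoichiometry behind it (my illustration, not taken from the paper) implies a 1:1 relation between nitrite and thiosulfate, so the nitrite content follows directly from the titrant volume:

```latex
% Hedged sketch (not from the paper): standard reactions behind the iodometric nitrite method.
\begin{align*}
2\,\mathrm{NO_2^-} + 2\,\mathrm{I^-} + 4\,\mathrm{H^+} &\longrightarrow 2\,\mathrm{NO} + \mathrm{I_2} + 2\,\mathrm{H_2O} \\
\mathrm{I_2} + 2\,\mathrm{S_2O_3^{2-}} &\longrightarrow 2\,\mathrm{I^-} + \mathrm{S_4O_6^{2-}} \\
n(\mathrm{NO_2^-}) &= n(\mathrm{S_2O_3^{2-}}) = c_{\mathrm{thiosulfate}}\,V_{\mathrm{titrant}}
\end{align*}
```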


2017 ◽  
Vol 14 (5) ◽  
pp. 499-506 ◽  
Author(s):  
Marc Buyse ◽  
Pierre Squifflet ◽  
Elisabeth Coart ◽  
Emmanuel Quinaux ◽  
Cornelis JA Punt ◽  
...  

Background/aims: Considerable human and financial resources are typically spent to ensure that data collected for clinical trials are free from errors. We investigated the impact of random and systematic errors on the outcome of randomized clinical trials.
Methods: We used individual patient data relating to response endpoints of interest in two published randomized clinical trials, one in ophthalmology and one in oncology. These randomized clinical trials enrolled 1186 patients with age-related macular degeneration and 736 patients with metastatic colorectal cancer. The ophthalmology trial tested the benefit of pegaptanib for the treatment of age-related macular degeneration and identified a statistically significant treatment benefit, whereas the oncology trial assessed the benefit of adding cetuximab to a regimen of capecitabine, oxaliplatin, and bevacizumab for the treatment of metastatic colorectal cancer and failed to identify a statistically significant treatment difference. We simulated trial results by adding errors that were independent of the treatment group (random errors) and errors that favored one of the treatment groups (systematic errors). We added such errors to the data for the response endpoint of interest for increasing proportions of randomly selected patients.
Results: Random errors added to up to 50% of the cases produced only slightly inflated variance in the estimated treatment effect of both trials, with no qualitative change in the p-value. In contrast, systematic errors produced bias even for very small proportions of patients with added errors.
Conclusion: A substantial amount of random error is required before appreciable effects on the outcome of randomized clinical trials are noted. In contrast, even a small amount of systematic error can severely bias the estimated treatment effects. Therefore, resources devoted to randomized clinical trials should be spent primarily on minimizing sources of systematic errors, which can bias the analyses, rather than on random errors, which result only in a small loss of power.
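A minimal sketch of the kind of perturbation described in the Methods (synthetic data and response rates of my choosing, not the trials' data or the authors' simulation code): a binary response is re-scored for a growing fraction of patients, either independently of arm (random errors) or always in favour of the experimental arm (systematic errors), and the estimated treatment effect is recomputed.

```python
# Synthetic illustration of random vs. systematic errors added to a binary endpoint.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
arm = rng.integers(0, 2, n)                                  # 0 = control, 1 = experimental
response = rng.random(n) < np.where(arm == 1, 0.45, 0.30)    # assumed true response rates

def treatment_effect(resp):
    return resp[arm == 1].mean() - resp[arm == 0].mean()

def add_random_errors(resp, proportion):
    out = resp.copy()
    idx = rng.choice(n, int(proportion * n), replace=False)
    out[idx] = rng.integers(0, 2, idx.size).astype(bool)      # re-score at random, arm-independent
    return out

def add_systematic_errors(resp, proportion):
    out = resp.copy()
    idx = rng.choice(n, int(proportion * n), replace=False)
    out[idx] = arm[idx] == 1                                   # always favour the experimental arm
    return out

print(treatment_effect(response))                             # unbiased estimate
print(treatment_effect(add_random_errors(response, 0.3)))     # diluted toward the null, favours neither arm
print(treatment_effect(add_systematic_errors(response, 0.1))) # biased upward even at a small proportion
```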

