Quality assessment of speckle patterns for DIC by consideration of both systematic errors and random errors

2016 ◽  
Vol 86 ◽  
pp. 132-142 ◽  
Author(s):  
Yong Su ◽  
Qingchuan Zhang ◽  
Xiaohai Xu ◽  
Zeren Gao

1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate the systematic and random errors associated with trigonometric parallaxes. First, the systematic errors which arise when trigonometric parallaxes are used to calibrate luminosity systems are discussed. Next, the determination of the external errors of parallax measurement is reviewed. Observatory corrections are discussed, and Schilt's point is emphasized: because the causes of these systematic differences between observatories are not known, the computed corrections cannot be applied appropriately. However, modern parallax work is sufficiently accurate that observatory corrections must be determined if full use is to be made of the potential precision of the data. To this end, it is suggested that a prior experimental design is required; past experience has shown that accidental overlap of observing programs will not suffice to determine meaningful observatory corrections.
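As a minimal illustration of one issue such a review addresses (this is not the paper's own analysis), the sketch below converts a trigonometric parallax into an absolute magnitude via M = m + 5 + 5 log10(π), with π in arcseconds, and shows that symmetric random parallax errors produce a systematic offset in the derived magnitudes because the transformation is nonlinear. All numerical values are hypothetical.

```python
import numpy as np

# Absolute magnitude from a trigonometric parallax, M = m + 5 + 5*log10(pi),
# with pi in arcseconds.
def absolute_magnitude(apparent_mag, parallax_arcsec):
    return apparent_mag + 5.0 + 5.0 * np.log10(parallax_arcsec)

# Toy Monte Carlo: symmetric (random) parallax errors propagate into a
# systematic offset in the derived magnitudes because log10 is nonlinear.
rng = np.random.default_rng(0)
true_pi = 0.010                      # 10 mas, hypothetical star
m_app = 8.0                          # hypothetical apparent magnitude
noisy_pi = true_pi + rng.normal(0.0, 0.002, size=100_000)
noisy_pi = noisy_pi[noisy_pi > 0]    # only positive parallaxes are usable
bias = absolute_magnitude(m_app, noisy_pi).mean() - absolute_magnitude(m_app, true_pi)
print(f"mean magnitude bias from symmetric parallax errors: {bias:+.3f} mag")
```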


2017 ◽  
Vol 16 (3) ◽  
pp. 258-264
Author(s):  
Upendra Kumar Giri ◽  
Anirudh Pradhan

Abstract. Objective: This study was conducted to establish the inherent uncertainty in shift determination by X-ray volumetric imaging (XVI) and to calculate the margins arising from this inherent uncertainty using the van Herk formula. Materials and methods: The study was performed on the XVI, a cone-beam computed tomography system integrated with the Elekta Axesse™ linear accelerator equipped with a six-degree-of-freedom HexaPOD couch. A Penta-Guide phantom was used to determine the inherent translational and rotational shifts by repeated imaging. The process was repeated 20 times a day, without moving the phantom, for 30 consecutive working days. The measured shifts were used to calculate margins with the van Herk formula. Results: The mean standard deviations were 0·05, 0·05 and 0·06 mm about the three translational axes (x, y and z) and 0·05°, 0·05° and 0·05° about the three rotational axes (about x, y and z). A paired-sample t-test was performed between the mean values of the translational shifts (x, y, z) and the rotational shifts. The systematic errors were found to be 0·03, 0·04 and 0·03 mm and the random errors 0·05, 0·06 and 0·06 mm in the lateral, cranio-caudal and anterio-posterior directions, respectively. For the rotational shifts, the systematic errors were 0·02, 0·03 and 0·03 and the random errors 0·06, 0·05 and 0·05 in the pitch, roll and yaw directions, respectively. Conclusion: Our study concluded that there is an inherent uncertainty associated with the XVI tools. On the basis of these six-dimensional shifts, margins were calculated and recorded as a baseline for the quality assurance (QA) programme for the XVI imaging tools, whose reproducibility should be checked once a year or after any major hardware maintenance or software upgrade. Although the determined shifts were of submillimetre order, they are highly significant for the image quality control of the XVI tools. Every department practicing quality radiotherapy with such imaging tools should establish its own baseline values of inherent shifts and margins during commissioning and adopt them as an important part of the QA protocol for the tools.
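A hedged sketch of the margin calculation referred to above, assuming the commonly quoted van Herk recipe M = 2.5Σ + 0.7σ (the exact formula variant used by the authors is an assumption); the translational systematic and random error values are those reported in the abstract.

```python
# Commonly quoted van Herk margin recipe:
#   M = 2.5 * Sigma + 0.7 * sigma
# where Sigma is the SD of systematic errors and sigma the SD of random errors.
def van_herk_margin(sys_sd_mm, rand_sd_mm):
    return 2.5 * sys_sd_mm + 0.7 * rand_sd_mm

# Translational values (mm) taken from the abstract.
systematic = {"lateral": 0.03, "cranio-caudal": 0.04, "anterio-posterior": 0.03}
random_err = {"lateral": 0.05, "cranio-caudal": 0.06, "anterio-posterior": 0.06}

for axis in systematic:
    m = van_herk_margin(systematic[axis], random_err[axis])
    print(f"{axis}: margin = {m:.3f} mm")
```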


2019 ◽  
Vol 13 (1) ◽  
pp. 14
Author(s):  
Hendro Supratikno ◽  
David Premana

Parking is the temporary state of a stationary vehicle that has been left by its driver. The definition of parking includes every vehicle stopped at a given place, whether indicated by traffic signs or not, other than stops made solely to pick up and/or drop off people and/or goods. Campus 3 of the Lumajang State Community Academy has facilities and infrastructure provided by the Lumajang Regency government; however, the parking lots provided cannot accommodate vehicles optimally because the ratio between the number of vehicles and the parking area is inadequate. One reason is that the measured area of the parking lot has not been analyzed for measurement error. Every measurement is assumed to contain errors, whether systematic errors, random errors, or gross errors (blunders), so the parking-lot measurements certainly contain errors as well. The authors therefore conducted this research to determine how systematic errors propagate into the area of the Campus 3 parking lot and how large the resulting systematic error is. The methods used in this study include preparing materials and tools, sketching the land, decomposing it into simple shapes, measuring distances with a theodolite, formulating the land-area equations, and deriving the propagation of systematic errors. The final goal of this study is thus to determine the magnitude of the systematic error in the parking area of Campus 3 of the Lumajang State Community Academy.
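A hedged sketch of the kind of error propagation described above (not the authors' actual survey computation): for a simple rectangular area A = L·W measured from theodolite distances, first-order propagation gives dA ≈ W·dL + L·dW for signed systematic offsets and σ_A² ≈ (W·σ_L)² + (L·σ_W)² for independent random errors. All dimensions below are hypothetical.

```python
import math

# First-order error propagation for a rectangular area A = L * W.
# Systematic offsets add linearly (signed); random errors add in quadrature.
def area_error_propagation(L, W, dL_sys, dW_sys, sL_rand, sW_rand):
    area = L * W
    systematic_dA = W * dL_sys + L * dW_sys           # signed systematic offset
    random_sA = math.hypot(W * sL_rand, L * sW_rand)  # independent random errors
    return area, systematic_dA, random_sA

# Hypothetical example values (metres); the real lot dimensions are not given.
area, dA_sys, sA_rand = area_error_propagation(
    L=40.0, W=25.0, dL_sys=0.02, dW_sys=0.02, sL_rand=0.01, sW_rand=0.01
)
print(f"A = {area:.1f} m^2, systematic error ~ {dA_sys:.2f} m^2, "
      f"random error ~ {sA_rand:.2f} m^2")
```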


Author(s):  
Bin Li ◽  
Xiaowei Bi ◽  
Cheng Peng ◽  
Yong Chen ◽  
Xiaofa Zhao ◽  
...  

Although the Slicing Method (SM) is effective for calculating the volume of point cloud objects (PCOs), its applicability and practicability are limited by a degree of contingency and by directional defects. The Co-Opposite-Direction Slicing Method (CODSM) proposed in this paper is an improved method for calculating PCO volume that adds a parallel (co-opposite-direction) observation and takes the two-way mean as the result. This method takes full advantage of the mutual offsetting of random errors and the compensation of systematic directional errors, which can effectively overcome (or mitigate) the effect of random errors and reduce the effect of systematic errors in SM. In this paper, two typical objects, a cone model and a stone lion base, are used as examples for calculating PCO volume with CODSM. The results show that CODSM retains all the inherent advantages of SM while effectively weakening the volatility of random errors and the directionality of systematic errors in SM. Therefore, CODSM is a robust configuration upgrade of SM.
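A hedged sketch of the slicing idea behind SM/CODSM, using an analytic cone in place of a real point cloud (the per-slice cross-section extraction from points is omitted): one-way slicing samples each slice at one face and so carries a directional systematic error; averaging the forward and reverse passes, as CODSM does, largely cancels it.

```python
import math

# Cross-section area of a cone of base radius R and height H at height z.
def cone_cross_section_area(z, R=1.0, H=2.0):
    r = R * (1.0 - z / H)          # radius shrinks linearly with height
    return math.pi * r * r

def sliced_volume(area_fn, H=2.0, n_slices=50, direction=+1):
    dz = H / n_slices
    # direction=+1 samples each slice at its lower face, -1 at its upper face,
    # which is the source of the directional (systematic) error.
    zs = [i * dz if direction > 0 else (i + 1) * dz for i in range(n_slices)]
    return sum(area_fn(z) * dz for z in zs)

V_up = sliced_volume(cone_cross_section_area, direction=+1)    # one-way pass
V_down = sliced_volume(cone_cross_section_area, direction=-1)  # opposite pass
V_codsm = 0.5 * (V_up + V_down)                                # two-way mean
V_true = math.pi * 1.0**2 * 2.0 / 3.0
print(f"up={V_up:.4f}  down={V_down:.4f}  two-way mean={V_codsm:.4f}  true={V_true:.4f}")
```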


2000 ◽  
Author(s):  
Matthew R. Jones ◽  
Jeffery T. Farmer ◽  
Shawn P. Breeding

Abstract An optical fiber thermometer (OFT) consists of an optical fiber whose sensing tip is given a metallic coating. The sensing tip of the fiber forms an isothermal cavity, and the emission from this cavity is approximately equal to that from a blackbody. Temperature readings are obtained by measuring the spectral radiative flux at the end of the fiber at two wavelengths; the ratio of these measurements is used to infer the temperature at the sensing tip. However, readings from optical fiber thermometers are corrupted by emission from the fiber itself when extended portions of the probe are exposed to elevated temperatures. This paper describes several ways in which the reading from a second fiber can be used to correct the corrupted temperature measurements. It is shown that two of the correction methods result in significant reductions in the systematic errors. However, these methods are sensitive to random errors, so it is preferable to use a single-fiber OFT if the uncertainties in the measurements are large.
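A hedged sketch of the two-wavelength ratio principle mentioned above, assuming the Wien-approximation form of the spectral emission and an ideal blackbody cavity (this is an illustrative form, not the paper's algorithm): the ratio of fluxes at two wavelengths can be inverted analytically for temperature.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

# Wien-approximation spectral flux (arbitrary constant factor omitted).
def wien_flux(wavelength_m, T):
    return wavelength_m**-5 * math.exp(-C2 / (wavelength_m * T))

def two_color_temperature(s1, s2, lam1, lam2):
    # From s1/s2 = (lam2/lam1)**5 * exp(-C2/T * (1/lam1 - 1/lam2)):
    #   T = C2*(1/lam1 - 1/lam2) / (5*ln(lam2/lam1) - ln(s1/s2))
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * math.log(lam2 / lam1) - math.log(s1 / s2))

# Round-trip check at a hypothetical sensing-tip temperature of 1200 K.
lam1, lam2 = 0.9e-6, 1.05e-6         # assumed wavelengths (m)
T_true = 1200.0
s1, s2 = wien_flux(lam1, T_true), wien_flux(lam2, T_true)
print(two_color_temperature(s1, s2, lam1, lam2))  # ~1200 K
```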


1964 ◽  
Vol 47 (2) ◽  
pp. 395-399
Author(s):  
W A Landmann ◽  
M C Worland

Abstract Results of collaborative studies on three nitrate and two nitrite methods were examined by the statistical procedures of Youden. The FeCl2 procedure was again found to be subject to extreme bias. A modified procedure employing m-xylenol gave results somewhat improved in precision over previous tests; however, the procedure was still subject to systematic errors and rather large random errors, resulting in only fair precision. A direct nitrate method based on color development with brucine was highly variable and unsatisfactory. The colorimetric procedure for nitrite, using Griess reagent, appeared to be relatively free of bias but had only fair precision, and its usefulness is limited by the high standard deviation. An iodometric procedure, based on liberation of iodine from KI solution by the nitrite and titration with thiosulfate, proved to be quite precise and subject only to small bias, within acceptable limits for the procedure. This method was far superior to the colorimetric method and should be adopted as official first action for dry cure mix and pickle mix.
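A hedged sketch of a Youden two-sample analysis of the kind referred to above, under a simple additive model (lab bias plus within-lab random error); this is an illustration of the general technique, not the study's exact procedure, and the laboratory results are hypothetical.

```python
import numpy as np

# Each lab reports results x and y on two similar materials; in the differences
# the lab bias cancels (random component), in the totals it adds twice
# (systematic, between-lab component).
def youden_components(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = x - y
    t = x + y
    var_random = np.var(d, ddof=1) / 2.0
    var_between = max((np.var(t, ddof=1) - np.var(d, ddof=1)) / 4.0, 0.0)
    return np.sqrt(var_random), np.sqrt(var_between)

# Hypothetical results from 8 labs (the study's raw data are not reproduced here).
x = [2.10, 2.25, 1.98, 2.31, 2.15, 2.05, 2.22, 2.40]
y = [1.95, 2.12, 1.88, 2.20, 2.00, 1.90, 2.10, 2.28]
s_random, s_systematic = youden_components(x, y)
print(f"random SD ~ {s_random:.3f}, between-lab (systematic) SD ~ {s_systematic:.3f}")
```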


2017 ◽  
Vol 14 (5) ◽  
pp. 499-506 ◽  
Author(s):  
Marc Buyse ◽  
Pierre Squifflet ◽  
Elisabeth Coart ◽  
Emmanuel Quinaux ◽  
Cornelis JA Punt ◽  
...  

Background/aims Considerable human and financial resources are typically spent to ensure that data collected for clinical trials are free from errors. We investigated the impact of random and systematic errors on the outcome of randomized clinical trials. Methods We used individual patient data relating to response endpoints of interest in two published randomized clinical trials, one in ophthalmology and one in oncology. These randomized clinical trials enrolled 1186 patients with age-related macular degeneration and 736 patients with metastatic colorectal cancer. The ophthalmology trial tested the benefit of pegaptanib for the treatment of age-related macular degeneration and identified a statistically significant treatment benefit, whereas the oncology trial assessed the benefit of adding cetuximab to a regimen of capecitabine, oxaliplatin, and bevacizumab for the treatment of metastatic colorectal cancer and failed to identify a statistically significant treatment difference. We simulated trial results by adding errors that were independent of the treatment group (random errors) and errors that favored one of the treatment groups (systematic errors). We added such errors to the data for the response endpoint of interest for increasing proportions of randomly selected patients. Results Random errors added to up to 50% of the cases produced only slightly inflated variance in the estimated treatment effect of both trials, with no qualitative change in the p-value. In contrast, systematic errors produced bias even for very small proportions of patients with added errors. Conclusion A substantial amount of random errors is required before appreciable effects on the outcome of randomized clinical trials are noted. In contrast, even a small amount of systematic errors can severely bias the estimated treatment effects. Therefore, resources devoted to randomized clinical trials should be spent primarily on minimizing sources of systematic errors which can bias the analyses, rather than on random errors which result only in a small loss in power.
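A hedged sketch of the error-injection idea described above (the real study used patient-level data from two published trials; everything below is simulated with hypothetical response rates): random errors alter recorded responses irrespective of treatment arm, whereas systematic errors alter them in a way that favours one arm, which biases the estimated treatment effect.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated binary response data for two arms (hypothetical rates).
n = 600
control = rng.binomial(1, 0.30, n)
treated = rng.binomial(1, 0.45, n)

def effect(c, t):
    return t.mean() - c.mean()          # difference in response rates

def add_random_error(arm, frac):
    # Re-record a random fraction of responses at random, independent of arm.
    arm = arm.copy()
    idx = rng.choice(arm.size, int(frac * arm.size), replace=False)
    arm[idx] = rng.binomial(1, 0.5, idx.size)
    return arm

def add_systematic_error(arm, frac):
    # Record a fraction of one arm's patients as responders, favouring that arm.
    arm = arm.copy()
    idx = rng.choice(arm.size, int(frac * arm.size), replace=False)
    arm[idx] = 1
    return arm

print("no error:        ", effect(control, treated))
print("random errors:   ", effect(add_random_error(control, 0.2),
                                   add_random_error(treated, 0.2)))
print("systematic error:", effect(add_systematic_error(control, 0.2), treated))
```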


2011 ◽  
Vol 4 (4) ◽  
pp. 5147-5182
Author(s):  
V. A. Velazco ◽  
M. Buchwitz ◽  
H. Bovensmann ◽  
M. Reuter ◽  
O. Schneising ◽  
...  

Abstract. Carbon dioxide (CO2) is the most important man-made greenhouse gas (GHG) causing global warming. With electricity generation from fossil-fuel power plants now the economic sector that is the largest source of CO2, power plant emission monitoring has become more important than ever in the fight against global warming. In a previous study, Bovensmann et al. (2010) quantified the random and systematic errors of power plant CO2 emission estimates using a single overpass of the proposed CarbonSat instrument. In this study, we quantify the errors of power plant annual emission estimates from a hypothetical CarbonSat and from constellations of several CarbonSats, taking into account that power plant CO2 emissions are time-dependent. Our focus is on estimating the systematic errors arising from the sparse temporal sampling as well as the random errors that are primarily dependent on wind speeds. We used hourly emissions data from the US Environmental Protection Agency (EPA) combined with assimilated and re-analyzed meteorological fields from the National Centers for Environmental Prediction (NCEP). CarbonSat was simulated as a sun-synchronous low-Earth-orbiting (LEO) satellite with an 828-km orbit height and a local time of ascending node (LTAN) of 13:30, achieving global coverage after 5 days. We show that, despite the variability of the power plant emissions and the limited satellite overpasses, one CarbonSat can verify reported US annual CO2 emissions from large power plants (≥5 Mt CO2 yr−1) with a systematic error of less than ~4.9 % for 50 % of all the power plants. For 90 % of all the power plants, the systematic error was less than ~12.4 %. We additionally investigated two different configurations using a constellation of 5 CarbonSats. One achieves global coverage every day but samples the targets only at fixed local times. The other samples the targets five times at two-hour intervals approximately every 6th day but achieves global coverage only after 5 days. From the statistical analyses we found, as expected, that the random errors improve by approximately a factor of two if 5 satellites are used. On the other hand, more satellites do not result in a large reduction of the systematic error. The systematic error is somewhat smaller for the constellation configuration achieving global coverage every day. Finally, we recommend the CarbonSat constellation configuration that achieves daily global coverage.
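A hedged sketch of the sparse-temporal-sampling error studied above, using synthetic hourly emissions rather than EPA data: the annual total is estimated by averaging the emission rate at the few overpass hours and scaling to a full year, and the mismatch with the true annual total is the sampling-induced error. The diurnal/seasonal cycle, revisit interval, and cloud-loss fraction are all assumed numbers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical plant: diurnal + seasonal load cycle around 1000 t CO2/h.
hours = np.arange(8760)
true_rate = 1000 + 200 * np.sin(2 * np.pi * hours / 24) \
                 + 150 * np.sin(2 * np.pi * hours / 8760)
true_annual = true_rate.sum()

# One satellite, ~13:30 local overpass, revisiting the plant every 5 days,
# with only about half the overpasses usable (clouds) -- assumed values.
overpass_hours = np.arange(13, 8760, 24 * 5)
usable = overpass_hours[rng.random(overpass_hours.size) < 0.5]

# Annual estimate: mean sampled rate scaled to the full year.
estimated_annual = true_rate[usable].mean() * 8760
print(f"sampling error: {100 * (estimated_annual - true_annual) / true_annual:+.1f} %")
```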

