Accuracy Analysis of Point Laser Triangulation Probes Using Simulation

1998 ◽  
Vol 120 (4) ◽  
pp. 736-745 ◽  
Author(s):  
K. B. Smith ◽  
Y. F. Zheng

Point Laser Triangulation (PLT) probes are relatively new noncontact probes being integrated with Coordinate Measuring Machines (CMMs). Two prominent advantages of PLT probes are fast measuring speeds (typically 100 to 1000 times faster than touch probes) and the absence of contact force, which makes soft or fragile objects measurable. These advantages have motivated the integration of PLT probes onto CMMs. However, because the PLT probe is an electro-optical device, many optics-related factors affect its operation, such as sensor-to-surface orientation and surface reflectivity. To study and better understand these error sources, PLT probe models are needed to simulate observed measurement errors. This article presents a new PLT probe model, which simulates observed measurement errors and shows the effects of the placement and orientation of internal components. The model is a combination of internal component models developed using geometrical optics. It successfully simulates the measurement errors from specular reflection observed experimentally with real PLT probes, and it allows the parameters of the internal components to be studied.
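
The sketch below is not the authors' model; it is a minimal 2-D illustration of the triangulation geometry such a geometrical-optics model builds on. The baseline B, focal length F, and triangulation angle THETA are illustrative assumptions.

```python
# Minimal sketch (illustrative values, not the paper's probe model): a 2-D
# geometric view of a point laser triangulation probe. The laser travels along
# the z-axis; a pinhole "camera" sits at baseline B with its detector tilted
# by the triangulation angle THETA.
import numpy as np

B = 30.0                   # mm, baseline between laser axis and pinhole (assumed)
F = 20.0                   # mm, effective focal length of the imaging lens (assumed)
THETA = np.deg2rad(30.0)   # triangulation angle (assumed)

def detector_position(z):
    """Detector offset of the laser spot for a surface at stand-off z (mm)."""
    alpha = np.arctan2(z, B)              # viewing angle of the spot from the pinhole
    return F * np.tan(alpha - THETA)      # offset on the tilted detector (mm)

def estimate_range(u):
    """Invert the triangulation relation: detector offset u -> range z."""
    alpha = np.arctan2(u, F) + THETA
    return B * np.tan(alpha)

z_true = np.linspace(40.0, 80.0, 5)
u = detector_position(z_true)
print(np.round(estimate_range(u) - z_true, 9))  # ~0 for this ideal geometry
```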

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4949
Author(s):  
Patrick Kienle ◽  
Lorena Batarilo ◽  
Markus Akgül ◽  
Michael H. Köhler ◽  
Kun Wang ◽  
...  

Absolute distance measurement is a field of research with a large variety of applications. Laser triangulation is a well-tested and well-developed technique that uses geometric relations to calculate the absolute distance to an object. Its advantages include a simple and cost-effective setup that nevertheless achieves high accuracy and resolution at short distances. A main problem of the technology is that even small changes in the optomechanical setup, e.g., due to thermal expansion, lead to significant measurement errors. Therefore, in this work, we introduce an optical setup containing only a beam splitter and a mirror, which splits the laser into a measurement beam and a reference beam. The reference beam can then be used to compensate for different error sources, such as laser beam dithering or shifts of the measurement setup due to the thermal expansion of its components. The effectiveness of this setup is demonstrated by extensive simulations and measurements. The compensation setup reduces the deviation in static measurements by up to 75%, and the measurement uncertainty at a distance of 1 m can be reduced to 85 μm. Consequently, this compensation setup can improve the accuracy of classical laser triangulation devices and make them more robust against changes in environmental conditions.
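
A minimal sketch of the compensation idea, under assumptions (synthetic spot positions and drift magnitudes, not the authors' setup): a reference beam that shares the detector experiences the same common-mode drift as the measurement beam, so subtracting its deviation from the nominal reference position removes most of that drift.

```python
# Minimal sketch (assumed values): common-mode drift compensation with a
# reference spot on the same detector as the measurement spot.
import numpy as np

rng = np.random.default_rng(0)
u_meas_true = 1.250    # mm, spot position encoding the object distance (assumed)
u_ref_nominal = 0.400  # mm, reference spot position at calibration (assumed)

drift = 0.020 * rng.standard_normal(1000)   # common-mode drift, e.g. thermal shift
noise = 0.002 * rng.standard_normal(1000)   # independent readout noise

u_meas = u_meas_true + drift + noise
u_ref = u_ref_nominal + drift + 0.002 * rng.standard_normal(1000)

u_compensated = u_meas - (u_ref - u_ref_nominal)

print("raw std        :", u_meas.std())         # dominated by drift (~0.02 mm)
print("compensated std:", u_compensated.std())  # only residual noise remains
```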


Robotica ◽  
1995 ◽  
Vol 13 (1) ◽  
pp. 45-53
Author(s):  
Seppo Nissilä ◽  
Juha Kostamovaara

Summary: The pulsed time-of-flight laser rangefinding technique has been used in many industrial measurement applications, including 3D coordinate measuring devices, hot-surface profilers and mobile robot sensors. Optical fibres, typically 1–10 m in length and 100–400 μm in diameter, can be used to guide optical pulses to the separate sensing head of the measurement device. The use of a large multimode fibre may cause problems, however, when aiming at millimetre accuracy, as the construction and adjustment of the sensor-head optics may affect the transit-time linearity and measurement accuracy via multimode dispersion. Environmental effects, such as bending, vibration due to the moving sensing head and temperature, also cause measurement errors. These error sources are studied and characterized in this paper.
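
For orientation, a minimal sketch of the underlying range computation (standard time-of-flight relation, not the authors' instrument): the fibre and electronics delay appears as an offset that must be calibrated out, and multimode dispersion or bending would perturb that offset in practice. The delay and target values below are assumed.

```python
# Minimal sketch: pulsed time-of-flight ranging, range = c * (t - t_offset) / 2.
C = 299_792_458.0          # m/s, speed of light (vacuum value, adequate here)

def tof_range(t_measured_ns, t_offset_ns):
    """Range in metres from a round-trip time minus the calibrated offset."""
    t = (t_measured_ns - t_offset_ns) * 1e-9
    return 0.5 * C * t

# Example: offset calibrated against a known 1.000 m target (values assumed).
t_offset = 52.0                        # ns, fibre + electronics delay (assumed)
t_meas = t_offset + 2 * 1.0 / C * 1e9  # round-trip time for a 1 m target
print(tof_range(t_meas, t_offset))     # -> ~1.0 m
```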


2014 ◽  
Vol 6 ◽  
pp. 841526 ◽  
Author(s):  
Xiaoming Chai ◽  
Jin Fan ◽  
Lanchuan Zhou ◽  
Bo Peng

This paper focuses on how the telescope gain is affected by the multilevel hybrid mechanism used for feed positioning in the Five-hundred-meter Aperture Spherical radio Telescope (FAST) project, based on a positioning-accuracy analysis of the mechanism. First, an error model for the whole mechanism is established and its physical meaning is clearly explained. Two main kinds of error sources are then considered: geometric errors and structural deformations. The positioning error over the mechanism's workspace is described by an efficient and intuitive approach. Because the feed position error lowers the telescope gain, this influence is analyzed in detail. It is concluded that the design of the mechanism can meet the requirements of the telescope performance.
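
As a rough illustration of how such an error model can be exercised over a workspace (not the paper's model), the sketch below propagates bounded error sources through a placeholder kinematic Jacobian, delta_p ~ J(q) * delta_q, and records the worst case over sampled poses. The Jacobian entries and error bounds are made-up assumptions.

```python
# Minimal sketch (placeholder kinematics, assumed bounds): first-order
# propagation of geometric error sources to the feed position over a
# sampled workspace.
import numpy as np

rng = np.random.default_rng(1)

def jacobian(q):
    """Placeholder Jacobian (3 position DOF vs. 6 error sources)."""
    # A real model would differentiate the hybrid mechanism's kinematics.
    return np.array([[1.0, 0.2 * np.sin(q[0]), 0.0, 0.1, 0.0, 0.0],
                     [0.0, 1.0, 0.2 * np.cos(q[1]), 0.0, 0.1, 0.0],
                     [0.0, 0.0, 1.0, 0.0, 0.0, 0.1]])

err_bound = np.full(6, 0.5)      # mm (or mrad), assumed magnitude of each error source

worst = 0.0
for _ in range(1000):            # sample poses across the workspace
    q = rng.uniform(-1.0, 1.0, size=3)
    J = jacobian(q)
    dp = np.abs(J) @ err_bound   # worst-case error for bounded sources
    worst = max(worst, np.linalg.norm(dp))

print(f"worst-case feed position error over sampled poses: {worst:.2f} mm")
```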


2009 ◽  
Vol 628-629 ◽  
pp. 179-184
Author(s):  
Wei Hua Ni ◽  
Zheng Qiang Yao ◽  
Jun Tong Xi

A significant amount of research has been performed to explore the effect of component tolerances on assembly quality. This paper analyzes the geometric error sources that affect the spindle rotation accuracy of a turntable. It then predicts the spindle rotation accuracy using a vector dimension chain, a method well suited to analyzing assembly quality. The method is validated by tests and can provide a theoretical reference for spindle rotation accuracy analysis. The computed values also suggest that the bearings' rotary precision could be relaxed to save cost.
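
A minimal sketch of the stack-up idea behind a dimension chain (assumed contributor magnitudes, not the paper's data): each geometric error source contributes an error vector, and the chain can be accumulated either worst-case or statistically (root-sum-square).

```python
# Minimal sketch: accumulating error contributors in a dimension chain.
import numpy as np

# Each row: one contributor resolved into two radial components (mm), e.g.
# bearing runout or housing eccentricity. Magnitudes are assumed.
contributors = np.array([
    [0.0020, 0.0010],   # bearing A runout
    [0.0015, 0.0020],   # bearing B runout
    [0.0010, 0.0005],   # housing bore eccentricity
    [0.0008, 0.0010],   # shaft seat eccentricity
])

magnitudes = np.linalg.norm(contributors, axis=1)
worst_case = magnitudes.sum()
rss = np.sqrt((magnitudes ** 2).sum())

print(f"worst-case radial error: {worst_case * 1e3:.1f} um")
print(f"RSS radial error       : {rss * 1e3:.1f} um")
```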


2019 ◽  
Vol 8 (3) ◽  
pp. 413-432
Author(s):  
Roger Tourangeau

Abstract: This article examines the relationships among different types of nonobservation errors (all of which affect estimates from nonprobability internet samples) and between nonresponse and measurement errors; both are examples of how different error sources can interact. Estimates from nonprobability samples seem to have more total error than estimates from probability samples, even ones with very low response rates. This finding suggests that the combination of coverage, selection, and nonresponse errors has greater cumulative effects than nonresponse error alone. The probabilities of having internet access, joining an internet panel, and responding to a particular survey request are probably correlated and, as a result, may lead to greater covariances with survey variables than response propensities alone; the biases accentuate one another. With nonresponse and measurement error, the two sources seem more or less uncorrelated, with one exception: those most prone to social desirability bias (those in the undesirable categories) are also less likely to respond. In addition, the propensity for unit nonresponse seems to be related to item nonresponse.
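
The covariance mechanism described above can be illustrated with the standard approximation that nonresponse bias of a respondent mean is cov(propensity, y) divided by the mean propensity. The sketch below uses simulated data with assumed parameters, not the article's data.

```python
# Minimal sketch (simulated data): nonresponse bias ~ cov(rho, y) / mean(rho),
# where rho is each unit's response propensity and y is the survey variable.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

y = rng.normal(50.0, 10.0, n)                          # survey variable
rho = np.clip(0.05 + 0.004 * (y - 50.0), 0.01, 0.9)    # propensity tied to y (assumed)
responded = rng.random(n) < rho

bias_observed = y[responded].mean() - y.mean()
bias_approx = np.cov(rho, y)[0, 1] / rho.mean()

print(f"observed bias        : {bias_observed:.3f}")
print(f"covariance-based bias: {bias_approx:.3f}")
```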


1999 ◽  
Vol 122 (3) ◽  
pp. 582-586 ◽  
Author(s):  
Kevin B. Smith ◽  
Yuan F. Zheng

Point Laser Triangulation (PLT) probes have significant advantages over traditional touch probes. These advantages include higher throughput and the absence of contact force, which motivate the use of PLT probes on Coordinate Measuring Machines (CMMs). This paper addresses the problem of extrinsic calibration. We present a precise technique for calibrating a PLT probe to a CMM. This new method uses known information from a localized polyhedron, together with measurements taken on the polyhedron by the PLT probe, to determine the calibration parameters. With increasing interest in applying PLT probes for point measurements in coordinate metrology, such a calibration method is needed. [S1087-1357(00)01703-2]
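
The sketch below is not the authors' algorithm; it shows one common way such an extrinsic calibration can be posed: points measured in the probe frame must land on planes known in the CMM frame, and the probe-to-CMM transform is found by least squares with a small-angle rotation model. The planes, sample points, and "true" transform are made-up test data.

```python
# Minimal sketch (synthetic data): point-to-plane extrinsic calibration,
# solving n_i . (R p_i + t) = d_i for a small rotation w and translation t.
import numpy as np

rng = np.random.default_rng(3)

# Known polyhedron faces in the CMM frame: unit normal n, offset d (n . x = d).
normals = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                    [0.577, 0.577, 0.577], [0, 0.707, 0.707]], float)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
offsets = np.array([10.0, 20.0, 5.0, 25.0, 18.0])

def small_rotation(w):
    """First-order rotation matrix I + [w]_x for a small rotation vector w."""
    wx, wy, wz = w
    return np.array([[1, -wz, wy], [wz, 1, -wx], [-wy, wx, 1.0]])

# Synthesize measurements: sample each face, then map the points into the
# probe frame with an assumed small rotation and translation.
w_true, t_true = np.array([0.01, -0.02, 0.015]), np.array([0.5, -0.3, 0.8])
R_true = small_rotation(w_true)
plane_list, pts_cmm = [], []
for n, d in zip(normals, offsets):
    for _ in range(4):
        p = rng.uniform(-30, 30, 3)
        p += (d - n @ p) * n                # project onto the plane n . x = d
        pts_cmm.append(p)
        plane_list.append((n, d))
pts_cmm = np.array(pts_cmm)
pts_probe = (pts_cmm - t_true) @ R_true     # approximate inverse map (first order)

# Linearized residual: (p x n) . w + n . t = d - n . p
A = np.array([np.concatenate([np.cross(p, n), n])
              for (n, d), p in zip(plane_list, pts_probe)])
b = np.array([d - n @ p for (n, d), p in zip(plane_list, pts_probe)])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("rotation vector ~", np.round(x[:3], 3))   # close to the assumed w_true
print("translation     ~", np.round(x[3:], 2))   # close to the assumed t_true
```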


2010 ◽  
Vol 148-149 ◽  
pp. 299-303
Author(s):  
Yue Peng Chen ◽  
Hong Wei Fu ◽  
Biao Wang

The paper presents a PRS-XY hybrid PMT, made up of a 3-PRS parallel mechanism and an X-Y table. To improve the motion accuracy of the PRS-XY hybrid CNC machine tool, forward and inverse kinematics models are developed, and the main error sources affecting the machine tool's precision are identified. The relative position of the workpiece and the tool of the PRS-XY PMT is discussed, and the coordinates of the tool surface are calculated for various process models. The chamfer shape under variation of the main error sources is drawn in simulation software, and machining experiment results confirm the correctness of the analysis and simulation.
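
As a rough illustration (not the paper's kinematic model), the sketch below composes the X-Y table motion with an assumed parallel-platform pose as homogeneous transforms and shows how a single geometric error source shifts the tool point. All poses and the error magnitude are illustrative assumptions.

```python
# Minimal sketch: hybrid machine kinematics as a composition of transforms,
# with one injected geometric error source.
import numpy as np

def transform(R=np.eye(3), t=np.zeros(3)):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

tool_tip = np.array([0.0, 0.0, -100.0, 1.0])   # tool tip in platform frame (mm, assumed)

T_xy = transform(t=[120.0, 80.0, 0.0])                      # X-Y table position (assumed)
T_prs = transform(rot_x(np.deg2rad(5.0)), [0.0, 0.0, 300.0])  # platform pose (assumed)

nominal = (T_xy @ T_prs @ tool_tip)[:3]

# Inject one error source: a 0.05 mm offset on the table's Y position (assumed).
T_xy_err = transform(t=[120.0, 80.05, 0.0])
perturbed = (T_xy_err @ T_prs @ tool_tip)[:3]

print("tool displacement due to this error source (mm):", perturbed - nominal)
```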


Author(s):  
Américo Scotti ◽  
Márcio Andrade Batista ◽  
Mehdi Eshagh

Abstract: Power is an indirect measurand, determined by processing voltage and current analogue signals through calculations. Using arc welding as a case study, the objective of this work was to provide subsidies for power calculation. Based on the statistical definitions of correlation and covariance, a mathematical demonstration was developed to point out the difference between the product of two averages (e.g., $P = \overline{U} \times \overline{I}$) and the average of the products (e.g., $P = \overline{U \times I}$). Complementarily, a brief review of U and I waveform distortion sources was presented, emphasising the difference between signal standard deviations and measurement errors. It was demonstrated that the product of two averages is not the same as the average of the products, except under specific conditions (when the variables are uncorrelated). It was concluded that statistical correlation can easily flag the interrelation, and, assisted by covariance, these statistics quantify the inaccuracy between the two approaches. Finally, although these statistics are easy to compute, it is proposed that power should always be calculated as the average of the instantaneous U and I products. It is also proposed that measurement error sources should be observed and mitigated, since they predictably interfere with power-calculation accuracy.
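
A minimal numerical illustration of the point, using synthetic waveforms rather than welding data: for correlated U and I, mean(U*I) = mean(U)*mean(I) + cov(U, I), so the product of the averages misses exactly the covariance term. The waveform shapes and amplitudes below are assumptions.

```python
# Minimal sketch: product of averages vs. average of products for correlated
# voltage and current signals.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 50_000)

# A crude pulsed waveform: U and I rise and fall together, so they correlate.
pulse = (np.sin(2 * np.pi * 100 * t) > 0).astype(float)
U = 18.0 + 10.0 * pulse + rng.normal(0, 0.5, t.size)    # V (assumed)
I = 120.0 + 180.0 * pulse + rng.normal(0, 5.0, t.size)  # A (assumed)

p_avg_of_products = np.mean(U * I)            # recommended definition
p_product_of_avgs = np.mean(U) * np.mean(I)
cov_ui = np.cov(U, I)[0, 1]

print(f"mean(U*I)       = {p_avg_of_products:8.1f} W")
print(f"mean(U)*mean(I) = {p_product_of_avgs:8.1f} W")
print(f"difference      = {p_avg_of_products - p_product_of_avgs:8.1f} W")
print(f"cov(U, I)       = {cov_ui:8.1f}  (matches the difference)")
```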


2012 ◽  
Vol 44 (3) ◽  
pp. 454-466 ◽  
Author(s):  
Sander P. M. van den Tillaart ◽  
Martijn J. Booij ◽  
Maarten S. Krol

Uncertainties in discharge determination may have serious consequences for hydrological modelling and the resulting discharge predictions used for flood forecasting, climate change impact assessment and reservoir operation. The aim of this study is to quantify the effect of discharge errors on the parameters and performance of a conceptual hydrological model for discharge prediction applied to two catchments. Six error sources in discharge determination are considered: random measurement errors without autocorrelation; random measurement errors with autocorrelation; systematic relative measurement errors; systematic absolute measurement errors; hysteresis in the discharge–water level relation; and the effects of an outdated discharge–water level relation. Assuming realistic magnitudes for each error source, the results show that systematic errors and an outdated discharge–water level relation have a considerable influence on model performance, while the other error sources have a small to negligible effect. The effects of errors on parameters are large when the effects on model performance are large, and vice versa. Parameters controlling the water balance are influenced by systematic errors, and parameters related to the shape of the hydrograph are influenced by random errors. The large effects of discharge errors on model performance and parameters should be taken into account when using discharge predictions for flood forecasting and impact assessment.
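
For concreteness, a minimal sketch of how the first four error types can be imposed on a synthetic discharge series before re-calibrating a model (assumed magnitudes, not the study's values; the two rating-curve-related error types are omitted for brevity):

```python
# Minimal sketch: generating random (with/without autocorrelation) and
# systematic (relative/absolute) discharge errors.
import numpy as np

rng = np.random.default_rng(11)
q = 50.0 + 20.0 * np.sin(np.linspace(0, 20 * np.pi, 3650))  # "observed" discharge, m3/s

def random_error(q, sigma_rel=0.05):
    """Random relative errors without autocorrelation."""
    return q * (1.0 + sigma_rel * rng.standard_normal(q.size))

def autocorrelated_error(q, sigma_rel=0.05, phi=0.8):
    """Random relative errors with AR(1) autocorrelation."""
    e = np.zeros(q.size)
    for i in range(1, q.size):
        e[i] = phi * e[i - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()
    return q * (1.0 + sigma_rel * e)

def systematic_relative(q, bias=0.10):
    """Constant relative bias, e.g. a rating-curve slope error."""
    return q * (1.0 + bias)

def systematic_absolute(q, offset=2.0):
    """Constant absolute bias in m3/s, e.g. a datum shift."""
    return q + offset

for name, series in [("random", random_error(q)),
                     ("autocorrelated", autocorrelated_error(q)),
                     ("systematic rel.", systematic_relative(q)),
                     ("systematic abs.", systematic_absolute(q))]:
    print(f"{name:16s} mean error: {np.mean(series - q):+6.2f} m3/s")
```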


2010 ◽  
Vol 10 (9) ◽  
pp. 4145-4165 ◽  
Author(s):  
D. F. Baker ◽  
H. Bösch ◽  
S. C. Doney ◽  
D. O'Brien ◽  
D. S. Schimel

Abstract. We quantify how well column-integrated CO2 measurements from the Orbiting Carbon Observatory (OCO) should be able to constrain surface CO2 fluxes, given the presence of various error sources. We use variational data assimilation to optimize weekly fluxes at a 2°×5° resolution (lat/lon) using simulated data averaged across each model grid box overflight (typically every ~33 s). Grid-scale simulations of this sort have been carried out before for OCO using simplified assumptions for the measurement error. Here, we more accurately describe the OCO measurements in two ways. First, we use new estimates of the single-sounding retrieval uncertainty and averaging kernel, both computed as a function of surface type, solar zenith angle, aerosol optical depth, and pointing mode (nadir vs. glint). Second, we collapse the information content of all valid retrievals from each grid box crossing into an equivalent multi-sounding measurement uncertainty, factoring in both time/space error correlations and data rejection due to clouds and thick aerosols. Finally, we examine the impact of three types of systematic errors: measurement biases due to aerosols, transport errors, and mistuning errors caused by assuming incorrect statistics. When only random measurement errors are considered, both nadir- and glint-mode data give error reductions over the land of ~45% for the weekly fluxes, and ~65% for seasonal fluxes. Systematic errors reduce both the magnitude and spatial extent of these improvements by about a factor of two, however. Improvements nearly as large are achieved over the ocean using glint-mode data, but are degraded even more by the systematic errors. Our ability to identify and remove systematic errors in both the column retrievals and atmospheric assimilations will thus be critical for maximizing the usefulness of the OCO data.
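
The collapse of many retrievals into one equivalent measurement uncertainty can be illustrated with the standard result for the mean of equicorrelated errors; this is a generic statistical sketch, not the OCO processing chain, and the single-sounding uncertainty, sounding counts and correlation value are assumed.

```python
# Minimal sketch: equivalent uncertainty of N correlated soundings,
# sigma_eff^2 = sigma^2 * (1 + (N - 1) * rho) / N.
import numpy as np

def effective_sigma(sigma_single, n_soundings, rho):
    """Uncertainty of the mean of n equicorrelated soundings."""
    return sigma_single * np.sqrt((1.0 + (n_soundings - 1) * rho) / n_soundings)

sigma = 2.0   # ppm, assumed single-sounding XCO2 uncertainty
for n, rho in [(100, 0.0), (100, 0.3), (40, 0.3)]:   # 40: soundings lost to clouds/aerosol
    print(f"N={n:3d}, rho={rho:.1f} -> sigma_eff = {effective_sigma(sigma, n, rho):.2f} ppm")
```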

