Methods for improving the accuracy of CIE tristimulus values of object color by calculation Part II: Improvement on measurement wavelength ranges

2021 · Vol 16 · pp. 155892502098596
Author(s): Yang Hongying, Zhang Jingjing, Yang Zhihui, Zhou Jinli, Xie Wanzi, ...

The previous paper (Part I) analyzed the measurement errors of the spectrophotometer and their causes, then systematically investigated algorithms to reduce bandpass error and interval error. This paper (Part II) focuses on the influence of the measurement wavelength range, the truncation error it introduces, and algorithms to overcome that error. CIE recommends that tristimulus values be calculated over the range 360–830 nm; however, most spectrophotometers do not cover this range. Reducing the measurement range produces a measurement-range error, or truncation error. In this study, five ranges commonly employed in practice are selected for investigating truncation errors, and three extrapolation methods are used to extend the data to compensate for the loss of measurement range. Results are obtained for 1301 Munsell color chips under illuminant D65 and the CIE 1964 standard observer. For the standard 1-nm interval, the narrower the range, the larger the truncation error. For the commonly measured 10-nm interval, bandpass error and interval error must be handled at the same time; over 380–780 nm, Table LWL gives the most accurate outcomes and even improves the accuracy of the 360–750 nm range to an acceptable level. However, the 360–700 nm and 400–700 nm ranges still need extrapolation to reduce their truncation errors, even with Table LWL. The three extrapolation methods (nearest, linear, and second-order) all reduce the truncation error, but the optimal method varies with range, algorithm, and illuminant.
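As an illustrative sketch (not the authors' code), the three extrapolation methods compared above can be implemented by fitting a polynomial of order 0 (nearest), 1 (linear), or 2 (second-order) through the boundary points of the measured curve and evaluating it outside the measured range. The function names and the toy reflectance values are assumptions:

```python
def _lagrange(xs, ys, x):
    # Evaluate the Lagrange interpolating polynomial through (xs, ys) at x.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def extrapolate(wl, refl, x, order=1):
    """Extrapolate a measured reflectance curve to a wavelength x outside
    its range.  order=0 -> nearest, 1 -> linear, 2 -> second-order,
    using the order+1 boundary points on the relevant side."""
    n = order + 1
    if x < wl[0]:
        xs, ys = wl[:n], refl[:n]      # extend below the measured range
    else:
        xs, ys = wl[-n:], refl[-n:]    # extend above the measured range
    return _lagrange(xs, ys, x)
```

In a real workflow, the extended curve would then feed the usual tristimulus summation over the CIE-recommended 360–830 nm range.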

2011 · Vol 421 · pp. 743-749
Author(s): Xiao Ming Wu, Chun Liu

Abstract. The computation of responses and their design sensitivities plays an essential role in structural analysis and optimization, and significant work has been done in this area. The modal method is one of the classical approaches. In this study, a new error compensation method is constructed, in which the modal superposition method is hybridized with the epsilon algorithm for the analysis of responses and their sensitivities in undamped systems. The truncation error of modal superposition is expressed explicitly in terms of the first L eigenvalues and their eigenvectors, and the epsilon algorithm is used to accelerate the convergence of the truncated series. Numerical examples show that the present method is valid and effective.


In this paper Neville’s process for the repetitive linear combination of numerical estimates is re-examined and exhibited as a process for term-by-term elimination of error, expressed as a power series; this point of view immediately suggests a wide range of applications—other than interpolation, for which the process was originally developed, and which is barely mentioned in this paper—for example, to the evaluation of finite or infinite integrals in one or more variables, to the evaluation of sums, etc. A matrix formulation is also developed, suggesting further extensions, for example, to the evaluation of limits, derivatives, sums of series with alternating signs, and so on. It is seen also that Neville’s process may be readily applied in Romberg Integration; each suggests extensions of the other. Several numerical examples exhibit various applications, and are accompanied by comments on the behaviour of truncation and rounding errors as exhibited in each Neville tableau, to show how these provide evidence of progress in the improvement of the approximation, and internal numerical evidence of the nature of the truncation error. A fuller and more connected account of the behaviour of truncation errors and rounding errors is given in a later section, and suggestions are also made for choosing suitable specific original estimates, i.e. for choosing suitable tabular arguments in the elimination variable, in order to produce results as precise and accurate as possible.
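The term-by-term elimination described above can be sketched numerically. Here a Neville tableau extrapolates forward-difference derivative estimates (a toy application chosen for brevity, not one of the paper's examples) to step size zero, eliminating one power of the step per column:

```python
import math

def neville_to_zero(h, t):
    """Neville recurrence evaluated at 0: in column j of the tableau the
    error terms h, h**2, ..., h**j have been eliminated from the
    original estimates t[i], each computed with parameter h[i]."""
    p = [float(x) for x in t]
    tableau = [p[:]]
    for j in range(1, len(t)):
        p = [(h[i] * p[i + 1] - h[i + j] * p[i]) / (h[i] - h[i + j])
             for i in range(len(p) - 1)]
        tableau.append(p[:])
    return tableau

# Forward-difference estimates of d/dx e^x at x = 0 (true value 1),
# whose error is a power series in the step h.
steps = [0.8, 0.4, 0.2, 0.1]
ests = [(math.exp(s) - 1.0) / s for s in steps]
tab = neville_to_zero(steps, ests)
```

Reading down the columns of `tab` shows exactly the behaviour discussed in the paper: successive columns improve until rounding error eventually dominates.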


2019 · Vol 14 · pp. 155892501989408
Author(s): Hongying Yang, Jingjing Zhang, Zhihui Yang, Jinli Zhou, Wanzi Xie, ...

Color is one of the most important appearance properties of objects. To digitize color, measuring and calculating tristimulus values is the most basic task, after obtaining the reflectance spectrum. However, the accuracy of tristimulus values varies with the instrument and with the measurement and calculation methods. Textiles and some other applications of color demand high color quality because of their special uses. This series of studies aims to analyze and evaluate mathematical solutions for improving the accuracy of tristimulus values. The series has two parts: (1) Part I concentrates on measurement bandpass and intervals and the corresponding improvement algorithms; (2) Part II focuses on the influence of measurement ranges, their truncation errors, and algorithms to overcome those errors. In Part I (the current article), measurement errors caused by bandpass and measurement intervals in the spectrophotometer are analyzed. Then, algorithms including two bandpass corrections (3-point and 5-point), three interpolations (third-order polynomial Lagrange and Spline, and fifth-order polynomial Sprague), two Oleari deconvolution methods (zero- and second-order), and three optimized weighting-table methods (ASTM Table 6, Table LLR, and Table LWL) are studied systematically, implemented in MATLAB and based on the measured spectral reflectance of 1301 chips in the Munsell Color Book with the Commission Internationale de L’Eclairage (CIE) 1964 color-matching functions and the D65 standard illuminant. The results show that all the algorithms mentioned above yield very positive effects; among them, Table LWL performs best, reducing the bandpass and interval errors to 7‰ of their original values, and is recommended.
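The calculation being corrected throughout the series is the weighted-ordinate summation for tristimulus values. A self-contained sketch with made-up three-band tables follows (real computations use the full CIE tables at 1-nm or 10-nm intervals, or the ASTM/optimized weighting tables the paper compares):

```python
def tristimulus(R, S, cmf, dl=10.0):
    """Weighted-ordinate tristimulus calculation:
    X = k * sum(S * xbar * R) * dl (and likewise Y, Z), where k normalises
    the Y of a perfect reflector (R = 1 everywhere) to 100.
    R: reflectances, S: illuminant power, cmf: (xbar, ybar, zbar) triples,
    dl: wavelength interval in nm."""
    k = 100.0 / (sum(s * c[1] for s, c in zip(S, cmf)) * dl)
    X = k * dl * sum(s * c[0] * r for r, s, c in zip(R, S, cmf))
    Y = k * dl * sum(s * c[1] * r for r, s, c in zip(R, S, cmf))
    Z = k * dl * sum(s * c[2] * r for r, s, c in zip(R, S, cmf))
    return X, Y, Z
```

The bandpass corrections, interpolations, and weighting tables studied in Part I all amount to different ways of preparing `R`, `S`, and `cmf` before this summation.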


2021 · Vol 13 (16) · pp. 8862
Author(s): Jinlin Li, Lanhui Zhang

The accurate estimation of moisture content in deep soil layers is usually difficult due to the associated costs, strong spatiotemporal variability, and nonlinear relationship between surface and deep moisture content, especially in alpine areas (where complications include extreme heterogeneity and freeze-thaw processes). In an effort to identify the optimal method for this purpose, this study used measurements of soil moisture content at three depths (4, 10, and 20 cm) in the upper parts of the Babao River basin in the Qilian Mountains, Northwest China. These measurements were collected in the HiWATER (Heihe watershed allied telemetry experimental research) program to test four vertical extrapolation methods: exponential filtering (ExpF), linear regression (LR), support vector regression (SVR), and the application of a type of artificial neural network, the radial basis function (RBF). SVR provided the best predictions, in terms of the lowest root mean squared error and mean absolute error values, for the 10 and 20 cm layers from surface layer (4 cm) measurements. However, the data also confirmed that freeze-thawing is an important process in the study area, which makes the infiltration process more complex and highly variable over time. Thus, we compared the vertical extrapolation methods’ performance in each of the four periods with differing infiltration characteristics and found significant among-period differences in each case. However, SVR consistently provided the best estimates, and all methods provided better estimates for the 10 cm layer than for the 20 cm layer.
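Of the four methods compared, only the exponential filter is simple enough to sketch without a machine-learning library. Below is a minimal version of the recursive ExpF (in the style of Wagner's soil water index; the parameter values and variable names are illustrative, not from the study):

```python
import math

def exp_filter(surface, dt=1.0, T=5.0):
    """Recursive exponential filter (ExpF): produces a smoothed soil water
    index from a surface soil-moisture series, commonly used as a proxy
    for deeper-layer moisture.  T is the characteristic time length
    (a tuning parameter); dt is the sampling interval."""
    swi = [float(surface[0])]
    k = 1.0                                  # recursive gain, K_1 = 1
    for ms in surface[1:]:
        k = k / (k + math.exp(-dt / T))      # update the gain
        swi.append(swi[-1] + k * (ms - swi[-1]))
    return swi
```

SVR, the best performer in the study, is typically fitted with an ML library rather than written from scratch, which is why only ExpF is sketched here.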


2010 · Vol 27 (3) · pp. 594-603
Author(s): Mark A. Bourassa, Kelly McBeth Ford

Abstract A more versatile and robust technique is developed for determining area-averaged surface vorticity based on vector winds from swaths of remotely sensed wind vectors. This technique could also be applied to determine the curl of stress, and it could be applied to any gridded dataset of winds or stresses. The technique is discussed in detail and compared to two previous studies that focused on early development of tropical systems. Error characteristics of the technique are examined in detail. Specifically, three independent sources of error are explored: random observational error, truncation error, and representation error. Observational errors are due to random errors in the wind observations and determined as a worst-case estimate as a function of averaging spatial scale. The observational uncertainty in the Quick Scatterometer (QuikSCAT)-derived vorticity averaged for a roughly circular shape with a 100-km diameter, expressed as one standard deviation, is approximately 0.5 × 10−5 s−1 for the methodology described herein. Truncation error is associated with the assumption of linear changes between wind vectors. Uncertainty related to truncation has more spatial organization in QuikSCAT data than observational uncertainty. On 25- and 50-km scales, the truncation errors are very large. The third type of error, representation error, is due to the size of the area being averaged compared to values with 25-km length scales. This type of error is analogous to oversmoothing. Tropical and subtropical low pressure systems from three months of QuikSCAT observations are used to examine truncation and representation errors. Representation error results in a bias of approximately −1.5 × 10−5 s−1 for area-averaged vorticity calculated on a 100-km scale compared to vorticity calculated on a 25-km scale. The discussion of these errors will benefit future projects of this nature as well as future satellite missions.
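A sketch of the core finite-difference step on a regular grid follows. The paper's technique works on irregular swath data, so this is only the textbook version it refines; the grid and rotation parameters below are illustrative:

```python
def vorticity(u, v, dx, dy):
    """Relative vorticity zeta = dv/dx - du/dy on a regular grid via
    centred differences (interior points only).  The centred difference
    assumes locally linear variation between grid points, which is the
    source of the truncation error discussed in the abstract."""
    ny, nx = len(u), len(u[0])
    zeta = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dvdx = (v[j][i + 1] - v[j][i - 1]) / (2.0 * dx)
            dudy = (u[j + 1][i] - u[j - 1][i]) / (2.0 * dy)
            zeta[j][i] = dvdx - dudy
    return zeta
```

For a solid-body rotation (u = -Ωy, v = Ωx) the field is linear, so the centred difference recovers the exact vorticity 2Ω; for real wind fields the neglected curvature terms produce the truncation error the authors quantify.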


SIMULATION · 1964 · Vol 3 (5) · pp. 45-52
Author(s): Michael E. Fisher

The solution of partial differential equations by a differential analyser is considered with regard to the effects of noise, computational instability and the deviation of components from their ideal values. It is shown that the 'serial' method of solving parabolic, hyperbolic and elliptic equations leads to serious instability which increases as the finite difference interval is reduced. The truncation error (due to the difference approximations) decreases as the interval is made smaller and consequently an 'optimal' accuracy is reached when the unstable noise errors match the truncation errors. Evaluation shows that the attainable accuracy is severely limited, especially for hyperbolic and elliptic equations. The 'parallel' method is stable when applied to parabolic and hyperbolic (but not elliptic) equations and the attainable accuracy is then limited by the accumulation of component tolerances. Quantitative investigation shows how reasonably high accuracy can be achieved with a minimum of precise adjustments.
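The optimum described here (shrinking the interval reduces truncation error but amplifies noise-driven error) can be illustrated with a simple error model; the constants below are arbitrary, not taken from the paper:

```python
def total_error(h, noise=1e-6, c=1.0 / 6.0):
    """Modelled total error for interval h: a noise term growing like
    noise/h as the interval shrinks, plus a second-order truncation term
    c*h**2.  The sum is minimised where the two terms are comparable."""
    return noise / h + c * h ** 2

# Analytic optimum of the model: d/dh = 0  =>  h_opt = (noise / (2c)) ** (1/3)
h_opt = (1e-6 / (2.0 / 6.0)) ** (1.0 / 3.0)
```

The model reproduces the qualitative conclusion: error rises on either side of an interior optimum, so the attainable accuracy is bounded no matter how the interval is chosen.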


2018 · Vol 41
Author(s): Patrick Simen, Fuat Balcı

Abstract Rahnev & Denison (R&D) argue against normative theories and in favor of a more descriptive “standard observer model” of perceptual decision making. We agree with the authors in many respects, but we argue that optimality (specifically, reward-rate maximization) has proved demonstrably useful as a hypothesis, contrary to the authors’ claims.


2020 · Vol 64 (1-4) · pp. 165-172
Author(s): Dongge Deng, Mingzhi Zhu, Qiang Shu, Baoxu Wang, Fei Yang

It is necessary to develop a highly homogeneous, low-power-consumption, high-frequency, and small-size shim coil for a high-precision, low-cost atomic spin gyroscope (ASG). To provide this shim coil, a multi-objective optimization design method is proposed. All structural parameters, including the wire diameter, are optimized. In addition to the homogeneity, the size of the optimized coil, especially the axial position and winding number, is restricted so as to develop a small-size shim coil with low power consumption. 0-1 linear programming is adopted in the optimization model to conveniently describe the winding distributions, and the branch and bound algorithm is used to solve this model. Theoretical optimization results show that the homogeneity of the optimized shim coil is several orders of magnitude better than that of a same-size solenoid. A simulation experiment was also conducted: the optimization results are verified, and the power consumption of the optimized coil is about half that of the solenoid when providing the same uniform magnetic field. This indicates that the proposed optimization method is feasible for developing shim coils for ASGs.
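The solver machinery named here, 0-1 linear programming attacked by branch and bound, can be sketched on a generic toy 0-1 problem (a knapsack selection). The coil's actual objective and homogeneity constraints are of course different; this only illustrates the branch/bound/prune pattern:

```python
def knapsack_bb(values, weights, cap):
    """Branch and bound for a toy 0-1 problem: pick items (binary
    variables) maximising total value under a capacity constraint.
    The bound comes from the LP relaxation (one fractional item)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    best = [0.0]

    def bound(idx, val, room):
        # LP-relaxation bound: fill greedily, allowing one fractional item.
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                val += values[i]
            else:
                return val + values[i] * room / weights[i]
        return val

    def branch(idx, val, room):
        if val > best[0]:
            best[0] = val                       # record incumbent
        if idx == n or bound(idx, val, room) <= best[0]:
            return                              # prune this subtree
        i = order[idx]
        if weights[i] <= room:                  # branch: variable = 1
            branch(idx + 1, val + values[i], room - weights[i])
        branch(idx + 1, val, room)              # branch: variable = 0

    branch(0, 0.0, cap)
    return best[0]
```

In the coil problem the binary variables instead mark whether a winding occupies a candidate axial position, and the bound comes from the relaxed linear program, but the enumerate-bound-prune structure is the same.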


2020 · pp. 3-8
Author(s): L.F. Vitushkin, F.F. Karpeshin, E.P. Krivtsov, P.P. Krolitsky, V.V. Nalivaev, ...

The State special primary acceleration measurement standard for gravimetry (GET 190-2019), its composition, principle of operation, and basic metrological characteristics are presented. This standard is at the upper level of reference for free-fall acceleration measurements. Its accuracy and reliability were improved as a result of optimizing the adjustment procedures for the measurement systems and integrating them with upgraded systems, units, and modern hardware components. Special attention was given to adjusting the corrections applied to measurement results with respect to procedural, physical, and technical limitations. The investigation methods used made it possible to confirm the measurement range of GET 190-2019 and to determine the contributions of the main sources of error and their total value. The measurement characteristics of GET 190-2019 were confirmed by the results of measurements of the absolute value of the free-fall acceleration at the gravimetric site “Lomonosov-1” and by their comparison with data of different dates obtained by high-precision foreign and domestic gravimeters. The topicality of such measurements follows from the requirement to adequately address applied problems that need data on the parameters of the Earth’s gravitational field. Geophysics and navigation are the main fields of application for high-precision measurements of this kind.


Liquidity · 2018 · Vol 1 (1) · pp. 72-80
Author(s): Viva Faronika, Asriyal Asriyal

If customer expectations exceed the acceptable level of service, the customer is not satisfied. Conversely, if the acceptable level of service exceeds the expectations of customers, the customer will be satisfied. This means that if the Bank BRI Fatmawati branch can improve service quality for its customers, it will affect their level of satisfaction. This research found evidence that, in terms of the creation of quality services, the Bank BRI Fatmawati branch is one of the branches that implements the established policies and provides service in accordance with the existing service standards in the banking world. The influence of the service-quality policies applied by the Bank BRI Fatmawati branch is indicated by r²: the r² value is only 45%, and the remaining 55% is influenced by other variables not studied. Meanwhile, the relationship between service quality and customer satisfaction can be seen from the value r = 0.67, the correlation coefficient between the service-quality variable and customer satisfaction. This means there is a strong relationship between the independent variable X (quality of service) and the dependent variable Y (customer satisfaction), since r = 0.67 (67%) is greater than 50%.
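For reference, the relationship between the reported r and r² can be checked with a minimal Pearson correlation sketch (the data below are made up; the study's survey data are not reproduced here):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient r; r**2 is the share of variance in
    y explained by a linear fit on x (the abstract's 45% for r = 0.67)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# 0.67 ** 2 = 0.4489, i.e. roughly the 45% coefficient of determination
# reported in the abstract.
```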

