Approximating Complex Arithmetic Circuits with Guaranteed Worst-Case Relative Error

Author(s):  
Milan Češka ◽  
Milan Češka ◽  
Jiří Matyáš ◽  
Adam Pankuch ◽  
Tomáš Vojnar

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5586
Author(s):  
Yi-Tun Lin ◽  
Graham D. Finlayson

Spectral reconstruction (SR) algorithms attempt to recover hyperspectral information from RGB camera responses. Recently, the most common metric for evaluating the performance of SR algorithms has been the Mean Relative Absolute Error (MRAE)—an ℓ1 relative error (also known as percentage error). Unsurprisingly, the leading algorithms based on Deep Neural Networks (DNN) are trained and tested using the MRAE metric. In contrast, the much simpler regression-based methods (which can actually work tolerably well) are trained to optimize a generic Root Mean Square Error (RMSE) and then tested with MRAE. Another issue with the regression methods is that—because in SR the linear systems are large and ill-posed—they are necessarily solved using regularization. However, hitherto the regularization has been applied at the spectrum level, whereas in MRAE the errors are measured per wavelength (i.e., per spectral channel) and then averaged. The two aims of this paper are, first, to reformulate the simple regressions so that they minimize a relative error metric in training—we formulate both ℓ2 and ℓ1 relative error variants, where the latter is MRAE—and, second, to adopt a per-channel regularization strategy. Together, our modifications to how the regressions are formulated and solved lead to up to a 14% increment in mean performance and up to a 17% increment in worst-case performance (measured with MRAE). Importantly, our best result narrows the gap between the regression approaches and the leading DNN model to around 8% in mean accuracy.
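For reference, one way to write down the quantities the abstract refers to; the notation below is my own and is not taken from the paper:

```latex
% Mean Relative Absolute Error over N pixels and K spectral channels,
% with ground-truth values r_{ik} and reconstructions \hat{r}_{ik}:
\[
  \mathrm{MRAE} = \frac{1}{NK}\sum_{i=1}^{N}\sum_{k=1}^{K}
    \frac{\lvert \hat{r}_{ik} - r_{ik}\rvert}{r_{ik}}
\]
% An l2 relative-error regression with per-channel regularization
% (m_k is the k-th row of the RGB-to-spectrum map M, x_i the RGB or
% polynomially expanded feature, \gamma_k a per-channel penalty):
\[
  \min_{M}\ \sum_{k=1}^{K}\Biggl[\sum_{i=1}^{N}
    \Bigl(\frac{m_k^{\top}x_i - r_{ik}}{r_{ik}}\Bigr)^{2}
    + \gamma_k \lVert m_k\rVert_2^{2}\Biggr]
\]
% This decouples across channels into weighted ridge regressions with
% closed-form solutions:
\[
  m_k = \bigl(X^{\top}W_k X + \gamma_k I\bigr)^{-1} X^{\top}W_k\, r_{(k)},
  \qquad W_k = \mathrm{diag}\bigl(1/r_{1k}^{2},\dots,1/r_{Nk}^{2}\bigr),
\]
% where X stacks the features x_i^T as rows and r_{(k)} stacks the r_{ik}.
```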


2020 ◽  
Vol 2020 (28) ◽  
pp. 264-269
Author(s):  
Yi-Tun Lin ◽  
Graham D. Finlayson

Spectral reconstruction (SR) algorithms attempt to map RGB images to hyperspectral images. Classically, simple pixel-based regression is used to solve for this SR mapping, and more recently patch-based Deep Neural Networks (DNN) have been considered (with a modest performance increment). For either method, the 'training' process typically minimizes a Mean Squared Error (MSE) loss. Curiously, in recent research, SR algorithms are evaluated and ranked based on a relative percentage error, the so-called Mean Relative Absolute Error (MRAE), which behaves very differently from the MSE loss function. The most recent DNN approaches, perhaps unsurprisingly, directly optimize for this new MRAE error in training so as to match this new evaluation criterion. In this paper, we show how we can also reformulate pixel-based regression methods so that they too optimize a relative spectral error. Our Relative Error Least-Squares (RELS) approach minimizes an error that is similar to MRAE. Experiments demonstrate that regression models based on RELS deliver better spectral recovery, with up to a 10% increment in mean performance and a 20% improvement in worst-case performance, depending on the method.
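A minimal sketch of a RELS-style fit, assuming the relative-error weighting described in the abstract (per-sample, per-channel weights 1/r²); the function names, the ridge constant `lam`, and the random stand-in data are illustrative assumptions, not the authors' code:

```python
# RELS-style regression sketch: each spectral channel is fitted by weighted
# least squares with weights 1/r^2, so the squared *relative* error is minimized.
import numpy as np

def rels_fit(X, R, lam=1e-3, eps=1e-6):
    """X: (N, d) RGB (or expanded) features; R: (N, K) ground-truth spectra.
    Returns M: (d, K) such that X @ M approximates R in a relative-error sense."""
    d, K = X.shape[1], R.shape[1]
    M = np.zeros((d, K))
    for k in range(K):
        w = 1.0 / np.maximum(R[:, k], eps) ** 2        # relative-error weights
        A = X.T @ (w[:, None] * X) + lam * np.eye(d)   # weighted normal equations + ridge
        b = X.T @ (w * R[:, k])
        M[:, k] = np.linalg.solve(A, b)
    return M

def mrae(R_true, R_pred, eps=1e-6):
    """Mean Relative Absolute Error, the metric used to rank SR algorithms."""
    return np.mean(np.abs(R_true - R_pred) / np.maximum(R_true, eps))

# Toy usage with random stand-in data (in practice: RGB responses and 31-channel spectra).
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
R = rng.random((1000, 31)) + 0.1
M = rels_fit(X, R)
print("MRAE:", mrae(R, X @ M))
```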


Ingeniería ◽  
2016 ◽  
Vol 21 (2) ◽  
pp. 138-153 ◽  
Author(s):  
Omar Salazar ◽  
Juan Diego Rojas ◽  
Humberto Serrano

Context: The bottleneck in interval type-2 fuzzy logic systems is the output processing when using Centroid Type-Reduction + Defuzzification (the CTR+D method). Nie and Tan proposed an approximation to CTR+D (the NT method). Recently, Mendel and Liu improved the NT method (the INT method). Numerical examples (due to Mendel and Liu) exhibit the NT and INT methods as good approximations to CTR+D.
Method: Normalization to the unit interval of the membership function domains (examples and counterexample) and of the variables involved in the calculations for the three methods. Examples (due to Mendel and Liu) taken from the literature. A counterexample with piecewise linear membership functions. Comparison by means of error and percentage relative error.
Results: NT vs. CTR+D: our counterexample showed an error of 0.1014 and a percentage relative error of 30.53%, respectively 23 and 32 times higher than the worst case obtained in the examples. INT vs. CTR+D: our counterexample showed an error of 0.0725 and a percentage relative error of 21.83%, respectively 363 and 546 times higher than the worst case obtained in the examples.
Conclusions: The NT and INT methods are not necessarily good approximations to the CTR+D method.
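To make the comparison concrete, here is a small illustrative script (my own, with arbitrary stand-in membership functions, not the paper's counterexample) that computes the exact CTR+D output by enumerating centroid switch points and compares it with the Nie-Tan (NT) closed-form approximation using the same error and percentage relative error measures:

```python
# Exact CTR+D versus the Nie-Tan (NT) approximation for an interval type-2
# fuzzy set sampled on a grid; membership functions below are stand-ins.
import numpy as np

def ctrd_output(x, lmf, umf):
    """Exact centroid interval [c_l, c_r] found by enumerating the switch point
    (the same interval the Karnik-Mendel procedure converges to); CTR+D
    defuzzifies to the midpoint of that interval."""
    c_l, c_r = np.inf, -np.inf
    for k in range(len(x)):
        theta_l = np.concatenate([umf[:k + 1], lmf[k + 1:]])  # candidate for the left end
        theta_r = np.concatenate([lmf[:k + 1], umf[k + 1:]])  # candidate for the right end
        c_l = min(c_l, np.dot(x, theta_l) / np.sum(theta_l))
        c_r = max(c_r, np.dot(x, theta_r) / np.sum(theta_r))
    return 0.5 * (c_l + c_r)

def nt_output(x, lmf, umf):
    """Nie-Tan approximation: centroid of the sum of the lower and upper MFs."""
    theta = lmf + umf
    return np.dot(x, theta) / np.sum(theta)

x = np.linspace(0.0, 1.0, 201)                       # domain normalized to the unit interval
umf = np.exp(-0.5 * ((x - 0.4) / 0.25) ** 2)         # stand-in upper membership function
lmf = 0.5 * np.exp(-0.5 * ((x - 0.6) / 0.15) ** 2)   # stand-in lower membership function
lmf = np.minimum(lmf, umf)                           # keep LMF <= UMF pointwise

y_exact, y_nt = ctrd_output(x, lmf, umf), nt_output(x, lmf, umf)
print("error:", abs(y_nt - y_exact))
print("percentage relative error:", 100 * abs(y_nt - y_exact) / abs(y_exact))
```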


Author(s):  
J.D. Geller ◽  
C.R. Herrington

The minimum magnification at which an image can be acquired is determined by the design and implementation of the electron optical column and the scanning and display electronics. It is also a function of the working distance and, possibly, the accelerating voltage. For secondary and backscattered electron images there are usually no other limiting factors. However, for x-ray maps there are further considerations. Energy-dispersive x-ray spectrometers (EDS) have a much larger solid angle of detection than WDS. They also do not suffer from the Bragg's Law focusing effects which limit the angular range and focusing distance from the diffracting crystal. In practical terms, EDS maps can be acquired at the lowest magnification of the SEM, assuming the collimator does not cut off the x-ray signal. For WDS, the focusing properties of the crystal limit the angular range of acceptance of the incident x-radiation. The range is dependent upon the 2d spacing of the crystal, with the acceptance angle increasing with 2d spacing. The natural line width of the x-ray also plays a role. For the metal layered crystals used to diffract soft x-rays, such as Be - O, the minimum magnification is approximately 100X. In the worst case, for the LiF crystal, which diffracts Ti - Zn, approximately 1000X is the minimum.
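For reference, the Bragg condition underlying the WDS limitation described above (a standard relation; the field-of-view remark is my own gloss on the abstract's explanation):

```latex
% Bragg condition for a diffracting crystal (or layered structure) with
% interplanar/layer spacing d, diffraction order n, and x-ray wavelength \lambda:
\[
  n\lambda = 2d\sin\theta
\]
% For a fixed characteristic wavelength, diffraction is efficient only within the
% crystal's angular acceptance about the Bragg angle \theta; x-rays emitted from
% points far from the spectrometer's focal position fall outside this acceptance,
% which limits the usable field of view and hence the minimum magnification of a
% WDS map. Crystals with larger 2d spacings accept a wider angular range, so their
% minimum magnification is lower.
```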


2008 ◽  
Author(s):  
Sonia Savelli ◽  
Susan Joslyn ◽  
Limor Nadav-Greenberg ◽  
Queena Chen
