inherent error
Recently Published Documents


Total documents: 58 (five years: 12)
H-index: 12 (five years: 1)

Author(s):  
Yan Xu ◽  
Junyi Lin ◽  
Jianping Shi ◽  
Guofeng Zhang ◽  
Xiaogang Wang ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2917
Author(s):  
Padmanabhan Balasubramanian ◽  
Raunaq Nayar ◽  
Douglas L. Maskell

Approximate or inaccurate addition has been found to be viable for practical applications that have an inherent error tolerance. Approximate addition is realized using an approximate adder, and many approximate adder designs have been proposed in the literature, targeting an acceptable trade-off between quality of results and savings in design metrics relative to the accurate adder. Approximate adders can be classified into three categories: (a) those suitable for FPGA implementation, (b) those suitable for ASIC-type implementation, and (c) those suitable for both FPGA and ASIC-type implementations. Among these, approximate adders suitable for both FPGA and ASIC-type implementations are particularly interesting given their versatility, and they are typically designed at the gate level. Depending on how the approximation is built into the adder, approximate adders can further be classified as static or dynamic. This paper compares and analyzes static approximate adders that are suitable for both FPGA and ASIC-type implementations. We consider many static approximate adders and evaluate their performance on a digital image processing application using standard figures of merit such as the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM). We provide the error metrics of the approximate adders, and the design metrics of the accurate and approximate adders for FPGA and ASIC-type implementations. For the FPGA implementation we used a Xilinx Artix-7 FPGA, and for the ASIC-type implementation we used a 32/28 nm CMOS standard digital cell library. The inferences from this work can serve as a useful reference for determining an optimum static approximate adder for a practical application; in particular, we found the approximate adders HOAANED, HERLOA and M-HERLOA to be preferable.
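
As a hedged illustration of how a static approximate adder trades accuracy for simplicity, the Python sketch below implements the well-known lower-part OR adder (LOA) as a stand-in (the HOAANED, HERLOA and M-HERLOA designs evaluated in the paper are not reproduced here) and scores a toy image-averaging task with PSNR; the image data, the 8-bit operand width and the choice k = 4 are illustrative assumptions.

```python
import numpy as np


def loa_add(a, b, k, width=8):
    """Lower-part OR Adder (LOA): a classic static approximate adder.
    The k least significant bits are approximated by a bitwise OR;
    the remaining upper bits are added exactly, ignoring the carry
    from the lower part."""
    mask = (1 << k) - 1
    lower = (a & mask) | (b & mask)          # approximate lower part
    upper = ((a >> k) + (b >> k)) << k       # exact upper part, no carry-in
    return (upper | lower) & ((1 << (width + 1)) - 1)


def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)


# Toy image-processing use case: pixel-wise averaging of two 8-bit images,
# once with exact addition and once with the approximate adder.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, size=(64, 64), dtype=np.uint16)
img2 = rng.integers(0, 256, size=(64, 64), dtype=np.uint16)

exact = (img1 + img2) >> 1
approx = np.vectorize(lambda x, y: loa_add(int(x), int(y), k=4))(img1, img2) >> 1

print("PSNR of approximate vs. exact result: %.2f dB" % psnr(exact, approx))
```

Sweeping k shows the familiar trade-off for this kind of adder: a wider approximate lower part saves more logic but lowers the PSNR of the processed image.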


2021 ◽  
Vol 20 (5) ◽  
pp. 1-21
Author(s):  
Vasileios Leon ◽  
Theodora Paparouni ◽  
Evangelos Petrongonas ◽  
Dimitrios Soudris ◽  
Kiamal Pekmestzi

Approximate computing has emerged as a promising design alternative for delivering power-efficient systems and circuits by exploiting the inherent error resiliency of numerous applications. The current article aims to tackle the increased hardware cost of floating-point multiplication units, which prohibits their usage in embedded computing. We introduce AFMU (Approximate Floating-point MUltiplier), an area/power-efficient family of multipliers, which apply two approximation techniques in the resource-hungry mantissa multiplication and can be seamlessly extended to support dynamic configuration of the approximation levels via gating signals. AFMU offers large accuracy configuration margins, provides negligible logic overhead for dynamic configuration, and detects unexpected results that may arise due to the approximations. Our evaluation shows that AFMU delivers energy gains in the range of 3.6%–53.5% for half-precision and 37.2%–82.4% for single-precision, in exchange for a mean relative error of around 0.05%–3.33% and 0.01%–2.20%, respectively. In comparison with state-of-the-art multipliers, AFMU exhibits up to 4–6× smaller error on average while delivering more energy-efficient computing. The evaluation in image processing shows that AFMU provides sufficient quality of service, i.e., more than 50 dB PSNR and SSIM values near 1, and up to 57.4% power reduction. When used in floating-point CNNs, the accuracy loss is small (or zero), i.e., up to 5.4% for MNIST and CIFAR-10, in exchange for up to 63.8% power gain.
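
The two approximation techniques inside AFMU's mantissa multiplier and its gating-based dynamic configuration are not detailed in the abstract, so the sketch below only illustrates the general idea with plain mantissa truncation of single-precision operands (the multiply itself runs in host floating point); the number of dropped bits and the operand range are assumptions.

```python
import struct
import random


def truncate_mantissa(x, drop_bits):
    """Zero the `drop_bits` least significant mantissa bits of a
    single-precision float (a simple truncation-style approximation)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits &= ~((1 << drop_bits) - 1)          # clear low-order mantissa bits
    return struct.unpack('<f', struct.pack('<I', bits))[0]


def approx_fmul(a, b, drop_bits=12):
    """Approximate single-precision multiply: truncate both mantissas,
    then multiply. A generic stand-in for approximation applied inside
    the mantissa multiplier, not the AFMU design itself."""
    return truncate_mantissa(a, drop_bits) * truncate_mantissa(b, drop_bits)


# Mean relative error over random operands for one approximation level.
random.seed(1)
pairs = [(random.uniform(-100, 100), random.uniform(-100, 100)) for _ in range(10000)]
mre = sum(abs(approx_fmul(a, b) - a * b) / abs(a * b) for a, b in pairs) / len(pairs)
print("mean relative error: %.4f%%" % (100 * mre))
```

Varying `drop_bits` mimics, in spirit, the accuracy configuration margins the paper describes: more dropped bits means cheaper hardware but a larger mean relative error.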


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jianming Jiang ◽  
Ting Feng ◽  
Caixia Liu

In order to improve the prediction performance of the existing nonlinear grey Bernoulli model and to extend its applicable range, an improved nonlinear grey Bernoulli model is presented using grey modeling techniques and optimization methods. First, the traditional whitening equation of the nonlinear grey Bernoulli model is transformed into its linear form. Second, improved structural parameters of the model are proposed to eliminate the inherent error caused by the jump from the differential equation to its discrete difference counterpart. As a result, an improved nonlinear grey Bernoulli model is obtained. Finally, the structural parameters of the model are calculated by the whale optimization algorithm. Numerical results for several examples show that the presented model's prediction accuracy is higher than that of existing models and that the proposed model is better suited to these practical cases.
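
For reference, a minimal sketch of the classical nonlinear grey Bernoulli model NGBM(1,1) is given below, with SciPy's bounded scalar optimiser standing in for the whale optimization algorithm used in the paper; the authors' improved structural parameters are not reproduced, and the sample data are made up.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def ngbm_fit_predict(x0, n, horizon=0):
    """Classical NGBM(1,1): x0(k) + a*z1(k) = b*z1(k)**n, where z1 is the
    mean background value of the 1-AGO (accumulated) series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # 1-AGO series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values, k = 2..N
    B = np.column_stack([-z1, z1 ** n])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # least-squares a, b
    k = np.arange(len(x0) + horizon)
    x1_hat = ((x1[0] ** (1 - n) - b / a) * np.exp(-a * (1 - n) * k) + b / a) ** (1 / (1 - n))
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # 1-IAGO restores the series


def fit_error(n, x0):
    """Mean absolute percentage error of the in-sample fit for a given power n."""
    pred = ngbm_fit_predict(x0, n)[:len(x0)]
    if not np.all(np.isfinite(pred)):
        return 1e9                               # penalise powers that blow up
    return np.mean(np.abs((pred - x0) / x0)) * 100


# Tune the Bernoulli power n (Brent-style bounded search stands in for
# the whale optimisation algorithm; the data below are illustrative).
data = np.array([2.87, 3.28, 3.34, 3.77, 3.94, 4.29, 4.52, 4.78])
res = minimize_scalar(fit_error, bounds=(-1.0, 0.9), args=(data,), method='bounded')
print("best n = %.3f, MAPE = %.2f%%" % (res.x, res.fun))
```

With n = 0 the model degenerates to the ordinary GM(1,1), which is why tuning n (here by a generic optimiser) is what gives the Bernoulli variant its extra flexibility.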


2021 ◽  
pp. 209660832199129
Author(s):  
Cheng Zhou

This article examines how scientists from various periods, countries and disciplines worked together to identify the tobacco mosaic virus, a rod-shaped protein–RNA complex. The process by which that came about was one of persistent and reflective collective learning. Examination of this history reveals that when repeated experiments could not be properly performed, and adequate queries from within the scientific community or communication among scientists were lacking, highly uncertain novel phenomena were susceptible to inaccurate interpretation and prediction. In view of that, it is suggested that only by encouraging queries within the scientific community and embracing alternative opinions can differences in scientific understanding be calibrated to increase the number of fundamental scientific and technological innovations. Only by promoting academic democracy and establishing dialogue on the basis of equality can we prevent substantial deviations in scientific understanding, proposed by scientific authorities, from inhibiting scientific development. With respect to China, this article holds that, in the age of the internet and the dawning era of 5G, Chinese scientists ought to recognise and confront the limitations of scientific research, examine the cumulative nature of scientific understanding in a more general way, establish a dialogue mechanism among researchers on the basis of equality, allow for the inherent error-correction mechanism within the scientific community, and actively take part in the construction of scientific culture.


2020 ◽  
Vol 6 (4) ◽  
pp. 57-65
Author(s):  
Yury Penskikh

The fundamentals of spherical harmonic analysis (SHA) of the geomagnetic field were laid by Gauss and acquired their classical Chapman–Schmidt form in the first half of the twentieth century. The SHA method was actively developed for Russian geomagnetology by IZMIRAN and then, since the start of the space age, by ISTP SB RAS, where SHA became the basis of the comprehensive magnetogram inversion technique (MIT). SHA solves the inverse problem of potential theory and calculates the sources of geomagnetic field variations (GFV): internal and external electric currents. The SHA algorithm forms a system of linear equations (SLE) consisting of 3K equations (three components of the geomagnetic field, where K is the number of ground magnetic stations). Small changes in the left-hand and/or right-hand side of such an SLE can lead to significant changes in the unknown variables. As a result, two consecutive instants of time with almost identical GFV may be approximated by significantly different SHA coefficients, which contradicts both logic and real observations of the geomagnetic field. The inherent error of the magnetometers, as well as of the method for determining GFV, also contributes to the instability of the SLE solution. To solve such SLEs optimally, the method of maximum contribution (MMC) was developed at ISTP SB RAS half a century ago. This paper presents the basics of the original method and proposes a number of modifications that increase the accuracy and/or speed of solving the SLEs. The advantage of MMC over other popular methods is shown, especially for the Southern Hemisphere of the Earth.
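
The abstract does not give the MMC formulation itself, so the sketch below only reproduces the instability it addresses: a toy ill-conditioned least-squares system (rows standing in for the 3K station equations, columns for SHA coefficients) whose solution jumps under a tiny data perturbation, with Tikhonov damping shown as a generic stabiliser, not as MMC. All sizes and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ill-conditioned least-squares problem standing in for the SHA system:
# rows ~ the 3K station observations, columns ~ spherical harmonic coefficients.
m, n = 90, 30
U, _ = np.linalg.qr(rng.normal(size=(m, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -8, n)                        # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
y = A @ rng.normal(size=n)                       # synthetic "observations"


def solve(A, y, lam=0.0):
    """Least squares with optional Tikhonov (ridge) damping via an augmented system."""
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(A.shape[1])])
    y_aug = np.concatenate([y, np.zeros(A.shape[1])])
    return np.linalg.lstsq(A_aug, y_aug, rcond=None)[0]


# A tiny perturbation of the data (e.g. instrument noise) ...
y_noisy = y + 1e-6 * rng.normal(size=m)

# ... swings the undamped coefficients wildly, while the damped ones barely move.
for lam in (0.0, 1e-6):
    dx = np.linalg.norm(solve(A, y_noisy, lam) - solve(A, y, lam))
    print("lambda = %g -> coefficient change: %.3e" % (lam, dx))
```

The contrast between the two printed norms is the behaviour the abstract describes: nearly identical observations yielding very different SHA coefficients unless the solution method is stabilised.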


Author(s):  
Christian Weis ◽  
Christina Gimmler-Dumont ◽  
Matthias Jung ◽  
Norbert Wehn

Many applications show an inherent error resilience due to their probabilistic behavior. This inherent error resilience can be exploited to reduce the design margin for advanced technology nodes, resulting in more energy- and area-efficient implementations. In this chapter we present a cross-layer approach for efficient reliability management in wireless baseband processing, with special emphasis on memories, since memories are most susceptible to dependability problems. A multiple-antenna (MIMO) system is used as a design example. We then focus on DRAMs (Dynamic Random Access Memories). All of today's computing systems rely on dependable DRAM. In the future, DRAM will become less dependable due to further scaling, which has to be counterbalanced with higher refresh rates, leading to higher DRAM power consumption. Recent research activities have resulted in the concept of "approximate DRAM", which saves power and improves performance by lowering the refresh rate or disabling refresh completely. Here, we present a holistic simulation environment for investigating approximate DRAM and show its impact on error-resilient applications.
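
The chapter's holistic simulation environment and real DRAM retention data are not reproduced here; the toy Python sketch below, with made-up constants, only illustrates the approximate-DRAM idea: relaxing the refresh interval raises the bit-error rate of stored data, and for an error-resilient payload (an image) the damage can be quantified as PSNR.

```python
import numpy as np

rng = np.random.default_rng(7)


def retention_error_rate(refresh_scale):
    """Toy retention model: the probability that a cell loses its charge grows
    with the refresh interval. The constants are illustrative assumptions."""
    return 1e-9 * refresh_scale ** 2


def inject_bit_flips(buf, ber):
    """Flip each bit of an 8-bit buffer independently with probability `ber`."""
    flips = np.zeros_like(buf)
    for bit in range(8):
        flips |= (rng.random(buf.shape) < ber).astype(buf.dtype) << bit
    return buf ^ flips


def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)


# Error-resilient payload held in "approximate DRAM" with relaxed refresh.
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
for scale in (1, 16, 64, 256):        # 1 = nominal refresh interval, larger = relaxed
    ber = retention_error_rate(scale)
    degraded = inject_bit_flips(image, ber)
    print("refresh x%-4d  bit-error rate %.1e  PSNR %.1f dB"
          % (scale, ber, psnr(image, degraded)))
```

The sweep mirrors the trade-off the chapter studies: longer refresh intervals save refresh power but degrade the stored data, and how much degradation is tolerable depends on the application's error resilience.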


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1414
Author(s):  
Jaeyoung Park

In this paper, emerging memory devices are investigated as promising synaptic devices for neuromorphic computing. Because neuromorphic computing hardware requires high memory density, fast speed, and low power, as well as the unique ability to emulate learning in the manner of the human brain, memristor devices are considered promising candidates owing to these desirable characteristics. Among them, phase-change RAM (PRAM), resistive RAM (ReRAM), magnetic RAM (MRAM), and atomic switch networks (ASN) are selected for review. Even though memristor devices show such characteristics, the inherent error arising from their physical properties needs to be resolved. This paper suggests adopting an approximate computing approach to deal with this error without degrading the advantages of emerging memory devices.
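
As a hedged illustration of the inherent error the paper refers to, the sketch below models a memristive crossbar used as a synaptic array and perturbs the stored conductances with Gaussian device-to-device variation; the variation model and all numbers are assumptions, not data from the reviewed PRAM/ReRAM/MRAM/ASN devices.

```python
import numpy as np

rng = np.random.default_rng(3)


def crossbar_mvm(weights, x, sigma=0.0):
    """Analog vector-matrix multiply on a memristive crossbar.
    Weights are stored as conductances; `sigma` models the relative
    device-to-device variation that causes the inherent error."""
    g = weights * (1.0 + sigma * rng.normal(size=weights.shape))
    return g.T @ x                               # column currents = weighted sums


# A toy synaptic layer: compare the ideal and the variation-affected outputs.
W = rng.uniform(0.1, 1.0, size=(64, 10))         # 64 inputs, 10 output neurons
x = rng.uniform(0.0, 1.0, size=64)

ideal = crossbar_mvm(W, x, sigma=0.0)
for sigma in (0.01, 0.05, 0.10):
    noisy = crossbar_mvm(W, x, sigma=sigma)
    rel_err = np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal)
    print("conductance variation %4.0f%% -> output error %5.2f%%"
          % (100 * sigma, 100 * rel_err))
```

Framing the output error this way is exactly where an approximate-computing view helps: as long as the relative error stays within what the learning application tolerates, the device-level imperfection need not be corrected exactly.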


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Jin-Gang Jiang ◽  
Yi-Hao Chen ◽  
Lei Wang ◽  
Yong-De Zhang ◽  
Yi Liu ◽  
...  

Abnormal tooth arrangement is one of the most common clinical features of malocclusion and is mainly caused by tooth root compression malformation. The second sequential loop is mostly used for adjusting abnormal tooth arrangement. At present, the shape design of the orthodontic archwire depends entirely on the doctor's experience and the patient's feedback; this practice is time-consuming, and the treatment effect is unstable. The orthodontic force produced by second sequential loops with different parameters, including cross-sectional parameters, material parameters, and characteristic parameters, was simulated and compared for the abnormal condition of root compression deformity. In this paper, an analysis and experimental study of the unidirectional orthodontic force were carried out. The different parameters of the second sequential loop are analyzed, and equivalent beam deflection theory is used to relate the orthodontic force to the archwire parameters. Based on the structural analysis of the second sequential loop, a device for measuring orthodontic force was designed, and the orthodontic force was measured and compared for archwires with different structural characteristics. Finally, a correction factor was introduced into the unidirectional orthodontic-force forecasting model to eliminate the influence of its inherent error. The average relative error of the theoretical results of the unidirectional orthodontic-force forecasting model is between 8.75% and 12.6%, which verifies the accuracy of the prediction model.
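
The paper's model of the second sequential loop and its fitted correction factor are not given in the abstract; the sketch below shows only the generic cantilever form of the equivalent beam-deflection relation between force, archwire cross-section and activation, with a placeholder correction factor and illustrative wire dimensions.

```python
def archwire_force(deflection_mm, length_mm, width_mm, height_mm, E_GPa, correction=1.0):
    """Force (N) delivered by an archwire segment modelled as a cantilever beam:
    F = 3*E*I*delta / L^3, with I = b*h^3/12 for a rectangular cross-section.
    `correction` is an empirical factor like the one the paper introduces to
    absorb the inherent error of the simplified model (its value here is a placeholder)."""
    E = E_GPa * 1e3                        # GPa -> N/mm^2 (MPa)
    I = width_mm * height_mm ** 3 / 12.0   # second moment of area, mm^4
    return correction * 3.0 * E * I * deflection_mm / length_mm ** 3


# Example: a 0.016 x 0.022 inch stainless-steel wire (about 0.41 x 0.56 mm),
# 10 mm span, 1 mm activation; all numbers are illustrative, not the paper's data.
print("%.2f N" % archwire_force(deflection_mm=1.0, length_mm=10.0,
                                width_mm=0.41, height_mm=0.56, E_GPa=190.0))
```

The same relation makes the parameter dependence explicit: force scales linearly with the elastic modulus and activation, with the cube of the wire height in the bending direction, and inversely with the cube of the span, which is why cross-sectional and characteristic parameters dominate the simulated orthodontic force.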

