xUAVs: Towards Efficient Approximate Computing for UAVs—Low Power Approximate Adders With Single LUT Delay for FPGA-Based Aerial Imaging Optimization

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 102982-102996
Author(s):  
Tuaha Nomani ◽  
Mujahid Mohsin ◽  
Zahid Pervaiz ◽  
Muhammad Shafique
Computation ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 39 ◽  
Author(s):  
Varadarajan Rengaraj ◽  
Michael Lass ◽  
Christian Plessl ◽  
Thomas D. Kühne

In scientific computing, the acceleration of atomistic computer simulations by means of custom hardware is finding ever-growing application. A major limitation, however, is that high efficiency in terms of performance and low power consumption entails the massive use of low-precision computing units. Here, based on the approximate computing paradigm, we present an algorithmic method to rigorously compensate for the numerical inaccuracies caused by low-accuracy arithmetic operations, while still obtaining exact expectation values using a suitably modified Langevin-type equation.
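The abstract does not spell out the equation; as a hedged sketch of the general idea (notation mine, not taken from the paper), a noisy force from low-precision arithmetic can be absorbed into a Langevin thermostat so that ensemble averages remain exact:

```latex
% Sketch only: \tilde{F} is the low-precision force, \Xi the induced noise.
M \ddot{X} = \tilde{F}(X) - \gamma M \dot{X},
\qquad \tilde{F}(X) = F(X) + \Xi .
% If \Xi is unbiased white noise, choosing the friction \gamma to satisfy
% the fluctuation--dissipation relation
\langle \Xi(0)\, \Xi(t) \rangle = 2\, \gamma\, M\, k_B T\, \delta(t)
% recovers canonical sampling, so expectation values stay exact despite
% the inaccurate arithmetic.
```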


2021 ◽  
Vol 2070 (1) ◽  
pp. 012135
Author(s):  
K Stella ◽  
T Vinith ◽  
K Sriram ◽  
P Vignesh

Abstract Approximate computing is a paradigm shift in energy-efficient system design and operation, based on the idea that we limit computer systems' efficiency by demanding too much precision from them. Notably, a large number of application domains, such as DSP, statistics, and AI, can tolerate inexact results. Approximate computing is therefore well suited to efficient data processing and error-resilient applications such as signal and image processing, computer vision, machine learning, and data mining. Approximate computing circuits are considered a promising solution for reducing power consumption in embedded data processing. This paper proposes an FPGA implementation of an approximate multiplier based on partial-product truncation. The performance of the proposed multiplier is evaluated by comparing its power consumption, computational accuracy, and time delay with those of an approximate multiplier based on exact computation. The approximate design achieves an energy-efficient operating mode with acceptable accuracy. Compared to conventional direct truncation, the proposed model significantly improves performance. Thus, this novel energy-efficient rounding-based approximate multiplier design outperforms a competitive model.
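The paper's exact circuit is not reproduced in the abstract; as a rough software illustration of partial-product truncation (function name and parameters are my own), the k least-significant columns of every partial product can be zeroed before accumulation, shrinking the adder tree at the cost of a bounded underestimate:

```python
def truncated_pp_multiply(a: int, b: int, k: int) -> int:
    """Approximate unsigned multiply that zeroes the k least-significant
    columns of every partial product before accumulation.

    k = 0 reproduces the exact product; larger k trades accuracy for a
    smaller (cheaper) partial-product adder tree in hardware.
    """
    mask = ~((1 << k) - 1)          # clears bit positions 0 .. k-1
    result = 0
    for i in range(b.bit_length()):
        if (b >> i) & 1:            # a partial product exists for this bit of b
            result += (a << i) & mask
    return result
```

For example, `truncated_pp_multiply(13, 11, 2)` returns 140 instead of the exact 143; the result never exceeds the exact product, since only positive contributions are dropped.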


2020 ◽  
Vol 28 (10) ◽  
pp. 2210-2222
Author(s):  
Somayeh Rahimipour ◽  
Wameedh Nazar Flayyih ◽  
Noor Ain Kamsani ◽  
Shaiful Jahari Hashim ◽  
Mircea R. Stan ◽  
...  

Author(s):  
Hiroyuki Baba ◽  
Tongxin Yang ◽  
Masahiro Inoue ◽  
Kaori Tajima ◽  
Tomoaki Ukezono ◽  
...  

2022 ◽  
Vol 27 (2) ◽  
pp. 1-16
Author(s):  
Ming Han ◽  
Ye Wang ◽  
Jian Dong ◽  
Gang Qu

One major challenge in deploying Deep Neural Networks (DNNs) in resource-constrained applications, such as edge nodes, mobile embedded systems, and IoT devices, is their high energy cost. The emerging approximate computing methodology can effectively reduce the energy consumed during DNN computation. However, a recent study shows that weight storage and access operations can dominate a DNN's energy consumption, because the large weight tensors must be stored in high-energy-cost DRAM. In this paper, we propose Double-Shift, a low-power DNN weight storage and access framework, to solve this problem. Enabled by approximate decomposition and quantization, Double-Shift effectively reduces the data size of the weights. Through a novel weight storage allocation strategy, Double-Shift boosts energy efficiency by trading energy-consuming weight storage and access operations for low-energy-cost computations. Our experimental results show that Double-Shift reduces DNN weights to 3.96%–6.38% of their original size and achieves an energy saving of 86.47%–93.62%, while introducing a DNN classification error within 2%.
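The abstract does not detail the decomposition; the framework's name suggests two shift operations per weight, so purely as a hedged sketch (all names and the specific scheme are my own assumption, not the paper's method), a weight can be quantized to a sum of two signed powers of two, turning each multiply into two bit-shifts and one add:

```python
import math

def two_shift_quantize(w: float) -> float:
    """Quantize w to a sum of two signed powers of two, so multiplying an
    activation by w reduces to two bit-shifts and one addition in
    fixed-point hardware (illustrative only, not the paper's scheme)."""
    def nearest_pow2(x: float) -> float:
        if x == 0.0:
            return 0.0
        sign = 1.0 if x > 0 else -1.0
        return sign * 2.0 ** round(math.log2(abs(x)))
    p1 = nearest_pow2(w)            # dominant power-of-two term
    p2 = nearest_pow2(w - p1)       # power-of-two correction of the residual
    return p1 + p2
```

Storing only the two exponents and signs per weight is what shrinks the stored model; the shifts are then recomputed cheaply at inference time, matching the abstract's trade of storage for computation.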

