hardware accelerator
Recently Published Documents

TOTAL DOCUMENTS: 596 (five years: 227)
H-INDEX: 19 (five years: 5)
2022, Vol. 18(2), pp. 1-20
Author(s): Yandong Luo, Panni Wang, Shimeng Yu

In this article, we propose a hardware accelerator design using a ferroelectric field-effect transistor (FeFET)-based hybrid precision synapse (HPS) for deep neural network (DNN) on-chip training. A drain-erase scheme for FeFET programming is incorporated into both the FeFET HPS design and the FeFET buffer design. By using drain erase, high-density FeFET buffers can be integrated on-chip to store the intermediate input-output activations and gradients, which reduces energy-consuming off-chip DRAM accesses. Architectural evaluation results show that energy efficiency is improved by 1.2×∼2.1× and 3.9×∼6.0× compared to other HPS-based designs and emerging non-volatile memory baselines, respectively. The chip area is reduced by 19%∼36% compared with designs using an SRAM on-chip buffer, even though the capacity of the FeFET buffer is larger. In addition, by utilizing the drain-erase scheme for FeFET programming, the chip area is reduced by 11%∼28.5% compared with designs using a body-erase scheme.
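
To make the hybrid precision synapse idea more concrete, below is a minimal NumPy sketch of an HPS-style weight update. It assumes (this split is not spelled out in the abstract) that each weight combines a coarse non-volatile FeFET component with a fine volatile component that absorbs frequent small gradient updates and is only occasionally carried over into the FeFET levels; the level count and update rule are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a hybrid-precision synapse (HPS) weight update.
# Assumption (not stated in the abstract): each weight is stored as a coarse
# non-volatile part (a few FeFET conductance levels) plus a fine volatile part
# that absorbs frequent small gradient updates and is carried over periodically.

NV_LEVELS = 16             # assumed number of non-volatile conductance levels
NV_STEP = 1.0 / NV_LEVELS  # weight change per FeFET level

def hps_update(w_nv, w_vol, grad, lr=0.01):
    """Apply one SGD step: accumulate in the volatile part, carry to FeFET."""
    w_vol = w_vol - lr * grad                    # cheap, frequent volatile update
    carry = np.round(w_vol / NV_STEP)            # whole FeFET steps accumulated
    w_nv = np.clip(w_nv + carry * NV_STEP, -1.0, 1.0)  # infrequent FeFET write
    w_vol = w_vol - carry * NV_STEP              # keep only the sub-step residue
    return w_nv, w_vol

# The effective weight seen by the layer is w_nv + w_vol.
w_nv, w_vol = np.zeros((4, 4)), np.zeros((4, 4))
w_nv, w_vol = hps_update(w_nv, w_vol, grad=np.random.randn(4, 4))
```

The point of such a split is that frequent, small updates land in the cheap volatile part, while the non-volatile FeFET cells are written only when a full conductance step has accumulated.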


Impulse and Gaussian noise are the two most common types of noise that affect digital images, arising from imperfections in the imaging process, compression, storage, and communication. Conventional filtering approaches, however, reduce image quality in terms of sharpness and resolution while suppressing the effects of noise. In this work, a machine learning-based filtering structure is proposed that preserves image quality while effectively removing the noise. Specifically, a support vector machine classifier is employed to detect the type of noise affecting each pixel and to select an appropriate filter. The choice of filters includes median and bilateral filters of different kernel sizes. The classifier is trained using example images with known noise parameters. The proposed filtering structure is shown to outperform the conventional approaches in terms of image quality metrics. Moreover, the design has been implemented as a hardware accelerator on an FPGA device using high-level synthesis tools.
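
A minimal Python sketch of this classify-then-filter flow is given below. The per-pixel features, the two-class labeling (impulse vs. Gaussian), and the kernel sizes are illustrative assumptions; the abstract does not specify them.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

# Sketch of the classify-then-filter structure described above.
# img is assumed to be a single-channel uint8 image.

def pixel_features(img, k=3):
    """Local statistics per pixel: mean, standard deviation, median deviation."""
    f = img.astype(np.float32)
    mean = cv2.blur(f, (k, k))
    sq_mean = cv2.blur(f * f, (k, k))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0))
    med_dev = np.abs(f - cv2.medianBlur(img, k).astype(np.float32))
    return np.stack([mean, std, med_dev], axis=-1).reshape(-1, 3)

def denoise(img, clf):
    """Choose a median or bilateral filter per pixel from the SVM's decision."""
    labels = clf.predict(pixel_features(img)).reshape(img.shape)
    median_out = cv2.medianBlur(img, 3)                  # suited to impulse noise
    bilateral_out = cv2.bilateralFilter(img, 5, 50, 50)  # suited to Gaussian noise
    return np.where(labels == 0, median_out, bilateral_out)

# Offline training: features from images corrupted with known noise,
# label 0 = impulse-dominated pixel, label 1 = Gaussian-dominated pixel.
# clf = SVC(kernel="rbf").fit(train_features, train_labels)
```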


2021, Vol. 12(1), pp. 378
Author(s): Enrique Cantó Navarro, Rafael Ramos Lara, Mariano López García

This paper describes three different approaches for the implementation of an online signature verification system on a low-cost FPGA. The system is based on an algorithm that operates on real numbers in the IEEE 754 double-precision floating-point format. The double-precision computations are replaced by simpler formats, without affecting the biometric performance, in order to permit efficient implementations on low-cost FPGA families. The first approach is an embedded system based on MicroBlaze, a 32-bit soft-core microprocessor designed for Xilinx FPGAs, which can be configured to include a single-precision floating-point unit (FPU). The second implementation attaches a hardware accelerator to the embedded system to reduce the execution time of floating-point vector operations. The last approach is a custom computing system built from a large set of arithmetic circuits that replace the floating-point data with a more efficient representation based on a fixed-point format. The latter system provides a very high runtime acceleration factor at the expense of a large amount of FPGA resources, a complex development cycle, and no flexibility, since it cannot be adapted to other biometric algorithms. By contrast, the first system provides just the opposite features, while the second approach is a compromise between the two. The experimental results show that the hardware accelerator and the custom computing system reduce the execution time by factors of 7.6× and 201×, but increase the FPGA logic resources by factors of 2.3× and 5.2×, respectively, in comparison with the MicroBlaze embedded system.
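
To illustrate the floating-point-to-fixed-point substitution behind the third approach, below is a small Python sketch of a Q4.12 fixed-point format applied to a squared-distance computation, a typical inner loop in signature matching. The Q4.12 format and the distance kernel are illustrative assumptions, not the representation or algorithm used in the paper.

```python
# Sketch of replacing double-precision floating point with fixed point.
# The Q4.12 format below is an assumption for illustration only.

Q_FRAC = 12          # fractional bits
SCALE = 1 << Q_FRAC

def to_fixed(x):
    """Quantize a real number to a signed Q4.12 integer."""
    return int(round(x * SCALE))

def fixed_mul(a, b):
    """Multiply two Q4.12 values; the product needs one rescaling shift."""
    return (a * b) >> Q_FRAC

def fixed_dist(u, v):
    """Squared Euclidean distance between two fixed-point feature vectors."""
    acc = 0
    for a, b in zip(u, v):
        d = a - b
        acc += fixed_mul(d, d)
    return acc

# Compare the fixed-point result against the double-precision one.
u = [0.50, -1.25, 0.75]
v = [0.40, -1.00, 0.80]
fx = fixed_dist([to_fixed(x) for x in u], [to_fixed(x) for x in v])
print(fx / SCALE, sum((a - b) ** 2 for a, b in zip(u, v)))
```

The shifts and integer multiplies of such a format map directly onto FPGA DSP and logic resources, which is why a fixed-point datapath can dispense with the FPU entirely.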


2021
Author(s): Mengquan Li, Zhongzhi Yu, Yongan Zhang, Yonggan Fu, Yingyan Lin
Keyword(s):

2021
Author(s): Xiangxiang Wei, Gao-Ming Du, Xiaolei Wang, Hongfang Cao, Shijie Hu, ...
