reconfigurable architecture
Recently Published Documents


TOTAL DOCUMENTS: 698 (last five years: 61)
H-INDEX: 24 (last five years: 3)

Electronics, 2021, Vol. 11 (1), pp. 14
Author(s): Griselda González-Díaz_Conti, Javier Vázquez-Castillo, Omar Longoria-Gandara, Alejandro Castillo-Atoche, Roberto Carrasco-Alvarez, et al.

Today, embedded systems (ES) tend toward miniaturization while carrying out complex tasks in applications such as the Internet of Things, medical systems, and telecommunications, among others. ES structures based on artificial intelligence using hardware neural networks (HNNs) are becoming increasingly common. In HNN design, the activation function (AF) requires special attention because of its impact on HNN performance; implementing activation functions (AFs) with good performance, low power consumption, and reduced hardware resources is therefore critical for HNNs. In light of this, this paper presents a hardware-based activation function core (AFC) for implementing an HNN, together with a design framework for the AFC that applies a piecewise polynomial approximation (PPA) technique. The designed AFC has a reconfigurable architecture with a wordlength-efficient decoder, i.e., reduced hardware resources are used while satisfying the desired accuracy. Experimental results show that the proposed AFC outperforms state-of-the-art implementations in terms of hardware resources and power consumption. Finally, two case studies were implemented to corroborate the AFC performance in widely used artificial neural network (ANN) applications.
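As a rough illustration of the PPA idea behind such an AFC, the sketch below fits one low-order polynomial per input segment of tanh and selects the segment at evaluation time, which is essentially what a segment decoder plus coefficient table would do in hardware. The segment count, polynomial order, tanh target, and floating-point arithmetic are illustrative assumptions; the paper's actual coefficients, wordlengths, and error targets are not reproduced here.

```python
# Minimal sketch of piecewise polynomial approximation (PPA) of an activation
# function. All parameters below are illustrative assumptions, not values
# taken from the paper.
import numpy as np

def fit_ppa(f, x_min, x_max, n_segments=8, order=2):
    """Fit one low-order polynomial per uniform segment of f over [x_min, x_max]."""
    edges = np.linspace(x_min, x_max, n_segments + 1)
    coeffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        xs = np.linspace(lo, hi, 64)
        coeffs.append(np.polyfit(xs, f(xs), order))  # least-squares fit per segment
    return edges, coeffs

def eval_ppa(x, edges, coeffs):
    """Evaluate the PPA: decode the segment index, then apply that segment's polynomial."""
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(coeffs) - 1)
    return np.array([np.polyval(coeffs[i], xi)
                     for i, xi in zip(np.atleast_1d(idx), np.atleast_1d(x))])

if __name__ == "__main__":
    edges, coeffs = fit_ppa(np.tanh, -4.0, 4.0)
    xs = np.linspace(-4.0, 4.0, 1000)
    err = np.max(np.abs(eval_ppa(xs, edges, coeffs) - np.tanh(xs)))
    print(f"max approximation error over [-4, 4]: {err:.2e}")
```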


2021
Author(s): Liqiang Lu, Yicheng Jin, Hangrui Bi, Zizhang Luo, Peng Li, et al.

2021, Vol. 26 (5), pp. 724-735
Author(s): Naijin Chen, Zhen Wang, Ruixiang He, Jianhui Jiang, Fei Cheng, et al.

2021, Vol. 11 (3), pp. 32
Author(s): Hasan Irmak, Federico Corradi, Paul Detterer, Nikolaos Alachiotis, Daniel Ziener

This work presents a dynamically reconfigurable architecture for Neural Network (NN) accelerators implemented on a Field-Programmable Gate Array (FPGA) that can be applied in a variety of application scenarios. Although the concept of Dynamic Partial Reconfiguration (DPR) is increasingly used in NN accelerators, their throughput is usually lower than that of purely static designs. This work presents a dynamically reconfigurable, energy-efficient accelerator architecture that does not sacrifice throughput. The proposed accelerator comprises reconfigurable processing engines and dynamically utilizes the device resources according to the model parameters. Using the proposed architecture with DPR, different NN types and architectures can be realized on the same FPGA. Moreover, the proposed architecture maximizes throughput through design optimizations that account for the resources available on the hardware platform. We evaluate our design with different NN architectures on two tasks. The first task is image classification on two distinct datasets, which requires switching between Convolutional Neural Network (CNN) architectures with different layer structures. The second task requires switching between NN architectures, namely a CNN architecture with high accuracy and throughput and a hybrid architecture that combines convolutional layers with an optimized Spiking Neural Network (SNN) architecture. We demonstrate throughput results obtained by quickly reprogramming only a small part of the FPGA hardware using DPR. Experimental results show that the implemented designs achieve a 7× higher frame rate than current FPGA accelerators while being extremely flexible and using comparable resources.
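The host-side control flow implied by such a DPR design might look like the sketch below: each supported NN architecture is associated with a partial bitstream for the reconfigurable region, and switching models reprograms only that region while the static logic keeps running. The bitstream paths, model names, and the load_partial_bitstream() helper are hypothetical placeholders; the actual reconfiguration mechanism (e.g., ICAP/PCAP or an OS FPGA-manager driver) is platform-specific and is not described in the abstract above.

```python
# Hypothetical host-side dispatcher for a DPR-based NN accelerator.
# File names and the loader function are assumptions for illustration only.
from pathlib import Path

PARTIAL_BITSTREAMS = {
    "cnn_dataset_a": Path("bitstreams/cnn_a_partial.bit"),
    "cnn_dataset_b": Path("bitstreams/cnn_b_partial.bit"),
    "hybrid_cnn_snn": Path("bitstreams/hybrid_snn_partial.bit"),
}

def load_partial_bitstream(path: Path) -> None:
    """Placeholder for the platform-specific partial-reconfiguration call."""
    raise NotImplementedError(f"would reconfigure the partial region with {path}")

def switch_model(model_name: str) -> None:
    """Reprogram only the reconfigurable partition for the requested model."""
    try:
        bitstream = PARTIAL_BITSTREAMS[model_name]
    except KeyError as exc:
        raise ValueError(f"unknown model: {model_name}") from exc
    load_partial_bitstream(bitstream)  # static region remains active during the swap
```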


2021, pp. 2102023
Author(s): Chao Nan Zhu, Tianwen Bai, Hu Wang, Jun Ling, Feihe Huang, et al.

2021, Vol. 3 (4)
Author(s): Nupur Jain, Biswajit Mishra, Peter Wilson

A new reconfigurable architecture for biomedical applications is presented in this paper. The architecture targets frequently encountered functions in biomedical signal processing algorithms, thereby replacing multiple dedicated accelerators, and achieves a low gate count. An optimized implementation is obtained by mapping methodologies to functions and limiting the required memory, which leads directly to an overall minimization of the gate count. The proposed architecture has a simple configuration scheme with special provision for handling feedback. The effectiveness of the architecture is demonstrated on an FPGA, showing implementation schemes for multiple DSP functions. The architecture has a gate count of ≈25k and an operating frequency of 46.9 MHz.
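To make the feedback provision concrete, the sketch below shows the kind of recursive kernel a biomedical signal chain typically needs: a first-order IIR smoother whose current output depends on the previous output. The coefficient, sampling rate, and synthetic test signal are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a feedback-bearing DSP kernel of the sort such an
# architecture targets. Parameters below are illustrative assumptions.
import numpy as np

def iir_lowpass(x, alpha=0.1):
    """y[n] = alpha*x[n] + (1 - alpha)*y[n-1]; the y[n-1] term is the feedback path."""
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        prev = y[n - 1] if n > 0 else 0.0
        y[n] = alpha * x[n] + (1.0 - alpha) * prev
    return y

if __name__ == "__main__":
    fs = 250.0                                   # assumed biosignal sample rate, Hz
    t = np.arange(0, 2.0, 1.0 / fs)
    signal = np.sin(2 * np.pi * 1.3 * t) + 0.3 * np.random.randn(t.size)  # noisy 1.3 Hz tone
    smoothed = iir_lowpass(signal, alpha=0.1)
    print(f"input std: {signal.std():.3f}, smoothed std: {smoothed.std():.3f}")
```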

