mNet2FPGA: A Design Flow for Mapping a Fixed-Point CNN to Zynq SoC FPGA

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1823
Author(s):  
Tomyslav Sledevič ◽  
Artūras Serackis

Convolutional neural networks (CNNs) are a computation- and memory-demanding class of deep neural networks. Field-programmable gate arrays (FPGAs) are often used to accelerate networks deployed in embedded platforms because of the high computational complexity of CNNs. In most cases, CNNs are trained with existing deep learning frameworks and then mapped to FPGAs with specialized toolflows. In this paper, we propose a CNN core architecture called mNet2FPGA that places a trained CNN on a SoC FPGA. The processing system (PS) is responsible for configuring the convolutional and fully connected cores according to a list of prescheduled instructions. The programmable logic holds the cores of the convolutional and fully connected layers. The hardware architecture is based on advanced extensible interface (AXI) stream processing with simultaneous bidirectional transfers between RAM and the CNN core. The core was tested on a cost-optimized Z-7020 FPGA with 16-bit fixed-point VGG networks. Kernel binarization and merging with the batch normalization layer were applied to reduce the number of DSPs in the multi-channel convolutional core. The convolutional core processes eight input feature maps at once and generates eight output channels of the same size and composition at 50 MHz. The fully connected (FC) layer core works at 100 MHz with up to 4096 neurons per layer. In the current version of the CNN core, the size of the convolutional kernel is fixed at 3×3. The estimated average performance is 8.6 GOPS for VGG13 and near 8.4 GOPS for VGG16/19 networks.
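The kernel merging with the batch normalization layer mentioned above follows a standard algebraic identity; the sketch below (illustrative NumPy, not the authors' toolflow) shows how BN parameters fold into the convolution weights and how the result might then be quantized to a 16-bit fixed-point format. The choice of 8 fractional bits is an assumption for illustration.

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding conv weights/bias.

    BN(conv(x)) = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    collapses into a single conv with scaled weights and a shifted bias.
    w: (out_ch, in_ch, kh, kw); b, gamma, beta, mean, var: (out_ch,)
    """
    scale = gamma / np.sqrt(var + eps)            # per output channel
    w_folded = w * scale[:, None, None, None]     # scale each kernel
    b_folded = (b - mean) * scale + beta          # shift the bias
    return w_folded, b_folded

def to_fixed16(x, frac_bits=8):
    """Quantize to 16-bit fixed point with frac_bits fractional bits
    (one plausible split for the 16-bit representation the paper uses)."""
    scale = 1 << frac_bits
    return np.clip(np.round(x * scale), -32768, 32767).astype(np.int16)
```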

Due to the exponential increase in electronic devices connected to the Internet, the amount of data they produce has grown to the same extent. To cope with processing these data, the use of automatic learning algorithms, also known as machine learning, has become widespread; the most popular of these are neural networks. These algorithms require a great deal of resources to compute all their operations and have therefore traditionally been implemented in application-specific integrated circuits. Recently, however, there has been a boom in implementations on field-programmable gate arrays (FPGAs), which allow greater parallelism in the implementation of the algorithms. This paper proposes an FPGA-based feature extraction method, with offline handwritten digit recognition as the target application. Classification relies on a simple two-layer multilayer perceptron (MLP). The feature extraction approach is well suited to FPGA execution because it uses only subtraction and addition operations. Features are extracted by the proposed method from standard-database handwritten digit images normalized to 40×40 pixels. Experimental results show that the proposed system achieves 85% accuracy; overall, compared with other systems, it is less complex and more accurate. The project further describes the FPGA implementation of an IEEE-754 single-precision floating-point MAC unit, which feeds the weighted inputs to the neurons of the artificial neural network. Using floating-point numbers extends the representable data range, which is highly desirable for artificial neural networks. The code is developed in HDL, simulated, and synthesized with Xilinx synthesis tools. To validate the computational accuracy of the FFT, a MATLAB validation script is used to verify the HDL output against a standard reference model.
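The abstract does not spell out which add/sub-only features are used; row and column projection profiles are one classic digit feature that satisfies this constraint, sketched below in NumPy purely for illustration (the paper's actual features may differ).

```python
import numpy as np

def projection_features(img):
    """Illustrative add/sub-only features for a 40x40 digit image.

    Row/column projection profiles need only additions, and their
    first differences need only subtractions, matching the constraint
    stated in the abstract (hypothetical feature choice).
    """
    assert img.shape == (40, 40)
    row_sums = img.sum(axis=1)        # additions along each row
    col_sums = img.sum(axis=0)        # additions along each column
    row_diff = np.diff(row_sums)      # subtractions between neighbours
    col_diff = np.diff(col_sums)
    return np.concatenate([row_sums, col_sums, row_diff, col_diff])
```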


2021 ◽  
Author(s):  
Rishit Dagli ◽  
Süleyman Eken

Abstract Recent increases in computational power and the development of specialized architectures have made it possible to perform machine learning, especially inference, on the edge. OpenVINO is a toolkit based on convolutional neural networks that facilitates fast-track development of computer vision algorithms and deep learning neural networks into vision applications, and enables their easy heterogeneous execution across hardware platforms. Smart queue management can be key to the success of any sector. In this paper, we focus on edge deployments to make the Smart Queuing System (SQS) accessible to all, while also providing the ability to run it on inexpensive devices. This allows the queuing system's deep learning algorithms to run on pre-existing computers that a retail store, public transportation facility, or factory may already possess, considerably reducing the cost of deploying such a system. SQS demonstrates how to create a video AI solution on the edge. We validate our results by testing the system on multiple edge devices, namely a CPU, an integrated edge graphics processing unit (iGPU), a vision processing unit (VPU), and field-programmable gate arrays (FPGAs). Experimental results show that deploying an SQS on the edge is very promising.
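A minimal sketch of how such a system might target different edge devices through the OpenVINO runtime is shown below; the model file, input shape, and the set of valid device strings are placeholders that depend on the installed plugins and are not taken from the paper.

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO 2022+ Python API

core = Core()
model = core.read_model("person-detect.xml")  # placeholder IR model

# Which device strings work depends on the installed plugins:
# "CPU", "GPU" (iGPU) and "MYRIAD" (VPU) are common examples.
dummy = np.zeros((1, 3, 320, 544), dtype=np.float32)  # placeholder shape
for device in ["CPU", "GPU", "MYRIAD"]:
    try:
        compiled = core.compile_model(model, device)
        request = compiled.create_infer_request()
        request.infer({0: dummy})
        print(device, "inference ok")
    except RuntimeError as err:
        print(device, "unavailable:", err)
```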


Electronics ◽  
2019 ◽  
Vol 8 (7) ◽  
pp. 803 ◽  
Author(s):  
Deguang Wang ◽  
Junzhong Shen ◽  
Mei Wen ◽  
Chunyuan Zhang

Three-dimensional (3D) deconvolution is widely used in many computer vision applications. However, most previous works have focused only on accelerating two-dimensional (2D) deconvolutional neural networks (DCNNs) on field-programmable gate arrays (FPGAs), while the acceleration of 3D DCNNs, which have higher computational complexity and sparsity than their 2D counterparts, has not been studied in depth. In this paper, we focus on accelerating both 2D and 3D sparse DCNNs on FPGAs by proposing efficient schemes for mapping them onto a uniform architecture. First, a pruning method removes unimportant network connections and increases the sparsity of the weights; after pruning, the number of DCNN parameters is reduced significantly without accuracy loss. Second, the remaining non-zero weights are encoded in coordinate (COO) format, reducing the memory demands of the parameters. Finally, to demonstrate the effectiveness of our work, we implement our accelerator design on the Xilinx VC709 evaluation platform for four real-life 2D and 3D DCNNs. After the first two steps, the storage required by the DCNNs is reduced by up to 3.9×. Results show that our method on the accelerator outperforms our prior work by 2.5× to 3.6× in latency.
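The two compression steps, magnitude pruning followed by COO encoding, can be illustrated with a short NumPy sketch (a generic illustration; the paper's actual pruning criterion and on-chip storage layout may differ).

```python
import numpy as np

def prune_and_encode_coo(w, threshold):
    """Magnitude-prune a weight tensor and store survivors in COO format.

    Weights below the threshold are zeroed; the remaining non-zeros are
    kept as (coordinate, value) pairs, so memory scales with the number
    of survivors rather than the dense tensor size.
    """
    w = np.where(np.abs(w) < threshold, 0.0, w)   # prune small weights
    coords = np.nonzero(w)                        # one index array per dim
    values = w[coords]                            # non-zero weights only
    return np.stack(coords, axis=1), values       # (nnz, ndim), (nnz,)

# Example: a 3x3x3 deconvolution kernel pruned at |w| < 0.1
kernel = np.random.randn(3, 3, 3).astype(np.float32)
indices, values = prune_and_encode_coo(kernel, 0.1)
print(f"kept {values.size} of {kernel.size} weights")
```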


2021 ◽  
Vol 7 (10) ◽  
pp. 210
Author(s):  
Cristian Sestito ◽  
Fanny Spagnolo ◽  
Stefania Perri

Nowadays, computer vision relies heavily on convolutional neural networks (CNNs) to perform complex and accurate tasks. Among them, super-resolution CNNs represent a meaningful example, due to the presence of both convolutional (CONV) and transposed convolutional (TCONV) layers. While the former exploit multiply-and-accumulate (MAC) operations to extract features of interest from incoming feature maps (fmaps), the latter perform MACs to properly tune the spatial resolution of the received fmaps. The ever-growing real-time and low-power requirements of modern computer vision applications are a stimulus for the research community to investigate the deployment of CNNs on well-suited hardware platforms, such as field-programmable gate arrays (FPGAs). FPGAs are widely recognized as valid candidates for trading off computational speed and power consumption, thanks to their flexibility and their capability to handle computationally intensive models. In order to reduce the number of operations to be performed, this paper presents a novel hardware-oriented algorithm able to efficiently accelerate both CONVs and TCONVs. The proposed strategy was validated by employing it within a reconfigurable hardware accelerator purposely designed to adapt itself to different operating modes set at run-time. When characterized on the Xilinx XC7K410T FPGA device, the proposed accelerator achieved a throughput of up to 2022.2 GOPS and, in comparison to state-of-the-art competitors, reached an energy efficiency up to 2.3 times higher, without compromising the overall accuracy.
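One way to see why a single MAC datapath can serve both layer types is the textbook equivalence between a transposed convolution and zero-insertion upsampling followed by an ordinary convolution, sketched below in NumPy (this is the standard identity, not the paper's own hardware-oriented algorithm, which reduces the operation count further).

```python
import numpy as np

def tconv2d_via_conv(x, k, stride):
    """Express a 2D transposed convolution as zero-insertion followed by
    an ordinary convolution, so one MAC datapath handles both CONV and
    TCONV (a behavioural sketch of the standard equivalence)."""
    h, w = x.shape
    kh, kw = k.shape
    # 1) dilate the input: stride-1 zeros between neighbouring pixels
    up = np.zeros((h + (h - 1) * (stride - 1), w + (w - 1) * (stride - 1)))
    up[::stride, ::stride] = x
    # 2) full zero-padding, then convolve with the spatially flipped kernel
    up = np.pad(up, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    kf = k[::-1, ::-1]
    out = np.zeros((up.shape[0] - kh + 1, up.shape[1] - kw + 1))
    for i in range(out.shape[0]):          # plain MAC loops, as in hardware
        for j in range(out.shape[1]):
            out[i, j] = np.sum(up[i:i + kh, j:j + kw] * kf)
    return out
```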


Author(s):  
Farida Memon ◽  
Aamir Hussain Memon ◽  
Shahnawaz Talpur ◽  
Fayaz Ahmed Memon ◽  
Rafia Naz Memon

In this paper, a novel VHDL design procedure for a depth estimation algorithm using HDL Coder (a hardware description language code generator) is presented. A framework is developed that takes a depth estimation algorithm described in MATLAB as input and generates VHDL code, dramatically decreasing the time required to implement an application on field-programmable gate arrays (FPGAs). In the first phase, the design is carried out in MATLAB. Using HDL Coder, the MATLAB floating-point design is converted to an efficient fixed-point design, and VHDL code and a test bench are generated from the fixed-point MATLAB code. The generated VHDL code is then verified by co-simulation using Mentor Graphics ModelSim 10.3d. Simulation results are presented which indicate that the VHDL simulations match the MATLAB simulations and confirm the efficiency of the presented methodology.
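The float-to-fixed conversion and the reference-model check at the heart of this flow can be mimicked in a few lines of Python; the word and fraction widths below are illustrative assumptions, whereas HDL Coder derives them from the data ranges it observes.

```python
import numpy as np

def to_fixed(x, frac_bits=12, word_bits=16):
    """Quantize floats to signed fixed point (illustrative widths)."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int64), scale

# Floating-point reference vs. fixed-point model of the same dot product,
# mimicking the MATLAB-vs-VHDL co-simulation check described above.
coeffs = np.array([0.25, -0.5, 0.125, 0.75])
data = np.random.uniform(-1, 1, 4)
ref = np.dot(coeffs, data)

qc, s = to_fixed(coeffs)
qd, _ = to_fixed(data)
fixed = np.dot(qc, qd) / (s * s)          # rescale the integer result
print("abs error:", abs(ref - fixed))
```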


10.14311/692 ◽  
2005 ◽  
Vol 45 (2) ◽  
Author(s):  
M. Bečvář ◽  
P. Štukjunger

Arithmetic operations are among the most frequently used operations in contemporary digital integrated circuits. Various structures have been designed that exploit different features of IC architectures. Nevertheless, very few studies consider the design of arithmetic operations in field-programmable gate arrays (FPGAs), a reprogrammable type of digital integrated circuit. This text compares the results achieved when implementing basic fixed-point arithmetic units in FPGAs.
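As a behavioural illustration of what such units compute, the Python sketch below models a saturating fixed-point adder and a Q-format multiplier; the widths and the saturate-on-overflow policy are assumptions for illustration, not taken from the paper.

```python
def q_add(a, b, word_bits=16):
    """Saturating fixed-point addition: clamping instead of wrapping on
    overflow is one of the design trade-offs such FPGA units must make."""
    hi = (1 << (word_bits - 1)) - 1
    lo = -(1 << (word_bits - 1))
    return max(lo, min(hi, a + b))

def q_mul(a, b, frac_bits=8):
    """Fixed-point multiply: full-precision integer product, then a right
    shift to restore the Q-format scaling."""
    return (a * b) >> frac_bits

# Q-format values with 8 fractional bits: 1.5 -> 384, 0.25 -> 64
print(q_mul(384, 64))       # 96 == 0.375, i.e. 1.5 * 0.25
print(q_add(30000, 10000))  # saturates at 32767
```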

