Deploying a Smart Queuing System on Edge with Intel OpenVINO Toolkit

Author(s):  
Rishit Dagli ◽  
Süleyman Eken

Abstract Recent increases in computational power and the development of specialized architectures have made it possible to perform machine learning, especially inference, on the edge. OpenVINO is a toolkit for convolutional neural networks that facilitates fast-track development of computer vision algorithms and deep learning neural networks into vision applications, and enables their easy heterogeneous execution across hardware platforms. Smart queue management can be the key to the success of any sector. In this paper, we focus on edge deployments to make the Smart Queuing System (SQS) accessible to all, also providing the ability to run it on cheap devices. This makes it possible to run the queuing system's deep learning algorithms on pre-existing computers that a retail store, public transportation facility, or factory may already possess, thus considerably reducing the cost of deploying such a system. SQS demonstrates how to create a video AI solution on the edge. We validate our results by testing it on multiple edge devices, namely a CPU, an Integrated Edge Graphics Processing Unit (iGPU), a Vision Processing Unit (VPU), and Field-Programmable Gate Arrays (FPGAs). Experimental results show that deploying an SQS on the edge is very promising.
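The heterogeneous-execution idea can be illustrated with OpenVINO's Python API. Below is a minimal sketch, assuming the OpenVINO 2022+ `openvino.runtime` API; the IR model path and the input shape are hypothetical placeholders, not artifacts published with the paper.

```python
# Minimal sketch of device-portable OpenVINO inference (assumed API: 2022+).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection.xml")  # hypothetical IR file

# Plugin device names: CPU, integrated GPU, Myriad VPU; the FPGA plugin
# (targeted here through a HETERO fallback chain) exists only in older releases.
for device in ("CPU", "GPU", "MYRIAD", "HETERO:FPGA,CPU"):
    try:
        compiled = core.compile_model(model, device)
    except RuntimeError:
        continue  # plugin not available on this host
    request = compiled.create_infer_request()
    frame = np.zeros((1, 3, 320, 544), dtype=np.float32)  # placeholder frame
    results = request.infer({0: frame})
    print(device, "->", list(results.values())[0].shape)
```

The same compiled-model interface is reused across all four device targets, which is the property that lets one SQS codebase run on whatever hardware a site already owns.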

2016 ◽  
Vol 25 (04) ◽  
pp. 1650029 ◽  
Author(s):  
Adam Ziebinski ◽  
Stanisław Świerc

Currently, embedded system designs aim to improve areas such as speed, energy efficiency, and the cost of an application. Application-specific instruction set extensions on reconfigurable hardware provide such opportunities. The article presents a new approach for generating soft core processors that are optimized for specific tasks. In this work, we describe an automatic method for selecting custom instructions, based on the machine code of the application program, for generating soft core processors. As a result, a soft core processor will contain only the logic that is absolutely necessary. This solution requires fewer gates to be synthesized in the field programmable gate array (FPGA) and has the potential to increase the speed of the information processing performed by the system in the target FPGA. Experiments have confirmed the correct operation of the method. After the reduction mechanism was enabled, the total number of occupied slice blocks decreased to 47% of its initial value in the best case for the Xilinx Spartan3 (xc3s200), and the maximum frequency increased by approximately 44% in the best case for the Xilinx Spartan6 (xc6slx4).
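A toy illustration of the selection step, not the authors' tool: scan the application's disassembled machine code for the opcodes it actually uses, and keep only those instructions in the generated core. The ISA table and the listing below are hypothetical.

```python
# Sketch: derive the soft-core instruction subset from program machine code.
from collections import Counter

FULL_ISA = {"ADD", "SUB", "AND", "OR", "XOR", "SLL", "SRL",
            "LW", "SW", "BEQ", "BNE", "JAL", "MUL", "DIV"}

def used_instructions(disassembly_lines):
    """Count the mnemonics appearing in a disassembled program listing."""
    usage = Counter()
    for line in disassembly_lines:
        mnemonic = line.split()[0].upper()
        if mnemonic in FULL_ISA:
            usage[mnemonic] += 1
    return usage

listing = ["lw r1, 0(r2)", "add r3, r1, r1", "sw r3, 4(r2)", "beq r3, r0, -8"]
usage = used_instructions(listing)
core_isa = set(usage)              # synthesize functional units only for these
removed = FULL_ISA - core_isa      # logic that need not be synthesized at all
print(f"kept {sorted(core_isa)}, dropped {len(removed)} unused instructions")
```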


Due to the exponential increase in electronic devices connected to the Internet, the amount of data they produce has grown to the same extent. To cope with processing these data, the use of automatic learning algorithms, also known as machine learning, has become widespread; the most popular are neural networks. These algorithms need a great deal of resources to compute all their operations, and because of that they have traditionally been implemented in application-specific integrated circuits. Recently, however, there has been a boom in implementations on field-programmable gate arrays (FPGAs), which allow greater parallelism in the implementation of the algorithms. This paper proposes an FPGA-based feature extraction method, applied here to offline handwritten digit recognition. The classification relies on a simple two-layer multilayer perceptron (MLP). The feature extraction approach is well suited to FPGA execution because it uses only subtraction and addition operations. The features are extracted by the proposed method from standard-database handwritten digit images normalized to 40×40 pixels. Experimental results show that the proposed system achieves 85% accuracy. Overall, compared to other systems, it is less complex, more accurate, and simpler. The paper further describes the FPGA implementation of an IEEE-754 single-precision floating-point MAC unit, which is used to feed the weighted inputs to the neurons of the artificial neural network. Using floating-point numbers extends the representable data range, which is highly desirable for artificial neural networks. The code is developed in HDL, simulated, and synthesized using Xilinx synthesis tools. To validate computational accuracy, a MATLAB validation script is used to verify the HDL output against a standard reference model.
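As an illustration of an addition/subtraction-only extractor of this kind: the paper's exact features are not specified here, so the zone sums and neighbouring-zone differences below are assumptions, not the published scheme.

```python
# Sketch: add/sub-only features over a 40x40 digit image.
import numpy as np

def zone_features(img40x40, zone=8):
    """Sum pixels per zone (additions only), then take neighbouring-zone
    differences (subtractions only) as extra contrast features."""
    z = img40x40.reshape(40 // zone, zone, 40 // zone, zone).sum(axis=(1, 3))
    horiz = z[:, 1:] - z[:, :-1]        # differences between adjacent zones
    vert = z[1:, :] - z[:-1, :]
    return np.concatenate([z.ravel(), horiz.ravel(), vert.ravel()])

digit = (np.random.rand(40, 40) > 0.5).astype(np.int32)  # stand-in image
feats = zone_features(digit)
print(feats.shape)  # 25 zone sums + 20 + 20 difference features = (65,)
```

Because the datapath needs only adders and subtractors, no DSP multipliers are consumed by the extraction stage, which is what makes such schemes attractive on small FPGAs.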


Electronics ◽  
2019 ◽  
Vol 8 (7) ◽  
pp. 803 ◽  
Author(s):  
Deguang Wang ◽  
Junzhong Shen ◽  
Mei Wen ◽  
Chunyuan Zhang

Three-dimensional (3D) deconvolution is widely used in many computer vision applications. However, most previous works have focused only on accelerating two-dimensional (2D) deconvolutional neural networks (DCNNs) on Field-Programmable Gate Arrays (FPGAs), while the acceleration of 3D DCNNs has not been studied in depth, as they have higher computational complexity and sparsity than 2D DCNNs. In this paper, we focus on the acceleration of both 2D and 3D sparse DCNNs on FPGAs by proposing efficient schemes for mapping 2D and 3D sparse DCNNs onto a uniform architecture. Firstly, a pruning method is used to prune unimportant network connections and increase the sparsity of the weights. After pruning, the number of parameters of the DCNNs is reduced significantly without accuracy loss. Secondly, the remaining non-zero weights are encoded in coordinate (COO) format, reducing the memory demands of the parameters. Finally, to demonstrate the effectiveness of our work, we implement our accelerator design on the Xilinx VC709 evaluation platform for four real-life 2D and 3D DCNNs. After the first two steps, the storage required by the DCNNs is reduced by up to 3.9×. Results show that the performance of our method on the accelerator outperforms that of our prior work by 2.5× to 3.6× in latency.
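The prune-then-encode steps can be sketched in a few lines with SciPy's COO format; the magnitude threshold below is an arbitrary example value, not the paper's pruning criterion.

```python
# Sketch: magnitude pruning followed by COO (row, col, value) encoding.
import numpy as np
from scipy.sparse import coo_matrix

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)

threshold = 2.0                       # hypothetical magnitude threshold
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

coo = coo_matrix(pruned)              # stores only the non-zero triples
dense_bytes = weights.nbytes
coo_bytes = coo.row.nbytes + coo.col.nbytes + coo.data.nbytes
print(f"sparsity: {1 - coo.nnz / weights.size:.2%}, "
      f"storage reduced {dense_bytes / coo_bytes:.1f}x")
```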


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1823
Author(s):  
Tomyslav Sledevič ◽  
Artūras Serackis

Convolutional neural networks (CNNs) are a computation- and memory-demanding class of deep neural networks. Field-programmable gate arrays (FPGAs) are often used to accelerate networks deployed on embedded platforms due to the high computational complexity of CNNs. In most cases, the CNNs are trained with existing deep learning frameworks and then mapped to FPGAs with specialized toolflows. In this paper, we propose a CNN core architecture called mNet2FPGA that places a trained CNN on a SoC FPGA. The processing system (PS) is responsible for configuring the convolutional and fully connected cores according to a list of prescheduled instructions. The programmable logic holds the cores of the convolutional and fully connected layers. The hardware architecture is based on advanced extensible interface (AXI) stream processing with simultaneous bidirectional transfers between RAM and the CNN core. The core was tested on a cost-optimized Z-7020 FPGA with 16-bit fixed-point VGG networks. Kernel binarization and merging with the batch normalization layer were applied to reduce the number of DSPs in the multi-channel convolutional core. The convolutional core processes eight input feature maps at once and generates eight output channels of the same size and composition at 50 MHz. The core of the fully connected (FC) layer works at 100 MHz with up to 4096 neurons per layer. In the current version of the CNN core, the size of the convolutional kernel is fixed to 3×3. The estimated average performance is 8.6 GOPS for VGG13 and nearly 8.4 GOPS for VGG16/19 networks.
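The batch-norm merging step mentioned above follows the standard folding identity, BN(conv(x)) = conv'(x) with rescaled weights and bias. A minimal sketch with generic shapes; the eps value and random parameters are placeholders.

```python
# Sketch: fold a batch-normalization layer into the preceding convolution.
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Absorb BN into conv parameters: w' = w * g/sqrt(v+eps),
    b' = (b - mean) * g/sqrt(v+eps) + beta."""
    scale = gamma / np.sqrt(var + eps)          # per-output-channel scale
    w_folded = w * scale[:, None, None, None]   # (out_ch, in_ch, 3, 3) layout
    b_folded = (b - mean) * scale + beta
    return w_folded, b_folded

out_ch, in_ch = 8, 8                            # matches the 8-channel core
w = np.random.randn(out_ch, in_ch, 3, 3).astype(np.float32)
b = np.zeros(out_ch, dtype=np.float32)
g, bt = np.ones(out_ch), np.zeros(out_ch)
mu, var = np.random.randn(out_ch), np.abs(np.random.randn(out_ch)) + 0.1
wf, bf = fold_batchnorm(w, b, g, bt, mu, var)
print(wf.shape, bf.shape)
```

After folding, the BN scaling costs no extra multipliers at inference time, which is one way such designs keep DSP usage down.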


2005 ◽  
Vol 14 (02) ◽  
pp. 347-366 ◽  
Author(s):  
HAIDAR M. HARMANANI ◽  
RONY SALIBA

This paper presents an evolutionary algorithm to solve the datapath allocation problem in high-level synthesis. The method performs allocation of functional units, registers, and multiplexers in addition to controller synthesis with the objective of minimizing the cost of hardware resources. The system handles multicycle functional units as well as structural pipelining. The proposed method was implemented using C++ on a Linux workstation. We tested our method on a set of high-level synthesis benchmarks, all yielding good solutions in a short time. An integration path to Field Programmable Gate Arrays (FPGAs) is provided through VHDL.
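A toy sketch of the evolutionary search idea, not the authors' C++ system: chromosomes assign each operation to a functional unit, and the fitness combines unit area with a hypothetical resource-sharing limit. All costs and limits below are made-up examples.

```python
# Sketch: genetic search over operation-to-functional-unit assignments.
import random

N_OPS, N_UNITS = 10, 4
UNIT_COST = [5, 3, 3, 2]            # hypothetical per-unit area costs

def cost(assign):
    used = set(assign)
    area = sum(UNIT_COST[u] for u in used)
    # hypothetical sharing limit: at most 3 operations multiplexed per unit
    overload = sum(max(0, assign.count(u) - 3) for u in used)
    return area + 10 * overload

def evolve(pop_size=20, gens=50):
    pop = [[random.randrange(N_UNITS) for _ in range(N_OPS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]          # keep the cheapest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_OPS)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:            # point mutation
                child[random.randrange(N_OPS)] = random.randrange(N_UNITS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("assignment:", best, "cost:", cost(best))
```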


2021 ◽  
Vol 7 (10) ◽  
pp. 210
Author(s):  
Cristian Sestito ◽  
Fanny Spagnolo ◽  
Stefania Perri

Nowadays, computer vision relies heavily on convolutional neural networks (CNNs) to perform complex and accurate tasks. Among them, super-resolution CNNs represent a meaningful example, due to the presence of both convolutional (CONV) and transposed convolutional (TCONV) layers. While the former exploit multiply-and-accumulate (MAC) operations to extract features of interest from incoming feature maps (fmaps), the latter perform MACs to tune the spatial resolution of the received fmaps properly. The ever-growing real-time and low-power requirements of modern computer vision applications represent a stimulus for the research community to investigate the deployment of CNNs on well-suited hardware platforms, such as field programmable gate arrays (FPGAs). FPGAs are widely recognized as valid candidates for trading off computational speed and power consumption, thanks to their flexibility and their capability to also deal with computationally intensive models. In order to reduce the number of operations to be performed, this paper presents a novel hardware-oriented algorithm able to efficiently accelerate both CONVs and TCONVs. The proposed strategy was validated by employing it within a reconfigurable hardware accelerator purposely designed to adapt itself to different operating modes set at run-time. When characterized using the Xilinx XC7K410T FPGA device, the proposed accelerator achieved a throughput of up to 2022.2 GOPS and, in comparison to state-of-the-art competitors, it reached an energy efficiency up to 2.3 times higher, without compromising the overall accuracy.
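The gather-versus-scatter MAC patterns that a unified CONV/TCONV accelerator must support can be illustrated in one dimension; the kernel, stride, and sizes below are chosen only for illustration and are not the paper's configuration.

```python
# Sketch: CONV gathers MACs into each output; TCONV scatters MACs from
# each input, upsampling the feature map.
import numpy as np

def conv1d(x, k):                 # gather: each output sums a window of MACs
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def tconv1d(x, k, stride=2):      # scatter: each input MACs into a window
    y = np.zeros(stride * (len(x) - 1) + len(k))
    for i, v in enumerate(x):
        y[i * stride:i * stride + len(k)] += v * k
    return y

x = np.arange(1.0, 6.0)           # 5-sample feature map
k = np.array([1.0, 0.5, 0.25])
print(conv1d(x, k))               # shrinks: 3 outputs
print(tconv1d(x, k))              # grows: 11 outputs (stride-2 upsampling)
```

Both loops reduce to the same MAC primitive, which is why a single reconfigurable datapath can serve the two operating modes set at run-time.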

