deep neural network
Recently Published Documents


TOTAL DOCUMENTS: 6588 (FIVE YEARS: 5020)

H-INDEX: 61 (FIVE YEARS: 28)

2022 · Vol. 15 (2) · pp. 1-34
Author(s): Tobias Alonso, Lucian Petrica, Mario Ruiz, Jakoba Petri-Koenig, Yaman Umuroglu, ...

Customized compute acceleration in the datacenter is key to the wider roll-out of applications based on deep neural network (DNN) inference. In this article, we investigate how to automatically maximize the performance and scalability of field-programmable gate array (FPGA)-based pipelined dataflow DNN inference accelerators (DFAs) on computing infrastructure consisting of multi-die, network-connected FPGAs. We present Elastic-DF, a novel resource-partitioning tool and associated FPGA runtime infrastructure that integrates with the DNN compiler FINN. Elastic-DF allocates FPGA resources to DNN layers, and layers to individual FPGA dies, to maximize the total performance of the multi-FPGA system. In the resulting Elastic-DF mapping, the accelerator may be instantiated multiple times, and each instance may be segmented transparently across multiple FPGAs, with the segments communicating peer-to-peer over 100 Gbps Ethernet FPGA infrastructure, without host involvement. When applied to ResNet-50, Elastic-DF provides a 44% latency decrease on the Alveo U280. For MobileNetV1 on the Alveo U200 and U280, Elastic-DF enables a 78% throughput increase, eliminating the performance difference between these cards and the larger Alveo U250. Elastic-DF also increases operating frequency in all our experiments, on average by over 20%. Elastic-DF therefore improves performance portability between FPGAs of different sizes and raises the critical throughput-per-cost metric of datacenter inference.
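The allocation step described here can be pictured as a pipeline-partitioning problem: a dataflow pipeline runs at the speed of its slowest segment, so layers should be split across dies such that the most heavily loaded die carries as little work as possible. The sketch below illustrates this with a contiguous min-max partitioning (binary search plus a greedy feasibility check). It is a simplified stand-in for Elastic-DF's actual resource model, which accounts for real FPGA resources and network links and integrates with FINN; the single abstract "work" number per layer is an illustrative assumption.

```python
# A minimal sketch of the kind of partitioning problem Elastic-DF solves:
# assign a contiguous chain of DNN layers to FPGA dies so that the most
# heavily loaded die is as light as possible (the pipeline's throughput is
# set by its slowest segment). This toy version is purely illustrative.

def partition_layers(work: list[float], num_dies: int) -> list[list[int]]:
    """Split layers (by index) into at most `num_dies` contiguous segments,
    minimizing the maximum total work on any die."""
    def fits(cap: float) -> bool:
        # Greedy check: can the chain be packed into num_dies segments
        # without any segment exceeding `cap`?
        dies, load = 1, 0.0
        for w in work:
            if w > cap:
                return False
            if load + w > cap:
                dies, load = dies + 1, w
            else:
                load += w
        return dies <= num_dies

    # Binary-search the smallest feasible per-die capacity.
    lo, hi = max(work), sum(work)
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if fits(mid):
            hi = mid
        else:
            lo = mid

    # Reconstruct the segmentation for the found capacity.
    segments, current, load = [], [], 0.0
    for i, w in enumerate(work):
        if current and load + w > hi:
            segments.append(current)
            current, load = [], 0.0
        current.append(i)
        load += w
    segments.append(current)
    return segments

# Example: 6 layers with uneven compute cost, mapped onto 3 dies.
print(partition_layers([4.0, 2.0, 7.0, 3.0, 3.0, 5.0], 3))
# -> [[0, 1], [2, 3], [4, 5]]  (max die load 10.0)
```

In the real tool, the per-layer "work" would be replaced by a joint resource/throughput model, and the segment boundaries would correspond to the peer-to-peer Ethernet hops between FPGAs.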


2022 · Vol. 73 · pp. 103444
Author(s): Samaneh Abbasi, Meysam Tavakoli, Hamid Reza Boveiri, Mohammad Amin Mosleh Shirazi, Raouf Khayami, ...

2022 · Vol. 205 · pp. 107730
Author(s): Pandia Rajan Jeyaraj, Siva Prakash Asokan, Aravind Chellachi Karthiresan

2022 · Vol. 18 (2) · pp. 1-20
Author(s): Yandong Luo, Panni Wang, Shimeng Yu

In this article, we propose a hardware accelerator design using ferroelectric transistor (FeFET)-based hybrid precision synapses (HPS) for deep neural network (DNN) on-chip training. The drain-erase scheme for FeFET programming is incorporated into both the FeFET HPS design and the FeFET buffer design. By using drain erase, high-density FeFET buffers can be integrated on-chip to store the intermediate input/output activations and gradients, which reduces energy-consuming off-chip DRAM accesses. Architectural evaluation results show that energy efficiency is improved by 1.2×–2.1× and 3.9×–6.0× compared with other HPS-based designs and emerging non-volatile memory baselines, respectively. Chip area is reduced by 19%–36% compared with designs using an SRAM on-chip buffer, even though the capacity of the FeFET buffer is larger. In addition, by utilizing the drain-erase scheme for FeFET programming, chip area is reduced by 11%–28.5% compared with designs using a body-erase scheme.
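The energy argument here rests on the large gap between off-chip DRAM and on-chip buffer access costs. The back-of-the-envelope sketch below makes that reasoning concrete; the per-byte energies are generic ballpark assumptions, not figures from this paper, and the traffic volume and on-chip fractions are invented for illustration.

```python
# Illustration of why moving the activation/gradient buffer on-chip pays
# off during DNN training. The per-access energies below are ASSUMED
# ballpark figures, NOT numbers from this paper.

E_DRAM_PJ_PER_BYTE   = 160.0  # off-chip DRAM access energy (assumed)
E_ONCHIP_PJ_PER_BYTE = 2.0    # on-chip buffer access energy (assumed)

def training_buffer_energy(bytes_moved: float, onchip_fraction: float) -> float:
    """Energy (joules) to move intermediate activations/gradients, given
    the fraction of traffic served by the on-chip buffer."""
    onchip  = bytes_moved * onchip_fraction * E_ONCHIP_PJ_PER_BYTE
    offchip = bytes_moved * (1 - onchip_fraction) * E_DRAM_PJ_PER_BYTE
    return (onchip + offchip) * 1e-12  # picojoules -> joules

# One hypothetical training step moving 50 MB of activations + gradients:
traffic  = 50e6
baseline = training_buffer_energy(traffic, onchip_fraction=0.2)  # small SRAM buffer
fefet    = training_buffer_energy(traffic, onchip_fraction=0.9)  # large on-chip buffer
print(f"baseline: {baseline*1e3:.2f} mJ, large on-chip buffer: {fefet*1e3:.2f} mJ")
print(f"improvement: {baseline/fefet:.1f}x")
```

Under these assumed numbers, serving most of the buffer traffic on-chip cuts data-movement energy by roughly 7×, which is the qualitative effect the high-density FeFET buffer is exploiting.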


2022 · Vol. 22 (1) · pp. 1-28
Author(s): Sajib Mistry, Lie Qu, Athman Bouguettaya

We propose a novel generic reputation-bootstrapping framework for composite services. Multiple reputation-related indicators are considered in a layer-based framework to implicitly reflect the reputation of the component services. The importance of an indicator for the future performance of a component service is learned using a modified Random Forest algorithm. We propose a topology-aware Forest Deep Neural Network (fDNN) to find the correlations between the reputation of a composite service and the reputation indicators of its component services. The trained fDNN model predicts the reputation of a new composite service along with a confidence value. Experimental results on a real-world dataset demonstrate the efficiency of the proposed approach.
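The "forest feeds a neural network" idea can be sketched as a two-stage pipeline: a Random Forest is fit on per-component reputation indicators, and its per-tree predictions become the features that a small neural network maps to a composite-service reputation score. The sketch below follows that shape only; the paper's actual topology-aware fDNN architecture and its confidence estimation are not reproduced here, and all data, names, and shapes are assumptions for illustration.

```python
# A minimal, hypothetical sketch of a forest-then-DNN reputation model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy data: 500 composite services, 12 reputation indicators each
# (e.g., component ratings, response time, availability), with a
# synthetic ground-truth reputation in [0, 1].
X = rng.random((500, 12))
y = (0.5 * X[:, 0] + 0.3 * X[:, 3] * X[:, 7] + 0.2 * X[:, 11]
     + 0.05 * rng.standard_normal(500)).clip(0, 1)

# Stage 1: the forest learns which indicators matter.
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print("top indicator importances:", np.argsort(forest.feature_importances_)[::-1][:3])

# Stage 2: per-tree predictions become the features for the DNN stage.
tree_features = np.column_stack([t.predict(X) for t in forest.estimators_])
dnn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(tree_features, y)

# Predict reputation for a new composite service; the spread of the
# per-tree predictions serves as a crude stand-in for a confidence value.
x_new = rng.random((1, 12))
per_tree = np.array([t.predict(x_new)[0] for t in forest.estimators_])
score = dnn.predict(per_tree.reshape(1, -1))[0]
print(f"predicted reputation: {score:.3f}, spread (confidence proxy): {per_tree.std():.3f}")
```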

