neural network
Recently Published Documents


2022 ◽  
Vol 15 (2) ◽  
pp. 1-34
Tobias Alonso ◽  
Lucian Petrica ◽  
Mario Ruiz ◽  
Jakoba Petri-Koenig ◽  
Yaman Umuroglu ◽  

Customized compute acceleration in the datacenter is key to the wider roll-out of applications based on deep neural network (DNN) inference. In this article, we investigate how to maximize the performance and scalability of field-programmable gate array (FPGA)-based pipeline dataflow DNN inference accelerators (DFAs) automatically on computing infrastructures consisting of multi-die, network-connected FPGAs. We present Elastic-DF, a novel resource partitioning tool and associated FPGA runtime infrastructure that integrates with the DNN compiler FINN. Elastic-DF allocates FPGA resources to DNN layers and layers to individual FPGA dies to maximize the total performance of the multi-FPGA system. In the resulting Elastic-DF mapping, the accelerator may be instantiated multiple times, and each instance may be segmented across multiple FPGAs transparently, whereby the segments communicate peer-to-peer through 100 Gbps Ethernet FPGA infrastructure, without host involvement. When applied to ResNet-50, Elastic-DF provides a 44% latency decrease on Alveo U280. For MobileNetV1 on Alveo U200 and U280, Elastic-DF enables a 78% throughput increase, eliminating the performance difference between these cards and the larger Alveo U250. Elastic-DF also increases operating frequency in all our experiments, on average by over 20%. Elastic-DF therefore increases performance portability between different sizes of FPGA and increases the critical throughput per cost metric of datacenter inference.
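The core of Elastic-DF is allocating DNN layers to FPGA dies so that no single die becomes the pipeline bottleneck. As a rough illustration of that partitioning idea (not the paper's actual algorithm — the layer costs, die count, and greedy heuristic below are all invented for the sketch), one can split a chain of layers into contiguous segments with approximately balanced resource load:

```python
# Hypothetical sketch of the layer-to-die partitioning idea behind
# Elastic-DF: assign a chain of DNN layers (each with an estimated
# resource cost) to a fixed number of FPGA dies so that per-die load
# stays close to the ideal average. Costs and die count are illustrative.

def partition_layers(layer_costs, num_dies):
    """Greedily split layers into contiguous, load-balanced segments."""
    target = sum(layer_costs) / num_dies   # ideal per-die load
    segments, current, load = [], [], 0.0
    for cost in layer_costs:
        # close the current segment once adding another layer would
        # overshoot the target, unless this is already the last die
        if current and load + cost > target and len(segments) < num_dies - 1:
            segments.append(current)
            current, load = [], 0.0
        current.append(cost)
        load += cost
    segments.append(current)
    return segments

layers = [4, 7, 3, 8, 2, 6, 5]   # illustrative per-layer resource costs
segs = partition_layers(layers, 3)
```

In the actual system, the segments assigned to different dies (or different FPGAs) communicate peer-to-peer over the 100 Gbps Ethernet infrastructure described in the abstract; the sketch only shows the load-balancing decision.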

2022 ◽  
Vol 73 ◽  
pp. 103444
Samaneh Abbasi ◽  
Meysam Tavakoli ◽  
Hamid Reza Boveiri ◽  
Mohammad Amin Mosleh Shirazi ◽  
Raouf Khayami ◽  

Nguyen Thai Duong ◽  
Nguyen Quang Duy

<span>An adaptive backstepping control scheme based on a disturbance observer and a neural network is proposed for the ship nonlinear active fin system. A disturbance observer is designed to estimate the disturbances acting on the system; in this way, the response time is shortened and the negative impact of disturbances and uncertain elements of the system is reduced. In addition, a radial basis function neural network (RBFNN) is employed to approximate the unknown elements of the ship nonlinear active fin system, so that the system achieves good roll-reduction performance and overcomes the model uncertainties, and the designed controller maintains the ship roll angle at the desired value. Finally, simulation results for a supply vessel are given to verify the effectiveness of the proposed controller.</span>
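The RBFNN in schemes like this approximates an unknown nonlinearity as a weighted sum of Gaussian basis functions, with the weights adapted online. A minimal, self-contained sketch of that mechanism (the centers, basis width, adaptation gain, and target function here are all illustrative choices, not values from the paper):

```python
# Minimal sketch of a radial basis function neural network (RBFNN)
# approximating an unknown scalar nonlinearity online, as used to
# cancel unknown dynamics in adaptive control schemes. All parameters
# below (centers, width, learning rate, target f) are illustrative.
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian basis functions phi_i(x) = exp(-(x - c_i)^2 / width^2)."""
    return np.exp(-np.square(x - centers) / width**2)

centers = np.linspace(-np.pi, np.pi, 9)   # evenly spaced RBF centers
width, lr = 1.0, 0.5                      # basis width, adaptation gain
weights = np.zeros_like(centers)          # adaptive output weights

# approximate f(x) = sin(x) from samples, with a gradient adaptation law
rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.uniform(-np.pi, np.pi)
    phi = rbf_features(x, centers, width)
    err = np.sin(x) - weights @ phi       # approximation error
    weights += lr * err * phi             # weight update

approx = weights @ rbf_features(1.0, centers, width)
```

In the controller itself, the adaptation law is derived from a Lyapunov argument rather than plain gradient descent, but the structure — fixed Gaussian features, adaptive linear output weights — is the same.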

2022 ◽  
Vol 205 ◽  
pp. 107730
Pandia Rajan Jeyaraj ◽  
Siva Prakash Asokan ◽  
Aravind Chellachi Karthiresan

2022 ◽  
Vol 248 ◽  
pp. 106226
Shuxian Wang ◽  
Shengmao Zhang ◽  
Yang Liu ◽  
Jiaze Zhang ◽  
Yongwen Sun ◽  

2022 ◽  
Vol 40 (3) ◽  
pp. 1-30
Zhiwen Xie ◽  
Runjie Zhu ◽  
Kunsong Zhao ◽  
Jin Liu ◽  
Guangyou Zhou ◽  

Cross-lingual entity alignment has attracted considerable attention in recent years. Past studies using conventional approaches to match entities share the common problem of missing important structural information beyond entities in the modeling process. This allows graph neural network models to step in. Most existing graph neural network approaches model individual knowledge graphs (KGs) separately, with a small number of pre-aligned entities serving as anchors to connect the different KG embedding spaces. However, this characteristic causes several major problems, including performance limits due to the insufficiency of available seed alignments and the neglect of pre-aligned links that provide useful contextual information between nodes. In this article, we propose DuGa-DIT, a dual gated graph attention network with dynamic iterative training, to address these problems in a unified model. The DuGa-DIT model captures neighborhood and cross-KG alignment features by using intra-KG attention and cross-KG attention layers. With the dynamic iterative process, we can dynamically update the cross-KG attention score matrices, which enables our model to capture more cross-KG information. We conduct extensive experiments on two benchmark datasets and a case study in cross-lingual personalized search. Our experimental results demonstrate that DuGa-DIT outperforms state-of-the-art methods.
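The two attention flavours named in the abstract can be illustrated with a toy computation: intra-KG attention aggregates each entity's neighbourhood within one graph, while cross-KG attention scores every entity pair across the two graphs. A hedged sketch with random toy embeddings (graph sizes, dimensions, and the plain dot-product scoring are all assumptions, not the paper's architecture):

```python
# Toy sketch of the two attention types in a dual-attention alignment
# model: intra-KG attention over a node's neighbourhood, and a cross-KG
# attention score matrix over entity pairs from two graphs. Embeddings,
# sizes, and dot-product scoring are illustrative assumptions.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
E1 = rng.normal(size=(4, 8))   # entity embeddings, KG 1
E2 = rng.normal(size=(5, 8))   # entity embeddings, KG 2
adj = np.ones((4, 4))          # toy fully connected neighbourhood in KG 1

# intra-KG attention: attend over neighbours within one graph
intra_scores = softmax(np.where(adj > 0, E1 @ E1.T, -1e9))
E1_agg = intra_scores @ E1     # neighbourhood-aggregated embeddings

# cross-KG attention: score every KG1 entity against every KG2 entity;
# in the paper this matrix is updated dynamically during iterative training
cross_scores = softmax(E1_agg @ E2.T)
```

The dynamic iterative training then uses high-confidence rows of the cross-KG score matrix to add new pseudo-alignments, which is how the model mitigates the seed-alignment shortage described above.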

2022 ◽  
Vol 2 (1) ◽  
pp. 1-29
Sukrit Mittal ◽  
Dhish Kumar Saxena ◽  
Kalyanmoy Deb ◽  
Erik D. Goodman

Learning effective problem information from the already explored search space in an optimization run, and utilizing it to improve the convergence of subsequent solutions, have represented important directions in Evolutionary Multi-objective Optimization (EMO) research. In this article, a machine learning (ML)-assisted approach is proposed that: (a) maps the solutions from earlier generations of an EMO run to the current non-dominated solutions in the decision space; (b) learns the salient patterns in the mapping using an ML method, here an artificial neural network (ANN); and (c) uses the learned ML model to advance some of the subsequent offspring solutions in an adaptive manner. Such a multi-pronged approach, quite different from the popular surrogate-modeling methods, leads to what is here referred to as the Innovized Progress (IP) operator. On several test and engineering problems involving two and three objectives, with and without constraints, it is shown that an EMO algorithm assisted by the IP operator offers faster convergence behavior, compared to its base version independent of the IP operator. The results are encouraging, pave a new path for the performance improvement of EMO algorithms, and set the motivation for further exploration on more challenging problems.
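Steps (a)-(c) above amount to fitting a map from earlier solutions to their improved counterparts and then applying that map to new offspring. A deliberately simplified sketch: a least-squares linear map stands in for the paper's ANN, and the "improved" targets are synthetic (the true IP operator pairs solutions with the evolving non-dominated set):

```python
# Sketch of the Innovized Progress (IP) idea: learn a mapping from
# earlier decision vectors to paired "improved" vectors, then apply it
# to advance fresh offspring. A linear least-squares map stands in for
# the paper's ANN; the target mapping here is synthetic.
import numpy as np

rng = np.random.default_rng(2)
old_pop = rng.uniform(0, 1, size=(20, 3))   # earlier decision vectors
# synthetic stand-in for "current non-dominated" partners of each point
target = 0.5 * old_pop + 0.25

# step (b): fit x -> improved(x); linear least squares replaces the ANN
X = np.hstack([old_pop, np.ones((20, 1))])  # add bias column
W, *_ = np.linalg.lstsq(X, target, rcond=None)

# step (c): advance new offspring through the learned map
offspring = rng.uniform(0, 1, size=(5, 3))
advanced = np.hstack([offspring, np.ones((5, 1))]) @ W
```

Because the synthetic target is exactly affine, the least-squares fit recovers it exactly here; with a real Pareto-front mapping, the paper's ANN captures the nonlinear structure, and only a subset of offspring is advanced, adaptively.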
