ACE-GCN: A Fast Data-driven FPGA Accelerator for GCN Embedding

2021 ◽  
Vol 14 (4) ◽  
pp. 1-23
Author(s):  
José Romero Hung ◽  
Chao Li ◽  
Pengyu Wang ◽  
Chuanming Shao ◽  
Jinyang Guo ◽  
...  

ACE-GCN is a fast and resource/energy-efficient FPGA accelerator for graph convolutional embedding under data-driven and in-place processing conditions. Our accelerator exploits the inherent power-law distribution and high sparsity commonly exhibited by real-world graph datasets. Contrary to other hardware implementations of GCN, in which traditional optimization techniques are employed to bypass the problem of dataset sparsity, our architecture is designed to take advantage of this very situation. We propose and implement an innovative acceleration approach supported by our "implicit-processing-by-association" concept, in conjunction with a dataset-customized convolutional operator. The computational relief, and the consequent acceleration effect, arise from the possibility of replacing rather complex convolutional operations with a faster embedding-result estimation. Based on a computationally inexpensive and highly expedited similarity calculation, our accelerator decides between automatic embedding estimation and unavoidable direct convolution. Evaluations demonstrate that our approach offers excellent applicability and competitive acceleration: depending on the dataset and the targeted efficiency level, between 23× and 4,930× speedup over the PyG baseline, coming within 46% to 81% of AWB-GCN on smaller datasets and noticeably surpassing AWB-GCN on larger datasets, with controllable accuracy-loss levels. We further demonstrate the unique hardware-optimization characteristics of our approach and discuss its multi-processing potential.
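The estimate-or-convolve decision described above can be sketched in a few lines of software. This is a hypothetical illustration, not the authors' implementation: the type store, the 0.8 threshold, and the dense matrix-vector product standing in for a real graph convolution are all assumptions.

```python
import numpy as np

def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient of two feature-index sets."""
    union = a | b
    return 1.0 if not union else len(a & b) / len(union)

def embed(sio_features, sio_matrix, type_store, threshold=0.8):
    """Return a pre-calculated embedding when a stored 'type' is similar
    enough (implicit processing by association); otherwise fall back to
    direct convolution, here a dense mat-vec as a placeholder."""
    best = max(type_store, key=lambda t: jaccard(sio_features, t["features"]))
    if jaccard(sio_features, best["features"]) >= threshold:
        return best["embedding"]
    return sio_matrix @ np.ones(sio_matrix.shape[1])
```

The interesting trade-off is the one the abstract names: the larger the type store, the more sub-graphs resolve by association rather than by convolution, exchanging storage capacity for compute.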

2021 ◽  
Author(s):  
Jose Romero Hung

ACE-GCN is a fast, resource-conservative, and energy-efficient FPGA accelerator for graph convolutional embedding with data-driven qualities, intended for low-power in-place deployment. Our accelerator exploits the inherent qualities of the power-law distribution exhibited by real-world graphs, such as structural similarity, replication, and feature exchangeability. Contrary to other hardware implementations of GCN, in which dataset sparsity becomes an issue and is bypassed with multiple optimization techniques, our architecture is designed to take advantage of this very situation. We implement an innovative hardware architecture, supported by our "implicit-processing-by-association" concept. The computational relief, and the consequent acceleration effect, come from the possibility of replacing rather complex convolutional operations with faster LUT-based comparators and automatic convolutional-result estimations. We are able to transfer computational complexity into storage capacity, under controllable design parameters. The core operation of the ACE-GCN accelerator consists of orderly parading a set of vector-based sub-graph structures named "types", linked to pre-calculated embeddings, past incoming "sub-graphs-in-observance" (denominated SIO in our work), for either their graph-embedding assumption or their unavoidable convolutional processing, the decision depending on the level of similarity obtained from a Jaccard feature-based coefficient. Results demonstrate that our accelerator achieves a competitive amount of acceleration: depending on dataset and resource target, between 100× and 1,600× over the PyG baseline, coming within 40% to 70% of AWB-GCN on smaller datasets and even surpassing AWB-GCN on larger ones, with controllable accuracy-loss levels. We further demonstrate the parallelism potential of our approach by analyzing the effect of storage capacity on the gradual relieving
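The Jaccard feature-based coefficient that drives the assume-or-convolve decision reduces to bitwise AND/OR plus popcounts when features are encoded as bit-vectors, which is what makes a cheap LUT-based comparator plausible in hardware. A minimal software sketch (the bit-vector encoding is our assumption, not a detail given in the abstract):

```python
def jaccard_bits(a: int, b: int) -> float:
    """Jaccard coefficient over feature bit-vectors: intersection and union
    become AND/OR, and the coefficient is a ratio of two popcounts."""
    union = a | b
    if union == 0:
        return 1.0  # two empty feature sets are identical by convention
    return bin(a & b).count("1") / bin(union).count("1")
```

In hardware, the two popcounts and the threshold comparison map naturally onto LUTs, so the similarity test costs far less than the convolution it may replace.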


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Spyros Stathopoulos ◽  
Loukas Michalas ◽  
Ali Khiat ◽  
Alexantrou Serb ◽  
Themis Prodromakis

The emergence of memristor technologies brings new prospects for modern electronics by enabling novel in-memory computing solutions and energy-efficient, scalable, reconfigurable hardware implementations. Several competing memristor technologies have been presented, each bearing distinct performance metrics across multi-bit memory capacity, low-power operation, endurance, retention, and stability. Application needs, however, constantly drive the push towards higher performance, which necessitates a standard benchmarking procedure for fair evaluation across distinct key metrics. Here we present an electrical characterisation methodology that amalgamates several testing protocols in an appropriate sequence adapted to memristor benchmarking needs, in a technology-agnostic manner. Our approach is designed to extract information on all aspects of device behaviour, ranging from deciphering underlying physical mechanisms to assessing different aspects of electrical performance and even generating data-driven device-specific models. Importantly, it relies solely on standard electrical characterisation instrumentation that is accessible in most electronics laboratories and can thus serve as an independent tool for understanding and designing new memristive device technologies.


2021 ◽  
Vol 13 (7) ◽  
pp. 168781402110277
Author(s):  
Yankai Hou ◽  
Zhaosheng Zhang ◽  
Peng Liu ◽  
Chunbao Song ◽  
Zhenpo Wang

Accurate estimation of the degree of battery aging is essential to ensure safe operation of electric vehicles. In this paper, using real-world vehicles and their operational data, a battery aging estimation method is proposed based on a dual-polarization equivalent circuit (DPEC) model and multiple data-driven models. The DPEC model and the forgetting-factor recursive least-squares method are used to determine the battery system's ohmic internal resistance, with outliers being filtered using boxplots. Furthermore, eight common data-driven models are used to describe the relationship between battery degradation and the factors influencing it, and these models are analyzed and compared in terms of both estimation accuracy and computational requirements. The results show that the gradient descent tree regression, XGBoost regression, and LightGBM regression models are more accurate than the other methods, with root mean square errors of less than 6.9 mΩ. The AdaBoost and random forest regression models are regarded as alternatives because of their relative instability. The linear regression, support vector machine regression, and k-nearest neighbor regression models are not recommended because of poor accuracy or excessively high computational requirements. This work can serve as a reference for subsequent battery degradation studies based on real-time operational data.
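The kind of model comparison the paper performs can be sketched with scikit-learn on synthetic data. This is only an illustrative harness: the feature generation is invented, GradientBoostingRegressor stands in for the paper's boosted-tree, XGBoost, and LightGBM models, and the resulting RMSE values bear no relation to the 6.9 mΩ figure above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

# Synthetic stand-ins for degradation factors (mileage, temperature, ...).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))
y = 50 + 30 * X[:, 0] + 10 * X[:, 1] ** 2 + rng.normal(0, 2, 500)  # mOhm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "gbrt": GradientBoostingRegressor(random_state=0),
    "random_forest": RandomForestRegressor(random_state=0),
    "linear": LinearRegression(),
    "knn": KNeighborsRegressor(),
    "svr": SVR(),
}
# Root-mean-square error on the held-out split, one entry per model.
rmse = {name: mean_squared_error(y_te, m.fit(X_tr, y_tr).predict(X_te)) ** 0.5
        for name, m in models.items()}
```

A fair comparison of the kind the paper reports would also time each `fit`/`predict` call, since computational cost is one of its two ranking criteria.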


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Simone Göttlich ◽  
Sven Spieckermann ◽  
Stephan Stauber ◽  
Andrea Storck

The visualization of conveyor systems in the sense of a connected graph is a challenging problem. Starting from communication data provided by the IT system, graph drawing techniques are applied to generate an appealing layout of the conveyor system. From a mathematical point of view, the key idea is to use the concept of stress majorization to minimize a stress function over the positions of the nodes in the graph. In contrast to the existing literature, we have to take care of special features inspired by real-world problems.
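Stress majorization admits a compact numerical sketch: minimize the sum over node pairs of (||x_i − x_j|| − d_ij)² by repeated Guttman transforms. The code below is a generic SMACOF step with uniform weights, offered to illustrate the concept rather than the authors' layout algorithm, which adds conveyor-specific constraints.

```python
import numpy as np

def stress(X: np.ndarray, D: np.ndarray) -> float:
    """Stress of layout X against target distance matrix D."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return float(((dist[iu] - D[iu]) ** 2).sum())

def smacof_step(X: np.ndarray, D: np.ndarray) -> np.ndarray:
    """One majorization (Guttman transform) update with uniform weights;
    the stress of the returned layout never exceeds that of X."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, 1.0)      # avoid divide-by-zero on the diagonal
    ratio = D / dist
    np.fill_diagonal(ratio, 0.0)
    B = -ratio
    np.fill_diagonal(B, ratio.sum(axis=1))
    return B @ X / n
```

Iterating `smacof_step` until the stress change falls below a tolerance yields the layout; in the conveyor setting, D would come from the communication data mentioned above.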


Author(s):  
Xiaolong Guo ◽  
Yugang Yu ◽  
Gad Allon ◽  
Meiyan Wang ◽  
Zhentai Zhang

To support the 2021 Manufacturing & Service Operations Management (MSOM) Data-Driven Research Challenge, RiRiShun Logistics (a Haier group subsidiary focusing on logistics services for home appliances) provides MSOM members with operational-level logistics data for data-driven research. This paper provides a detailed description of the data, covering over 14 million orders from 149 clients (the consigners) associated with 4.2 million end consumers (the recipients and end users of the appliances) in China, involving 18,000 stock keeping units operated at 103 warehouses. Researchers are welcome to develop econometric models, data-driven optimization techniques, analytical models, and algorithm designs using this data set to address questions suggested by company managers.


2022 ◽  
Vol 54 (9) ◽  
pp. 1-36
Author(s):  
Dylan Chou ◽  
Meng Jiang

Data-driven network intrusion detection (NID) suffers from class imbalance: attack classes are a small minority compared to normal traffic. Moreover, many datasets are collected in simulated environments rather than real-world networks. These challenges undermine the performance of intrusion-detection models, which end up fitted to unrepresentative "sandbox" datasets. This survey presents a taxonomy of eight main challenges and explores common datasets from 1999 to 2020. Trends in these challenges over the past decade are analyzed, and future directions are proposed: expanding NID into cloud-based environments, devising scalable models for large network data, and creating labeled datasets collected in real-world networks.

