ACE-GCN: A Fast Data-driven FPGA Accelerator for GCN Embedding
ACE-GCN is a fast, resource-conservative, and energy-efficient FPGA accelerator for graph convolutional embedding with data-driven qualities, intended for low-power in-place deployment. Our accelerator exploits inherent qualities of the power-law distribution exhibited by real-world graphs, such as structural similarity, replication, and feature exchangeability. Contrary to other hardware implementations of GCNs, in which dataset sparsity becomes an issue that must be bypassed with multiple optimization techniques, our architecture is designed to take advantage of this very situation. We implement an innovative hardware architecture, supported by our "implicit-processing-by-association" concept. The computational relief, and the consequent acceleration, come from the possibility of replacing rather complex convolutional operations with faster LUT-based comparators and automatic estimation of convolutional results. We are thus able to trade computational complexity for storage capacity, under controllable design parameters. The core operation of the ACE-GCN accelerator consists of orderly parading a set of vector-based sub-graph structures named "types", each linked to a pre-calculated embedding, past incoming "sub-graphs-in-observance" (SIOs in our work); each SIO either assumes the matched graph embedding or undergoes unavoidable convolutional processing, the decision depending on the level of similarity obtained from a Jaccard feature-based coefficient. Results demonstrate that our accelerator achieves a competitive amount of acceleration: depending on dataset and resource target, between 100× and 1600× over the PyG baseline, coming within 40% to 70% of AWB-GCN on smaller datasets and even surpassing AWB-GCN on larger ones, with controllable accuracy-loss levels. We further demonstrate the parallelism potential of our approach by analyzing the effect of storage capacity on the gradual relieving
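The decision mechanism described above, matching an incoming SIO against stored "types" by a Jaccard feature-based coefficient and reusing the pre-calculated embedding when similarity is high enough, can be sketched in software. This is a minimal illustrative sketch, not the hardware implementation: the function names (`jaccard`, `embed_sio`), the threshold value, and the `convolve` fallback are all assumptions for exposition.

```python
# Illustrative sketch of "implicit processing by association":
# compare an SIO's feature set against stored "types" (feature set +
# pre-calculated embedding) via the Jaccard coefficient; above a
# threshold, reuse the stored embedding instead of convolving.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two feature sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def embed_sio(sio_features: set, types: list, threshold: float, convolve):
    """Return a stored embedding if a sufficiently similar 'type'
    exists; otherwise fall back to full convolutional processing."""
    best_sim, best_emb = 0.0, None
    for type_features, embedding in types:
        sim = jaccard(sio_features, type_features)
        if sim > best_sim:
            best_sim, best_emb = sim, embedding
    if best_sim >= threshold and best_emb is not None:
        return best_emb            # fast path: embedding assumption
    return convolve(sio_features)  # slow path: unavoidable convolution
```

In hardware, the per-type comparison is what the abstract replaces with LUT-based comparators, so the trade-off between accuracy loss and acceleration is governed by the threshold and by how many types the on-chip storage can hold.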