Guo, Y., Chen, Y., Zou, X., Yang, X., & Gu, Y. (2022). Algorithms and architecture support of degree-based quantization for graph neural networks. Journal of Systems Architecture, 129, 102578. https://doi.org/10.1016/j.sysarc.2022.102578
Abstract:
Recently, graph neural networks (GNNs) have achieved excellent performance on many graph-related tasks. Typical GNNs follow the neighborhood aggregation strategy, which updates a node's representation by aggregating the features of its neighboring nodes. However, the resulting hybrid execution patterns limit deployment on resource-limited devices. Quantization is an effective technique for accelerating deep neural network (DNN) inference, but few studies have explored quantization algorithms suitable for GNNs. In this paper, we propose a degree-based quantization (DBQ) scheme that identifies sensitive nodes in the graph structure. Protective masks ensure that sensitive nodes perform full-precision operations, while the remaining nodes are quantized. In this way, precision varies dynamically across the graph, yielding greater acceleration while preserving classification accuracy. To support DBQ and translate it into performance gains, we design a new accelerator architecture whose elaborate pipelines and specialized optimizations effectively improve inference speed and accuracy. Compared to state-of-the-art GNN accelerators, DBQ achieves a 2.4x speedup and improves accuracy by 27.7%.
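The abstract does not specify how node sensitivity is scored or which quantization scheme is used, so the following is a minimal Python sketch of the general idea only: nodes above a hypothetical degree threshold are protected by a full-precision mask, and the rest are quantized with a simple uniform symmetric scheme. The names `degree_threshold` and `num_bits`, and the quantizer itself, are illustrative assumptions, not the paper's method.

```python
import numpy as np

def degree_based_quantization(features, adjacency, degree_threshold=8, num_bits=8):
    """Illustrative sketch of degree-based quantization (DBQ).

    Assumption: nodes whose degree exceeds `degree_threshold` are the
    "sensitive" nodes and keep full-precision features; all other node
    features are quantized to `num_bits`-bit integers (uniform symmetric).
    """
    degrees = adjacency.sum(axis=1)            # node degree from adjacency matrix
    protect_mask = degrees > degree_threshold  # sensitive nodes stay full precision

    # Uniform symmetric fake-quantization of the non-sensitive nodes.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(features).max() / qmax
    quantized = np.round(features / scale).clip(-qmax, qmax) * scale

    # Sensitive nodes keep their original full-precision features.
    out = np.where(protect_mask[:, None], features, quantized)
    return out, protect_mask

# Toy usage: 4 nodes; node 0 is high-degree and therefore protected.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
feats = np.random.randn(4, 16).astype(np.float32)
out, mask = degree_based_quantization(feats, adj, degree_threshold=2, num_bits=4)
```

In practice, the per-node mask is what lets precision change dynamically across the graph: the same aggregation kernel can dispatch full-precision arithmetic for masked nodes and low-bit arithmetic for the rest, which is the behavior the proposed accelerator architecture is built to exploit.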
License type:
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Funding Info:
No specific funding was received for this research.