HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork

Title:
HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork
Journal Title:
The 2023 Conference on Empirical Methods in Natural Language Processing
DOI:
Publication URL:
Publication Date:
06 December 2023
Citation:
Do, Giang, et al. "HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork." Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023.
Abstract:
By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models. Recent findings suggest that fixing the routers can achieve competitive performance by alleviating the collapsing problem, where all experts eventually learn similar representations. However, this strategy has two key limitations: (i) the policy derived from random routers might be sub-optimal, and (ii) it requires extensive resources during training and evaluation, leading to limited efficiency gains. This work introduces HyperRouter, which dynamically generates the router's parameters through a fixed hypernetwork and trainable embeddings to achieve a balance between training the routers and freezing them to learn an improved routing policy. Extensive experiments across a wide range of tasks demonstrate the superior performance and efficiency gains of HyperRouter compared to existing routing methods. Our implementation is publicly available at https://github.com/giangdip2410/HyperRouter.
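The core mechanism in the abstract, generating a router's parameters from a frozen hypernetwork applied to a small trainable embedding, can be illustrated with a minimal NumPy sketch. All shapes, names, and the plain linear hypernetwork below are illustrative assumptions, not the paper's exact architecture; see the linked repository for the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_embed, n_experts, top_k = 8, 4, 4, 2

# Fixed (frozen) hypernetwork: never updated during training.
# Here it is a single random linear map, purely for illustration.
W_hyper = rng.standard_normal((d_embed, n_experts * d_model))

# Small trainable embedding: the only router-related parameters
# that would receive gradient updates in this scheme.
layer_embedding = rng.standard_normal(d_embed)

# The router's weight matrix is *generated*, not stored directly:
# frozen hypernetwork applied to the trainable embedding.
W_router = (layer_embedding @ W_hyper).reshape(d_model, n_experts)

def route(tokens: np.ndarray) -> np.ndarray:
    """Return the indices of the top-k experts for each token."""
    logits = tokens @ W_router                      # (n_tokens, n_experts)
    return np.argsort(-logits, axis=-1)[:, :top_k]  # top-k expert ids

tokens = rng.standard_normal((3, d_model))
expert_ids = route(tokens)  # one row of top_k expert indices per token
```

Because gradients flow only into `layer_embedding`, the routing policy can still adapt during training, while the frozen hypernetwork constrains how much the router can drift, which is the middle ground between fully trainable and fully frozen routers that the abstract describes.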
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the National Research Foundation - AI Singapore Programme
Grant Reference No.: AISG2-RP-2021-027
Description:
ISSN:
N/A
Files uploaded:

File: emnlp.pdf (637.57 KB, PDF)