---
license: mit
pipeline_tag: image-text-to-text
tags:
- text-generation-inference
---
# Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models
If our project helps you, please give us a star ⭐ on GitHub and cite our paper!
## 📰 News
- [2024.05.31] 🔥 Our code is released!
- [2024.05.25] 🔥 Our checkpoints are available now!
- [2024.05.23] 🔥 Our paper is released!
## What's Interesting?
Dynamic Mixture of Experts (DynMoE) incorporates (1) a novel gating method that lets each token automatically determine the number of experts to activate, and (2) an adaptive process that automatically adjusts the number of experts during training. Both mechanisms are sketched below.
### Top-Any Gating
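The gist of top-any gating is that the gate scores each token against every expert and activates all experts whose score clears a learnable threshold, so the number of active experts varies per token instead of being a fixed top-k. Below is a minimal PyTorch sketch of that idea; the cosine-style scores, per-expert `thresholds`, and the no-match fallback are illustrative assumptions, not the released implementation.

```python
# Hypothetical sketch of top-any gating (not the official DynMoE code).
import torch
import torch.nn.functional as F

def top_any_gating(tokens: torch.Tensor,
                   expert_emb: torch.Tensor,
                   thresholds: torch.Tensor):
    """tokens: (T, d), expert_emb: (E, d), thresholds: (E,) -> boolean routing mask (T, E)."""
    # Gating scores between each token and each (assumed) learnable expert embedding.
    scores = F.normalize(tokens, dim=-1) @ F.normalize(expert_emb, dim=-1).T  # (T, E)

    # A token activates every expert whose score exceeds that expert's threshold,
    # so different tokens can activate different numbers of experts.
    mask = scores > thresholds

    # Fallback: a token that matches no expert is routed to its best-scoring expert.
    unmatched = ~mask.any(dim=-1)
    if unmatched.any():
        best = scores[unmatched].argmax(dim=-1)
        mask[unmatched, best] = True
    return mask, scores
```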
### Adaptive Training Process
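One way the adaptive process could work, shown here as a hedged sketch rather than the official training loop: between training intervals, experts that no token activated are removed, and a new expert is added when tokens repeatedly triggered the no-match fallback from the gating sketch above. Only the gate-side parameters are shown, and the heuristics and initialization are assumptions.

```python
# Illustrative sketch of adjusting the expert count during training (assumptions only).
import torch

def adjust_expert_count(routing_mask: torch.Tensor,
                        expert_emb: torch.Tensor,
                        thresholds: torch.Tensor,
                        unmatched_tokens=None):
    """routing_mask: (T, E) boolean activations accumulated over one training interval."""
    # Remove experts that received no tokens during the interval.
    used = routing_mask.any(dim=0)                      # (E,)
    expert_emb, thresholds = expert_emb[used], thresholds[used]

    # Add one expert when tokens fell through the gate, seeding its embedding
    # from those tokens (one possible initialization choice, not the paper's).
    if unmatched_tokens is not None and unmatched_tokens.numel() > 0:
        new_emb = unmatched_tokens.mean(dim=0, keepdim=True)   # (1, d)
        expert_emb = torch.cat([expert_emb, new_emb], dim=0)
        thresholds = torch.cat([thresholds, thresholds.new_zeros(1)])
    return expert_emb, thresholds
```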
## 💡 Model Details
- DynMoE-StableLM is a MoE model with dynamic top-k gating, fine-tuned from the LanguageBind/MoE-LLaVA-StableLM-Stage2 checkpoint (a snippet for fetching the weights follows this list).
- Our DynMoE-StableLM-1.6B has 2.9B parameters in total, but only 1.8B are activated (average top-k = 1.25)!
- With the DynMoE tuning stage, we can complete training on 8 A100 GPUs within 40 hours.
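To fetch the checkpoint locally you can use `huggingface_hub`; the repo id below is assumed to match this model card, so adjust it if it differs.

```python
# Download the checkpoint files; the repo id is an assumption based on this model card.
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="LINs-lab/DynMoE-StableLM-1.6B")
print(local_path)  # local directory containing the downloaded weights
```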
## Acknowledgement
We are grateful for the following awesome projects:
## License
This project is released under the MIT license as found in the LICENSE file.
## Citation
    @misc{guo2024dynamic,
          title={Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models},
          author={Yongxin Guo and Zhenglin Cheng and Xiaoying Tang and Tao Lin},
          year={2024},
          eprint={2405.14297},
          archivePrefix={arXiv},
          primaryClass={cs.LG}
    }