Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models

If our project helps you, please give us a star ⭐ on GitHub and cite our paper!

📰 News

  • [2024.05.31] 🔥 Our code is released!
  • [2024.05.25] 🔥 Our checkpoints are available now!
  • [2024.05.23] 🔥 Our paper is released!

😎 What's Interesting?

Dynamic Mixture of Experts (DynMoE) incorporates (1) Top-Any gating, a novel gating method that enables each token to automatically determine the number of experts to activate, and (2) an adaptive training process that automatically adjusts the number of experts during training. Illustrative sketches of both ideas are given below.

Top-Any Gating
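To make the idea concrete, here is a minimal, illustrative PyTorch sketch of a top-any gate: each token gets a per-expert score and activates every expert whose score clears a threshold, so the number of active experts varies per token. The class name, sigmoid scoring, threshold value, and fallback rule are assumptions for illustration, not the exact formulation from the paper.

```python
import torch
import torch.nn as nn

class TopAnyGate(nn.Module):
    """Illustrative top-any gate: a token activates every expert whose score
    clears a threshold, so the number of active experts varies per token."""

    def __init__(self, hidden_size: int, num_experts: int, threshold: float = 0.5):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, num_experts)
        self.threshold = threshold

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, hidden_size)
        scores = torch.sigmoid(self.scorer(x))        # (num_tokens, num_experts)
        mask = (scores > self.threshold).float()      # per-token expert selection
        # If no expert clears the threshold, fall back to the single best expert.
        best = torch.zeros_like(mask).scatter_(1, scores.argmax(dim=1, keepdim=True), 1.0)
        mask = torch.where(mask.sum(dim=1, keepdim=True) > 0, mask, best)
        weights = scores * mask                       # gating weights for the active experts
        return weights, mask.sum(dim=1)               # weights and per-token expert counts
```

For DynMoE-StableLM-1.6B, the resulting average number of activated experts per token is about 1.25 (see Model Details below).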

Adaptive Training Process
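The adaptive process can be sketched in the same spirit: during training, grow the expert pool when many tokens want every existing expert, and prune experts that no token activates. The counters, threshold, and function below are illustrative assumptions, not the exact rule from the paper.

```python
import torch

def adjust_num_experts(activation_counts: torch.Tensor,
                       tokens_activating_all: int,
                       total_tokens: int,
                       grow_ratio: float = 0.01):
    """Illustrative adaptive adjustment of the expert count during training.

    activation_counts: how many tokens activated each expert over the last interval.
    tokens_activating_all: number of tokens that activated *all* current experts.
    """
    # Add an expert if a noticeable fraction of tokens wanted every existing expert.
    add_expert = (tokens_activating_all / max(total_tokens, 1)) > grow_ratio
    # Keep only experts that at least one token activated; the rest are pruned.
    keep_mask = activation_counts > 0
    new_num_experts = int(keep_mask.sum()) + int(add_expert)
    return new_num_experts, keep_mask
```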

💡 Model Details

  • 🤔 DynMoE-StableLM is a MoE model with dynamic top-k gating, fine-tuned from LanguageBind/MoE-LLaVA-StableLM-Stage2 (a download sketch follows this list).
  • 🚀 Our DynMoE-StableLM-1.6B has 2.9B parameters in total, but only 1.8B are activated (average top-k = 1.25)!
  • ⌛ With the DynMoE tuning stage, training completes on 8 A100 GPUs within 40 hours.
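The weights can be fetched locally with the Hugging Face Hub client. This is only a download sketch using the repo id from this card; loading and inference should follow the released DynMoE code (see News above).

```python
from huggingface_hub import snapshot_download

# Download the DynMoE-StableLM-1.6B checkpoint; loading and inference
# should follow the released DynMoE codebase.
local_dir = snapshot_download(repo_id="LINs-lab/DynMoE-StableLM-1.6B")
print(f"Checkpoint downloaded to: {local_dir}")
```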

πŸ‘ Acknowledgement

We are grateful for the following awesome projects:

🔒 License

This project is released under the Apache-2.0 license as found in the LICENSE file.

✏️ Citation

@misc{guo2024dynamic,
      title={Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models}, 
      author={Yongxin Guo and Zhenglin Cheng and Xiaoying Tang and Tao Lin},
      year={2024},
      eprint={2405.14297},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}