update README
README.md
CHANGED
@@ -20,7 +20,7 @@ license: apache-2.0
- 2/7/2024 - [Serp-ai](https://github.com/serp-ai/Parameter-Efficient-MoE) adds [unsloth](https://github.com/serp-ai/unsloth) support for faster and more memory-efficient training of our Parameter-Efficient Sparsity Crafting and releases new [sparsetral](https://huggingface.co/serpdotai/sparsetral-16x7B-v2) models based on Mistral-7B.
- 1/10/2024 - Camelidae models are now available on 🤗 [HuggingFace](https://huggingface.co/hywu).
- 1/4/2024 - We released the paper, [Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks](https://arxiv.org/abs/2401.02731).
- - 12/22/2023 - We released the training repo that crafts the dense model with LLaMA architecture into the MoE model.
+ - 12/22/2023 - We released the training [repo](https://github.com/wuhy68/Parameter-Efficient-MoE) that crafts the dense model with LLaMA architecture into the MoE model.
## Introduction
Camelidae and Qwen2idae models are trained utilizing Parameter-Efficient Sparsity Crafting techniques
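For reference, here is a minimal sketch of loading one of the Camelidae checkpoints announced above from the 🤗 Hub with the `transformers` library. The exact model id (`hywu/Camelidae-8x7B`) and the generation settings are assumptions, not taken from this commit; `trust_remote_code=True` is used on the assumption that the crafted MoE architecture ships as custom modeling code on the Hub.

```python
# Minimal sketch (assumed model id "hywu/Camelidae-8x7B"; other sizes may exist).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hywu/Camelidae-8x7B"  # hypothetical choice of checkpoint

# trust_remote_code=True is assumed to be required for the custom MoE layers.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place weights across available GPUs/CPU (needs accelerate)
    trust_remote_code=True,
).eval()

prompt = "Explain Parameter-Efficient Sparsity Crafting in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```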