JustinLin610 committed
Commit: 5044110
Parent(s): 97052d9
Update README.md
README.md
CHANGED
@@ -23,7 +23,7 @@ For more details, please refer to our [blog post](https://qwenlm.github.io/blog/
Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters at runtime. While achieving performance comparable to `Qwen1.5-7B`, it requires only 25% of the training resources, and we observed that its inference speed is 1.74 times that of `Qwen1.5-7B`.
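To make the parameter figures above concrete, the sketch below loads the checkpoint with Hugging Face `transformers` and counts its parameters; the repo id `Qwen/Qwen1.5-MoE-A2.7B` and the `device_map="auto"` setting (which needs `accelerate`) are assumptions for illustration, not taken from this README.

```python
from transformers import AutoModelForCausalLM

# Minimal sketch: load the upcycled MoE checkpoint and count its parameters.
# Requires a transformers build with Qwen1.5-MoE support (see Requirements below).
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-MoE-A2.7B",  # assumed Hub repo id
    torch_dtype="auto",
    device_map="auto",         # requires the accelerate package
)

total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e9:.1f}B")  # roughly 14.3B in total, per the text above
```

Only a few experts are activated for each token, which is why the activated-parameter count (about 2.7B) is much smaller than the total.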
## Training details
We pretrained the models on a large amount of data, and we post-trained them with both supervised finetuning and direct preference optimization.
## Requirements
The code of Qwen1.5-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`.
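A quick way to check whether the installed build actually supports this model is sketched below; the repo id `Qwen/Qwen1.5-MoE-A2.7B` is an assumption for illustration, not taken from this section.

```python
# Illustrative check, not the README's own snippet: after installing
# transformers from source as advised above, confirm that the install can
# resolve this model's configuration.
import transformers
from transformers import AutoConfig

print(transformers.__version__)  # should report the source/dev build just installed

config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")  # assumed Hub repo id
print(config.model_type)  # resolves only if the build includes Qwen1.5-MoE support
```

Otherwise, with a `transformers` release that predates Qwen1.5-MoE support, you might encounter the following error: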