Update README.md
README.md CHANGED
````diff
@@ -4,10 +4,10 @@ language:
 - en
 ---
 # **Introduction**
-MoMo-70B is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base-model.
+MoMo-72B is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base-model.
 Note that we did not exploit any form of weight merge.
 For leaderboard submission, the trained weight is realigned for compatibility with llama.
-MoMo-70B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD's MI250 GPU.
+MoMo-72B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD's MI250 GPU.
 
 
 ## Details
````
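The LoRA fine-tuning mentioned in the updated introduction is not spelled out anywhere in the card. As a rough illustration only, here is a minimal sketch of what SFT with LoRA on a Qwen base model could look like using Hugging Face's `peft` library; the base checkpoint name, rank, alpha, dropout, and target-module names below are assumptions for illustration, not Moreh's published configuration.

```python
# Hypothetical LoRA SFT setup -- hyperparameters are illustrative guesses,
# not the configuration actually used for MoMo-72B.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-72B",            # assumed base checkpoint
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,     # Qwen-72B ships custom modeling code
)

lora_config = LoraConfig(
    r=16,                       # rank of the low-rank update matrices
    lora_alpha=32,              # scaling applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["c_attn", "c_proj"],  # Qwen attention projections (assumed)
    task_type="CAUSAL_LM",
)

# Freeze the base weights and attach trainable low-rank adapters.
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter parameters require grad
```

The appeal of this setup is that the 72B base weights stay frozen and only the small adapter matrices receive gradients, which is what makes fine-tuning at this scale tractable.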
````diff
@@ -35,8 +35,8 @@ MoMo-70B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-70B-LoRA-V1.4")
+tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-LoRA-V1.4")
 model = AutoModelForCausalLM.from_pretrained(
-    "moreh/MoMo-70B-LoRA-V1.4"
+    "moreh/MoMo-72B-LoRA-V1.4"
 )
 ```
````
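The snippet in the diff only loads the tokenizer and model. As a usage sketch continuing from it: the prompt and generation settings below are arbitrary, and the reduced-precision dtype and `device_map` are common additions for a model of this size rather than something the card prescribes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-LoRA-V1.4")
model = AutoModelForCausalLM.from_pretrained(
    "moreh/MoMo-72B-LoRA-V1.4",
    torch_dtype=torch.float16,  # assumption: half precision to fit in GPU memory
    device_map="auto",          # assumption: shard the 72B weights across GPUs
)

prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Short greedy generation; sampling parameters are left at their defaults.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```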