moreh-sungmin committed
Commit: 7eabe68 (1 parent: a7e19b1)

Update README.md
README.md CHANGED
@@ -7,7 +7,7 @@ language:
 MoMo-70B is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base-model.
 Note that we did not exploit any form of weight merge.
 For leaderboard submission, the trained weight is realigned for compatibility with llama.
-MoMo-70B is trained using Moreh's MoAI platform, which simplifies the training of large-scale models, and AMD's MI250 GPU.
+MoMo-70B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD's MI250 GPU.


 ## Details
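For context, the README text above describes LoRA-based SFT on QWEN-72B. Below is a minimal sketch of attaching LoRA adapters to that base model with Hugging Face's peft library; the rank, alpha, and target_modules values are illustrative assumptions, not Moreh's actual training configuration on the MoAI platform.

```python
# Minimal LoRA-SFT setup sketch (illustrative; hyperparameters are assumptions,
# not Moreh's actual configuration).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen-72B"  # base model named in the README
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

lora_config = LoraConfig(
    r=16,                       # adapter rank (assumed)
    lora_alpha=32,              # scaling factor (assumed)
    target_modules=["c_attn"],  # fused attention projection in Qwen-style blocks (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters train; the 72B base stays frozen
```

Training only the low-rank adapters keeps the frozen 72B base weights untouched, which is consistent with the README's note that no weight merge was exploited.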