chrischain committed
Commit 8e462ba · Parent(s): bd395eb
Update README.md

fully custom dataset fine-tune
README.md CHANGED

@@ -1,7 +1,5 @@
 ---
 license: cc-by-2.0
-datasets:
-- teknium/OpenHermes-2.5
 language:
 - en
 tags:
@@ -11,10 +9,10 @@ tags:
 - art
 ---

-Behold, one of the first fine-tunes of Mistral's 7B 0.2 Base model. SatoshiN is trained on 4 epochs of a diverse custom data-set, combined with a
+Behold, one of the first fine-tunes of Mistral's 7B 0.2 base model. SatoshiN was trained for 4 epochs at a 2e-4 learning rate (cosine schedule) on a diverse custom dataset, followed by a polishing round on that same dataset at a 1e-4 linear learning rate.
 It's a nice assistant that isn't afraid to ask questions and gather additional information before responding to user prompts.

-I have found success using
+I have found varying success using instruction formats such as Alpaca, ChatML, and Mistral. The custom training was performed on raw text with the idea that the model might acquire better generalization skills.

 Total model size has increased from 7.24B to 7.35B after merging a 0.5 GB LoRA via PEFT.
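The two-phase schedule described in the card maps onto standard Hugging Face training arguments. Below is a minimal sketch of that setup; the base-model ID, LoRA hyperparameters, and the epoch count of the polishing round are illustrative assumptions, not the author's actual configuration.

```python
# Hypothetical sketch of the two training phases described in the card.
# The base-model ID and LoRA hyperparameters are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.2")  # placeholder ID
model = get_peft_model(base, LoraConfig(r=64, lora_alpha=16, task_type="CAUSAL_LM"))

# Phase 1: 4 epochs at a 2e-4 peak learning rate with cosine decay.
phase1 = TrainingArguments(
    output_dir="satoshin-phase1",
    num_train_epochs=4,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
)

# Phase 2 ("polishing"): the same dataset at 1e-4 with linear decay.
# The card does not state how many epochs this round ran for; 1 is assumed.
phase2 = TrainingArguments(
    output_dir="satoshin-phase2",
    num_train_epochs=1,
    learning_rate=1e-4,
    lr_scheduler_type="linear",
)
```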
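For the instruction formats the card mentions, the conventional templates look like the sketch below. Since the fine-tune itself was on raw text, which template this model responds to best is something to verify empirically.

```python
# Common prompt templates for the three instruction formats named in the card.
# Delimiters follow each format's usual convention; none is confirmed as
# the format this model was trained on.
question = "What is a LoRA adapter?"

alpaca = f"### Instruction:\n{question}\n\n### Response:\n"

chatml = (
    "<|im_start|>user\n"
    f"{question}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

mistral = f"[INST] {question} [/INST]"
```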
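The size figures at the end refer to folding a LoRA adapter's low-rank matrices back into the base weights. A minimal sketch of that merge with PEFT, assuming a hypothetical adapter path and a placeholder base-model ID, looks like this:

```python
# Minimal sketch of merging a LoRA adapter into a base model with PEFT.
# Repository IDs and paths are placeholders, not confirmed locations.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.2")  # placeholder ID
merged = PeftModel.from_pretrained(base, "path/to/satoshin-lora")  # hypothetical adapter path
merged = merged.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("satoshin-merged")
```

`merge_and_unload()` returns a plain transformers model with the adapter weights baked in, so the merged checkpoint can then be loaded without PEFT installed.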