---
datasets:
- ewof/koishi-instruct-metharme
---

## Training

[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training on a 4x NVIDIA A100 GPU cluster.

The A100 GPU cluster was graciously provided by [lloorree](https://huggingface.co/lloorree).

Trained on koishi commit 6e675d1 for one epoch.

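For a quick look at the training data, the koishi dataset referenced above can be pulled straight from the Hub with the `datasets` library. This is only an inspection sketch, not part of the training pipeline; the split and column names are whatever the dataset repo actually ships with.

```python
from datasets import load_dataset

# Dataset id taken from this card's metadata; splits/columns are whatever the repo provides.
koishi = load_dataset("ewof/koishi-instruct-metharme")

print(koishi)  # shows the available splits, row counts and column names
```
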
## Base Model

Rank 16 QLoRA tune of mistralai/Mixtral-8x7B-v0.1 (all modules targeted; the adapter is merged into the released weights).

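The card does not ship the exact training configuration, but a rank 16 QLoRA over all modules corresponds roughly to a PEFT setup like the sketch below. The rank and base model come from this card; everything else (`lora_alpha`, dropout, quantization details) is an illustrative assumption, not the values actually used.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA-style 4-bit load of the base model (quantization settings are assumptions).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# Rank 16 adapter over all linear modules ("all modules" per the card);
# alpha and dropout here are placeholders, not the training values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

Since the published checkpoint is merged (the adapter folded back into the base weights, as with PEFT's `merge_and_unload()`), it loads like a regular Mixtral model at inference time; no adapter files are needed.
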
## Prompting

The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.

The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained to form a conversation history.
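
A minimal generation sketch using this format is shown below, assuming the merged checkpoint is loaded with plain `transformers`. The model id is a placeholder for this repo's Hub id, the system/user text is illustrative, and the card does not specify whether newlines belong between turns, so the role tokens are simply concatenated here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: point this at the merged checkpoint published alongside this card.
model_id = "path/to/this-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chain <|system|>, <|user|> and <|model|> turns, ending with <|model|> so the
# model generates the next response in the conversation.
prompt = (
    "<|system|>You are a concise, helpful assistant."
    "<|user|>Summarise what a QLoRA fine-tune is in one sentence."
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens and print only the newly generated reply.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```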