# Llama 3 70B Instruct Storywriter
Llama 3 70B Instruct, further finetuned on a dataset consisting of books in the fiction genre.

This was just an experiment, but it turned out well enough that I'm sharing it. The finetuning has caused a significant shift in the model's writing style and seems to have made it more creative. There may be a slight decrease in overall intelligence.

Because this was trained on Instruct, you can use the normal Llama 3 Instruct chat formatting. It may also work well in raw completion mode.
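
For reference, here is a minimal sketch of building a prompt with the stock Llama 3 Instruct chat template via `transformers`. The repo id is a placeholder, not necessarily where this model is published:

```python
from transformers import AutoTokenizer

# Placeholder repo id; substitute the actual location of this model.
tokenizer = AutoTokenizer.from_pretrained("your-org/llama-3-70b-instruct-storywriter")

messages = [
    {"role": "user", "content": "Write the opening scene of a slow-burn mystery novel."},
]

# Renders the standard Llama 3 Instruct format (<|start_header_id|>user<|end_header_id|>,
# <|eot_id|>, etc.) and appends the assistant header so the model continues from there.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```

For raw completion mode, skip the template and just feed plain prose for the model to continue.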
## Training details
Trained on four RTX 4090s using [qlora-pipe](https://github.com/tdrussell/qlora-pipe).
The dataset consists of about 800 books in the fiction genre, totaling 570 MB of raw text.
Rank 64 QLoRA trained at 8192 sequence length.
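
The exact qlora-pipe configuration isn't reproduced here, but the setup corresponds roughly to the following PEFT/bitsandbytes sketch. The target modules, alpha, and dropout are illustrative assumptions, not values from the actual run:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: freeze the base model in 4-bit NF4 and train low-rank adapters on top.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# Rank 64 as stated above; the module list and alpha are guesses for illustration.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training examples would then be packed or truncated to the 8192-token
# sequence length at the data-loading stage.
```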
## Why no 8B?
I tried multiple times to train this on Llama 3 8B Instruct, using a variety of hyperparameters. It never worked well. The model took a huge hit to intelligence every time, to the point of being unusable. 70B fared much better. I don't know why; maybe 8B is simply too small for this type of technique and loses too much of the instruction-tuned smarts.