---
license: llama2
license_name: llama-2
license_link: LICENSE
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- pytorch
- llama
- llama-2
- finetuned
- not-for-all-audiences
---
# LLaMA2-13B-Erebus

## Model description
This is the third generation of the original Shinen made by Mr. Seeker. The full dataset consists of 8 different sources, all surrounding the "Adult" theme. The name "Erebus" comes from Greek mythology and means "darkness"; this is in line with Shin'en, or "deep abyss". For inquiries, please contact the KoboldAI community. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**

## Training procedure
LLaMA2-13B-Erebus was trained on 8x A6000 Ada GPUs for a single epoch. No special frameworks were used.

## Training data
The data can be divided into 8 different datasets. The dataset uses `[Genre: <comma-separated list of genres>]` for tagging.

The full dataset is 2.3B tokens in size, and contains material that is "copyrighted".

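The training data tags each passage with a `[Genre: <comma-separated list of genres>]` line, so prompts in that format tend to steer generation. A minimal sketch of composing such a prompt (the helper name and the genre values below are illustrative, not part of the dataset):

```python
def make_genre_tag(genres):
    """Format a list of genre names into the dataset's [Genre: ...] tag line."""
    return "[Genre: " + ", ".join(genres) + "]"

# Illustrative prompt: a genre tag line followed by the opening of a story.
prompt = make_genre_tag(["romance", "drama"]) + "\nThe story begins"
print(prompt)
```

In practice the tag line would be placed at the very start of the context, mirroring how the training data was formatted.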
## Limitations and biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). **Warning: This model has a very strong NSFW bias!**