---
license: apache-2.0
datasets:
- PocketDoc/Retro-YahooAnswers
language:
- en
pipeline_tag: question-answering
base_model: mistralai/Mistral-7B-v0.1
---
### Description
Do you miss the vibes of the early 2000s? Yearn for the nostalgia of internet religious arguments? Then this model is for you!

This was trained on a scrape of Yahoo! Answers from 2007; the data received no filtering beyond basic sanity checks.

This is not intended for serious use, but I think it's charming in its own way.

### Prompt format: 
Pygmalion / Metharme

Generation should begin on the same line, directly after "<|model|>", with no space in between. The following are all valid formats and can be extended to as many rounds as desired.
```
<|system|>system message here<|user|>user message here<|model|>
```
```
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
```
```
<|system|>system message here<|model|>
```
```
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
```
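
As a loose illustration, here is a minimal sketch of running the model with this format via `transformers`. The repo id, system message, and generation settings are placeholders for the example, not values taken from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for illustration; substitute the actual model path.
model_id = "PocketDoc/yahoo-answers-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generation begins immediately after <|model|>, with no trailing space.
prompt = (
    "<|system|>You are a helpful Yahoo! Answers user."
    "<|user|>how do i get my cat to stop staring at me??"
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```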

### Some quick and dirty training details:
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Sequence length: 2048
- Training time: 32 hours
- Hardware: 1x RTX 4080
- Training type: QLoRA
- PEFT R/A (LoRA rank / alpha): 32 / 32
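
For reference, a hedged sketch of what a matching QLoRA configuration might look like with `peft` and `bitsandbytes`. Only the rank and alpha come from the list above; the quantization settings and dropout are assumptions, not values stated on this card:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# QLoRA: load the base model's weights in 4-bit NF4 (assumed settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=32,               # LoRA rank, per the card (PEFT R)
    lora_alpha=32,      # LoRA alpha, per the card (PEFT A)
    lora_dropout=0.05,  # assumption: not stated on the card
    task_type="CAUSAL_LM",
)
```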