This model was converted to GGUF format from [`ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2`](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) for more details on the model.

---

Model details:

Llama-3.1-8B-ArliAI-RPMax-v1.2
=====================================

RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset share repeated characters or situations, which keeps the model from latching on to a single personality and leaves it capable of understanding and responding appropriately to any character or situation.

Early user tests suggest that these models do not feel like other RP models, having a different style and generally not feeling in-bred.

You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/

We also have a models ranking page at https://www.arliai.com/models-ranking

Ask questions in our new Discord server! https://discord.com/invite/t75KbPgwhk

## Model Description

Llama-3.1-8B-ArliAI-RPMax-v1.2 is a variant of the Meta-Llama-3.1-8B model.

The v1.2 update is a retrain on an incrementally improved RPMax dataset, with further deduplication and better filtering to cut out irrelevant description text carried over from card-sharing sites.

## Specs

- Context Length: 128K
- Parameters: 8B

## Training Details

- Sequence Length: 8192
- Training Duration: approximately 1 day on 2x 3090Ti
- Epochs: 1 epoch, to minimize repetition sickness
- LoRA: rank 64, alpha 128, resulting in ~2% trainable weights
- Learning Rate: 0.00001
- Gradient Accumulation: a very low 32, for better learning
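
The "~2% trainable weights" figure can be sanity-checked with a short sketch. This assumes the LoRA adapters target all attention and MLP projections, and uses the published Llama-3.1-8B architecture dimensions (32 layers, hidden size 4096, GQA KV projection size 1024, FFN size 14336); the 8.03B total is approximate.

```python
# Estimate LoRA trainable parameters: each adapted d_out x d_in weight
# gains r * (d_in + d_out) parameters (the two low-rank factor matrices).
RANK = 64
LAYERS, HIDDEN, KV_DIM, FFN = 32, 4096, 1024, 14336

# (d_in, d_out) of each adapted projection in one decoder layer
# (assumption: LoRA targets all attention and MLP linear layers).
projections = [
    (HIDDEN, HIDDEN),  # q_proj
    (HIDDEN, KV_DIM),  # k_proj
    (HIDDEN, KV_DIM),  # v_proj
    (HIDDEN, HIDDEN),  # o_proj
    (HIDDEN, FFN),     # gate_proj
    (HIDDEN, FFN),     # up_proj
    (FFN, HIDDEN),     # down_proj
]

lora_params = LAYERS * sum(RANK * (d_in + d_out) for d_in, d_out in projections)
total_params = 8_030_000_000  # ~8.03B for Llama-3.1-8B
print(f"{lora_params:,} trainable ({lora_params / total_params:.1%})")
# → 167,772,160 trainable (2.1%)
```

which lands right on the ~2% quoted above.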

## Quantization

The model is available in quantized formats:

We recommend using full weights or GPTQ.

- FP16: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
- GGUF: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2-GGUF

## Suggested Prompt Format

Llama 3 Instruct Format

Example:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are [character]. You have a personality of [personality description]. [Describe scenario]<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

---
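
For programmatic use, a prompt in this format can be assembled with a small helper. This is a sketch: the exact newline placement follows the standard Llama 3 Instruct template and is an assumption, since the example above does not show whitespace explicitly.

```python
def build_prompt(system: str, turns: list[str]) -> str:
    """Assemble a Llama 3 Instruct prompt. `turns` alternates
    user, assistant, user, ... and should end on a user message."""
    prompt = (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
    )
    roles = ("user", "assistant")
    for i, message in enumerate(turns):
        prompt += (
            f"<|start_header_id|>{roles[i % 2]}<|end_header_id|>\n\n"
            f"{message}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate its reply.
    return prompt + "<|start_header_id|>assistant<|end_header_id|>\n\n"

prompt = build_prompt(
    "You are [character]. You have a personality of [personality description].",
    ["{{ user_message_1 }}"],
)
```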

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
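A typical invocation looks like the following. The quant filename passed to `--hf-file` is a hypothetical example; adjust it to a file that actually exists in the GGUF repo.

```shell
# Install llama.cpp (assumes Homebrew is available)
brew install llama.cpp

# One-shot generation with the CLI, pulling the model from the Hub
llama-cli --hf-repo ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2-GGUF \
  --hf-file llama-3.1-8b-arliai-rpmax-v1.2-q4_k_m.gguf \
  -p "You are a helpful assistant."

# Or serve an OpenAI-compatible HTTP endpoint
llama-server --hf-repo ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2-GGUF \
  --hf-file llama-3.1-8b-arliai-rpmax-v1.2-q4_k_m.gguf \
  -c 2048
```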