Masterjp123 committed
Commit 4e83611 · Parent(s): 1ee77f5
Update README.md
README.md CHANGED
@@ -19,17 +19,20 @@ license: llama2
 language:
 - en
 ---
-#
+# Model
+This is the GPTQ 4bit quantized version of SnowyRP
 
-
+[FP16](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B)
+
+[GPTQ](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ)
+
+Any Future Quantizations I am made aware of will be added.
 
 ## Merge Details
 just used highly ranked modles to try and get a better result, Also I made sure that Model incest would not be a BIG problem by merging models that are pretty pure.
 
 These models CAN and WILL produce X rated or harmful content, due to being heavily uncensored in a attempt to not limit it
 
-
-
 ### Merge Method
 
 This model was merged using the [ties](https://arxiv.org/abs/2306.01708) merge method using [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as a base.
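
The updated README points to a GPTQ 4-bit build of SnowyRP. As a rough illustration, the sketch below loads that quantized checkpoint through the Transformers GPTQ integration; it assumes `optimum` plus a GPTQ kernel package (e.g. `auto-gptq`) are installed and a GPU is available, and the prompt is only a placeholder since this commit does not state a prompt template.

```python
# Minimal sketch: load the GPTQ 4-bit SnowyRP repo linked in the README.
# Assumes transformers with GPTQ support (optimum + auto-gptq) and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt; the commit does not specify a prompt format.
prompt = "Write a short scene set in a snowy mountain village."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```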
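The Merge Method section says the model was merged with TIES (arXiv:2306.01708) on top of TheBloke/Llama-2-13B-fp16. For readers unfamiliar with the method, below is a minimal per-tensor sketch of the TIES idea (trim, elect sign, disjoint merge); the `ties_merge` helper and the `density` and `weight` values are illustrative assumptions, not the actual tooling or settings used for this merge.

```python
# Illustrative per-tensor TIES merge (trim / elect sign / disjoint mean).
# Not the actual merge pipeline used for SnowyRP; density/weight are example values.
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               density: float = 0.5, weight: float = 1.0) -> torch.Tensor:
    deltas = [ft - base for ft in finetuned]          # task vectors
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))          # keep top-k entries by magnitude
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed)
    sign = torch.sign(stacked.sum(dim=0))             # elect a sign per parameter
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + weight * merged                     # apply merged task vector to the base
```

Merges like this are usually produced with a dedicated merging tool driven by a config file rather than hand-rolled code such as the sketch above.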