---
base_model:
- HuggingFaceH4/mistral-7b-grok
- OpenPipe/mistral-ft-optimized-1218
library_name: transformers
tags:
- mergekit
- merge
- 4-bit
- AWQ
- text-generation
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# hibana2077/Pioneer-2x7B AWQ
- Model creator: [hibana2077](https://huggingface.co/hibana2077)
- Original model: [Pioneer-2x7B](https://huggingface.co/hibana2077/Pioneer-2x7B)

## Model Summary

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged using the SLERP merge method.
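A SLERP merge like this is driven by a mergekit YAML config. The exact config used for Pioneer-2x7B is not published in this card, so the fragment below is only a sketch: the choice of `base_model`, the `t` interpolation schedule, and the dtype are placeholder assumptions, while the two source models come from the card's metadata.

```yaml
# Sketch of a mergekit SLERP config (parameters are illustrative, not the
# values actually used for Pioneer-2x7B).
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1218
        layer_range: [0, 32]
      - model: HuggingFaceH4/mistral-7b-grok
        layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218  # assumption
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5  # default interpolation factor for remaining tensors
dtype: bfloat16
```

Running `mergekit-yaml config.yml ./output-dir` with a config of this shape produces the merged checkpoint that is then AWQ-quantized.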
The following models were included in the merge:

* [HuggingFaceH4/mistral-7b-grok](https://huggingface.co/HuggingFaceH4/mistral-7b-grok)
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
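## How to use

The card does not include usage instructions, so the snippet below is a minimal sketch for loading 4-bit AWQ weights with the `autoawq` library alongside `transformers`. The repo id is a placeholder assumption (this card does not state where the quantized weights are hosted), and a CUDA GPU is assumed since AWQ kernels require one.

```python
# Sketch: load AWQ-quantized weights with autoawq + transformers.
# The repo id below is hypothetical -- substitute the actual quantized repo.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "Pioneer-2x7B-AWQ"  # placeholder, not a confirmed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)

prompt = "Write a haiku about model merging."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the frontmatter sets `inference: false`, the hosted inference widget is disabled; local loading as above (or via a server such as vLLM with AWQ support) is the expected path.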