InferenceIllusionist committed on
Commit
3d36520
1 Parent(s): e53f681

Update README.md

Files changed (1)
  1. README.md +74 -3
README.md CHANGED
@@ -1,3 +1,74 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ base_model_relation: quantized
+ quantized_by: Quant-Cartel
+ base_model: InferenceIllusionist/SorcererLM-22B
+ pipeline_tag: text-generation
+ tags:
+ - iMat
+ - GGUF
+ ---
+ ```
+   e88 88e                                  d8
+  d888 888b  8888 8888  ,"Y88b  888 8e     d88
+ C8888 8888D 8888 8888  "8" 888 888 88b   d88888
+  Y888 888P  Y888 888P  ,ee 888 888 888    888
+   "88 88"    "88 88"   "88 888 888 888    888
+                                       b
+                                       8b,
+
+     e88'Y88                   d8            888
+    d888  'Y  ,"Y88b 888,8,   d88    ,e e,   888
+   C8888      "8" 888 888 "  d88888 d88 88b  888
+    Y888  ,d  ,ee 888 888     888   888   ,  888
+     "88,d88  "88 888 888     888    "YeeP"  888
+
+           PROUDLY PRESENTS
+ ```
+ # SorcererLM-22B-iMat-GGUF
+
+ Quantized with love from fp32.
+
+ * Importance Matrix calculated using [groups_merged.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
+ * 107 chunks
+ * n_ctx=512
+ * Importance Matrix was computed from fp32-precision model weights; the fp32 .imatrix file will be added to this repo (a reproduction sketch follows after this list)
+
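+ A minimal sketch of how such a quant is typically produced with llama.cpp's CLI tools; the binary names and file paths here are assumptions (older builds ship `imatrix`/`quantize` instead of `llama-imatrix`/`llama-quantize`), and the output quant type is only an example:
+
+ ```
+ # Hedged sketch of the llama.cpp imatrix + quantize workflow; paths are illustrative.
+ import subprocess
+
+ FP32_GGUF = "SorcererLM-22B-fp32.gguf"  # assumed local fp32 GGUF conversion
+ CALIB = "groups_merged.txt"             # calibration text linked above
+ IMATRIX = "fp32.imatrix"
+
+ # Accumulate the importance matrix: 107 chunks at a context of 512 tokens.
+ subprocess.run([
+     "llama-imatrix", "-m", FP32_GGUF, "-f", CALIB,
+     "-o", IMATRIX, "-c", "512", "--chunks", "107",
+ ], check=True)
+
+ # Steer quantization with the matrix (Q4_K_M chosen only as an example).
+ subprocess.run([
+     "llama-quantize", "--imatrix", IMATRIX,
+     FP32_GGUF, "SorcererLM-22B-iMat-Q4_K_M.gguf", "Q4_K_M",
+ ], check=True)
+ ```
+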
+ Original model README [here](https://huggingface.co/InferenceIllusionist/SorcererLM-22B) and below:
+
+ ## SorcererLM-22B
+
+ <img src="https://files.catbox.moe/ya4zca.png" width="500"/>
+ <i>Because good things always come in threes!</i>
+
+ **SorcererLM-22B** is here, rounding out the trinity of Mistral-Small-Instruct tunes from the [Quant Cartel](https://huggingface.co/Quant-Cartel).
+
+ ## Prompt Format
+
+ * Basic: Mistral V2 & V3 Context / Instruct Templates (now available on the SillyTavern Staging branch); see the sketch after this list for the raw layout
+ * Advanced: TBA
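+
+ A minimal, hedged sketch of the raw layout those templates implement; exact whitespace and BOS/EOS handling differ between the V2 and V3 tokenizers, so prefer the frontend's built-in template over hand-rolling strings:
+
+ ```
+ # Approximate Mistral-style [INST] prompt assembly; spacing is an assumption.
+ def build_prompt(history: list[tuple[str, str]], user_msg: str) -> str:
+     """history holds completed (user, assistant) exchanges."""
+     prompt = "<s>"
+     for user, assistant in history:
+         prompt += f"[INST] {user} [/INST] {assistant}</s>"
+     prompt += f"[INST] {user_msg} [/INST]"
+     return prompt
+
+ print(build_prompt([("Hi!", "Well met, traveler.")], "Tell me a story."))
+ ```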
+
+ ## Quantized Versions
+
+ * Coming soon
+
+ ## Training
+
+ For starters, this is a LoRA tune on top of Mistral-Small-Instruct-2409 and **not** a pruned version of [SorcererLM-8x22b](https://huggingface.co/rAIfle/SorcererLM-8x22b-bf16).
+
+ Trained with a whole lot of love on 1 epoch of cleaned and deduped c2 logs. This model is 100% 'born-local', the result of roughly 27 hours and a little bit of patience on a single RTX 4080 SUPER.
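+
+ Since the tune was made with unsloth on a single consumer GPU (see Acknowledgments), below is a hedged sketch of what such a single-GPU LoRA setup generally looks like; every hyperparameter shown is an assumption for illustration, not the actual recipe:
+
+ ```
+ # Hypothetical single-GPU LoRA setup with unsloth; values are placeholders.
+ from unsloth import FastLanguageModel
+
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     "mistralai/Mistral-Small-Instruct-2409",
+     max_seq_length=4096,   # assumed
+     load_in_4bit=True,     # 4-bit base weights keep a 22B tune on one card
+ )
+ model = FastLanguageModel.get_peft_model(
+     model,
+     r=16,                  # illustrative LoRA rank
+     lora_alpha=32,         # illustrative alpha
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+ )
+ # ...then run one epoch over the cleaned/deduped c2 logs with an SFT trainer.
+ ```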
+
+ Because its hyperparameters and dataset intentionally mirror those used in the original Sorcerer 8x22b tune, this model is considered its 'lite' counterpart, aiming to deliver the same bespoke conversational experience at a smaller size and with reduced hardware requirements.
+
+ While all three share the same Mistral-Small-Instruct base, in contrast to its sisters [Mistral-Small-NovusKyver](https://huggingface.co/Envoid/Mistral-Small-NovusKyver) and [Acolyte-22B](https://huggingface.co/rAIfle/Acolyte-22B), this release did not SLERP-merge the resulting model with the original in a 50/50 ratio post-training. Instead, the LoRA's alpha was lowered when the adapter was merged into the full-precision weights in the final step.
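+
+ A hedged sketch of that final step with peft; the adapter path and scale factor are illustrative, not the values actually used:
+
+ ```
+ # Merge the LoRA into full-precision weights at a reduced scale ("dropping
+ # alpha") instead of SLERP-merging with the base afterwards.
+ import torch
+ from transformers import AutoModelForCausalLM
+ from peft import PeftModel
+ from peft.tuners.lora import LoraLayer
+
+ base = AutoModelForCausalLM.from_pretrained(
+     "mistralai/Mistral-Small-Instruct-2409", torch_dtype=torch.float32)
+ model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical
+
+ for module in model.modules():
+     if isinstance(module, LoraLayer):
+         for adapter in module.scaling:
+             module.scaling[adapter] *= 0.5  # illustrative factor
+
+ merged = model.merge_and_unload()  # folds scaling * B @ A into the base W
+ merged.save_pretrained("SorcererLM-22B")
+ ```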
+
+ ## Acknowledgments
+
+ * First and foremost, a huge thank you to my brilliant teammates [envoid](https://huggingface.co/envoid/) and [rAIfle](https://huggingface.co/rAIfle/). Special shout-out to rAIfle for critical last-minute advice that got this one across the finish line
+ * Props to unsloth as well for helping make this local tune possible
+ * And of course, none of this would matter without users like you. Thank you :)
+
+ ## Safety
+ ...