InferenceIllusionist committed on
Commit
2845bbe
1 Parent(s): b552931

Create README.md

---
license: cc-by-nc-4.0
base_model_relation: quantized
quantized_by: Quant-Cartel
base_model: rAIfle/Acolyte-22B
pipeline_tag: text-generation
tags:
- iMat
- GGUF
---
```
  e88 88e                               d8
 d888 888b  8888 8888  ,"Y88b 888 8e   d88
C8888 8888D 8888 8888 "8" 888 888 88b d88888
 Y888 888P  Y888 888P ,ee 888 888 888  888
  "88 88"    "88 88"  "88 888 888 888  888
      b
      8b,

    e88'Y88                  d8           888
   d888  'Y  ,"Y88b 888,8,  d88    ,e e,  888
  C8888     "8" 888 888 "  d88888 d88 88b 888
   Y888  ,d ,ee 888 888     888   888   , 888
    "88,d88 "88 888 888     888    "YeeP" 888

          PROUDLY PRESENTS
```
# Acolyte-22B-iMat-GGUF

Quantized with love from fp32.

Original model author: [rAIfle](https://huggingface.co/rAIfle/)

* Importance Matrix calculated using [groups_merged.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
* 105 chunks
* n_ctx=512
* Calculation uses fp32 precision model weights

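The settings above roughly correspond to an invocation of llama.cpp's `llama-imatrix` tool. This is an illustrative sketch, not the exact command used for this repo; the fp32 GGUF and output filenames are placeholder assumptions:

```shell
# Sketch: compute an importance matrix over the groups_merged.txt
# calibration data at a 512-token context, using fp32 model weights.
# Filenames are placeholders, not the actual artifacts from this repo.
./llama-imatrix \
    -m Acolyte-22B-f32.gguf \
    -f groups_merged.txt \
    -o Acolyte-22B.imatrix \
    -c 512
```

The resulting imatrix file is then supplied to `llama-quantize` via its `--imatrix` option when producing the quantized GGUFs.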
Original model README [here](https://huggingface.co/rAIfle/Acolyte-22B/) and below:

# Acolyte-22B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/3dcGMcrWK2-2vQh9QBt3o.png)

LoRA of a bunch of random datasets on top of Mistral-Small-Instruct-2409, then SLERPed onto base at 0.5. Decent enough for its size.
Check the [LoRA](https://huggingface.co/rAIfle/Acolyte-LORA) for dataset info.

Use `Mistral V2 & V3` template.
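For reference, the `Mistral V2 & V3` instruct format generally wraps each user turn as shown below. This is a sketch of the general shape; check your frontend's preset for the exact whitespace handling, which differs slightly between template versions:

```
<s>[INST] Your prompt here[/INST] Model response</s>
```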