---
license: cc-by-nc-4.0
base_model_relation: quantized
quantized_by: Quant-Cartel
base_model: rAIfle/Acolyte-22B
pipeline_tag: text-generation
tags:
- iMat
- GGUF
---
|
```
  e88 88e                                 d8
 d888 888b  8888 8888   ,"Y88b  888 8e   d88
C8888 8888D 8888 8888  "8" 888  888 88b d88888
 Y888 888P  Y888 888P  ,ee 888  888 888  888
  "88 88"    "88 88"   "88 888  888 888  888
      b
      8b,

     e88'Y88                   d8            888
    d888  'Y   ,"Y88b 888,8,  d88     ,e e,  888
    C8888     "8" 888 888 "  d88888  d88 88b 888
     Y888  ,d ,ee 888 888     888    888   , 888
      "88,d88 "88 888 888     888     "YeeP" 888

               PROUDLY PRESENTS
```
|
# Acolyte-22B-iMat-GGUF |
|
|
|
Quantized with love from fp32. |
|
|
|
Original model author: [rAIfle](https://huggingface.co/rAIfle/) |
|
|
|
* Importance Matrix calculated using [groups_merged.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) (a reproduction sketch follows this list)
  * 105 chunks
  * n_ctx=512
  * Calculation uses fp32 precision model weights
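
For anyone who wants to reproduce or adapt these quants, the general llama.cpp workflow looks roughly like the sketch below. It drives the stock `llama-imatrix` and `llama-quantize` tools from Python; the file names and the chosen quant type are placeholders, and flag spellings can vary between llama.cpp versions, so check `--help` on your build rather than treating this as the exact recipe used here.

```python
# Rough sketch of the usual llama.cpp imatrix + quantization flow.
# Paths, file names, and the IQ4_XS target are illustrative only.
import subprocess

FP32_GGUF = "Acolyte-22B-f32.gguf"   # full-precision GGUF conversion (placeholder)
CALIB_TXT = "groups_merged.txt"      # calibration text from the linked discussion
IMATRIX   = "Acolyte-22B.imatrix"    # importance matrix output (placeholder)

# 1) Compute the importance matrix: 105 chunks at a context size of 512.
subprocess.run([
    "llama-imatrix",
    "-m", FP32_GGUF,
    "-f", CALIB_TXT,
    "-o", IMATRIX,
    "--chunks", "105",
    "-c", "512",
], check=True)

# 2) Quantize using that importance matrix (IQ4_XS shown as an example type).
subprocess.run([
    "llama-quantize",
    "--imatrix", IMATRIX,
    FP32_GGUF,
    "Acolyte-22B-IQ4_XS.gguf",
    "IQ4_XS",
], check=True)
```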
|
|
|
Original model README [here](https://huggingface.co/rAIfle/Acolyte-22B/) and below: |
|
|
|
# Acolyte-22B |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/3dcGMcrWK2-2vQh9QBt3o.png) |
|
|
|
LoRA of a bunch of random datasets on top of Mistral-Small-Instruct-2409, then SLERPed onto base at 0.5. Decent enough for its size. |
|
Check the [LoRA](https://huggingface.co/rAIfle/Acolyte-LORA) for dataset info. |
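
For readers unfamiliar with the merge step: SLERP at 0.5 walks each weight tensor half-way along the arc between the two models instead of averaging them linearly. Below is a minimal per-tensor sketch of the idea; mergekit's real implementation differs in details (normalization, dtype handling, edge cases), so take it as illustration only.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors at fraction t."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two flattened, normalized weight vectors.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    theta = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if theta.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1.0 - t) * theta) * a_flat
                  + torch.sin(t * theta) * b_flat) / torch.sin(theta)
    return merged.reshape(a.shape).to(a.dtype)

# "SLERPed onto base at 0.5" then means, per tensor:
# merged_weight = slerp(0.5, base_weight, tuned_weight)
```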
|
|
|
Use `Mistral V2 & V3` template. |
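
If you run these GGUFs through llama-cpp-python, that template boils down to wrapping each user turn in `[INST] ... [/INST]`. The sketch below shows one way to download a quant from this repo and prompt it; the repo id and file name are examples, so substitute the actual file you pick from the file listing.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo. The file name below is a placeholder;
# use whichever quant size you actually want.
model_path = hf_hub_download(
    repo_id="Quant-Cartel/Acolyte-22B-iMat-GGUF",
    filename="Acolyte-22B-iMat-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)

# Mistral V2 & V3 style instruct formatting: user turn wrapped in [INST] ... [/INST].
prompt = "[INST] Give me a two-sentence character sketch of a wandering acolyte. [/INST]"
result = llm(prompt, max_tokens=256, temperature=0.8)
print(result["choices"][0]["text"])
```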