---
base_model: anthracite-org/magnum-12b-v2.5-kto
library_name: transformers
quantized_by: InferenceIllusionist
tags:
- iMat
- gguf
- Mistral
license: apache-2.0
---
# magnum-12b-v2.5-kto-iMat-GGUF
> [!WARNING]
>**Important Note:** Support for inferencing this model in llama.cpp has now been merged in [PR #8604](https://github.com/ggerganov/llama.cpp/pull/8604). Please ensure you are on release [b3438](https://github.com/ggerganov/llama.cpp/releases/tag/b3438) or newer. Text-generation-webui (Ooba) is also working as of 7/23. Kobold.cpp is working as of [v1.71](https://github.com/LostRuins/koboldcpp/releases/tag/v1.71).

Quantized from magnum-12b-v2.5-kto fp16
* Weighted quantizations were created using the fp16 GGUF and [groups_merged.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) (special thanks to [Kalomaze](https://huggingface.co/kalomaze)) in 92 chunks with n_ctx=512; see the example command after this list
* A static fp16 GGUF is also included in the repo
* For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
* All quants are verified working prior to upload for your safety and convenience
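
For reference, here is a minimal sketch of how an imatrix quant like these is typically produced with llama.cpp's tools; the filenames and the Q4_K_M output type are illustrative placeholders, not the exact commands used for this repo:

```bash
# Build the importance matrix from the fp16 GGUF using the calibration
# text, processing 92 chunks at a context size of 512
./llama-imatrix -m magnum-12b-v2.5-kto-fp16.gguf \
    -f groups_merged.txt -c 512 --chunks 92 -o imatrix.dat

# Apply the importance matrix while quantizing (placeholder quant type)
./llama-quantize --imatrix imatrix.dat \
    magnum-12b-v2.5-kto-fp16.gguf magnum-12b-v2.5-kto-Q4_K_M.gguf Q4_K_M
```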

KL-Divergence Reference Chart
(Click on image to view in full size)
[![KL-Divergence Reference Chart](https://i.imgur.com/mV0nYdA.png)](https://i.imgur.com/mV0nYdA.png)
> [!TIP]
>**Quant-specific Tips:**
>* If you are getting a `cudaMalloc failed: out of memory` error, try passing an argument for a lower context size in llama.cpp, e.g. for 8k: `-c 8192`
>* If all of your cards are Ampere generation or newer, you can enable flash attention with `-fa`
>* Provided flash attention is enabled, you can also use a quantized KV cache to save VRAM, e.g. for 8-bit: `-ctk q8_0 -ctv q8_0` (a combined invocation is sketched below)
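
Putting those tips together, a hypothetical llama.cpp invocation might look like the following; the model filename, `-ngl` value, and prompt are placeholders for your own setup:

```bash
# 8k context, flash attention, 8-bit quantized KV cache,
# and all layers offloaded to the GPU
./llama-cli -m magnum-12b-v2.5-kto-Q4_K_M.gguf \
    -c 8192 -fa -ctk q8_0 -ctv q8_0 -ngl 99 \
    -p "Hello"
```
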
Original model card can be found [here](https://huggingface.co/anthracite-org/magnum-12b-v2.5-kto)