---
tags:
- merge
- gguf
- not-for-all-audiences
- storywriting
- text adventure
---
|
|
|
# maid-yuzu-v8-alter-iMat-GGUF |
|
|
|
<b>Highly requested model.</b> Quantized from fp16 with love. The iMatrix file was calculated from the Q8 quant using an input file from [this discussion](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
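The workflow above can be sketched with llama.cpp's tools. This is an illustrative outline, not the exact commands used for this repo: the binary names (`llama-imatrix`, `llama-quantize`) are those of recent llama.cpp builds (older builds named them `imatrix` and `quantize`), and all filenames below are placeholders.

```shell
# Sketch of the iMatrix quantization workflow (filenames are placeholders).

# 1. Compute the importance matrix from an existing quant,
#    using the calibration text from the linked discussion:
./llama-imatrix -m maid-yuzu-v8-alter-Q8_0.gguf -f calibration.txt -o imatrix.dat

# 2. Produce an iMatrix-aware quant from the fp16 GGUF:
./llama-quantize --imatrix imatrix.dat \
    maid-yuzu-v8-alter-f16.gguf maid-yuzu-v8-alter-Q4_K_M.gguf Q4_K_M
```

The importance matrix weights the quantization error by how much each tensor element actually influences outputs on the calibration text, which is why iMatrix quants tend to hold up better at low bit widths.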
|
|
|
For a brief rundown of iMatrix quant performance, see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747).
|
|
|
<i>All quants are verified working prior to upload for your safety and convenience.</i>
|
|
|
The original model card can be found [here](https://huggingface.co/rhplus0831/maid-yuzu-v8-alter).