---
base_model: mistralai/Mistral-Nemo-Instruct-2407
library_name: transformers
quantized_by: InferenceIllusionist
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - ru
  - zh
  - ja
tags:
  - iMat
  - gguf
  - Mistral
license: apache-2.0
---

# Mistral-Nemo-Instruct-12B-iMat-GGUF

**Important note:** inference is currently only supported on the llama.cpp branch from this PR: https://github.com/ggerganov/llama.cpp/pull/8604

Other front-ends, such as the main branch of llama.cpp, kobold.cpp, and text-generation-webui, may not work as intended.
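Once you have a build that includes the PR's support, local inference should look roughly like the sketch below. This is a minimal example assuming a llama-cpp-python wheel compiled against a compatible llama.cpp revision; the quant file name is hypothetical, so substitute whichever GGUF you actually downloaded.

```python
# Minimal local-inference sketch; assumes llama-cpp-python was built against
# a llama.cpp revision that includes the Mistral-Nemo support from the PR above.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Nemo-Instruct-12B-iMat-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,  # context window; raise or lower to fit your RAM/VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what an importance matrix is in one paragraph."}]
)
print(out["choices"][0]["message"]["content"])
```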

Quantized from fp16.

- Weighted quantizations were created from the fp16 GGUF using the groups_merged.txt calibration file (92 chunks, n_ctx=512); a sketch of the imatrix step follows this list
- The static fp16 GGUF is also included in the repo
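For reference, the importance-matrix step looks roughly like the following. This is a sketch rather than the exact command used for this repo: the tool name (`imatrix` vs. `llama-imatrix`) and the fp16 file name depend on your llama.cpp build and download, so treat both as assumptions.

```python
# Sketch of regenerating the importance matrix with llama.cpp's imatrix tool.
# Binary and model file names are assumptions; adjust for your build.
import subprocess

subprocess.run(
    [
        "./llama-imatrix",
        "-m", "Mistral-Nemo-Instruct-12B-fp16.gguf",  # fp16 GGUF (assumed name)
        "-f", "groups_merged.txt",                    # calibration text named in the card
        "-c", "512",                                  # n_ctx=512, as stated above
        "-o", "imatrix.dat",                          # resulting importance matrix
    ],
    check=True,
)
```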

For a brief rundown of iMatrix quant performance, please see this PR.

All quants are verified as working prior to upload, for your safety and convenience.

KL-Divergence Reference Chart

Tip: there's no need to download the entire repo; just pick one of the GGUF files, as in the sketch below.
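For example, a single quant can be fetched with huggingface_hub. The repo id and file name below are assumptions inferred from this card rather than verified values, so point them at the file you actually want.

```python
# Sketch of downloading one quant file instead of cloning the whole repo.
# repo_id and filename are assumptions inferred from this card.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="InferenceIllusionist/Mistral-Nemo-Instruct-12B-iMat-GGUF",
    filename="Mistral-Nemo-Instruct-12B-iMat-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```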

The original model card can be found here.