Perplexity loss?

#11 opened by JermemyHaschal

Hello,
How much perplexity loss is there for this fine-tune across the different quants? Is it the same as for the original Qwen2-Instruct GGUF?

Hi,

These are GGUF quantizations of a model fine-tuned on top of the original Qwen2-Instruct. Looking at the scores of my fine-tuned model vs. the original model on the Open LLM Leaderboard (https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard), I wouldn't expect a huge difference in perplexity across these quants. In particular, I used an importance matrix (imatrix) for both quantizations, so the quality should be quite good.
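
If you want to check yourself, llama.cpp ships a perplexity example you can point at each GGUF quant plus a plain-text file, and the sketch below gives a comparable baseline for the unquantized fine-tune via transformers. It's only a rough sketch: the repo id, text file, and chunk length are placeholders, not the exact settings used for these uploads.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-finetune"  # placeholder: HF repo id of the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

# Any plain-text evaluation file works; wikitext-2's wiki.test.raw is a common choice.
text = open("wiki.test.raw", encoding="utf-8").read()
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

chunk_len = 2048  # evaluate in fixed, non-overlapping chunks
nll_sum, token_count = 0.0, 0
with torch.no_grad():
    for start in range(0, input_ids.size(1), chunk_len):
        chunk = input_ids[:, start:start + chunk_len]
        if chunk.size(1) < 2:
            break
        out = model(chunk, labels=chunk)
        n_predicted = chunk.size(1) - 1  # loss is the mean NLL over chunk_len - 1 targets
        nll_sum += out.loss.item() * n_predicted
        token_count += n_predicted

print(f"perplexity: {math.exp(nll_sum / token_count):.3f}")
```

The absolute numbers from llama.cpp and transformers won't match exactly (different tokenization and context handling), but the gap between the quants and the unquantized baseline on the same text is what matters here.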

JermemyHaschal changed discussion status to closed
