
GGUFs PLUS:

Q8 and Q6 GGUFs with critical parts of the model in F16 / Full precision.

File sizes will be slightly larger than standard, but should yield higher-quality results across all tasks and conditions.
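To see why keeping a few tensors at F16 only adds a small size overhead, here is a rough back-of-the-envelope sketch. The fraction of "critical" parameters (embedding and output tensors) and the average bits-per-weight figures are assumptions for illustration, not measurements of these files:

```python
# Rough size estimate: uniform Q6_K quantization vs. Q6_K with the
# embedding/output tensors kept at F16. All figures are approximate.
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate file size in GB for n_params at a given bit width."""
    return n_params * bits_per_weight / 8 / 1e9

TOTAL_PARAMS = 10.7e9      # total parameter count from the model card
CRITICAL_PARAMS = 0.3e9    # assumed: embedding + output tensors (~3%)
Q6_BPW = 6.56              # llama.cpp Q6_K average bits per weight
F16_BPW = 16.0             # full half-precision

standard = gguf_size_gb(TOTAL_PARAMS, Q6_BPW)
plus = (gguf_size_gb(TOTAL_PARAMS - CRITICAL_PARAMS, Q6_BPW)
        + gguf_size_gb(CRITICAL_PARAMS, F16_BPW))

print(f"standard Q6_K : {standard:.2f} GB")
print(f"Q6_K + F16    : {plus:.2f} GB (+{plus - standard:.2f} GB)")
```

Under these assumptions the mixed-precision file is only a few hundred megabytes larger than a uniform Q6_K quant, while the most precision-sensitive tensors stay lossless.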

Format: GGUF
Model size: 10.7B params
Architecture: llama
Quantizations: 6-bit, 8-bit


Included in a collection: DavidAU/LemonadeRP-4.5.3-11B-GGUF-Plus