GGUF version of Felladrin/Minueza-32M-UltraChat.
It was not possible to quantize the model, so only the F16 and F32 GGUF files are available.
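If you prefer to download a GGUF file yourself rather than letting llama.cpp fetch it, one option is the Hugging Face Hub CLI. This is a minimal sketch, assuming Python and pip are available; it uses the F32 file from this repository.

```sh
# Install the Hugging Face Hub CLI (assumption: Python and pip are available)
pip install -U "huggingface_hub[cli]"

# Download the F32 GGUF file into the current directory
huggingface-cli download Felladrin/gguf-Minueza-32M-UltraChat \
  Minueza-32M-UltraChat.F32.gguf --local-dir .
```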
Try it with llama.cpp
```sh
brew install ggerganov/ggerganov/llama.cpp
```
```sh
llama-cli \
  --hf-repo Felladrin/gguf-Minueza-32M-UltraChat \
  --model Minueza-32M-UltraChat.F32.gguf \
  --random-prompt \
  --temp 1.3 \
  --dynatemp-range 1.2 \
  --top-k 0 \
  --top-p 1 \
  --min-p 0.1 \
  --typical 0.85 \
  --mirostat 2 \
  --mirostat-ent 3.5 \
  --repeat-penalty 1.1 \
  --repeat-last-n -1 \
  -n 256
```
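The model can also be exposed over an OpenAI-compatible HTTP endpoint with llama-server from the same llama.cpp install. The sketch below is an assumption-laden example, not part of this model card: the port is arbitrary and the request body only shows the basic chat-completions shape.

```sh
# Start the server; --hf-repo downloads the GGUF as with llama-cli above
llama-server \
  --hf-repo Felladrin/gguf-Minueza-32M-UltraChat \
  --model Minueza-32M-UltraChat.F32.gguf \
  --port 8080

# From another terminal, query the OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}], "temperature": 1.3}'
```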
Model tree for Felladrin/gguf-Minueza-32M-UltraChat:
- Base model: Felladrin/Minueza-32M-Base
- Fine-tuned: Felladrin/Minueza-32M-UltraChat