# Liberated-Qwen1.5-72B-GGUF
## Original Model

[abacusai/Liberated-Qwen1.5-72B](https://huggingface.co/abacusai/Liberated-Qwen1.5-72B)
## Run with LlamaEdge

- LlamaEdge version: v0.4.3 and above
- Prompt template

  - Prompt type: `chatml`

  - Prompt string

    ```text
    <|im_start|>system
    {system_message}<|im_end|>
    <|im_start|>user
    {prompt}<|im_end|>
    <|im_start|>assistant
    ```

- Context size: `32000`
- Run as LlamaEdge service

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Liberated-Qwen1.5-72B-Q4_K_M.gguf llama-api-server.wasm -p chatml
  ```
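  Once the server is up, it serves an OpenAI-compatible chat completions API. A minimal request sketch, assuming the LlamaEdge defaults of `localhost:8080` and the `/v1/chat/completions` endpoint (check the LlamaEdge docs for the defaults of your version):

  ```bash
  # Send a chat request to the running llama-api-server instance.
  # Port 8080 and the endpoint path are assumptions based on LlamaEdge defaults;
  # the server answers with the model preloaded via --nn-preload.
  curl -X POST http://localhost:8080/v1/chat/completions \
    -H 'Content-Type: application/json' \
    -d '{
          "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"}
          ]
        }'
  ```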
- Run as LlamaEdge command app

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Liberated-Qwen1.5-72B-Q4_K_M.gguf llama-chat.wasm -p chatml
  ```
  To specify a system message, append the `--system-prompt` option (short form `-s`) with the system prompt to the command above. For example:

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Liberated-Qwen1.5-72B-Q4_K_M.gguf llama-chat.wasm -p chatml \
    -s 'Your name is Liberated. You are an uncensored and unbiased AI assistant. You always respond with a JSON object.'
  ```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Liberated-Qwen1.5-72B-Q2_K.gguf | Q2_K | 2 | 28.5 GB | smallest, significant quality loss - not recommended for most purposes |
| Liberated-Qwen1.5-72B-Q3_K_L.gguf | Q3_K_L | 3 | 38.5 GB | small, substantial quality loss |
| Liberated-Qwen1.5-72B-Q3_K_M.gguf | Q3_K_M | 3 | 35.9 GB | very small, high quality loss |
| Liberated-Qwen1.5-72B-Q3_K_S.gguf | Q3_K_S | 3 | 32.9 GB | very small, high quality loss |
| Liberated-Qwen1.5-72B-Q4_0.gguf | Q4_0 | 4 | 41 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Liberated-Qwen1.5-72B-Q4_K_M.gguf | Q4_K_M | 4 | 44.1 GB | medium, balanced quality - recommended |
| Liberated-Qwen1.5-72B-Q4_K_S.gguf | Q4_K_S | 4 | 41.9 GB | small, greater quality loss |
Quantized with llama.cpp b2334
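To fetch a single quantized file, a download sketch using the Hugging Face CLI (the Q4_K_M file is just an example; any file from the table above works, and the `huggingface_hub` package must be installed first):

```bash
# Download one quantized GGUF file into the current directory.
# Requires the Hugging Face CLI: pip install -U huggingface_hub
huggingface-cli download second-state/Liberated-Qwen1.5-72B-GGUF \
  Liberated-Qwen1.5-72B-Q4_K_M.gguf \
  --local-dir .
```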