
L3.1-Celestial-Stone-2x8B-DPO (GGUFs)

This model was converted to GGUF format from v000000/L3.1-Celestial-Stone-2x8B-DPO using llama.cpp. Refer to the original model card for more details on the model.
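As a rough sketch of how a GGUF conversion with imatrix quantization is typically done with llama.cpp (the exact commands, file names, and calibration file here are assumptions for illustration, not the commands used for this repo):

```shell
# Convert the Hugging Face model directory to a GGUF file (f16 intermediate).
python convert_hf_to_gguf.py ./L3.1-Celestial-Stone-2x8B-DPO \
    --outtype f16 --outfile celestial-stone-f16.gguf

# Build an importance matrix from a calibration text file.
./llama-imatrix -m celestial-stone-f16.gguf -f calibration.txt -o imatrix.dat

# Quantize with the imatrix (here to Q5_K_S, one of the types in this repo).
./llama-quantize --imatrix imatrix.dat \
    celestial-stone-f16.gguf celestial-stone-q5_k_s.gguf Q5_K_S
```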


Ordered by quality:

  • q8_0 imatrix --- 14.2 GB
  • q6_k imatrix --- 11.2 GB
  • q5_k_s imatrix --- 9.48 GB
  • iq4_xs imatrix --- 7.44 GB

Looking for a quant that isn't listed? See mradermacher's i1 repository for more imatrix quant types.
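As a sanity check on the sizes above, the effective bits per weight for each quant can be estimated from its file size and the 13.7B parameter count listed on this card (a sketch that treats sizes as decimal gigabytes and ignores metadata overhead):

```python
# Estimate effective bits per weight: file size (GB) -> bits, divided by params.
PARAMS = 13.7e9  # parameter count from this card

# File sizes in GB, from the quant list on this card.
QUANTS = {
    "q8_0": 14.2,
    "q6_k": 11.2,
    "q5_k_s": 9.48,
    "iq4_xs": 7.44,
}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in decimal GB to average bits per parameter."""
    return size_gb * 1e9 * 8 / params

for name, size_gb in QUANTS.items():
    print(f"{name}: ~{bits_per_weight(size_gb):.2f} bits/weight")
```

The results land close to each quant's nominal bit width (about 8.3 for q8_0 down to about 4.3 for iq4_xs), which is expected since k-quants and i-quants mix precisions across tensors.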

imatrix calibration data (V2, 287 kB): randomized bartowski data, kalomaze groups, ERP/RP snippets, working GPT-4 code, toxic QA, human messaging, randomized posts, stories, and novels.

Model size: 13.7B params
Architecture: llama

