bunnycore committed
Commit
0d4e12c
1 Parent(s): 5742757

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 
 # bunnycore/Phigments12-Q6_K-GGUF
 
-Phigments12-Q6_K-GGUF is a quantized version of the liminerity/Phigments12: https://huggingface.co/liminerity/Phigments12 model. Quantization is a technique that reduces the size and memory footprint of a model, making it efficient to run on devices with limited resources. Phigments12-Q6_K-GGUF packs 2.78 billion parameters, making it a compact model that delivers high performance and decent benchmark results. This efficiency allows you to run the model on low-end laptops, phones, and even PCs without a dedicated GPU.
+Phigments12-Q6_K-GGUF is a quantized version of the liminerity/Phigments12: https://huggingface.co/liminerity/Phigments12 model. Phigments12-Q6_K-GGUF packs 2.78 billion parameters, making it a compact model that delivers high performance and decent benchmark results. This efficiency allows you to run the model on low-end laptops, phones, and even PCs without a dedicated GPU.
 
 Several platforms support running Phigments12-Q6_K-GGUF, including: