bunnycore committed
Commit 5742757
1 Parent(s): 8a069dc

Update README.md

Files changed (1)
  1. README.md +16 -2
README.md CHANGED
@@ -9,8 +9,19 @@ tags:
 ---
 
 # bunnycore/Phigments12-Q6_K-GGUF
- This model was converted to GGUF format from [`liminerity/Phigments12`](https://huggingface.co/liminerity/Phigments12) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/liminerity/Phigments12) for more details on the model.
+
+ Phigments12-Q6_K-GGUF is a quantized version of the [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) model. Quantization reduces a model's size and memory footprint, making it efficient to run on devices with limited resources. With 2.78 billion parameters, Phigments12-Q6_K-GGUF is a compact model that delivers strong performance and decent benchmark results, so it can run on low-end laptops, phones, and even PCs without a dedicated GPU.
+
+ Several platforms support running Phigments12-Q6_K-GGUF, including:
+
+ ```
+ Jan.ai
+ LM Studio
+ Text Generation Web UI
+
+ ```
+
+
 ## Use with llama.cpp
 
 Install llama.cpp through brew.
@@ -37,3 +48,6 @@ Note: You can also use this checkpoint directly through the [usage steps](https:
 ```
 git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phigments12.Q6_K.gguf -n 128
 ```
+
+ This model was converted to GGUF format from [`liminerity/Phigments12`](https://huggingface.co/liminerity/Phigments12) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/liminerity/Phigments12) for more details on the model.
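
For a quick end-to-end check of the workflow the updated README describes, a minimal sketch could look like the following. It assumes the `huggingface-cli` tool from the `huggingface_hub` package is available and that llama.cpp has been cloned and built as shown in the diff; the repo id and filename are taken from the README above and may need adjusting.

```
# Download the quantized file from the Hugging Face repo
# (assumes huggingface_hub is installed: pip install -U huggingface_hub).
huggingface-cli download bunnycore/Phigments12-Q6_K-GGUF phigments12.Q6_K.gguf --local-dir .

# Run it with the llama.cpp binary built earlier; -p supplies the prompt, -n caps the generated tokens.
./main -m phigments12.Q6_K.gguf -p "Explain GGUF quantization in one sentence." -n 128
```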