mradermacher committed
Commit 7d398ca
1 Parent(s): 05e9e3a

auto-patch README.md

Files changed (1):
  README.md (+11, -1)
README.md CHANGED
@@ -16,7 +16,7 @@ quantized_by: mradermacher
  static quants of https://huggingface.co/leafspark/Reflection-Llama-3.1-70B-bf16
 
  <!-- provided-files -->
- weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
+ weighted/imatrix quants are available at https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-i1-GGUF
  ## Usage
 
  If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -30,8 +30,18 @@ more details, including on how to concatenate multi-part files.
  | Link | Type | Size/GB | Notes |
  |:-----|:-----|--------:|:------|
  | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q2_K.gguf) | Q2_K | 26.5 | |
+ | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
  | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
+ | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
+ | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.IQ3_M.gguf) | IQ3_M | 32.0 | |
+ | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
+ | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
+ | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
  | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
+ | [GGUF](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
+ | [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
  | [PART 1](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Reflection-Llama-3.1-70B-bf16-GGUF/resolve/main/Reflection-Llama-3.1-70B-bf16.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
 
  Here is a handy graph by ikawrakow comparing some lower-quality quant
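
The Q6_K and Q8_0 entries in the patched table are split into two part files, and the README defers to the linked docs for how to concatenate multi-part downloads. As a minimal sketch (assuming the parts are plain byte-wise splits already downloaded into the working directory, which is the usual layout for these repositories), they can be rejoined by simple concatenation:

```python
# Minimal sketch: rejoin a two-part GGUF download by byte-wise concatenation.
# Equivalent to `cat file.part1of2 file.part2of2 > file.gguf`.
# Assumes both parts sit in the current directory (an assumption, not from the diff).
import shutil

parts = [
    "Reflection-Llama-3.1-70B-bf16.Q8_0.gguf.part1of2",
    "Reflection-Llama-3.1-70B-bf16.Q8_0.gguf.part2of2",
]
output = "Reflection-Llama-3.1-70B-bf16.Q8_0.gguf"

with open(output, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part; avoids loading ~37 GB into memory
```

Once the joined .gguf file is verified to load, the part files can be removed to reclaim disk space.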