---
base_model: rishiraj/smol-3b
inference: false
library_name: peft
license: apache-2.0
model-index:
- name: smol-3b
  results: []
model_creator: rishiraj
model_name: smol-3b
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# rishiraj/smol-3b-GGUF

Quantized GGUF model files for [smol-3b](https://huggingface.co/rishiraj/smol-3b) from [rishiraj](https://huggingface.co/rishiraj).


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [smol-3b.fp16.gguf](https://huggingface.co/afrideva/smol-3b-GGUF/resolve/main/smol-3b.fp16.gguf) | fp16 | 6.04 GB  |
| [smol-3b.q2_k.gguf](https://huggingface.co/afrideva/smol-3b-GGUF/resolve/main/smol-3b.q2_k.gguf) | q2_k | 1.30 GB  |
| [smol-3b.q3_k_m.gguf](https://huggingface.co/afrideva/smol-3b-GGUF/resolve/main/smol-3b.q3_k_m.gguf) | q3_k_m | 1.51 GB  |
| [smol-3b.q4_k_m.gguf](https://huggingface.co/afrideva/smol-3b-GGUF/resolve/main/smol-3b.q4_k_m.gguf) | q4_k_m | 1.85 GB  |
| [smol-3b.q5_k_m.gguf](https://huggingface.co/afrideva/smol-3b-GGUF/resolve/main/smol-3b.q5_k_m.gguf) | q5_k_m | 2.15 GB  |
| [smol-3b.q6_k.gguf](https://huggingface.co/afrideva/smol-3b-GGUF/resolve/main/smol-3b.q6_k.gguf) | q6_k | 2.48 GB  |
| [smol-3b.q8_0.gguf](https://huggingface.co/afrideva/smol-3b-GGUF/resolve/main/smol-3b.q8_0.gguf) | q8_0 | 3.21 GB  |



## Original Model Card:

# smol-3b

See what open weights, as opposed to open source, feel like!