Text Generation
Transformers
GGUF
Safetensors
mistral
quantized
2-bit
3-bit
4-bit precision
5-bit
6-bit
8-bit precision
GGUF
gemma
conversational
arxiv:2312.11805
arxiv:2009.03300
arxiv:1905.07830
arxiv:1911.11641
arxiv:1904.09728
arxiv:1905.10044
arxiv:1907.10641
arxiv:1811.00937
arxiv:1809.02789
arxiv:1911.01547
arxiv:1705.03551
arxiv:2107.03374
arxiv:2108.07732
arxiv:2110.14168
arxiv:2304.06364
arxiv:2206.04615
arxiv:1804.06876
arxiv:2110.08193
Inference Endpoints
has_space
text-generation-inference
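These tags describe GGUF quantizations (2-bit through 8-bit) of the gemma-1.1-2b-it conversational model. Below is a minimal sketch of running one of the quantized files locally with llama-cpp-python; the repo id and generation parameters are assumptions, not part of this commit.

```python
# Hedged sketch, not the uploader's documented usage: fetch one quantized GGUF
# from the Hub and run a short chat turn with llama-cpp-python.
# The repo_id is assumed from the uploader and file names seen in this commit.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/gemma-1.1-2b-it-GGUF",  # assumption
    filename="gemma-1.1-2b-it.Q3_K_L.gguf",        # file added in this commit
)

llm = Llama(model_path=model_path, n_ctx=2048)  # context length chosen arbitrarily
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```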
MaziyarPanahi committed on
Commit be3b166
1 parent: c94a971
6a6cf1addcdd1bcaf1b9663d30f1e860a2fb7a9969aacf3a0a04834e7ea49b15
Browse files
- .gitattributes +1 -0
- gemma-1.1-2b-it.Q3_K_L.gguf +3 -0
.gitattributes
CHANGED
@@ -36,3 +36,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 gemma-1.1-2b-it.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
 gemma-1.1-2b-it.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
 gemma-1.1-2b-it.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+gemma-1.1-2b-it.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
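The added line routes the new Q3_K_L file through the Git LFS filter instead of the regular object store. As a rough illustration of what these attribute patterns mean (a simplified sketch, not Git's actual attribute-matching logic), a path can be checked against them with fnmatch:

```python
# Simplified sketch of how the .gitattributes LFS rules above map paths to the
# LFS filter. Real Git attribute matching has more rules (directory scoping,
# precedence, etc.); this only mirrors the flat patterns shown in this diff.
from fnmatch import fnmatch

lfs_patterns = [
    "gemma-1.1-2b-it.IQ3_XS.gguf",
    "gemma-1.1-2b-it.IQ4_XS.gguf",
    "gemma-1.1-2b-it.Q2_K.gguf",
    "gemma-1.1-2b-it.Q3_K_L.gguf",  # pattern added by this commit
]

def is_lfs_tracked(path: str) -> bool:
    """Return True if the path matches any LFS-filtered pattern."""
    return any(fnmatch(path, pattern) for pattern in lfs_patterns)

print(is_lfs_tracked("gemma-1.1-2b-it.Q3_K_L.gguf"))  # True after this commit
print(is_lfs_tracked("README.md"))                    # False
```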
gemma-1.1-2b-it.Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc243133f5a1112c2ee88af28c5dade536d1d7e6f5e4c718daefab8384a04bf8
+size 1465591552
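The file itself is stored as a Git LFS pointer: a small text stub recording the spec version, the SHA-256 of the real payload, and its size in bytes. After downloading the actual GGUF, the pointer fields can be checked against the local copy; a minimal sketch, assuming the file is already on disk:

```python
# Verify a downloaded GGUF against the LFS pointer fields from this commit.
# The local path is an assumption; oid and size come from the pointer above.
import hashlib
import os

EXPECTED_OID = "dc243133f5a1112c2ee88af28c5dade536d1d7e6f5e4c718daefab8384a04bf8"
EXPECTED_SIZE = 1465591552  # bytes, per the "size" line of the pointer

path = "gemma-1.1-2b-it.Q3_K_L.gguf"  # assumed local download location

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

assert sha256.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("GGUF matches the LFS pointer (oid and size).")
```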