FallenMerick/Space-Whale-Lite-13B-GGUF
Tags: Text Generation · GGUF · quantized · 4-bit precision · 5-bit · 6-bit · 8-bit precision · Merge · frankenmerge · Inference Endpoints
Files and versions
Branch: main · 1 contributor · History: 6 commits
Latest commit: Create README.md (0f588b8, verified) · FallenMerick · 6 months ago
All files are flagged Safe by the Hugging Face scanner; the .gguf model files are stored with Git LFS.

| File | Size | Last commit | Updated |
| --- | --- | --- | --- |
| .gitattributes | 1.79 kB | Upload Space-Whale-Lite-13B-Q8_0.gguf | 6 months ago |
| README.md | 348 Bytes | Create README.md | 6 months ago |
| Space-Whale-Lite-13B-Q4_K_M.gguf | 7.87 GB | Upload Space-Whale-Lite-13B-Q4_K_M.gguf | 6 months ago |
| Space-Whale-Lite-13B-Q5_K_M.gguf | 9.23 GB | Upload Space-Whale-Lite-13B-Q5_K_M.gguf | 6 months ago |
| Space-Whale-Lite-13B-Q6_K.gguf | 10.7 GB | Upload Space-Whale-Lite-13B-Q6_K.gguf | 6 months ago |
| Space-Whale-Lite-13B-Q8_0.gguf | 13.8 GB | Upload Space-Whale-Lite-13B-Q8_0.gguf | 6 months ago |
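
The quantization suffix in each filename marks the size/precision trade-off: Q4_K_M is the smallest quant listed (7.87 GB) and Q8_0 is the closest to the original weights (13.8 GB). Below is a minimal sketch of fetching one of these files and running it locally; it assumes the huggingface_hub and llama-cpp-python packages, which are a common way to consume GGUF files but are not specified by this repository.

```python
# Minimal sketch: download one quant from this repo and run a short generation.
# Assumes `pip install huggingface_hub llama-cpp-python`; the package choice and
# parameter values are assumptions, not documented by this model card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant (smallest file in the table above) into the local HF cache.
model_path = hf_hub_download(
    repo_id="FallenMerick/Space-Whale-Lite-13B-GGUF",
    filename="Space-Whale-Lite-13B-Q4_K_M.gguf",
)

# Load the GGUF file with the llama.cpp bindings; n_ctx=4096 is a reasonable
# default, not a value taken from the repository.
llm = Llama(model_path=model_path, n_ctx=4096)

output = llm("Write one sentence about space whales.", max_tokens=64)
print(output["choices"][0]["text"])
```

Larger quants (Q5_K_M, Q6_K, Q8_0) trade more disk space and RAM for lower quantization error; swap the `filename` argument to pick a different one.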