
# BigMaid-20B-v1.0

An exllamav2 quant of TeeZee/BigMaid-20B-v1.0.

Should run on cards with 12 GB of VRAM in webui, with the context length set to 4096, the ExLlamav2_HF loader, and cache_8bit=True.
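As a rough sketch, the settings above correspond to launch flags like the following for text-generation-webui (the model directory name is a placeholder, and flag names vary between webui versions, so check `python server.py --help` for your install):

```shell
# Hypothetical launch command for text-generation-webui.
# Assumes the quant has been downloaded into models/BigMaid-20B-v1.0-exl2.
python server.py \
  --model BigMaid-20B-v1.0-exl2 \
  --loader ExLlamav2_HF \
  --max_seq_len 4096 \
  --cache_8bit
```

The same options can also be set from the Model tab in the web UI instead of the command line.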

All comments are greatly appreciated. Download it, test it, and if you appreciate my work, consider buying me my fuel: Buy Me A Coffee
