
Undi95/BagelMix-8x7B 3.5bpw

ExLlamaV2 (exl2) 3.5bpw quant of Undi95/BagelMix-8x7B

You will need 24 GB of VRAM to run this model at roughly half context (16k; you can probably push a bit higher).
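If you want to load the quant directly from Python rather than through a frontend, a minimal sketch using the exllamav2 library might look like the following. The model directory path is a placeholder, and the 16384-token sequence length is an assumption matching the VRAM note above; adjust both to your setup.

```python
# Minimal sketch: loading this exl2 quant with the exllamav2 Python library.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/BagelMix-8x7B-3.5bpw-exl2"  # local download of this repo (placeholder path)
config.prepare()
config.max_seq_len = 16384  # ~half context, per the VRAM note above; raise or lower to fit your card

model = ExLlamaV2(config)
model.load()
tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
print(generator.generate_simple(prompt, settings, 200))  # generate up to 200 new tokens
```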

Prompt format:

Probably ChatML, but this is unclear.

<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
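If you are scripting prompts, a small helper that assembles the template above might look like the sketch below. The function name is a hypothetical convenience, not something defined by the model; it leaves the assistant turn open so the model produces the {output} part.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    # Hypothetical helper: fills in the ChatML-style template from this card
    # and leaves the assistant turn open for the model to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Write a haiku about bagels.")
```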

Contact

Kooten on Discord.

