This is the 4-bit quantized version for use with oobabooga's text-generation-webui.

All credits go to this amazing project: https://github.com/FreedomIntelligence/LLMZoo

This is the chat-instruct version.

Converted with: `python llama.py ./chimera-7b c4 --wbits 4 --true-sequential --groupsize 128 --save chimera7b-4bit-128g.pt`
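
For reference, here is the same GPTQ-for-LLaMa command with each option annotated; the comments reflect my understanding of the flags and the paths are the ones used above.

```sh
# Quantization command used for this checkpoint (GPTQ-for-LLaMa branch).
# ./chimera-7b              - local directory containing the full-precision model
# c4                        - calibration dataset used during quantization
# --wbits 4                 - quantize weights to 4 bits
# --true-sequential         - quantize layers one after another (slower, slightly more accurate)
# --groupsize 128           - use a group size of 128 for the quantization scales
# --save ...                - output file for the quantized checkpoint
python llama.py ./chimera-7b c4 --wbits 4 --true-sequential --groupsize 128 --save chimera7b-4bit-128g.pt
```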

It uses groupsize 128 and does not use act-order. It was quantized with the oobabooga GPTQ branch, so it works in text-generation-webui.
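
As a rough sketch of loading it there (exact flags can differ between text-generation-webui versions, and the folder name `chimera-7b-4bit-128g` is just an example), the GPTQ settings passed at launch should match the quantization settings above:

```sh
# Hypothetical example: place the quantized .pt file plus the original tokenizer/config
# files in text-generation-webui/models/chimera-7b-4bit-128g/, then start the webui
# with GPTQ parameters matching the conversion (4-bit, groupsize 128).
python server.py --model chimera-7b-4bit-128g --wbits 4 --groupsize 128 --model_type llama --chat
```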

Does anyone need a 13B version? (Edit: I can't do it right now, as I only get out-of-memory errors while quantizing.)
