---
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
- mergekit
- lazymergekit
- bitsandbytes
- jiayihao03/mistral-7b-instruct-Javascript-4bit
- akameswa/mistral-7b-instruct-java-4bit
- akameswa/mistral-7b-instruct-go-4bit
- jiayihao03/mistral-7b-instruct-python-4bit
---
# mixtral-4x7b-instruct-code
**mixtral-4x7b-instruct-code** is a Mixture of Experts (MoE) built from the following models using [mergekit](https://github.com/arcee-ai/mergekit):
- [jiayihao03/mistral-7b-instruct-Javascript-4bit](https://huggingface.co/jiayihao03/mistral-7b-instruct-Javascript-4bit)
- [akameswa/mistral-7b-instruct-java-4bit](https://huggingface.co/akameswa/mistral-7b-instruct-java-4bit)
- [akameswa/mistral-7b-instruct-go-4bit](https://huggingface.co/akameswa/mistral-7b-instruct-go-4bit)
- [jiayihao03/mistral-7b-instruct-python-4bit](https://huggingface.co/jiayihao03/mistral-7b-instruct-python-4bit)
## 🧩 Configuration
The merge is defined by a mergekit MoE config. A representative sketch for these four experts is shown below; the `base_model`, `gate_mode`, `dtype`, and `positive_prompts` values are illustrative assumptions rather than the exact settings used.
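```yaml
base_model: jiayihao03/mistral-7b-instruct-python-4bit
gate_mode: cheap_embed
dtype: float16
experts:
  - source_model: jiayihao03/mistral-7b-instruct-Javascript-4bit
    positive_prompts: ["javascript", "typescript", "web development"]
  - source_model: akameswa/mistral-7b-instruct-java-4bit
    positive_prompts: ["java"]
  - source_model: akameswa/mistral-7b-instruct-go-4bit
    positive_prompts: ["go", "golang"]
  - source_model: jiayihao03/mistral-7b-instruct-python-4bit
    positive_prompts: ["python"]
```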
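## 💻 Usage

A minimal sketch of running the merged model with 🤗 Transformers. The repository id below is an assumed namespace, and the prompt is illustrative; the card's tags suggest the saved weights already carry a bitsandbytes 4-bit quantization config, so no extra quantization arguments are passed here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akameswa/mixtral-4x7b-instruct-code"  # assumed repo id; adjust to the actual namespace

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo's quantization config (bitsandbytes, 4-bit) is expected to be picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Go function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```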