
Mixtraln't 4x7B

Oh boy, a new model architecture in Transformers! Time to do profane things with it.

What if instead of training a MoE from scratch, we took some pre-trained Mistral models and shoved them in a little clown car? Uses parts from the following models:
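
In case the idea is unclear, here's a minimal sketch of the "clown car" construction. This is not the script actually used: the donor names are placeholders, and it assumes the stock `transformers` Mistral/Mixtral implementations, where each Mixtral expert's `w1`/`w2`/`w3` line up with a Mistral MLP's `gate_proj`/`down_proj`/`up_proj`.

```python
# Minimal sketch of building a Mixtral-style MoE from pre-trained Mistral donors.
# Donor names are hypothetical placeholders, not the actual models used.
import torch
from transformers import MistralForCausalLM, MixtralConfig, MixtralForCausalLM

donor_names = ["donor-0", "donor-1", "donor-2", "donor-3"]
donors = [MistralForCausalLM.from_pretrained(n, torch_dtype=torch.bfloat16)
          for n in donor_names]  # four 7B donors in bf16 need ~56 GB of RAM

# A Mixtral config mirroring the donors' Mistral-7B geometry: 4 experts, top-2 routing.
cfg = donors[0].config
moe_config = MixtralConfig(
    vocab_size=cfg.vocab_size,
    hidden_size=cfg.hidden_size,
    intermediate_size=cfg.intermediate_size,
    num_hidden_layers=cfg.num_hidden_layers,
    num_attention_heads=cfg.num_attention_heads,
    num_key_value_heads=cfg.num_key_value_heads,
    max_position_embeddings=cfg.max_position_embeddings,
    rms_norm_eps=cfg.rms_norm_eps,
    rope_theta=cfg.rope_theta,
    num_local_experts=len(donors),
    num_experts_per_tok=2,
)
moe = MixtralForCausalLM(moe_config).to(torch.bfloat16)

# Shared tensors (embeddings, attention, norms, lm_head) come from one donor;
# its dense mlp.* keys have no match in Mixtral and are skipped by strict=False.
moe.load_state_dict(donors[0].state_dict(), strict=False)

# Drop each donor's feed-forward weights into one expert slot per layer.
# Mapping: expert w1 <- gate_proj, w3 <- up_proj, w2 <- down_proj.
for layer_idx in range(moe_config.num_hidden_layers):
    experts = moe.model.layers[layer_idx].block_sparse_moe.experts
    for expert_idx, donor in enumerate(donors):
        mlp = donor.model.layers[layer_idx].mlp
        experts[expert_idx].w1.weight.data.copy_(mlp.gate_proj.weight.data)
        experts[expert_idx].w3.weight.data.copy_(mlp.up_proj.weight.data)
        experts[expert_idx].w2.weight.data.copy_(mlp.down_proj.weight.data)

moe.save_pretrained("mixtralnt-4x7b-sketch")
```

Note that the shared attention and embedding weights all come from a single donor, weights the other experts were never trained against: that's the clown-car part.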

It works and generates coherent text. The big question is whether the hack I used to populate the MoE gates works well enough to take advantage of all of the experts. Let's find out!
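
The card doesn't spell out what that hack is, so here is one purely illustrative way to do it (an assumption, not necessarily what was done here): point each router row at its expert using the mean hidden state that a characteristic prompt produces in the corresponding donor. This continues from the sketch above; the prompts are made up.

```python
# Illustrative gate initialization -- an assumption, NOT necessarily the hack
# the card refers to. Each router row is set to the mean hidden state that a
# prompt "typical" of that expert produces at the same layer of its donor.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(donor_names[0])
prompts = [  # one hypothetical prompt per expert
    "What is the derivative of x**2?",
    "Write a short story about a clown car.",
    "def fibonacci(n):",
    "Summarize the plot of Hamlet.",
]

with torch.no_grad():
    for expert_idx, (donor, prompt) in enumerate(zip(donors, prompts)):
        inputs = tokenizer(prompt, return_tensors="pt")
        # hidden_states[i] is the input to decoder layer i (index 0 = embeddings).
        hidden_states = donor(**inputs, output_hidden_states=True).hidden_states
        for layer_idx in range(moe_config.num_hidden_layers):
            gate = moe.model.layers[layer_idx].block_sparse_moe.gate
            # Router weight has shape (num_experts, hidden_size): one row per expert.
            gate.weight.data[expert_idx] = hidden_states[layer_idx][0].mean(dim=0)
```

Whether an init like this actually spreads tokens across all four experts, instead of collapsing onto one or two, is exactly the open question.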

Prompt format: maybe Alpaca??? or ChatML??? Life is full of mysteries.
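
For reference, the two candidates look like this; which one this merge actually prefers is the mystery in question.

```
# Alpaca
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:

# ChatML
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```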

Model size: 24.2B params (BF16, safetensors)
