
This is a merge of pre-trained language models created using mergekit.
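A mergekit merge is driven by a YAML config file. The sketch below is purely illustrative — the merge method, layer ranges, and model names (`model-a`, `model-b`) are placeholders, not the actual recipe used for this model:

```yaml
# Hypothetical mergekit config — every value here is a placeholder,
# not the recipe actually used for Wizard-Zephyr-Orpo-8x22B.
merge_method: slerp            # spherical interpolation between two models
base_model: model-a            # placeholder name
slices:
  - sources:
      - model: model-a
        layer_range: [0, 56]
      - model: model-b         # placeholder name
        layer_range: [0, 56]
parameters:
  t: 0.5                       # interpolation weight between the two models
dtype: bfloat16
```

A config like this is applied with mergekit's `mergekit-yaml` CLI, e.g. `mergekit-yaml config.yml ./merged-model`.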

## Merge Details

### Models Merged

The following models were included in the merge:

## Benchmark results

### 1. MT-Bench from LMSYS

We adapted the evaluation code from FastChat to benchmark our model with GPT-4 as the judge. Here are the results:

| Model                                 | Turn    | Score    |
|---------------------------------------|---------|----------|
| tlphams/Wizard-Zephyr-Orpo-8x22B      | 1       | 9.1625   |
| mistralai/Mixtral-8x22B-Instruct-v0.1 | 1       | 9.1500   |
| tlphams/Wizard-Zephyr-Orpo-8x22B      | 2       | 8.873418 |
| mistralai/Mixtral-8x22B-Instruct-v0.1 | 2       | 8.250000 |
| tlphams/Wizard-Zephyr-Orpo-8x22B      | Average | 9.018868 |
| mistralai/Mixtral-8x22B-Instruct-v0.1 | Average | 8.700000 |

The score is slightly lower than that of alpindale/WizardLM-2-8x22B, but still higher than GPT-4-0314, so research and experimentation will continue.
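Note that the overall average is taken over all judged answers, not as the midpoint of the two turn averages — the judge can skip answers it fails to score, so the per-turn counts may differ. A minimal sketch of that aggregation, using made-up scores:

```python
from statistics import mean

# Hypothetical GPT-4 judge scores per question (scale 1-10), one list per turn.
# The second turn has fewer entries to mimic a skipped (unscorable) answer.
turn1_scores = [9.0, 10.0, 8.5, 9.5]
turn2_scores = [8.0, 9.0, 9.5]

turn1_avg = mean(turn1_scores)                # first-turn average
turn2_avg = mean(turn2_scores)                # second-turn average
overall = mean(turn1_scores + turn2_scores)   # average over all judged answers

# With unequal counts per turn, `overall` is not the midpoint of the two
# turn averages — the same effect seen in the table above.
print(turn1_avg, turn2_avg, overall)
```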
