---
license: cc-by-nc-4.0
tags:
  - merge
  - lazymergekit
  - dpo
  - rlhf
datasets:
  - mlabonne/truthy-dpo-v0.1
  - mlabonne/distilabel-intel-orca-dpo-pairs
  - mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
base_model:
  - mlabonne/NeuralMonarch-7B
language:
  - en
---


# 👑 AlphaMonarch-7B

**tl;dr: AlphaMonarch-7B is a new DPO merge that retains all the reasoning abilities of the very best merges and significantly improves its conversational abilities. Kind of the best of both worlds in a 7B model. 🎉**

AlphaMonarch-7B is a DPO fine-tune of mlabonne/NeuralMonarch-7B using the argilla/OpenHermes2.5-dpo-binarized-alpha preference dataset.
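The training script itself is not reproduced in this card. For illustration only, the sketch below shows what a DPO fine-tune of this kind can look like with trl's `DPOTrainer`; the hyperparameters, `TrainingArguments`, and dataset preprocessing are assumptions, not the actual recipe, and the `DPOTrainer` signature varies across trl versions:

```python
# Hypothetical sketch of a DPO fine-tune with trl (~0.7-style API), not the actual
# training script. Newer trl versions move beta/tokenizer into a DPOConfig object.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "mlabonne/NeuralMonarch-7B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference dataset named in this card. Real preprocessing may be needed to
# flatten its chat-formatted pairs into plain prompt/chosen/rejected text columns.
dataset = load_dataset("argilla/OpenHermes2.5-dpo-binarized-alpha", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl clones the policy as the frozen reference model
    args=TrainingArguments(output_dir="alphamonarch-dpo", per_device_train_batch_size=1),
    beta=0.1,        # strength of the KL penalty toward the reference model
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```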

It is based on a merge of several models created with LazyMergekit.

Special thanks to Jon Durbin, Intel, and Argilla for the preference datasets.

πŸ” Applications

This model uses a context window of 8k tokens. I recommend using it with the Mistral Instruct chat template (it works perfectly with LM Studio).
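As a quick sanity check, the snippet below (a minimal sketch; the exact output depends on the chat template bundled with the tokenizer) prints the prompt string the template produces:

```python
# Minimal sketch: inspect the prompt format produced by the model's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/AlphaMonarch-7B")
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected shape, assuming a Mistral Instruct-style template:
# <s>[INST] What is a large language model? [/INST]
```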

It is one of the very best 7B models in terms of instruction-following and reasoning abilities and can be used for conversations, RP, and storytelling. Note that it tends to have a rather formal and sophisticated style, but this can be changed by modifying the prompt.

## ⚡ Quantized models
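The links to the quantized versions are not reproduced here. As an illustration only, a GGUF quant could be run with llama-cpp-python roughly as follows; the filename is a hypothetical placeholder, not an actual release artifact:

```python
# Hypothetical sketch: running a GGUF quant with llama-cpp-python.
# "alphamonarch-7b.Q4_K_M.gguf" is a placeholder filename.
from llama_cpp import Llama

llm = Llama(model_path="alphamonarch-7b.Q4_K_M.gguf", n_ctx=8192)  # 8k context window
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```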

πŸ† Evaluation

Nous

The evaluation was performed using LLM AutoEval on Nous suite. See the entire leaderboard here.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| AlphaMonarch-7B 📄 | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 |
| NeuralMonarch-7B 📄 | 62.73 | 45.31 | 76.99 | 78.35 | 50.28 |
| Monarch-7B 📄 | 62.68 | 45.48 | 77.07 | 78.04 | 50.14 |
| teknium/OpenHermes-2.5-Mistral-7B 📄 | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| mlabonne/NeuralHermes-2.5-Mistral-7B 📄 | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| mlabonne/NeuralBeagle14-7B 📄 | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 |
| mlabonne/NeuralOmniBeagle-7B 📄 | 62.3 | 45.85 | 77.26 | 76.06 | 50.03 |
| eren23/dpo-binarized-NeuralTrix-7B 📄 | 62.5 | 44.57 | 76.34 | 79.81 | 49.27 |
| CultriX/NeuralTrix-7B-dpo 📄 | 62.5 | 44.61 | 76.33 | 79.8 | 49.24 |

### EQ-bench

AlphaMonarch-7B is the second-best-performing 7B model on EQ-bench by Samuel J. Paech.

### MT-Bench

```
########## First turn ##########
                                    score
model                       turn
gpt-4                       1     8.95625
OmniBeagle-7B               1     8.32500
AlphaMonarch-7B             1     8.23750
claude-v1                   1     8.15000
gpt-3.5-turbo               1     8.07500
claude-instant-v1           1     7.80000

########## Second turn ##########
                                     score
model                       turn
gpt-4                       2     9.025000
claude-instant-v1           2     8.012658
gpt-3.5-turbo               2     7.812500
claude-v1                   2     7.650000
AlphaMonarch-7B             2     7.618750
OmniBeagle-7B               2     7.587500

########## Average ##########
                                score
model
gpt-4                        8.990625
OmniBeagle-7B                7.956250
gpt-3.5-turbo                7.943750
AlphaMonarch-7B              7.928125
claude-instant-v1            7.905660
claude-v1                    7.900000
NeuralBeagle14-7B            7.628125
```

### Open LLM Leaderboard

AlphaMonarch-7B is one of the best-performing non-merge 7B models on the Open LLM Leaderboard:

*(screenshot of Open LLM Leaderboard results)*

## 💻 Usage

```python
# Install dependencies (notebook-style command)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/AlphaMonarch-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and spread it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```