
NeuralBeagle14-7B

Update 01/16/24: NeuralBeagle14-7B is probably the best 7B model you can find. 🎉

NeuralBeagle14-7B is a DPO fine-tune of mlabonne/Beagle14-7B using the argilla/distilabel-intel-orca-dpo-pairs preference dataset and my DPO notebook from this article.
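The exact recipe is in the linked notebook; as a rough, illustrative sketch of what a DPO run on this preference dataset looks like with trl (every hyperparameter below is a placeholder, not the value actually used):

```python
# Illustrative sketch only: the real hyperparameters live in the linked notebook.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mlabonne/Beagle14-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# The dataset provides preference pairs; DPOTrainer expects
# "prompt"/"chosen"/"rejected" columns, so a small renaming/formatting
# pass may be needed first (omitted here).
dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl creates a frozen reference copy when omitted
    args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1),
    beta=0.1,        # strength of the KL penalty toward the reference model
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```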

Thanks to Argilla for providing the dataset and the training recipe here. 💪

🔎 Applications

This model uses a context window of 8k tokens. It is compatible with different chat templates, such as ChatML and Llama's chat template.
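As a quick illustration, `apply_chat_template` renders a conversation with whichever template ships in the tokenizer config (ChatML-style templates wrap each turn in `<|im_start|>`/`<|im_end|>` tokens); the exact string you get depends on the template stored in the repo:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/NeuralBeagle14-7B")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Render the conversation with the tokenizer's bundled chat template.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```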

Compared to other 7B models, it displays strong performance on instruction-following and reasoning tasks. It can also be used for role-play (RP) and storytelling.

πŸ† Evaluation

The evaluation was performed using LLM AutoEval on the Nous suite, where it is the best-scoring 7B model to date.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| mlabonne/NeuralBeagle14-7B 📄 | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 |
| mlabonne/Beagle14-7B 📄 | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 |
| mlabonne/NeuralDaredevil-7B 📄 | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| argilla/distilabeled-Marcoro14-7B-slerp 📄 | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 |
| mlabonne/NeuralMarcoro14-7B 📄 | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 |
| openchat/openchat-3.5-0106 📄 | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 |
| teknium/OpenHermes-2.5-Mistral-7B 📄 | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |

You can find the complete benchmark on YALL - Yet Another LLM Leaderboard.

It also sits at the top of the Open LLM Leaderboard.

Compared to Beagle14, there's no improvement on this benchmark. This might be due to an unlucky run, but I think I might be overexploiting argilla/distilabel-intel-orca-dpo-pairs at this point. Another preference dataset could improve it even further. Note that the Beagle models still outperform Turdus, which is purposely contaminated on Winogrande (hence its very high score there).

💻 Usage

```
!pip install -qU transformers accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralBeagle14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Half-precision text-generation pipeline, spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion from the rendered prompt.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
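Since this repo hosts GGUF files, the model can also run locally through llama-cpp-python; a minimal sketch, assuming you have already downloaded one of the .gguf files (the path below is a placeholder):

```python
# Minimal llama-cpp-python sketch; the .gguf path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="path/to/neuralbeagle14-7b.Q4_K_M.gguf", n_ctx=8192)
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["message"]["content"])
```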

Built with Distilabel

This repo hosts GGUF quantizations of the model (7.24B params, llama architecture) in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit variants.
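To see exactly which quantized files are available, huggingface_hub can list the repo contents; a small sketch:

```python
# Print the GGUF files hosted in this repo.
from huggingface_hub import list_repo_files

for f in list_repo_files("mlabonne/NeuralBeagle14-7B-GGUF"):
    if f.endswith(".gguf"):
        print(f)
```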
