Tulu 2.5 banner image

Model Card for Llama 3 Tulu V2.5 PPO 8B - UltraFeedback Mean with 8B UltraFeedback RM

Tulu is a series of language models trained to act as helpful assistants. Tulu V2.5 is a series of models trained using DPO and PPO, starting from the Tulu 2 suite. This model was trained with PPO on the UltraFeedback dataset, using the per-aspect/fine-grained scores to decide chosen and rejected completions. We used an 8B reward model trained on the UltraFeedback dataset, and then used the UltraFeedback prompts during PPO training.
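For intuition, the snippet below is a minimal sketch (not the exact preprocessing pipeline used for this model) of how per-aspect UltraFeedback-style scores can be averaged to pick chosen and rejected completions; the record layout and aspect names are illustrative assumptions.

```python
# Illustrative sketch: build a preference pair by averaging per-aspect scores.
# The record layout and aspect names are assumptions, not the exact UltraFeedback schema.
def build_pair(prompt, completions):
    """Each completion is a dict with 'text' and a per-aspect 'scores' dict."""
    ranked = sorted(
        completions,
        key=lambda c: sum(c["scores"].values()) / len(c["scores"]),
        reverse=True,
    )
    return {"prompt": prompt, "chosen": ranked[0]["text"], "rejected": ranked[-1]["text"]}

pair = build_pair(
    "Explain photosynthesis briefly.",
    [
        {"text": "Answer A ...", "scores": {"helpfulness": 4, "honesty": 5, "truthfulness": 4, "instruction_following": 4}},
        {"text": "Answer B ...", "scores": {"helpfulness": 2, "honesty": 3, "truthfulness": 3, "instruction_following": 2}},
    ],
)
print(pair["chosen"], "|", pair["rejected"])
```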

This is part of a small update to the original V2.5 suite, adding three Llama 3-based models.

For more details, read the paper: Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback.

Built with Meta Llama 3! Note that Llama 3 is released under the Meta Llama 3 community license, included here under llama_3_license.txt.

Model description

  • Model type: One model belonging to a suite of RLHF-tuned chat models trained on a mix of publicly available, synthetic, and human-created datasets.
  • Language(s) (NLP): English
  • License: Apache 2.0.
  • Finetuned from model: meta-llama/Meta-Llama-3-8B

Model Sources

  • Repository: https://github.com/allenai/open-instruct
  • Dataset: Data used to train this model can be found here - specifically the ultrafeedback_mean_aspects split. Only the prompts were used (a loading sketch is shown after this list).
  • Model Family: The collection of related models can be found here.
  • Reward Model: The reward model used during PPO training can be found here, and the data used to train it here - specifically the ultrafeedback_mean_aspects split.
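As a concrete way to pull the preference data referenced in the Dataset entry above, here is a minimal sketch using the datasets library; the dataset repository id is an assumption, so substitute the repository linked in this card if it differs.

```python
# Minimal sketch using the `datasets` library.
# The repository id below is an assumption; use the dataset linked above if it differs.
from datasets import load_dataset

prefs = load_dataset("allenai/tulu-2.5-preference-data", split="ultrafeedback_mean_aspects")
print(prefs.column_names)

# For PPO training, only the prompts were used (assumes a 'prompt' column).
prompts = [example["prompt"] for example in prefs]
print(len(prompts), prompts[0][:100])
```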

Results

This model was trained from Llama 3 as an update to the Tulu V2.5 suite. For details on training and evaluation, read our paper!

| Model | Size | Alignment | GSM8k 8-shot CoT Acc. | AlpacaEval 2 Winrate (LC) |
|---|---|---|---|---|
| Tulu V2.5 PPO Llama 3 8B (this model) | 8B | PPO with 8B RM | 61.5 | 22.7 |
| Tulu V2.5 PPO 13B | 13B | PPO with 70B RM | 58.0 | 26.7 |
| Tulu V2 DPO 13B | 13B | DPO | 50.5 | 16.0 |
| Tulu V2 SFT 13B | 13B | - | 46.0 | 10.4 |
| Tulu V2 DPO 70B | 70B | DPO | 71.5 | 21.2 |

Input Format

The model is trained to use the following format (note the newlines):

<|user|>
Your message here!
<|assistant|>

For best results, format all inputs in this manner. Make sure to include a newline after <|assistant|>; this can affect generation quality quite a bit. We have included a chat template in the tokenizer implementing this format.
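If you rely on the bundled chat template, the tokenizer will produce this format for you. The snippet below is a minimal sketch assuming the transformers library; the model id is taken from this repository and the generation settings are illustrative.

```python
# Minimal sketch of chatting with the model via its tokenizer chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/llama-3-tulu-v2.5-8b-uf-mean-8b-uf-rm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Your message here!"}]
# add_generation_prompt should append "<|assistant|>\n" per the format above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```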

Model Family

| Preference Data / Prompts Data | DPO Models | PPO Models | Reward Models | Value Models |
|---|---|---|---|---|
| ultrafeedback_mean_aspects | tulu-v2.5-dpo-13b-uf-mean | tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm | tulu-v2.5-70b-uf-rm | tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-value |
| preference_big_mixture | | tulu-v2.5-ppo-13b-uf-mean-13b-mix-rm | tulu-v2.5-13b-preference-mix-rm | tulu-v2.5-ppo-13b-uf-mean-13b-mix-rm-value |
| preference_big_mixture | | tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm | tulu-v2.5-70b-preference-mix-rm | tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm-value |
| ultrafeedback_mean_aspects | | tulu-v2.5-ppo-13b-uf-mean | tulu-v2.5-13b-uf-rm | tulu-v2.5-ppo-13b-uf-mean-13b-uf-rm-value |
| ultrafeedback_mean_aspects | | tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-mixed-prompts | tulu-v2.5-70b-uf-rm* (with extra prompts) | tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-mixed-prompts-value |
| hh_rlhf_60k | tulu-v2.5-dpo-13b-hh-rlhf-60k | tulu-v2.5-ppo-13b-hh-rlhf-60k | tulu-v2.5-13b-hh-rlhf-60k-rm | |
| chatbot_arena_2023 | tulu-v2.5-dpo-13b-chatbot-arena-2023 | tulu-v2.5-ppo-13b-chatbot-arena-2023 | tulu-v2.5-13b-chatbot-arena-2023-rm | |
| stack_exchange_60k | tulu-v2.5-dpo-13b-stackexchange-60k | tulu-v2.5-ppo-13b-stackexchange-60k | tulu-v2.5-13b-stackexchange-60k-rm | |
| nectar_60k | N/A | tulu-v2.5-ppo-13b-nectar-60k | tulu-v2.5-13b-nectar-60k-rm | |
| nectar | tulu-v2.5-dpo-13b-nectar | | | |
| helpsteer | tulu-v2.5-dpo-13b-helpsteer | | | |
| shp2 | tulu-v2.5-dpo-13b-shp2 | | | |
| stack_exchange_paired | tulu-v2.5-dpo-13b-stackexchange | | | |
| ultrafeedback_overall | tulu-v2.5-dpo-13b-uf-overall | | | |
| capybara | tulu-v2.5-dpo-13b-capybara | | | |
| prm800k_pairs_phase2 | tulu-v2.5-dpo-13b-prm-phase-2 | | | |
| hh_rlhf | tulu-v2.5-dpo-13b-hh-rlhf | | | |
| chatbot_arena_2024 | tulu-v2.5-dpo-13b-chatbot-arena-2024 | | | |
| alpaca_farm_human_pref | tulu-v2.5-dpo-13b-alpacafarm-human-pref | | | |
| alpaca_farm_gpt4_pref | tulu-v2.5-dpo-13b-alpacafarm-gpt4-pref | | | |
| orca_dpo_pairs | tulu-v2.5-dpo-13b-argilla-orca-pairs | | | |

*The extra prompts are all the prompts in the prompts dataset; by default, only the ultrafeedback_prompts split is used.

Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed version of the Tulu V2 mix dataset, which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs. We then further aligned the model using PPO with the reward model, prompts, and dataset mentioned above.

Bias, Risks, and Limitations

The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus used to train the base Llama 3 models were; however, it likely included a mix of web data and technical sources such as books and code. See the Falcon 180B model card for an example of this.

Training hyperparameters

The following hyperparameters were used during PPO training (an illustrative optimizer and scheduler sketch follows the list):

  • learning_rate: 1e-06
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1.0
  • KL penalty coefficient: 0.05
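For illustration only (this is not the open-instruct training entry point), the settings above correspond roughly to the following optimizer and scheduler setup:

```python
# Illustrative sketch of the optimizer/scheduler implied by the hyperparameters above;
# the policy module and step count are placeholders, not the actual PPO training code.
import torch
from transformers import get_linear_schedule_with_warmup

learning_rate = 1e-6
kl_penalty_coefficient = 0.05  # weight on the KL term keeping the policy near its starting point

policy = torch.nn.Linear(8, 8)  # placeholder for the policy model
optimizer = torch.optim.Adam(policy.parameters(), lr=learning_rate, betas=(0.9, 0.999), eps=1e-8)

num_training_steps = 1000  # placeholder: depends on dataset size and the batch size of 64
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),  # warmup ratio 0.1
    num_training_steps=num_training_steps,
)
```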

Citation

If you find Tulu 2.5 useful in your work, please cite it with:

@misc{ivison2024unpacking,
      title={{Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback}},
      author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2406.09279},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}