---
model-index:
- name: tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm-mixed-prompts
  results: []
datasets:
- allenai/tulu-2.5-preference-data
- allenai/tulu-v2-sft-mixture
language:
- en
base_model: allenai/tulu-2-dpo-13b
license: apache-2.0
---
<center>
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-2.5/tulu_25_banner.png" alt="Tulu 2.5 banner image" width="800px"/>
</center>

# Model Card for Tulu V2.5 PPO 13B - UltraFeedback Mean w. 70B UF RM & Mixed Prompts

Tulu is a series of language models trained to act as helpful assistants.
Tulu V2.5 is a series of models trained using DPO and PPO, starting from the [Tulu 2 suite](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101).
To train this model, we used a 70B reward model trained on the UltraFeedback data, and then used a mixture of prompts during PPO training.

For more details, read the paper:
[Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback](https://link.todo).

## Model description

- **Model type:** One model belonging to a suite of RLHF-tuned chat models trained on a mix of publicly available, synthetic, and human-created datasets.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)

### Model Sources

- **Repository:** https://github.com/allenai/open-instruct
- **Dataset:** The prompts used to train this model during PPO training can be found [here](https://huggingface.co/datasets/allenai/tulu-2.5-prompts) - specifically the `ultrafeedback_code_math_prompts` split (see the loading sketch after this list).
- **Model Family:** The collection of related models can be found [here](https://huggingface.co/collections/allenai/tulu-v25-suite-66676520fd578080e126f618).
- **Reward Model:** The reward model used during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-70b-uf-rm), and the data used to train it [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `ultrafeedback_mean_aspects` split.
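
For example, a minimal sketch of inspecting these data with the `datasets` library. The split names follow the descriptions above; depending on how the datasets are organized, they may be exposed as configurations rather than splits, so check the dataset cards if loading fails:

```python
# Minimal sketch: load the PPO prompts and the RM preference data.
# Split names are taken from the bullets above; adjust per the dataset
# cards if they are configurations rather than splits.
from datasets import load_dataset

prompts = load_dataset(
    "allenai/tulu-2.5-prompts",
    split="ultrafeedback_code_math_prompts",
)
preferences = load_dataset(
    "allenai/tulu-2.5-preference-data",
    split="ultrafeedback_mean_aspects",
)

print(prompts[0])      # one PPO prompt record
print(preferences[0])  # one preference (chosen/rejected) record
```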

## Input Format

The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```

For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
We have included a [chat template](https://huggingface.co/docs/transformers/main/en/chat_templating) in the tokenizer implementing this format.
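
For example, a minimal generation sketch using the bundled chat template (the repository id is assumed from the model name above, and the dtype/device settings are illustrative, not requirements):

```python
# Minimal sketch: generate with the tokenizer's chat template, which
# inserts the <|user|>/<|assistant|> markers (and the trailing newline).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm-mixed-prompts"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Your message here!"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends "<|assistant|>\n" per the template
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```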

## Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
We then further aligned the model with a [Jax PPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_ppo.py) built on [EasyLM](https://github.com/young-geng/EasyLM), using the dataset mentioned above.

## Bias, Risks, and Limitations

The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base Llama 2 models are also unknown, though the corpus likely included a mix of web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.

### Training hyperparameters

The following hyperparameters were used during PPO training:
- learning_rate: 1e-06
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- KL penalty coefficient: 0.0325 (see the sketch after this list)
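
As background on the KL penalty coefficient: PPO-based RLHF typically shapes the reward by subtracting a per-token KL term between the policy and the frozen reference model from the reward-model score. The sketch below illustrates that shaping under common conventions (RM score added at the final token); it is illustrative only, not the actual EasyLM trainer code linked above.

```python
# Illustrative sketch of KL-shaped rewards in PPO-style RLHF; the real
# trainer is the EasyLM implementation linked earlier in this card.
import numpy as np

KL_COEF = 0.0325  # the KL penalty coefficient reported above

def shaped_rewards(rm_score, policy_logprobs, ref_logprobs):
    """Combine a sequence-level RM score with per-token KL penalties.

    rm_score: scalar reward-model score for the full response.
    policy_logprobs / ref_logprobs: per-token log-probs of the sampled
    tokens under the current policy and the frozen reference model.
    """
    kl = np.asarray(policy_logprobs) - np.asarray(ref_logprobs)
    rewards = -KL_COEF * kl  # penalize drift away from the reference model
    rewards[-1] += rm_score  # RM score commonly added at the final token
    return rewards

# Example: a 4-token response that the reward model scored 0.8.
print(shaped_rewards(0.8, [-1.0, -0.5, -2.0, -0.1], [-1.1, -0.7, -1.5, -0.3]))
```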

## Citation

If you find Tulu 2.5 useful in your work, please cite it with:

```
@misc{ivison2024unpacking,
      title={{Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback}},
      author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```