Update README.md
README.md CHANGED
@@ -37,7 +37,7 @@ For more details, read the paper:
 - **Dataset:** Prompts used to train this model during the PPO training can be found [here](https://huggingface.co/datasets/allenai/tulu-2.5-prompts) - specifically the `ultrafeedback_code_math_prompts` split.
 - **Model Family:** The collection of related models can be found [here](https://huggingface.co/collections/allenai/tulu-v25-suite-66676520fd578080e126f618).
 - **Reward Model:** The reward model used during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-70b-uf-rm), and the data used to train it [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `ultrafeedback_mean_aspects` split.
-
+- **Value Model:** The value model produced during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-uf-rm-mixed-prompts-value).
 
 ## Input Format
 
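For context, the sketch below shows one way the artifacts linked in the diff might be pulled from the Hugging Face Hub with the standard `datasets` and `transformers` APIs. The dataset split name comes from the README text above; the prompt column name, the sequence-classification head for the reward model, and the loading kwargs are assumptions, so check the linked dataset and model cards before relying on it.

```python
# Minimal sketch, assuming the linked repos load with standard HF APIs.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Prompts used during PPO training. The README calls
# `ultrafeedback_code_math_prompts` a split; it may instead be exposed
# as a subset/config on the Hub.
prompts = load_dataset(
    "allenai/tulu-2.5-prompts", split="ultrafeedback_code_math_prompts"
)

# Reward model used during PPO training. Assumed to load as a
# sequence-classification head; at ~70B parameters it needs several GPUs
# or CPU offloading (via `accelerate`).
rm_name = "allenai/tulu-v2.5-70b-uf-rm"
tokenizer = AutoTokenizer.from_pretrained(rm_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(
    rm_name, device_map="auto", torch_dtype="auto"
)

# Score one prompt with the reward model. The column name "prompt" is an
# assumption; fall back to the raw record if it is absent.
record = prompts[0]
text = record["prompt"] if "prompt" in prompts.column_names else str(record)
inputs = tokenizer(text, return_tensors="pt").to(reward_model.device)
score = reward_model(**inputs).logits.squeeze()
print(float(score))
```

This is only an illustration of how the pieces referenced in the change fit together; the model cards linked above remain the authoritative usage instructions.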