---
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
pipeline_tag: text-generation
---
[Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models](https://arxiv.org/abs/2401.01335)
# zephyr-7b-sft-full-spin-iter3
This model is a self-play fine-tuned model at iteration 3 from [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full), trained on synthetic data derived from the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
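For context, SPIN trains each iteration against the previous one: ground-truth SFT responses act as "winner" completions and responses generated by the previous iterate act as "loser" completions. The pairwise logistic objective below is a summary paraphrased from the linked paper, not something stated in this card:

$$
\mathcal{L}_{\text{SPIN}}(\theta) = \mathbb{E}_{x \sim q(\cdot),\; y \sim p_{\mathrm{data}}(\cdot \mid x),\; y' \sim p_{\theta_t}(\cdot \mid x)} \left[ \ell\!\left( \lambda \log \frac{p_\theta(y \mid x)}{p_{\theta_t}(y \mid x)} - \lambda \log \frac{p_\theta(y' \mid x)}{p_{\theta_t}(y' \mid x)} \right) \right]
$$

where $\ell(t) = \log(1 + e^{-t})$ is the logistic loss, $p_{\theta_t}$ is the model from the previous iteration, $y$ is a ground-truth response from the dataset, and $y'$ is a response generated by the previous iterate.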
## Model Details
### Model Description
- Model type: A 7B-parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: alignment-handbook/zephyr-7b-sft-full (based on mistralai/Mistral-7B-v0.1)
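
### Usage

A minimal inference sketch using the standard `transformers` chat API. The repo id below is an assumption (the upstream SPIN release); swap in the id of the repository you are actually loading from.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UCLA-AGI/zephyr-7b-sft-full-spin-iter3"  # assumed upstream repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a bf16-capable GPU
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Summarize self-play fine-tuning in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```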
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-07
- train_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- optimizer: RMSProp
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
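
A sketch mapping the hyperparameters above onto `transformers` `TrainingArguments`; the per-device batch size of 8 across 8 devices gives the listed total batch size of 64. This is illustrative only, not the authors' actual training script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-7b-sft-full-spin-iter3",
    learning_rate=1e-7,
    per_device_train_batch_size=8,
    seed=42,
    optim="rmsprop",            # matches the listed RMSProp optimizer
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2.0,
    bf16=True,                  # assumption; common for Mistral-7B fine-tuning
)
```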
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_UCLA-AGI__test_final)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 63.70 |
| ARC (25-shot) | 66.13 |
| HellaSwag (10-shot) | 85.85 |
| MMLU (5-shot) | 61.51 |
| TruthfulQA (0-shot) | 57.89 |
| Winogrande (5-shot) | 76.64 |
| GSM8K (5-shot) | 34.19 |
## Citation
```
@misc{chen2024selfplay,
title={Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
author={Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
year={2024},
eprint={2401.01335},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```