---
license: mit
base_model: vicgalle/gpt2-open-instruct-v1
tags:
- generated_from_trainer
- Transformers
- GPT2
model-index:
- name: hh-rlhf
  results: []
datasets:
- Anthropic/hh-rlhf
- hakurei/open-instruct-v1
tokenizers:
- GPT2Tokenizer
language:
- en
library_name: transformers
metrics:
- bleu
---

# hh-rlhf

This model is a fine-tuned version of [vicgalle/gpt2-open-instruct-v1](https://huggingface.co/vicgalle/gpt2-open-instruct-v1) on a 15k-example subset of the Anthropic/hh-rlhf dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1534

## Model description

GPT2 open instruct was trained fully on the open-instruct dataset. This fine-tune reimagines its single LM head as a partial RLHF adapter, with subtle reinforcement from the human-preference data.

## Intended uses & limitations

Intended for studying the intersection of instruct-tuned models and prompting, with a focus on subtle prompt exchanges. The model probably needs substantial refinement at this point. A hedged usage sketch is given under "Usage sketch" below.

## Training and evaluation data

Training used a 15,000-example subset of Anthropic/hh-rlhf for training and a 500-example subset for evaluation (a hedged preparation sketch appears under "Data preparation sketch" below):

```python
Train dataset size: 15000
Test dataset size: 500

Dataset({
    features: ['chosen', 'rejected'],
    num_rows: 15000
})
Dataset({
    features: ['chosen', 'rejected'],
    num_rows: 500
})
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` reconstruction appears under "Hyperparameter sketch" below):
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3108        | 1.0   | 7500  | 2.1799          |
| 2.265         | 2.0   | 15000 | 2.1632          |
| 2.2507        | 3.0   | 22500 | 2.1567          |
| 2.2519        | 4.0   | 30000 | 2.1534          |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
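
## Usage sketch

A minimal sketch of loading this checkpoint for generation with the `transformers` `pipeline` API. The repository id `your-username/hh-rlhf` is a placeholder, not a confirmed model id, and the Alpaca-style prompt format is an assumption carried over from the open-instruct base model; adjust both to your setup.

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub id of this checkpoint.
generator = pipeline("text-generation", model="your-username/hh-rlhf")

# Assumed Alpaca-style instruct format inherited from the base model.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain why the sky is blue.\n\n### Response:\n"
)

# Sampling settings are illustrative, not values reported on this card.
outputs = generator(prompt, max_new_tokens=64, do_sample=True, top_p=0.9)
print(outputs[0]["generated_text"])
```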
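
## Data preparation sketch

The exact subsetting code is not published; the sketch below shows one way the 15,000/500 `chosen`/`rejected` splits printed above could be produced with the `datasets` library. The shuffle seed is an assumption.

```python
from datasets import load_dataset

# Anthropic/hh-rlhf ships 'train' and 'test' splits with
# 'chosen' and 'rejected' text columns.
raw = load_dataset("Anthropic/hh-rlhf")

# Assumed seed; only the subset sizes are reported on this card.
train_ds = raw["train"].shuffle(seed=42).select(range(15_000))
test_ds = raw["test"].shuffle(seed=42).select(range(500))

print(f"Train dataset size: {len(train_ds)}")  # 15000
print(f"Test dataset size: {len(test_ds)}")    # 500
print(train_ds)  # Dataset({features: ['chosen', 'rejected'], num_rows: 15000})
```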
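
## Hyperparameter sketch

A hedged mapping of the hyperparameter list above onto `transformers.TrainingArguments` (Transformers 4.31.0). The `output_dir` and evaluation cadence are assumptions; the remaining values mirror the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hh-rlhf",          # assumed, not reported on this card
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=4,
    evaluation_strategy="epoch",   # assumed from the per-epoch validation losses
)
```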