---
library_name: transformers
license: apache-2.0
language:
- ja
datasets:
- HachiML/self-rewarding_AIFT_MSv0.3_lora
tags:
- self-rewarding
---

# Mistral-7B-v0.3-m3-lora

- This model was created by merging the adapter from [HachiML/Mistral-7B-v0.3-dpo-lora_sr_m3_lr1e-5_3ep](https://huggingface.co/HachiML/Mistral-7B-v0.3-dpo-lora_sr_m3_lr1e-5_3ep) into its base model (a hedged merge sketch follows this list).
- It is a fine-tuned version of [HachiML/Mistral-7B-v0.3-m2-lora](https://huggingface.co/HachiML/Mistral-7B-v0.3-m2-lora) trained on the following dataset:
  - [HachiML/self-rewarding_AIFT_MSv0.3_lora](https://huggingface.co/datasets/HachiML/self-rewarding_AIFT_MSv0.3_lora) (split=AIFT_M2)
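
The merge described above could be reproduced along these lines with PEFT. This is a minimal sketch, assuming the adapter repository contains a standard PEFT LoRA adapter; it is not the author's published script.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "HachiML/Mistral-7B-v0.3-m2-lora",
    torch_dtype=torch.bfloat16,
)

# Attach the DPO LoRA adapter, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, "HachiML/Mistral-7B-v0.3-dpo-lora_sr_m3_lr1e-5_3ep")
merged = model.merge_and_unload()

merged.save_pretrained("Mistral-7B-v0.3-m3-lora")
```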

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. A minimal usage example follows the details below.

- **Developed by:** [HachiML](https://huggingface.co/HachiML)
- **Model type:** Mistral-7B
- **Language(s) (NLP):** Japanese
- **License:** Apache-2.0
- **Finetuned from model:** [HachiML/Mistral-7B-v0.3-m2-lora](https://huggingface.co/HachiML/Mistral-7B-v0.3-m2-lora)
- **Fine-tuning method:** DPO
- **Fine-tuning dataset:** [HachiML/self-rewarding_AIFT_MSv0.3_lora](https://huggingface.co/datasets/HachiML/self-rewarding_AIFT_MSv0.3_lora) (split=AIFT_M2)
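
A minimal inference sketch using the standard 🤗 transformers API. The card does not specify a chat or prompt template, so a plain-text Japanese prompt is assumed here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HachiML/Mistral-7B-v0.3-m3-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example Japanese prompt; no special template is applied.
prompt = "日本の首都はどこですか？"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```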

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged training sketch follows this list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
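
Since the training script itself is not published, the following is only a sketch of how these hyperparameters could map onto trl's DPOTrainer. The use of trl, the dataset column layout, and the exact argument names (which vary across trl versions) are all assumptions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "HachiML/Mistral-7B-v0.3-m2-lora"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs produced by the self-rewarding pipeline.
dataset = load_dataset("HachiML/self-rewarding_AIFT_MSv0.3_lora", split="AIFT_M2")

# Mirror the hyperparameters listed above; the Adam betas/epsilon are the defaults.
args = DPOConfig(
    output_dir="Mistral-7B-v0.3-dpo-lora_sr_m3_lr1e-5_3ep",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=3,
)

trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```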

### Training results

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/siseikatu8/huggingface/runs/wbj12r5j)

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1