---
library_name: transformers
license: apache-2.0
language:
  - ja
datasets:
  - HachiML/self-rewarding_AIFT_MSv0.3_lora
tags:
  - self-rewarding
---

# Mistral-7B-v0.3-m3-lora

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. The repository name, the PEFT dependency below, and the dataset metadata indicate a LoRA adapter fine-tuned from Mistral-7B-v0.3 on the Japanese self-rewarding instruction dataset HachiML/self-rewarding_AIFT_MSv0.3_lora.
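
A minimal usage sketch follows. Two details are inferred rather than confirmed by this card: the adapter repository id (`HachiML/Mistral-7B-v0.3-m3-lora`, from the title) and the base model (`mistralai/Mistral-7B-v0.3`, from the name); adjust both if they differ.

```python
# Minimal loading sketch for a PEFT LoRA adapter on top of Mistral-7B-v0.3.
# ASSUMPTIONS (not stated in this card): the adapter repo id and base model id.
# Version pins from "Framework versions" below:
#   pip install peft==0.11.1 transformers==4.41.0

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.3"           # assumed base model
adapter_id = "HachiML/Mistral-7B-v0.3-m3-lora"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

# The exact prompt template is not documented here; a plain prompt for illustration.
inputs = tokenizer("日本の首都はどこですか?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```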

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
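
As a rough reconstruction (the original training script is not included in this card), these values map onto `transformers.TrainingArguments` as sketched below; the output directory is a placeholder, and the batch sizes are assumed to be per-device.

```python
# Sketch of the training configuration listed above.
# ASSUMPTIONS: output_dir is a placeholder; batch sizes are per-device.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-7b-v0.3-m3-lora",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,  # assumed per-device
    per_device_eval_batch_size=8,    # assumed per-device
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=3,
)
```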

### Training results

Visualize in Weights & Biases

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1