RLHF with PPOTrainer and LoRA
Hyperparameters
#ppo: learning_rate=5e-6, batch_size=32, mini_batch_size=1, horizon=10000, cliprange=0.2, cliprange_value=0.2, lam=0.95, target_kl=2, use_score_scaling=True, log_with='wandb'
#lora: r=16, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM"
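The hyperparameters above appear to map onto TRL's `PPOConfig` and PEFT's `LoraConfig`; a minimal configuration sketch, assuming the classic TRL PPOTrainer API (pre-0.12) where these parameter names existed:

```python
# Config sketch only: assumes trl (< 0.12) and peft are installed.
from trl import PPOConfig
from peft import LoraConfig

# PPO settings as listed in the hyperparameter section.
ppo_config = PPOConfig(
    learning_rate=5e-6,
    batch_size=32,
    mini_batch_size=1,      # gradient updates on one sample at a time
    horizon=10000,
    cliprange=0.2,          # PPO policy clipping
    cliprange_value=0.2,    # value-function clipping
    lam=0.95,               # GAE lambda
    target_kl=2,
    use_score_scaling=True, # normalize reward scores
    log_with="wandb",
)

# LoRA adapter settings as listed in the hyperparameter section.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```

The LoRA config would be passed when loading the policy model (e.g. via `AutoModelForCausalLMWithValueHead.from_pretrained(..., peft_config=lora_config)`), and the PPO config to `PPOTrainer`.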