RyanYr committed
Commit 3cf8b0d
1 Parent(s): 9c11f40

Model save

Files changed (2):
  1. README.md +74 -0
  2. generation_config.json +12 -0
README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ license: llama3.1
+ base_model: RyanYr/reward-judge_SFT-genRM_pilot-exp
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: reward-judge_iter-dpo-genRM_pilot-exp_iter1
+   results: []
+ ---
+
13
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
14
+ should probably proofread and complete it, then remove this comment. -->
15
+
16
+ # reward-judge_iter-dpo-genRM_pilot-exp_iter1
17
+
18
+ This model is a fine-tuned version of [RyanYr/reward-judge_SFT-genRM_pilot-exp](https://huggingface.co/RyanYr/reward-judge_SFT-genRM_pilot-exp) on the None dataset.
19
+ It achieves the following results on the evaluation set:
20
+ - Loss: 0.2657
21
+ - Rewards/chosen: -1.3929
22
+ - Rewards/rejected: -3.4254
23
+ - Rewards/accuracies: 0.8400
24
+ - Rewards/margins: 2.0324
25
+ - Logps/rejected: -188.0495
26
+ - Logps/chosen: -189.5504
27
+ - Logits/rejected: -0.7024
28
+ - Logits/chosen: -0.6169
29
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-07
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 128
+ - total_train_batch_size: 512
+ - total_eval_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 2
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.5701 | 0.9370 | 75 | 0.5377 | 0.1101 | -0.2637 | 0.8400 | 0.3738 | -156.4332 | -174.5201 | -0.6949 | -0.6098 |
+ | 0.2655 | 1.8739 | 150 | 0.2657 | -1.3929 | -3.4254 | 0.8400 | 2.0324 | -188.0495 | -189.5504 | -0.7024 | -0.6169 |
+
+
+ ### Framework versions
+
+ - Transformers 4.43.4
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.19.1
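
In TRL's DPO trainer, Rewards/chosen and Rewards/rejected are the beta-scaled differences between the policy's and the frozen reference model's log-probabilities of the chosen and rejected completions, and Rewards/margins is their gap; the jump from 0.3738 to 2.0324 between the two evaluations shows the policy separating preferred from rejected judgments more strongly. Below is a minimal, hypothetical sketch of how a run with these hyperparameters could be set up, not the author's actual script: the dataset path and beta value are assumptions (neither is recorded in the card), and the API shown assumes a TRL version contemporary with Transformers 4.43.4.

```python
# Hypothetical reproduction sketch, not the author's actual training script.
# Assumes TRL ~0.9 and a preference dataset with "prompt", "chosen",
# "rejected" columns; the real training dataset is not recorded in this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "RyanYr/reward-judge_SFT-genRM_pilot-exp"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

args = DPOConfig(
    output_dir="reward-judge_iter-dpo-genRM_pilot-exp_iter1",
    learning_rate=2e-7,
    per_device_train_batch_size=1,    # train_batch_size: 1
    per_device_eval_batch_size=1,     # eval_batch_size: 1
    gradient_accumulation_steps=128,  # 1 x 4 GPUs x 128 = 512 effective
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    seed=42,
    beta=0.1,  # TRL default; the actual beta is not recorded in the card
)

# Placeholder path; the actual preference data is unspecified.
train_dataset = load_dataset("json", data_files="preferences.jsonl")["train"]

trainer = DPOTrainer(
    model,
    ref_model=None,  # with None, TRL snapshots the policy as the frozen reference
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```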
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "bos_token_id": 128000,
+   "do_sample": true,
+   "eos_token_id": [
+     128001,
+     128008,
+     128009
+   ],
+   "temperature": 0.6,
+   "top_p": 0.9,
+   "transformers_version": "4.43.4"
+ }
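
Because do_sample, temperature, and top_p are stored in generation_config.json, generate() applies them by default once the model is loaded. A minimal usage sketch follows; the repo id is inferred from the model-index name in README.md and is an assumption, and the prompt is purely illustrative.

```python
# Minimal generation sketch; the repo id below is inferred from the
# model-index name in README.md and may not match the actual Hub path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RyanYr/reward-judge_iter-dpo-genRM_pilot-exp_iter1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Review the following answer and judge whether it is correct."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt")

# generate() picks up the saved generation_config.json automatically:
# do_sample=True, temperature=0.6, top_p=0.9,
# eos_token_id in {128001, 128008, 128009}
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```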