CharlesLi committed
Commit 536bdce
1 parent: 05f53ed

Model save
README.md CHANGED
@@ -3,6 +3,7 @@ library_name: transformers
 tags:
 - trl
 - dpo
 - generated_from_trainer
 model-index:
 - name: OpenELM-1_1B-DPO-full-least-similar
@@ -16,15 +17,15 @@ should probably proofread and complete it, then remove this comment. -->

 This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 201.4325
- - Rewards/chosen: -684.0
- - Rewards/rejected: -592.0
- - Rewards/accuracies: 0.4277
- - Rewards/margins: -93.0
- - Logps/rejected: -59392.0
- - Logps/chosen: -68608.0
- - Logits/rejected: 5.5625
- - Logits/chosen: 5.1562

 ## Model description

@@ -61,39 +62,39 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.6914 | 0.1047 | 100 | 0.6991 | -0.3457 | -0.3398 | 0.4199 | -0.0053 | -322.0 | -352.0 | -9.4375 | -9.8125 |
- | 0.6914 | 0.2094 | 200 | 17.0172 | -58.5 | -51.25 | 0.4238 | -7.3125 | -5408.0 | -6176.0 | 5.25 | 4.5312 |
- | 0.6914 | 0.3141 | 300 | 97.3562 | -330.0 | -286.0 | 0.4316 | -44.75 | -28800.0 | -33280.0 | -0.9297 | -0.9531 |
- | 0.6914 | 0.4188 | 400 | 103.5919 | -352.0 | -304.0 | 0.4316 | -48.0 | -30592.0 | -35328.0 | 0.4180 | 0.2930 |
- | 0.6914 | 0.5236 | 500 | 109.8674 | -372.0 | -322.0 | 0.4336 | -50.75 | -32512.0 | -37632.0 | 1.4766 | 1.3047 |
- | 0.6914 | 0.6283 | 600 | 116.7363 | -396.0 | -342.0 | 0.4316 | -54.0 | -34560.0 | -39936.0 | 1.8828 | 1.6641 |
- | 0.6914 | 0.7330 | 700 | 123.6395 | -420.0 | -362.0 | 0.4336 | -57.25 | -36608.0 | -42240.0 | 3.1406 | 2.8438 |
- | 0.6914 | 0.8377 | 800 | 130.5069 | -442.0 | -382.0 | 0.4316 | -60.25 | -38400.0 | -44544.0 | 4.1562 | 3.7656 |
- | 0.6914 | 0.9424 | 900 | 137.3969 | -466.0 | -402.0 | 0.4277 | -63.5 | -40448.0 | -46848.0 | 4.25 | 3.8594 |
- | 0.6914 | 1.0471 | 1000 | 143.7038 | -488.0 | -422.0 | 0.4297 | -66.5 | -42496.0 | -49152.0 | 5.6875 | 5.1562 |
- | 0.6914 | 1.1518 | 1100 | 150.1531 | -510.0 | -440.0 | 0.4297 | -69.5 | -44288.0 | -51200.0 | 6.5938 | 6.0 |
- | 0.6914 | 1.2565 | 1200 | 156.7057 | -532.0 | -460.0 | 0.4297 | -72.5 | -46336.0 | -53504.0 | 5.6875 | 5.1562 |
- | 0.6914 | 1.3613 | 1300 | 162.7056 | -552.0 | -476.0 | 0.4336 | -75.0 | -48128.0 | -55552.0 | 5.5938 | 5.1562 |
- | 0.6914 | 1.4660 | 1400 | 168.4744 | -572.0 | -494.0 | 0.4316 | -77.5 | -49664.0 | -57600.0 | 5.8438 | 5.3438 |
- | 0.6914 | 1.5707 | 1500 | 173.8489 | -588.0 | -510.0 | 0.4297 | -80.0 | -51200.0 | -59392.0 | 6.0312 | 5.5312 |
- | 0.6914 | 1.6754 | 1600 | 178.6226 | -608.0 | -524.0 | 0.4336 | -82.5 | -52736.0 | -60928.0 | 5.8438 | 5.375 |
- | 0.6914 | 1.7801 | 1700 | 183.2413 | -620.0 | -536.0 | 0.4297 | -84.5 | -54016.0 | -62464.0 | 5.7812 | 5.3125 |
- | 0.6914 | 1.8848 | 1800 | 186.8875 | -636.0 | -548.0 | 0.4277 | -86.0 | -55040.0 | -64000.0 | 5.5938 | 5.1562 |
- | 0.6914 | 1.9895 | 1900 | 190.4393 | -648.0 | -560.0 | 0.4316 | -87.5 | -56064.0 | -65024.0 | 5.8125 | 5.375 |
- | 0.6914 | 2.0942 | 2000 | 193.2805 | -656.0 | -568.0 | 0.4297 | -89.0 | -57088.0 | -66048.0 | 5.5312 | 5.125 |
- | 0.6914 | 2.1990 | 2100 | 195.6470 | -664.0 | -576.0 | 0.4277 | -90.5 | -57600.0 | -66560.0 | 5.4688 | 5.0625 |
- | 0.6914 | 2.3037 | 2200 | 197.7068 | -672.0 | -580.0 | 0.4238 | -91.0 | -58368.0 | -67584.0 | 5.4688 | 5.0625 |
- | 0.6914 | 2.4084 | 2300 | 199.1925 | -676.0 | -584.0 | 0.4238 | -92.0 | -58880.0 | -68096.0 | 5.5 | 5.125 |
- | 0.6914 | 2.5131 | 2400 | 200.0977 | -680.0 | -588.0 | 0.4258 | -92.5 | -59136.0 | -68096.0 | 5.5312 | 5.125 |
- | 0.6914 | 2.6178 | 2500 | 200.9000 | -684.0 | -588.0 | 0.4277 | -92.5 | -59392.0 | -68608.0 | 5.5625 | 5.1562 |
- | 0.6914 | 2.7225 | 2600 | 201.1795 | -684.0 | -592.0 | 0.4277 | -92.5 | -59392.0 | -68608.0 | 5.5938 | 5.1875 |
- | 0.6914 | 2.8272 | 2700 | 201.3105 | -684.0 | -592.0 | 0.4277 | -93.0 | -59392.0 | -68608.0 | 5.5938 | 5.1875 |
- | 0.6914 | 2.9319 | 2800 | 201.4325 | -684.0 | -592.0 | 0.4277 | -93.0 | -59392.0 | -68608.0 | 5.5625 | 5.1562 |


 ### Framework versions

 - Transformers 4.44.2
 - Pytorch 2.3.0
- - Datasets 2.21.0
 - Tokenizers 0.19.1
 
 tags:
 - trl
 - dpo
+ - alignment-handbook
 - generated_from_trainer
 model-index:
 - name: OpenELM-1_1B-DPO-full-least-similar
 
 This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
+ - Loss: 1.0088
+ - Rewards/chosen: -3.4062
+ - Rewards/rejected: -3.6406
+ - Rewards/accuracies: 0.5078
+ - Rewards/margins: 0.2354
+ - Logps/rejected: -652.0
+ - Logps/chosen: -660.0
+ - Logits/rejected: -13.375
+ - Logits/chosen: -13.625

 ## Model description
 
 
 | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
 |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.1864 | 0.1047 | 100 | 0.6764 | -0.4473 | -0.5547 | 0.5410 | 0.1084 | -344.0 | -364.0 | -14.5 | -14.6875 |
+ | 0.1156 | 0.2094 | 200 | 0.6999 | -0.8008 | -0.8984 | 0.5234 | 0.0991 | -378.0 | -398.0 | -11.4375 | -11.6875 |
+ | 0.1465 | 0.3141 | 300 | 0.7544 | -1.0547 | -1.1641 | 0.4883 | 0.1079 | -406.0 | -424.0 | -10.9375 | -11.375 |
+ | 0.1468 | 0.4188 | 400 | 0.7513 | -1.1016 | -1.1328 | 0.4707 | 0.0322 | -402.0 | -428.0 | -15.1875 | -15.25 |
+ | 0.1267 | 0.5236 | 500 | 0.7808 | -1.4219 | -1.4844 | 0.5176 | 0.0574 | -436.0 | -460.0 | -13.125 | -13.375 |
+ | 0.1435 | 0.6283 | 600 | 0.8511 | -1.7109 | -1.7422 | 0.5 | 0.0308 | -462.0 | -490.0 | -15.625 | -15.6875 |
+ | 0.1255 | 0.7330 | 700 | 0.8534 | -1.8438 | -2.0312 | 0.5137 | 0.1797 | -492.0 | -502.0 | -13.125 | -13.5625 |
+ | 0.1142 | 0.8377 | 800 | 0.8559 | -1.5938 | -1.6406 | 0.4922 | 0.0471 | -452.0 | -478.0 | -11.875 | -12.375 |
+ | 0.1596 | 0.9424 | 900 | 0.8358 | -2.0469 | -2.0938 | 0.4922 | 0.0510 | -498.0 | -524.0 | -14.875 | -15.0625 |
+ | 0.0231 | 1.0471 | 1000 | 0.8380 | -1.8828 | -2.0312 | 0.5078 | 0.1406 | -492.0 | -506.0 | -12.0625 | -12.5 |
+ | 0.0293 | 1.1518 | 1100 | 0.8317 | -2.2656 | -2.3906 | 0.5488 | 0.1328 | -528.0 | -544.0 | -12.4375 | -12.875 |
+ | 0.0171 | 1.2565 | 1200 | 0.8362 | -2.375 | -2.5156 | 0.5332 | 0.1367 | -540.0 | -556.0 | -12.1875 | -12.625 |
+ | 0.0174 | 1.3613 | 1300 | 0.8684 | -2.9375 | -3.0938 | 0.5195 | 0.1660 | -600.0 | -612.0 | -11.75 | -12.25 |
+ | 0.0191 | 1.4660 | 1400 | 0.8702 | -2.6875 | -2.8438 | 0.5156 | 0.1504 | -572.0 | -588.0 | -11.5625 | -12.0 |
+ | 0.036 | 1.5707 | 1500 | 0.9229 | -2.5781 | -2.7188 | 0.5098 | 0.1348 | -560.0 | -576.0 | -11.3125 | -11.8125 |
+ | 0.018 | 1.6754 | 1600 | 0.9154 | -2.8281 | -2.9844 | 0.5059 | 0.1523 | -588.0 | -600.0 | -12.4375 | -12.875 |
+ | 0.0304 | 1.7801 | 1700 | 0.9087 | -2.375 | -2.4531 | 0.4902 | 0.0820 | -536.0 | -556.0 | -13.5 | -13.75 |
+ | 0.0189 | 1.8848 | 1800 | 0.8959 | -2.6719 | -2.8125 | 0.5039 | 0.1367 | -568.0 | -584.0 | -13.0 | -13.375 |
+ | 0.0113 | 1.9895 | 1900 | 0.9094 | -2.7812 | -2.9531 | 0.5117 | 0.1738 | -584.0 | -596.0 | -13.375 | -13.625 |
+ | 0.0013 | 2.0942 | 2000 | 0.9535 | -2.9688 | -3.1875 | 0.5137 | 0.2207 | -608.0 | -616.0 | -13.5 | -13.75 |
+ | 0.0014 | 2.1990 | 2100 | 0.9798 | -3.1562 | -3.375 | 0.5176 | 0.2285 | -628.0 | -632.0 | -13.3125 | -13.5625 |
+ | 0.0013 | 2.3037 | 2200 | 0.9978 | -3.3281 | -3.5781 | 0.5137 | 0.2402 | -648.0 | -652.0 | -13.3125 | -13.5625 |
+ | 0.0011 | 2.4084 | 2300 | 1.0081 | -3.4531 | -3.6875 | 0.5137 | 0.2471 | -660.0 | -664.0 | -13.3125 | -13.5625 |
+ | 0.0017 | 2.5131 | 2400 | 1.0045 | -3.4219 | -3.6719 | 0.5117 | 0.2461 | -656.0 | -660.0 | -13.3125 | -13.5625 |
+ | 0.0013 | 2.6178 | 2500 | 1.0019 | -3.4062 | -3.6562 | 0.5117 | 0.2432 | -656.0 | -660.0 | -13.3125 | -13.5625 |
+ | 0.0038 | 2.7225 | 2600 | 1.0071 | -3.4062 | -3.6406 | 0.5059 | 0.2354 | -652.0 | -660.0 | -13.375 | -13.625 |
+ | 0.0009 | 2.8272 | 2700 | 1.0082 | -3.4062 | -3.6406 | 0.5098 | 0.2363 | -652.0 | -660.0 | -13.375 | -13.625 |
+ | 0.0011 | 2.9319 | 2800 | 1.0088 | -3.4062 | -3.6406 | 0.5078 | 0.2354 | -652.0 | -660.0 | -13.375 | -13.625 |


 ### Framework versions

 - Transformers 4.44.2
 - Pytorch 2.3.0
+ - Datasets 3.0.0
 - Tokenizers 0.19.1
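
For reference, the `Rewards/*` columns above are DPO implicit rewards. A minimal sketch of how such metrics can be derived, assuming TRL's convention that a sequence's reward is β·(policy log-prob − reference log-prob); the function name, inputs, and β value here are illustrative, not taken from this repository:

```python
def dpo_reward_metrics(policy_chosen_logps, policy_rejected_logps,
                       ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Compute DPO implicit rewards, margins, and accuracy from
    summed per-sequence log-probabilities (one entry per pair)."""
    # Implicit reward per sequence: beta * (log p_policy - log p_ref)
    chosen = [beta * (p - r) for p, r in zip(policy_chosen_logps, ref_chosen_logps)]
    rejected = [beta * (p - r) for p, r in zip(policy_rejected_logps, ref_rejected_logps)]
    # Margin per pair: chosen reward minus rejected reward
    margins = [c - r for c, r in zip(chosen, rejected)]
    # Accuracy: fraction of pairs where the chosen reward wins
    accuracy = sum(m > 0 for m in margins) / len(margins)
    return {
        "rewards/chosen": sum(chosen) / len(chosen),
        "rewards/rejected": sum(rejected) / len(rejected),
        "rewards/margins": sum(margins) / len(margins),
        "rewards/accuracies": accuracy,
    }
```

Under this reading, the negative margins in the old table mean the model scored rejected completions above chosen ones, while the new run's positive margins (~0.24) with accuracy near 0.51 indicate a weak but correctly signed preference.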
all_results.json CHANGED
@@ -1,9 +1,22 @@
 {
 "epoch": 3.0,
 "total_flos": 0.0,
- "train_loss": 0.69140625,
- "train_runtime": 12294.033,
 "train_samples": 61118,
- "train_samples_per_second": 14.914,
- "train_steps_per_second": 0.233
 }

 {
 "epoch": 3.0,
+ "eval_logits/chosen": 5.15625,
+ "eval_logits/rejected": 5.5625,
+ "eval_logps/chosen": -68608.0,
+ "eval_logps/rejected": -59392.0,
+ "eval_loss": 201.41949462890625,
+ "eval_rewards/accuracies": 0.427734375,
+ "eval_rewards/chosen": -684.0,
+ "eval_rewards/margins": -93.0,
+ "eval_rewards/rejected": -592.0,
+ "eval_runtime": 46.9081,
+ "eval_samples": 2000,
+ "eval_samples_per_second": 42.637,
+ "eval_steps_per_second": 0.682,
 "total_flos": 0.0,
+ "train_loss": 0.06583595570358879,
+ "train_runtime": 12397.8277,
 "train_samples": 61118,
+ "train_samples_per_second": 14.789,
+ "train_steps_per_second": 0.231
 }
eval_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "epoch": 3.0,
+ "eval_logits/chosen": 5.15625,
+ "eval_logits/rejected": 5.5625,
+ "eval_logps/chosen": -68608.0,
+ "eval_logps/rejected": -59392.0,
+ "eval_loss": 201.41949462890625,
+ "eval_rewards/accuracies": 0.427734375,
+ "eval_rewards/chosen": -684.0,
+ "eval_rewards/margins": -93.0,
+ "eval_rewards/rejected": -592.0,
+ "eval_runtime": 46.9081,
+ "eval_samples": 2000,
+ "eval_samples_per_second": 42.637,
+ "eval_steps_per_second": 0.682
+ }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:5d60ed799a3581abdeb65bfc52611fff846fe628e6e7dc5c1d6afee1b8b81851
 size 2159808696

 version https://git-lfs.github.com/spec/v1
+ oid sha256:55c35931741de17bfac79f2b793c9a41830572b062e2b91815c973413370841a
 size 2159808696
runs/Sep09_19-10-06_xe8545-a100-14/events.out.tfevents.1725915044.xe8545-a100-14.547977.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26f9c6b0eccb2c55e18b4287d15ffe729ec3632d681d76e6b248b30b789302fa
+ size 828
runs/Sep22_19-02-28_xe8545-a100-05/events.out.tfevents.1727025802.xe8545-a100-05.377977.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:058e0c3b50291efa62feb8deada8bd43629be8cf45cb44337ae1f4ff0b6691d4
+ size 225670
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
 "epoch": 3.0,
 "total_flos": 0.0,
- "train_loss": 0.69140625,
- "train_runtime": 12294.033,
 "train_samples": 61118,
- "train_samples_per_second": 14.914,
- "train_steps_per_second": 0.233
 }

 {
 "epoch": 3.0,
 "total_flos": 0.0,
+ "train_loss": 0.06583595570358879,
+ "train_runtime": 12397.8277,
 "train_samples": 61118,
+ "train_samples_per_second": 14.789,
+ "train_steps_per_second": 0.231
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:8ab32301159d7b893c276b25f86715b83587253c13cb0c7b019525287ab2e281
- size 7544

 version https://git-lfs.github.com/spec/v1
+ oid sha256:2803161e7143daf1925ad7c404dbd38cf4430f660a37a21c77856feaffdabca1
+ size 7608