jikaixuan committed
Commit 6366e7e
1 Parent(s): 251136a

Model save

README.md ADDED
@@ -0,0 +1,79 @@
+ ---
+ license: apache-2.0
+ base_model: mistralai/Mistral-7B-v0.1
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: zephyr-7b-dpo-qlora
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-7b-dpo-qlora
+
+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0136
+ - Rewards/chosen: -376.5464
+ - Rewards/rejected: -330.4243
+ - Rewards/accuracies: 0.4544
+ - Rewards/margins: -46.1221
+ - Logps/rejected: -33295.5859
+ - Logps/chosen: -37927.6367
+ - Neglected: 256.0
+ - Selected: 0.0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Neglected | Selected |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------:|:--------:|
+ | 0.6727 | 0.1 | 100 | 0.6631 | 0.0074 | -0.0332 | 0.7024 | 0.0405 | -256.4745 | -272.2623 | 256.0 | 0.0 |
+ | 0.0392 | 0.21 | 200 | 0.0276 | -119.9914 | -105.4188 | 0.4464 | -14.5726 | -10795.0420 | -12272.1426 | 256.0 | 0.0 |
+ | 0.0208 | 0.31 | 300 | 0.0199 | -281.3865 | -245.2151 | 0.4444 | -36.1714 | -24774.6660 | -28411.6465 | 256.0 | 0.0 |
+ | 0.0157 | 0.42 | 400 | 0.0161 | -353.7562 | -307.1862 | 0.4563 | -46.5699 | -30971.7832 | -35648.6172 | 256.0 | 0.0 |
+ | 0.0182 | 0.52 | 500 | 0.0148 | -331.5956 | -289.6645 | 0.4464 | -41.9311 | -29219.6113 | -33432.5625 | 256.0 | 0.0 |
+ | 0.013 | 0.63 | 600 | 0.0143 | -356.6841 | -312.4188 | 0.4544 | -44.2654 | -31495.0312 | -35941.4141 | 256.0 | 0.0 |
+ | 0.0165 | 0.73 | 700 | 0.0143 | -353.6940 | -310.5345 | 0.4504 | -43.1595 | -31306.6094 | -35642.4023 | 256.0 | 0.0 |
+ | 0.0145 | 0.84 | 800 | 0.0135 | -374.0797 | -328.2772 | 0.4544 | -45.8026 | -33080.8789 | -37680.9766 | 256.0 | 0.0 |
+ | 0.0195 | 0.94 | 900 | 0.0137 | -376.5184 | -330.4032 | 0.4544 | -46.1152 | -33293.4727 | -37924.8398 | 256.0 | 0.0 |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.0
+ - Pytorch 2.1.1+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.14.1
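
For reference, the effective batch size in the card follows from 4 samples per device × 4 GPUs × 4 gradient-accumulation steps = 64; over the 61,135 training samples this gives ⌊61135 / 64⌋ = 955 optimizer steps, matching `max_steps` in `trainer_state.json` below. The following is a minimal sketch of an equivalent `transformers.TrainingArguments` setup, for illustration only: the actual training script, and the DPO-specific settings such as beta or the reference model, are not part of this commit, and anything marked as an assumption below is not recorded in the card.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed in the model card above. The output
# directory is hypothetical; Adam betas (0.9, 0.999) and eps 1e-8 are simply
# the TrainingArguments defaults, so they are not set explicitly.
training_args = TrainingArguments(
    output_dir="zephyr-7b-dpo-qlora",   # hypothetical
    learning_rate=5e-6,
    per_device_train_batch_size=4,      # train_batch_size: 4
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    gradient_accumulation_steps=4,      # x 4 GPUs -> total_train_batch_size 64
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,                          # assumption; precision is not recorded in the card
    evaluation_strategy="steps",
    eval_steps=100,                     # eval_steps / save_steps from trainer_state.json
    save_steps=100,
    logging_steps=10,
)
```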
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:199ac48fea6b0c6e338ee63198970641e9bca8639d32bb77b1b9a2472f4ed062
+ oid sha256:9cb5c6fc7d4bdfeb0a0cf7462bf351aef0423bfa34b4b3b3c04ea9d593fe5fa3
  size 83945744
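
Only the Git LFS pointer changes here: the commit swaps the ~84 MB `adapter_model.safetensors` LoRA weights, not a full 7B checkpoint. Below is a minimal sketch of how such an adapter is typically loaded on top of the base model with `peft`, assuming a 4-bit (QLoRA-style) base; the adapter repository id shown is hypothetical.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

BASE_ID = "mistralai/Mistral-7B-v0.1"
ADAPTER_ID = "jikaixuan/zephyr-7b-dpo-qlora"  # hypothetical repo id for this adapter

# Quantize the base model to 4-bit NF4, as is typical when serving QLoRA adapters.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, quantization_config=bnb_config, device_map="auto"
)

# Attach the DPO-trained LoRA weights stored in adapter_model.safetensors.
model = PeftModel.from_pretrained(base, ADAPTER_ID)

inputs = tokenizer("Explain QLoRA in one sentence.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```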
all_results.json ADDED
@@ -0,0 +1,21 @@
+ {
+ "epoch": 1.0,
+ "eval_logps/chosen": -37927.63671875,
+ "eval_logps/rejected": -33295.5859375,
+ "eval_loss": 0.013621850870549679,
+ "eval_neglected": 256.0,
+ "eval_rewards/accuracies": 0.454365074634552,
+ "eval_rewards/chosen": -376.54638671875,
+ "eval_rewards/margins": -46.1220817565918,
+ "eval_rewards/rejected": -330.4242858886719,
+ "eval_runtime": 968.279,
+ "eval_samples": 2000,
+ "eval_samples_per_second": 2.066,
+ "eval_selected": 0.0,
+ "eval_steps_per_second": 0.065,
+ "train_loss": 0.11865393293933718,
+ "train_runtime": 42177.7253,
+ "train_samples": 61135,
+ "train_samples_per_second": 1.449,
+ "train_steps_per_second": 0.023
+ }
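
For context on the metric names in these result files: in the standard DPO setup, the logged rewards are beta-scaled log-probability ratios between the policy and the frozen reference model, and the margin is their difference on a chosen/rejected pair. The strongly negative `eval_rewards/chosen` and `eval_rewards/margins` therefore indicate that the policy's log-probabilities collapsed far below the reference, with chosen completions penalized even more than rejected ones. A sketch of the definitions, assuming the usual DPO convention (the beta used here is not recorded in this commit):

```latex
r_\theta(x, y) = \beta \bigl[ \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \bigr]

\text{rewards/margins} = r_\theta(x, y_{\mathrm{chosen}}) - r_\theta(x, y_{\mathrm{rejected}}),
\qquad
\text{rewards/accuracies} = \Pr\bigl[ r_\theta(x, y_{\mathrm{chosen}}) > r_\theta(x, y_{\mathrm{rejected}}) \bigr]
```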
eval_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "epoch": 1.0,
+ "eval_logps/chosen": -37927.63671875,
+ "eval_logps/rejected": -33295.5859375,
+ "eval_loss": 0.013621850870549679,
+ "eval_neglected": 256.0,
+ "eval_rewards/accuracies": 0.454365074634552,
+ "eval_rewards/chosen": -376.54638671875,
+ "eval_rewards/margins": -46.1220817565918,
+ "eval_rewards/rejected": -330.4242858886719,
+ "eval_runtime": 968.279,
+ "eval_samples": 2000,
+ "eval_samples_per_second": 2.066,
+ "eval_selected": 0.0,
+ "eval_steps_per_second": 0.065
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 1.0,
+ "train_loss": 0.11865393293933718,
+ "train_runtime": 42177.7253,
+ "train_samples": 61135,
+ "train_samples_per_second": 1.449,
+ "train_steps_per_second": 0.023
+ }
trainer_state.json ADDED
@@ -0,0 +1,1516 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.9997382884061764,
5
+ "eval_steps": 100,
6
+ "global_step": 955,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0,
13
+ "learning_rate": 5.208333333333333e-08,
14
+ "logps/chosen": -304.8013916015625,
15
+ "logps/rejected": -229.5030517578125,
16
+ "loss": 0.6931,
17
+ "neglected": 10.0,
18
+ "rewards/accuracies": 0.0,
19
+ "rewards/chosen": 0.0,
20
+ "rewards/margins": 0.0,
21
+ "rewards/rejected": 0.0,
22
+ "selected": 0.0,
23
+ "step": 1
24
+ },
25
+ {
26
+ "epoch": 0.01,
27
+ "learning_rate": 5.208333333333334e-07,
28
+ "logps/chosen": -313.4251708984375,
29
+ "logps/rejected": -277.2637023925781,
30
+ "loss": 0.693,
31
+ "neglected": 90.0,
32
+ "rewards/accuracies": 0.4861111044883728,
33
+ "rewards/chosen": 8.512949716532603e-05,
34
+ "rewards/margins": 0.0002041187253780663,
35
+ "rewards/rejected": -0.00011898923548869789,
36
+ "selected": 0.0,
37
+ "step": 10
38
+ },
39
+ {
40
+ "epoch": 0.02,
41
+ "learning_rate": 1.0416666666666667e-06,
42
+ "logps/chosen": -229.010986328125,
43
+ "logps/rejected": -232.58932495117188,
44
+ "loss": 0.6931,
45
+ "neglected": 242.0,
46
+ "rewards/accuracies": 0.4749999940395355,
47
+ "rewards/chosen": 0.00024582125479355454,
48
+ "rewards/margins": 0.0001970421290025115,
49
+ "rewards/rejected": 4.8779074859339744e-05,
50
+ "selected": 0.0,
51
+ "step": 20
52
+ },
53
+ {
54
+ "epoch": 0.03,
55
+ "learning_rate": 1.5625e-06,
56
+ "logps/chosen": -270.5987548828125,
57
+ "logps/rejected": -244.8210906982422,
58
+ "loss": 0.693,
59
+ "neglected": 402.0,
60
+ "rewards/accuracies": 0.5,
61
+ "rewards/chosen": -2.5821285817073658e-05,
62
+ "rewards/margins": 0.00013345989282242954,
63
+ "rewards/rejected": -0.00015928114589769393,
64
+ "selected": 0.0,
65
+ "step": 30
66
+ },
67
+ {
68
+ "epoch": 0.04,
69
+ "learning_rate": 2.0833333333333334e-06,
70
+ "logps/chosen": -270.58099365234375,
71
+ "logps/rejected": -263.6953125,
72
+ "loss": 0.6928,
73
+ "neglected": 562.0,
74
+ "rewards/accuracies": 0.5375000238418579,
75
+ "rewards/chosen": 0.0004904457600787282,
76
+ "rewards/margins": 0.0006916436250321567,
77
+ "rewards/rejected": -0.00020119785040151328,
78
+ "selected": 0.0,
79
+ "step": 40
80
+ },
81
+ {
82
+ "epoch": 0.05,
83
+ "learning_rate": 2.604166666666667e-06,
84
+ "logps/chosen": -255.3240509033203,
85
+ "logps/rejected": -249.5677490234375,
86
+ "loss": 0.6925,
87
+ "neglected": 722.0,
88
+ "rewards/accuracies": 0.581250011920929,
89
+ "rewards/chosen": 0.0003948220401071012,
90
+ "rewards/margins": 0.001298406976275146,
91
+ "rewards/rejected": -0.0009035851107910275,
92
+ "selected": 0.0,
93
+ "step": 50
94
+ },
95
+ {
96
+ "epoch": 0.06,
97
+ "learning_rate": 3.125e-06,
98
+ "logps/chosen": -278.1262512207031,
99
+ "logps/rejected": -257.60858154296875,
100
+ "loss": 0.6921,
101
+ "neglected": 882.0,
102
+ "rewards/accuracies": 0.6187499761581421,
103
+ "rewards/chosen": 0.001432361314073205,
104
+ "rewards/margins": 0.0019829857628792524,
105
+ "rewards/rejected": -0.000550624099560082,
106
+ "selected": 0.0,
107
+ "step": 60
108
+ },
109
+ {
110
+ "epoch": 0.07,
111
+ "learning_rate": 3.6458333333333333e-06,
112
+ "logps/chosen": -285.159423828125,
113
+ "logps/rejected": -263.88519287109375,
114
+ "loss": 0.6909,
115
+ "neglected": 1042.0,
116
+ "rewards/accuracies": 0.7124999761581421,
117
+ "rewards/chosen": 0.0027831769548356533,
118
+ "rewards/margins": 0.004454310052096844,
119
+ "rewards/rejected": -0.0016711335629224777,
120
+ "selected": 0.0,
121
+ "step": 70
122
+ },
123
+ {
124
+ "epoch": 0.08,
125
+ "learning_rate": 4.166666666666667e-06,
126
+ "logps/chosen": -292.46826171875,
127
+ "logps/rejected": -263.4407043457031,
128
+ "loss": 0.6879,
129
+ "neglected": 1202.0,
130
+ "rewards/accuracies": 0.706250011920929,
131
+ "rewards/chosen": 0.0038231350481510162,
132
+ "rewards/margins": 0.009836939163506031,
133
+ "rewards/rejected": -0.006013805046677589,
134
+ "selected": 0.0,
135
+ "step": 80
136
+ },
137
+ {
138
+ "epoch": 0.09,
139
+ "learning_rate": 4.6875000000000004e-06,
140
+ "logps/chosen": -280.253662109375,
141
+ "logps/rejected": -257.877197265625,
142
+ "loss": 0.6827,
143
+ "neglected": 1362.0,
144
+ "rewards/accuracies": 0.71875,
145
+ "rewards/chosen": 0.007530213333666325,
146
+ "rewards/margins": 0.01851402223110199,
147
+ "rewards/rejected": -0.010983810760080814,
148
+ "selected": 0.0,
149
+ "step": 90
150
+ },
151
+ {
152
+ "epoch": 0.1,
153
+ "learning_rate": 4.9997324926814375e-06,
154
+ "logps/chosen": -262.7320251464844,
155
+ "logps/rejected": -284.05206298828125,
156
+ "loss": 0.6727,
157
+ "neglected": 1522.0,
158
+ "rewards/accuracies": 0.699999988079071,
159
+ "rewards/chosen": 0.008031011559069157,
160
+ "rewards/margins": 0.026352444663643837,
161
+ "rewards/rejected": -0.018321430310606956,
162
+ "selected": 0.0,
163
+ "step": 100
164
+ },
165
+ {
166
+ "epoch": 0.1,
167
+ "eval_logps/chosen": -272.2622985839844,
168
+ "eval_logps/rejected": -256.4744873046875,
169
+ "eval_loss": 0.6630775332450867,
170
+ "eval_neglected": 256.0,
171
+ "eval_rewards/accuracies": 0.7023809552192688,
172
+ "eval_rewards/chosen": 0.0073691424913704395,
173
+ "eval_rewards/margins": 0.040533702820539474,
174
+ "eval_rewards/rejected": -0.03316456079483032,
175
+ "eval_runtime": 667.065,
176
+ "eval_samples_per_second": 2.998,
177
+ "eval_selected": 0.0,
178
+ "eval_steps_per_second": 0.094,
179
+ "step": 100
180
+ },
181
+ {
182
+ "epoch": 0.12,
183
+ "learning_rate": 4.996723692767927e-06,
184
+ "logps/chosen": -256.07342529296875,
185
+ "logps/rejected": -245.7953643798828,
186
+ "loss": 0.6539,
187
+ "neglected": 586.0,
188
+ "rewards/accuracies": 0.6937500238418579,
189
+ "rewards/chosen": -0.0058533623814582825,
190
+ "rewards/margins": 0.04092119634151459,
191
+ "rewards/rejected": -0.04677455872297287,
192
+ "selected": 0.0,
193
+ "step": 110
194
+ },
195
+ {
196
+ "epoch": 0.13,
197
+ "learning_rate": 4.9903757462135984e-06,
198
+ "logps/chosen": -270.7391357421875,
199
+ "logps/rejected": -248.25411987304688,
200
+ "loss": 0.6261,
201
+ "neglected": 746.0,
202
+ "rewards/accuracies": 0.675000011920929,
203
+ "rewards/chosen": -0.039820872247219086,
204
+ "rewards/margins": 0.06930369138717651,
205
+ "rewards/rejected": -0.1091245636343956,
206
+ "selected": 0.0,
207
+ "step": 120
208
+ },
209
+ {
210
+ "epoch": 0.14,
211
+ "learning_rate": 4.980697142834315e-06,
212
+ "logps/chosen": -312.620849609375,
213
+ "logps/rejected": -286.03900146484375,
214
+ "loss": 0.588,
215
+ "neglected": 906.0,
216
+ "rewards/accuracies": 0.6499999761581421,
217
+ "rewards/chosen": -0.1690162718296051,
218
+ "rewards/margins": 0.05675806850194931,
219
+ "rewards/rejected": -0.2257743626832962,
220
+ "selected": 0.0,
221
+ "step": 130
222
+ },
223
+ {
224
+ "epoch": 0.15,
225
+ "learning_rate": 4.967700826904229e-06,
226
+ "logps/chosen": -297.30609130859375,
227
+ "logps/rejected": -288.460693359375,
228
+ "loss": 0.5194,
229
+ "neglected": 1066.0,
230
+ "rewards/accuracies": 0.6312500238418579,
231
+ "rewards/chosen": -0.4745141863822937,
232
+ "rewards/margins": 0.12637066841125488,
233
+ "rewards/rejected": -0.6008848547935486,
234
+ "selected": 0.0,
235
+ "step": 140
236
+ },
237
+ {
238
+ "epoch": 0.16,
239
+ "learning_rate": 4.951404179843963e-06,
240
+ "logps/chosen": -450.36004638671875,
241
+ "logps/rejected": -442.3370056152344,
242
+ "loss": 0.3643,
243
+ "neglected": 1226.0,
244
+ "rewards/accuracies": 0.606249988079071,
245
+ "rewards/chosen": -1.6727848052978516,
246
+ "rewards/margins": 0.14853505790233612,
247
+ "rewards/rejected": -1.821319818496704,
248
+ "selected": 0.0,
249
+ "step": 150
250
+ },
251
+ {
252
+ "epoch": 0.17,
253
+ "learning_rate": 4.931828996974498e-06,
254
+ "logps/chosen": -864.3917846679688,
255
+ "logps/rejected": -883.9728393554688,
256
+ "loss": 0.1924,
257
+ "neglected": 1386.0,
258
+ "rewards/accuracies": 0.48750001192092896,
259
+ "rewards/chosen": -6.123291492462158,
260
+ "rewards/margins": 0.1330125331878662,
261
+ "rewards/rejected": -6.2563042640686035,
262
+ "selected": 0.0,
263
+ "step": 160
264
+ },
265
+ {
266
+ "epoch": 0.18,
267
+ "learning_rate": 4.909001458367867e-06,
268
+ "logps/chosen": -2515.188232421875,
269
+ "logps/rejected": -2068.07666015625,
270
+ "loss": 0.0935,
271
+ "neglected": 1546.0,
272
+ "rewards/accuracies": 0.5,
273
+ "rewards/chosen": -22.28314781188965,
274
+ "rewards/margins": -4.024808406829834,
275
+ "rewards/rejected": -18.25834083557129,
276
+ "selected": 0.0,
277
+ "step": 170
278
+ },
279
+ {
280
+ "epoch": 0.19,
281
+ "learning_rate": 4.882952093833628e-06,
282
+ "logps/chosen": -4048.93603515625,
283
+ "logps/rejected": -3594.38134765625,
284
+ "loss": 0.0695,
285
+ "neglected": 1706.0,
286
+ "rewards/accuracies": 0.42500001192092896,
287
+ "rewards/chosen": -38.03619384765625,
288
+ "rewards/margins": -4.711598873138428,
289
+ "rewards/rejected": -33.32460021972656,
290
+ "selected": 0.0,
291
+ "step": 180
292
+ },
293
+ {
294
+ "epoch": 0.2,
295
+ "learning_rate": 4.853715742087947e-06,
296
+ "logps/chosen": -5398.57421875,
297
+ "logps/rejected": -4498.2265625,
298
+ "loss": 0.0402,
299
+ "neglected": 1866.0,
300
+ "rewards/accuracies": 0.4375,
301
+ "rewards/chosen": -51.07048416137695,
302
+ "rewards/margins": -8.596471786499023,
303
+ "rewards/rejected": -42.47401428222656,
304
+ "selected": 0.0,
305
+ "step": 190
306
+ },
307
+ {
308
+ "epoch": 0.21,
309
+ "learning_rate": 4.821331504159906e-06,
310
+ "logps/chosen": -8471.2490234375,
311
+ "logps/rejected": -7905.1455078125,
312
+ "loss": 0.0392,
313
+ "neglected": 2026.0,
314
+ "rewards/accuracies": 0.4375,
315
+ "rewards/chosen": -82.20130920410156,
316
+ "rewards/margins": -5.739879131317139,
317
+ "rewards/rejected": -76.46143341064453,
318
+ "selected": 0.0,
319
+ "step": 200
320
+ },
321
+ {
322
+ "epoch": 0.21,
323
+ "eval_logps/chosen": -12272.142578125,
324
+ "eval_logps/rejected": -10795.0419921875,
325
+ "eval_loss": 0.027584508061408997,
326
+ "eval_neglected": 256.0,
327
+ "eval_rewards/accuracies": 0.4464285671710968,
328
+ "eval_rewards/chosen": -119.99142456054688,
329
+ "eval_rewards/margins": -14.572598457336426,
330
+ "eval_rewards/rejected": -105.4188232421875,
331
+ "eval_runtime": 670.5193,
332
+ "eval_samples_per_second": 2.983,
333
+ "eval_selected": 0.0,
334
+ "eval_steps_per_second": 0.094,
335
+ "step": 200
336
+ },
337
+ {
338
+ "epoch": 0.22,
339
+ "learning_rate": 4.7858426910973435e-06,
340
+ "logps/chosen": -12588.5732421875,
341
+ "logps/rejected": -11012.65625,
342
+ "loss": 0.0212,
343
+ "neglected": 586.0,
344
+ "rewards/accuracies": 0.44999998807907104,
345
+ "rewards/chosen": -122.9972152709961,
346
+ "rewards/margins": -15.338798522949219,
347
+ "rewards/rejected": -107.6584243774414,
348
+ "selected": 0.0,
349
+ "step": 210
350
+ },
351
+ {
352
+ "epoch": 0.23,
353
+ "learning_rate": 4.747296766042161e-06,
354
+ "logps/chosen": -9843.9521484375,
355
+ "logps/rejected": -9481.474609375,
356
+ "loss": 0.0355,
357
+ "neglected": 746.0,
358
+ "rewards/accuracies": 0.48750001192092896,
359
+ "rewards/chosen": -95.80561065673828,
360
+ "rewards/margins": -3.6035759449005127,
361
+ "rewards/rejected": -92.20204162597656,
362
+ "selected": 0.0,
363
+ "step": 220
364
+ },
365
+ {
366
+ "epoch": 0.24,
367
+ "learning_rate": 4.705745280752586e-06,
368
+ "logps/chosen": -9174.8818359375,
369
+ "logps/rejected": -8592.8603515625,
370
+ "loss": 0.0293,
371
+ "neglected": 906.0,
372
+ "rewards/accuracies": 0.46875,
373
+ "rewards/chosen": -88.80033874511719,
374
+ "rewards/margins": -5.414186000823975,
375
+ "rewards/rejected": -83.38614654541016,
376
+ "selected": 0.0,
377
+ "step": 230
378
+ },
379
+ {
380
+ "epoch": 0.25,
381
+ "learning_rate": 4.661243806657256e-06,
382
+ "logps/chosen": -10734.61328125,
383
+ "logps/rejected": -10024.224609375,
384
+ "loss": 0.0237,
385
+ "neglected": 1066.0,
386
+ "rewards/accuracies": 0.4437499940395355,
387
+ "rewards/chosen": -104.76686096191406,
388
+ "rewards/margins": -6.675856113433838,
389
+ "rewards/rejected": -98.09098815917969,
390
+ "selected": 0.0,
391
+ "step": 240
392
+ },
393
+ {
394
+ "epoch": 0.26,
395
+ "learning_rate": 4.613851860533367e-06,
396
+ "logps/chosen": -12696.328125,
397
+ "logps/rejected": -12253.8046875,
398
+ "loss": 0.0216,
399
+ "neglected": 1226.0,
400
+ "rewards/accuracies": 0.5062500238418579,
401
+ "rewards/chosen": -124.18550872802734,
402
+ "rewards/margins": -4.2357869148254395,
403
+ "rewards/rejected": -119.94972229003906,
404
+ "selected": 0.0,
405
+ "step": 250
406
+ },
407
+ {
408
+ "epoch": 0.27,
409
+ "learning_rate": 4.563632824908252e-06,
410
+ "logps/chosen": -16011.0830078125,
411
+ "logps/rejected": -15822.650390625,
412
+ "loss": 0.0224,
413
+ "neglected": 1386.0,
414
+ "rewards/accuracies": 0.4937500059604645,
415
+ "rewards/chosen": -157.71505737304688,
416
+ "rewards/margins": -1.8953163623809814,
417
+ "rewards/rejected": -155.81973266601562,
418
+ "selected": 0.0,
419
+ "step": 260
420
+ },
421
+ {
422
+ "epoch": 0.28,
423
+ "learning_rate": 4.510653863290871e-06,
424
+ "logps/chosen": -21082.57421875,
425
+ "logps/rejected": -18679.11328125,
426
+ "loss": 0.0214,
427
+ "neglected": 1546.0,
428
+ "rewards/accuracies": 0.48124998807907104,
429
+ "rewards/chosen": -208.0191192626953,
430
+ "rewards/margins": -23.887981414794922,
431
+ "rewards/rejected": -184.1311492919922,
432
+ "selected": 0.0,
433
+ "step": 270
434
+ },
435
+ {
436
+ "epoch": 0.29,
437
+ "learning_rate": 4.454985830346574e-06,
438
+ "logps/chosen": -24180.8984375,
439
+ "logps/rejected": -22673.47265625,
440
+ "loss": 0.0205,
441
+ "neglected": 1706.0,
442
+ "rewards/accuracies": 0.4312500059604645,
443
+ "rewards/chosen": -238.95950317382812,
444
+ "rewards/margins": -14.777926445007324,
445
+ "rewards/rejected": -224.18154907226562,
446
+ "selected": 0.0,
447
+ "step": 280
448
+ },
449
+ {
450
+ "epoch": 0.3,
451
+ "learning_rate": 4.396703177135262e-06,
452
+ "logps/chosen": -25520.37890625,
453
+ "logps/rejected": -20765.65234375,
454
+ "loss": 0.0181,
455
+ "neglected": 1866.0,
456
+ "rewards/accuracies": 0.375,
457
+ "rewards/chosen": -252.4154815673828,
458
+ "rewards/margins": -46.95801544189453,
459
+ "rewards/rejected": -205.4574432373047,
460
+ "selected": 0.0,
461
+ "step": 290
462
+ },
463
+ {
464
+ "epoch": 0.31,
465
+ "learning_rate": 4.335883851539693e-06,
466
+ "logps/chosen": -26487.619140625,
467
+ "logps/rejected": -21971.16796875,
468
+ "loss": 0.0208,
469
+ "neglected": 2026.0,
470
+ "rewards/accuracies": 0.4124999940395355,
471
+ "rewards/chosen": -262.32843017578125,
472
+ "rewards/margins": -44.91776657104492,
473
+ "rewards/rejected": -217.41067504882812,
474
+ "selected": 0.0,
475
+ "step": 300
476
+ },
477
+ {
478
+ "epoch": 0.31,
479
+ "eval_logps/chosen": -28411.646484375,
480
+ "eval_logps/rejected": -24774.666015625,
481
+ "eval_loss": 0.019909363240003586,
482
+ "eval_neglected": 256.0,
483
+ "eval_rewards/accuracies": 0.4444444477558136,
484
+ "eval_rewards/chosen": -281.386474609375,
485
+ "eval_rewards/margins": -36.1713981628418,
486
+ "eval_rewards/rejected": -245.21505737304688,
487
+ "eval_runtime": 505.6466,
488
+ "eval_samples_per_second": 3.955,
489
+ "eval_selected": 0.0,
490
+ "eval_steps_per_second": 0.125,
491
+ "step": 300
492
+ },
493
+ {
494
+ "epoch": 0.32,
495
+ "learning_rate": 4.2726091940171055e-06,
496
+ "logps/chosen": -23575.23046875,
497
+ "logps/rejected": -23384.05078125,
498
+ "loss": 0.0186,
499
+ "neglected": 586.0,
500
+ "rewards/accuracies": 0.5062500238418579,
501
+ "rewards/chosen": -233.19082641601562,
502
+ "rewards/margins": -2.1331238746643066,
503
+ "rewards/rejected": -231.0576934814453,
504
+ "selected": 0.0,
505
+ "step": 310
506
+ },
507
+ {
508
+ "epoch": 0.33,
509
+ "learning_rate": 4.206963828813555e-06,
510
+ "logps/chosen": -25525.6015625,
511
+ "logps/rejected": -22454.76953125,
512
+ "loss": 0.0147,
513
+ "neglected": 746.0,
514
+ "rewards/accuracies": 0.4375,
515
+ "rewards/chosen": -252.456787109375,
516
+ "rewards/margins": -30.628952026367188,
517
+ "rewards/rejected": -221.8278350830078,
518
+ "selected": 0.0,
519
+ "step": 320
520
+ },
521
+ {
522
+ "epoch": 0.35,
523
+ "learning_rate": 4.139035550786495e-06,
524
+ "logps/chosen": -23703.537109375,
525
+ "logps/rejected": -19463.728515625,
526
+ "loss": 0.022,
527
+ "neglected": 906.0,
528
+ "rewards/accuracies": 0.4749999940395355,
529
+ "rewards/chosen": -234.44473266601562,
530
+ "rewards/margins": -42.07261657714844,
531
+ "rewards/rejected": -192.3721160888672,
532
+ "selected": 0.0,
533
+ "step": 330
534
+ },
535
+ {
536
+ "epoch": 0.36,
537
+ "learning_rate": 4.068915207986931e-06,
538
+ "logps/chosen": -25306.37109375,
539
+ "logps/rejected": -21047.587890625,
540
+ "loss": 0.0215,
541
+ "neglected": 1066.0,
542
+ "rewards/accuracies": 0.39375001192092896,
543
+ "rewards/chosen": -250.39120483398438,
544
+ "rewards/margins": -42.19620132446289,
545
+ "rewards/rejected": -208.1950225830078,
546
+ "selected": 0.0,
547
+ "step": 340
548
+ },
549
+ {
550
+ "epoch": 0.37,
551
+ "learning_rate": 3.996696580158211e-06,
552
+ "logps/chosen": -30706.958984375,
553
+ "logps/rejected": -28025.75390625,
554
+ "loss": 0.0174,
555
+ "neglected": 1226.0,
556
+ "rewards/accuracies": 0.4124999940395355,
557
+ "rewards/chosen": -304.2236328125,
558
+ "rewards/margins": -26.65300941467285,
559
+ "rewards/rejected": -277.57061767578125,
560
+ "selected": 0.0,
561
+ "step": 350
562
+ },
563
+ {
564
+ "epoch": 0.38,
565
+ "learning_rate": 3.922476253313921e-06,
566
+ "logps/chosen": -30064.09375,
567
+ "logps/rejected": -27983.521484375,
568
+ "loss": 0.0134,
569
+ "neglected": 1386.0,
570
+ "rewards/accuracies": 0.4749999940395355,
571
+ "rewards/chosen": -298.0165710449219,
572
+ "rewards/margins": -20.747739791870117,
573
+ "rewards/rejected": -277.268798828125,
574
+ "selected": 0.0,
575
+ "step": 360
576
+ },
577
+ {
578
+ "epoch": 0.39,
579
+ "learning_rate": 3.846353490562664e-06,
580
+ "logps/chosen": -31143.537109375,
581
+ "logps/rejected": -27110.509765625,
582
+ "loss": 0.016,
583
+ "neglected": 1546.0,
584
+ "rewards/accuracies": 0.4625000059604645,
585
+ "rewards/chosen": -308.7967529296875,
586
+ "rewards/margins": -40.30244827270508,
587
+ "rewards/rejected": -268.49432373046875,
588
+ "selected": 0.0,
589
+ "step": 370
590
+ },
591
+ {
592
+ "epoch": 0.4,
593
+ "learning_rate": 3.768430099352445e-06,
594
+ "logps/chosen": -32500.40234375,
595
+ "logps/rejected": -29200.681640625,
596
+ "loss": 0.0166,
597
+ "neglected": 1706.0,
598
+ "rewards/accuracies": 0.4375,
599
+ "rewards/chosen": -322.29559326171875,
600
+ "rewards/margins": -32.805747985839844,
601
+ "rewards/rejected": -289.4898376464844,
602
+ "selected": 0.0,
603
+ "step": 380
604
+ },
605
+ {
606
+ "epoch": 0.41,
607
+ "learning_rate": 3.6888102953122307e-06,
608
+ "logps/chosen": -30651.95703125,
609
+ "logps/rejected": -26855.337890625,
610
+ "loss": 0.0167,
611
+ "neglected": 1866.0,
612
+ "rewards/accuracies": 0.4312500059604645,
613
+ "rewards/chosen": -303.84075927734375,
614
+ "rewards/margins": -37.63256072998047,
615
+ "rewards/rejected": -266.208251953125,
616
+ "selected": 0.0,
617
+ "step": 390
618
+ },
619
+ {
620
+ "epoch": 0.42,
621
+ "learning_rate": 3.607600562872785e-06,
622
+ "logps/chosen": -34714.859375,
623
+ "logps/rejected": -29928.291015625,
624
+ "loss": 0.0157,
625
+ "neglected": 2026.0,
626
+ "rewards/accuracies": 0.41874998807907104,
627
+ "rewards/chosen": -344.1798400878906,
628
+ "rewards/margins": -47.40506362915039,
629
+ "rewards/rejected": -296.77471923828125,
630
+ "selected": 0.0,
631
+ "step": 400
632
+ },
633
+ {
634
+ "epoch": 0.42,
635
+ "eval_logps/chosen": -35648.6171875,
636
+ "eval_logps/rejected": -30971.783203125,
637
+ "eval_loss": 0.01606718823313713,
638
+ "eval_neglected": 256.0,
639
+ "eval_rewards/accuracies": 0.4563491940498352,
640
+ "eval_rewards/chosen": -353.75616455078125,
641
+ "eval_rewards/margins": -46.569923400878906,
642
+ "eval_rewards/rejected": -307.1862487792969,
643
+ "eval_runtime": 506.2643,
644
+ "eval_samples_per_second": 3.951,
645
+ "eval_selected": 0.0,
646
+ "eval_steps_per_second": 0.124,
647
+ "step": 400
648
+ },
649
+ {
650
+ "epoch": 0.43,
651
+ "learning_rate": 3.5249095128531863e-06,
652
+ "logps/chosen": -35328.84375,
653
+ "logps/rejected": -30211.0,
654
+ "loss": 0.0123,
655
+ "neglected": 586.0,
656
+ "rewards/accuracies": 0.4437499940395355,
657
+ "rewards/chosen": -350.4105529785156,
658
+ "rewards/margins": -50.80678176879883,
659
+ "rewards/rejected": -299.60382080078125,
660
+ "selected": 0.0,
661
+ "step": 410
662
+ },
663
+ {
664
+ "epoch": 0.44,
665
+ "learning_rate": 3.4408477372034743e-06,
666
+ "logps/chosen": -28428.931640625,
667
+ "logps/rejected": -24432.90234375,
668
+ "loss": 0.0282,
669
+ "neglected": 746.0,
670
+ "rewards/accuracies": 0.4437499940395355,
671
+ "rewards/chosen": -281.9709777832031,
672
+ "rewards/margins": -40.07168960571289,
673
+ "rewards/rejected": -241.89932250976562,
674
+ "selected": 0.0,
675
+ "step": 420
676
+ },
677
+ {
678
+ "epoch": 0.45,
679
+ "learning_rate": 3.355527661097728e-06,
680
+ "logps/chosen": -32917.82421875,
681
+ "logps/rejected": -31499.80078125,
682
+ "loss": 0.0179,
683
+ "neglected": 906.0,
684
+ "rewards/accuracies": 0.4749999940395355,
685
+ "rewards/chosen": -326.5057678222656,
686
+ "rewards/margins": -14.0189790725708,
687
+ "rewards/rejected": -312.4867858886719,
688
+ "selected": 0.0,
689
+ "step": 430
690
+ },
691
+ {
692
+ "epoch": 0.46,
693
+ "learning_rate": 3.269063392575352e-06,
694
+ "logps/chosen": -32941.2421875,
695
+ "logps/rejected": -28788.244140625,
696
+ "loss": 0.018,
697
+ "neglected": 1066.0,
698
+ "rewards/accuracies": 0.45625001192092896,
699
+ "rewards/chosen": -326.6683349609375,
700
+ "rewards/margins": -41.23744583129883,
701
+ "rewards/rejected": -285.4308776855469,
702
+ "selected": 0.0,
703
+ "step": 440
704
+ },
705
+ {
706
+ "epoch": 0.47,
707
+ "learning_rate": 3.181570569931697e-06,
708
+ "logps/chosen": -28051.92578125,
709
+ "logps/rejected": -26678.359375,
710
+ "loss": 0.025,
711
+ "neglected": 1226.0,
712
+ "rewards/accuracies": 0.41874998807907104,
713
+ "rewards/chosen": -278.347900390625,
714
+ "rewards/margins": -13.726778984069824,
715
+ "rewards/rejected": -264.62115478515625,
716
+ "selected": 0.0,
717
+ "step": 450
718
+ },
719
+ {
720
+ "epoch": 0.48,
721
+ "learning_rate": 3.09316620706208e-06,
722
+ "logps/chosen": -38179.28515625,
723
+ "logps/rejected": -31792.712890625,
724
+ "loss": 0.0135,
725
+ "neglected": 1386.0,
726
+ "rewards/accuracies": 0.4124999940395355,
727
+ "rewards/chosen": -378.8819885253906,
728
+ "rewards/margins": -63.55814743041992,
729
+ "rewards/rejected": -315.3238220214844,
730
+ "selected": 0.0,
731
+ "step": 460
732
+ },
733
+ {
734
+ "epoch": 0.49,
735
+ "learning_rate": 3.0039685369660785e-06,
736
+ "logps/chosen": -34295.48046875,
737
+ "logps/rejected": -30349.291015625,
738
+ "loss": 0.0128,
739
+ "neglected": 1546.0,
740
+ "rewards/accuracies": 0.48124998807907104,
741
+ "rewards/chosen": -340.40753173828125,
742
+ "rewards/margins": -39.258506774902344,
743
+ "rewards/rejected": -301.1490173339844,
744
+ "selected": 0.0,
745
+ "step": 470
746
+ },
747
+ {
748
+ "epoch": 0.5,
749
+ "learning_rate": 2.91409685362137e-06,
750
+ "logps/chosen": -30752.1875,
751
+ "logps/rejected": -30045.68359375,
752
+ "loss": 0.0118,
753
+ "neglected": 1706.0,
754
+ "rewards/accuracies": 0.45625001192092896,
755
+ "rewards/chosen": -305.1645202636719,
756
+ "rewards/margins": -7.10043478012085,
757
+ "rewards/rejected": -298.06414794921875,
758
+ "selected": 0.0,
759
+ "step": 480
760
+ },
761
+ {
762
+ "epoch": 0.51,
763
+ "learning_rate": 2.8236713524386085e-06,
764
+ "logps/chosen": -32674.708984375,
765
+ "logps/rejected": -27154.64453125,
766
+ "loss": 0.0173,
767
+ "neglected": 1866.0,
768
+ "rewards/accuracies": 0.42500001192092896,
769
+ "rewards/chosen": -324.16522216796875,
770
+ "rewards/margins": -54.92888641357422,
771
+ "rewards/rejected": -269.236328125,
772
+ "selected": 0.0,
773
+ "step": 490
774
+ },
775
+ {
776
+ "epoch": 0.52,
777
+ "learning_rate": 2.7328129695107205e-06,
778
+ "logps/chosen": -33942.0390625,
779
+ "logps/rejected": -26798.068359375,
780
+ "loss": 0.0182,
781
+ "neglected": 2026.0,
782
+ "rewards/accuracies": 0.39375001192092896,
783
+ "rewards/chosen": -336.67034912109375,
784
+ "rewards/margins": -71.16191864013672,
785
+ "rewards/rejected": -265.5083923339844,
786
+ "selected": 0.0,
787
+ "step": 500
788
+ },
789
+ {
790
+ "epoch": 0.52,
791
+ "eval_logps/chosen": -33432.5625,
792
+ "eval_logps/rejected": -29219.611328125,
793
+ "eval_loss": 0.014793259091675282,
794
+ "eval_neglected": 256.0,
795
+ "eval_rewards/accuracies": 0.4464285671710968,
796
+ "eval_rewards/chosen": -331.59564208984375,
797
+ "eval_rewards/margins": -41.931129455566406,
798
+ "eval_rewards/rejected": -289.6645202636719,
799
+ "eval_runtime": 505.8824,
800
+ "eval_samples_per_second": 3.953,
801
+ "eval_selected": 0.0,
802
+ "eval_steps_per_second": 0.125,
803
+ "step": 500
804
+ },
805
+ {
806
+ "epoch": 0.53,
807
+ "learning_rate": 2.641643219871597e-06,
808
+ "logps/chosen": -31474.45703125,
809
+ "logps/rejected": -25789.703125,
810
+ "loss": 0.0149,
811
+ "neglected": 586.0,
812
+ "rewards/accuracies": 0.46875,
813
+ "rewards/chosen": -312.119384765625,
814
+ "rewards/margins": -56.47794723510742,
815
+ "rewards/rejected": -255.6414337158203,
816
+ "selected": 0.0,
817
+ "step": 510
818
+ },
819
+ {
820
+ "epoch": 0.54,
821
+ "learning_rate": 2.5502840349805074e-06,
822
+ "logps/chosen": -32054.20703125,
823
+ "logps/rejected": -27586.98046875,
824
+ "loss": 0.0173,
825
+ "neglected": 746.0,
826
+ "rewards/accuracies": 0.4437499940395355,
827
+ "rewards/chosen": -317.7613830566406,
828
+ "rewards/margins": -44.480018615722656,
829
+ "rewards/rejected": -273.2813720703125,
830
+ "selected": 0.0,
831
+ "step": 520
832
+ },
833
+ {
834
+ "epoch": 0.55,
835
+ "learning_rate": 2.4588575996495797e-06,
836
+ "logps/chosen": -34159.5703125,
837
+ "logps/rejected": -30448.79296875,
838
+ "loss": 0.0138,
839
+ "neglected": 906.0,
840
+ "rewards/accuracies": 0.40625,
841
+ "rewards/chosen": -338.7040710449219,
842
+ "rewards/margins": -37.0126953125,
843
+ "rewards/rejected": -301.69134521484375,
844
+ "selected": 0.0,
845
+ "step": 530
846
+ },
847
+ {
848
+ "epoch": 0.57,
849
+ "learning_rate": 2.367486188632446e-06,
850
+ "logps/chosen": -33792.76171875,
851
+ "logps/rejected": -29700.40625,
852
+ "loss": 0.0078,
853
+ "neglected": 1066.0,
854
+ "rewards/accuracies": 0.40625,
855
+ "rewards/chosen": -335.1407775878906,
856
+ "rewards/margins": -40.700645446777344,
857
+ "rewards/rejected": -294.440185546875,
858
+ "selected": 0.0,
859
+ "step": 540
860
+ },
861
+ {
862
+ "epoch": 0.58,
863
+ "learning_rate": 2.276292003092593e-06,
864
+ "logps/chosen": -33080.16796875,
865
+ "logps/rejected": -29123.927734375,
866
+ "loss": 0.0145,
867
+ "neglected": 1226.0,
868
+ "rewards/accuracies": 0.4312500059604645,
869
+ "rewards/chosen": -328.01007080078125,
870
+ "rewards/margins": -39.278717041015625,
871
+ "rewards/rejected": -288.7312927246094,
872
+ "selected": 0.0,
873
+ "step": 550
874
+ },
875
+ {
876
+ "epoch": 0.59,
877
+ "learning_rate": 2.1853970071701415e-06,
878
+ "logps/chosen": -29881.09375,
879
+ "logps/rejected": -24806.671875,
880
+ "loss": 0.0136,
881
+ "neglected": 1386.0,
882
+ "rewards/accuracies": 0.4437499940395355,
883
+ "rewards/chosen": -296.17803955078125,
884
+ "rewards/margins": -50.19990539550781,
885
+ "rewards/rejected": -245.9781494140625,
886
+ "selected": 0.0,
887
+ "step": 560
888
+ },
889
+ {
890
+ "epoch": 0.6,
891
+ "learning_rate": 2.0949227648656194e-06,
892
+ "logps/chosen": -34481.3203125,
893
+ "logps/rejected": -30493.02734375,
894
+ "loss": 0.0172,
895
+ "neglected": 1546.0,
896
+ "rewards/accuracies": 0.4312500059604645,
897
+ "rewards/chosen": -342.2419738769531,
898
+ "rewards/margins": -39.841697692871094,
899
+ "rewards/rejected": -302.4002685546875,
900
+ "selected": 0.0,
901
+ "step": 570
902
+ },
903
+ {
904
+ "epoch": 0.61,
905
+ "learning_rate": 2.00499027745888e-06,
906
+ "logps/chosen": -34967.98828125,
907
+ "logps/rejected": -29546.712890625,
908
+ "loss": 0.0141,
909
+ "neglected": 1706.0,
910
+ "rewards/accuracies": 0.4000000059604645,
911
+ "rewards/chosen": -347.0220031738281,
912
+ "rewards/margins": -54.10175323486328,
913
+ "rewards/rejected": -292.9202575683594,
914
+ "selected": 0.0,
915
+ "step": 580
916
+ },
917
+ {
918
+ "epoch": 0.62,
919
+ "learning_rate": 1.915719821680624e-06,
920
+ "logps/chosen": -32696.44921875,
921
+ "logps/rejected": -30430.255859375,
922
+ "loss": 0.0167,
923
+ "neglected": 1866.0,
924
+ "rewards/accuracies": 0.45625001192092896,
925
+ "rewards/chosen": -324.4429626464844,
926
+ "rewards/margins": -22.577922821044922,
927
+ "rewards/rejected": -301.8650207519531,
928
+ "selected": 0.0,
929
+ "step": 590
930
+ },
931
+ {
932
+ "epoch": 0.63,
933
+ "learning_rate": 1.8272307888529276e-06,
934
+ "logps/chosen": -36048.4453125,
935
+ "logps/rejected": -33494.70703125,
936
+ "loss": 0.013,
937
+ "neglected": 2026.0,
938
+ "rewards/accuracies": 0.4749999940395355,
939
+ "rewards/chosen": -357.53070068359375,
940
+ "rewards/margins": -25.526748657226562,
941
+ "rewards/rejected": -332.0039978027344,
942
+ "selected": 0.0,
943
+ "step": 600
944
+ },
945
+ {
946
+ "epoch": 0.63,
947
+ "eval_logps/chosen": -35941.4140625,
948
+ "eval_logps/rejected": -31495.03125,
949
+ "eval_loss": 0.014261237345635891,
950
+ "eval_neglected": 256.0,
951
+ "eval_rewards/accuracies": 0.454365074634552,
952
+ "eval_rewards/chosen": -356.68414306640625,
953
+ "eval_rewards/margins": -44.265377044677734,
954
+ "eval_rewards/rejected": -312.41876220703125,
955
+ "eval_runtime": 505.9708,
956
+ "eval_samples_per_second": 3.953,
957
+ "eval_selected": 0.0,
958
+ "eval_steps_per_second": 0.125,
959
+ "step": 600
960
+ },
961
+ {
962
+ "epoch": 0.64,
963
+ "learning_rate": 1.739641525213929e-06,
964
+ "logps/chosen": -36224.6953125,
965
+ "logps/rejected": -28970.59765625,
966
+ "loss": 0.0178,
967
+ "neglected": 586.0,
968
+ "rewards/accuracies": 0.4124999940395355,
969
+ "rewards/chosen": -359.3634948730469,
970
+ "rewards/margins": -71.96965789794922,
971
+ "rewards/rejected": -287.39385986328125,
972
+ "selected": 0.0,
973
+ "step": 610
974
+ },
975
+ {
976
+ "epoch": 0.65,
977
+ "learning_rate": 1.6530691736402317e-06,
978
+ "logps/chosen": -33294.37109375,
979
+ "logps/rejected": -29921.0,
980
+ "loss": 0.0107,
981
+ "neglected": 746.0,
982
+ "rewards/accuracies": 0.45625001192092896,
983
+ "rewards/chosen": -330.2255859375,
984
+ "rewards/margins": -33.42811965942383,
985
+ "rewards/rejected": -296.7974548339844,
986
+ "selected": 0.0,
987
+ "step": 620
988
+ },
989
+ {
990
+ "epoch": 0.66,
991
+ "learning_rate": 1.5676295169786864e-06,
992
+ "logps/chosen": -33129.5703125,
993
+ "logps/rejected": -29490.744140625,
994
+ "loss": 0.0122,
995
+ "neglected": 906.0,
996
+ "rewards/accuracies": 0.4437499940395355,
997
+ "rewards/chosen": -328.56658935546875,
998
+ "rewards/margins": -36.11338806152344,
999
+ "rewards/rejected": -292.45318603515625,
1000
+ "selected": 0.0,
1001
+ "step": 630
1002
+ },
1003
+ {
1004
+ "epoch": 0.67,
1005
+ "learning_rate": 1.4834368231970922e-06,
1006
+ "logps/chosen": -31400.677734375,
1007
+ "logps/rejected": -29454.287109375,
1008
+ "loss": 0.0158,
1009
+ "neglected": 1066.0,
1010
+ "rewards/accuracies": 0.46875,
1011
+ "rewards/chosen": -311.5920104980469,
1012
+ "rewards/margins": -19.44215202331543,
1013
+ "rewards/rejected": -292.14984130859375,
1014
+ "selected": 0.0,
1015
+ "step": 640
1016
+ },
1017
+ {
1018
+ "epoch": 0.68,
1019
+ "learning_rate": 1.4006036925609245e-06,
1020
+ "logps/chosen": -37346.015625,
1021
+ "logps/rejected": -32676.51171875,
1022
+ "loss": 0.0115,
1023
+ "neglected": 1226.0,
1024
+ "rewards/accuracies": 0.4124999940395355,
1025
+ "rewards/chosen": -370.58929443359375,
1026
+ "rewards/margins": -46.472129821777344,
1027
+ "rewards/rejected": -324.11724853515625,
1028
+ "selected": 0.0,
1029
+ "step": 650
1030
+ },
1031
+ {
1032
+ "epoch": 0.69,
1033
+ "learning_rate": 1.3192409070404582e-06,
1034
+ "logps/chosen": -35501.5234375,
1035
+ "logps/rejected": -31327.859375,
1036
+ "loss": 0.0195,
1037
+ "neglected": 1386.0,
1038
+ "rewards/accuracies": 0.4312500059604645,
1039
+ "rewards/chosen": -352.1575622558594,
1040
+ "rewards/margins": -41.29918670654297,
1041
+ "rewards/rejected": -310.8583984375,
1042
+ "selected": 0.0,
1043
+ "step": 660
1044
+ },
1045
+ {
1046
+ "epoch": 0.7,
1047
+ "learning_rate": 1.2394572821496953e-06,
1048
+ "logps/chosen": -34126.65234375,
1049
+ "logps/rejected": -28228.837890625,
1050
+ "loss": 0.0213,
1051
+ "neglected": 1546.0,
1052
+ "rewards/accuracies": 0.4375,
1053
+ "rewards/chosen": -338.5702209472656,
1054
+ "rewards/margins": -58.62493896484375,
1055
+ "rewards/rejected": -279.9453125,
1056
+ "selected": 0.0,
1057
+ "step": 670
1058
+ },
1059
+ {
1060
+ "epoch": 0.71,
1061
+ "learning_rate": 1.1613595214152713e-06,
1062
+ "logps/chosen": -33340.77734375,
1063
+ "logps/rejected": -30065.52734375,
1064
+ "loss": 0.0155,
1065
+ "neglected": 1706.0,
1066
+ "rewards/accuracies": 0.45625001192092896,
1067
+ "rewards/chosen": -330.6611633300781,
1068
+ "rewards/margins": -32.5056037902832,
1069
+ "rewards/rejected": -298.15557861328125,
1070
+ "selected": 0.0,
1071
+ "step": 680
1072
+ },
1073
+ {
1074
+ "epoch": 0.72,
1075
+ "learning_rate": 1.0850520736699362e-06,
1076
+ "logps/chosen": -30460.240234375,
1077
+ "logps/rejected": -24525.88671875,
1078
+ "loss": 0.0166,
1079
+ "neglected": 1866.0,
1080
+ "rewards/accuracies": 0.38749998807907104,
1081
+ "rewards/chosen": -302.1998291015625,
1082
+ "rewards/margins": -58.95698165893555,
1083
+ "rewards/rejected": -243.2428436279297,
1084
+ "selected": 0.0,
1085
+ "step": 690
1086
+ },
1087
+ {
1088
+ "epoch": 0.73,
1089
+ "learning_rate": 1.0106369933615043e-06,
1090
+ "logps/chosen": -34674.8515625,
1091
+ "logps/rejected": -27218.890625,
1092
+ "loss": 0.0165,
1093
+ "neglected": 2026.0,
1094
+ "rewards/accuracies": 0.40625,
1095
+ "rewards/chosen": -344.1193542480469,
1096
+ "rewards/margins": -74.25706481933594,
1097
+ "rewards/rejected": -269.8622741699219,
1098
+ "selected": 0.0,
1099
+ "step": 700
1100
+ },
1101
+ {
1102
+ "epoch": 0.73,
1103
+ "eval_logps/chosen": -35642.40234375,
1104
+ "eval_logps/rejected": -31306.609375,
1105
+ "eval_loss": 0.014253102242946625,
1106
+ "eval_neglected": 256.0,
1107
+ "eval_rewards/accuracies": 0.4503968358039856,
1108
+ "eval_rewards/chosen": -353.6939697265625,
1109
+ "eval_rewards/margins": -43.15950012207031,
1110
+ "eval_rewards/rejected": -310.53448486328125,
1111
+ "eval_runtime": 505.4591,
1112
+ "eval_samples_per_second": 3.957,
1113
+ "eval_selected": 0.0,
1114
+ "eval_steps_per_second": 0.125,
1115
+ "step": 700
1116
+ },
1117
+ {
1118
+ "epoch": 0.74,
1119
+ "learning_rate": 9.382138040640714e-07,
1120
+ "logps/chosen": -35340.9296875,
1121
+ "logps/rejected": -29694.728515625,
1122
+ "loss": 0.0173,
1123
+ "neglected": 586.0,
1124
+ "rewards/accuracies": 0.41874998807907104,
1125
+ "rewards/chosen": -350.610595703125,
1126
+ "rewards/margins": -55.991127014160156,
1127
+ "rewards/rejected": -294.61944580078125,
1128
+ "selected": 0.0,
1129
+ "step": 710
1130
+ },
1131
+ {
1132
+ "epoch": 0.75,
1133
+ "learning_rate": 8.678793653740633e-07,
1134
+ "logps/chosen": -33239.89453125,
1135
+ "logps/rejected": -29252.65234375,
1136
+ "loss": 0.0194,
1137
+ "neglected": 746.0,
1138
+ "rewards/accuracies": 0.41874998807907104,
1139
+ "rewards/chosen": -329.77923583984375,
1140
+ "rewards/margins": -39.71256637573242,
1141
+ "rewards/rejected": -290.0666809082031,
1142
+ "selected": 0.0,
1143
+ "step": 720
1144
+ },
1145
+ {
1146
+ "epoch": 0.76,
1147
+ "learning_rate": 7.997277433690984e-07,
1148
+ "logps/chosen": -33704.45703125,
1149
+ "logps/rejected": -27931.458984375,
1150
+ "loss": 0.0181,
1151
+ "neglected": 906.0,
1152
+ "rewards/accuracies": 0.36250001192092896,
1153
+ "rewards/chosen": -334.1654052734375,
1154
+ "rewards/margins": -57.19194793701172,
1155
+ "rewards/rejected": -276.97344970703125,
1156
+ "selected": 0.0,
1157
+ "step": 730
1158
+ },
1159
+ {
1160
+ "epoch": 0.77,
1161
+ "learning_rate": 7.338500848029603e-07,
1162
+ "logps/chosen": -38638.05078125,
1163
+ "logps/rejected": -33758.7890625,
1164
+ "loss": 0.0091,
1165
+ "neglected": 1066.0,
1166
+ "rewards/accuracies": 0.4625000059604645,
1167
+ "rewards/chosen": -383.42559814453125,
1168
+ "rewards/margins": -48.62553024291992,
1169
+ "rewards/rejected": -334.800048828125,
1170
+ "selected": 0.0,
1171
+ "step": 740
1172
+ },
1173
+ {
1174
+ "epoch": 0.79,
1175
+ "learning_rate": 6.70334495204884e-07,
1176
+ "logps/chosen": -35638.7421875,
1177
+ "logps/rejected": -31301.58984375,
1178
+ "loss": 0.0156,
1179
+ "neglected": 1226.0,
1180
+ "rewards/accuracies": 0.39375001192092896,
1181
+ "rewards/chosen": -353.8833312988281,
1182
+ "rewards/margins": -43.371212005615234,
1183
+ "rewards/rejected": -310.51214599609375,
1184
+ "selected": 0.0,
1185
+ "step": 750
1186
+ },
1187
+ {
1188
+ "epoch": 0.8,
1189
+ "learning_rate": 6.092659210462232e-07,
1190
+ "logps/chosen": -38116.8828125,
1191
+ "logps/rejected": -33497.234375,
1192
+ "loss": 0.012,
1193
+ "neglected": 1386.0,
1194
+ "rewards/accuracies": 0.4437499940395355,
1195
+ "rewards/chosen": -378.3582763671875,
1196
+ "rewards/margins": -45.90812301635742,
1197
+ "rewards/rejected": -332.45013427734375,
1198
+ "selected": 0.0,
1199
+ "step": 760
1200
+ },
1201
+ {
1202
+ "epoch": 0.81,
1203
+ "learning_rate": 5.507260361320738e-07,
1204
+ "logps/chosen": -36556.7890625,
1205
+ "logps/rejected": -33627.23828125,
1206
+ "loss": 0.0106,
1207
+ "neglected": 1546.0,
1208
+ "rewards/accuracies": 0.4312500059604645,
1209
+ "rewards/chosen": -362.6523742675781,
1210
+ "rewards/margins": -29.13754653930664,
1211
+ "rewards/rejected": -333.51483154296875,
1212
+ "selected": 0.0,
1213
+ "step": 770
1214
+ },
1215
+ {
1216
+ "epoch": 0.82,
1217
+ "learning_rate": 4.947931323697983e-07,
1218
+ "logps/chosen": -33210.11328125,
1219
+ "logps/rejected": -29063.90234375,
1220
+ "loss": 0.013,
1221
+ "neglected": 1706.0,
1222
+ "rewards/accuracies": 0.41874998807907104,
1223
+ "rewards/chosen": -329.4010925292969,
1224
+ "rewards/margins": -41.132415771484375,
1225
+ "rewards/rejected": -288.2686462402344,
1226
+ "selected": 0.0,
1227
+ "step": 780
1228
+ },
1229
+ {
1230
+ "epoch": 0.83,
1231
+ "learning_rate": 4.4154201506053985e-07,
1232
+ "logps/chosen": -35076.3515625,
1233
+ "logps/rejected": -33320.109375,
1234
+ "loss": 0.0146,
1235
+ "neglected": 1866.0,
1236
+ "rewards/accuracies": 0.45625001192092896,
1237
+ "rewards/chosen": -348.1772155761719,
1238
+ "rewards/margins": -17.483577728271484,
1239
+ "rewards/rejected": -330.693603515625,
1240
+ "selected": 0.0,
1241
+ "step": 790
1242
+ },
1243
+ {
1244
+ "epoch": 0.84,
1245
+ "learning_rate": 3.910439028537638e-07,
1246
+ "logps/chosen": -37705.09375,
1247
+ "logps/rejected": -33144.11328125,
1248
+ "loss": 0.0145,
1249
+ "neglected": 2026.0,
1250
+ "rewards/accuracies": 0.42500001192092896,
1251
+ "rewards/chosen": -374.12744140625,
1252
+ "rewards/margins": -45.31665802001953,
1253
+ "rewards/rejected": -328.810791015625,
1254
+ "selected": 0.0,
1255
+ "step": 800
1256
+ },
1257
+ {
1258
+ "epoch": 0.84,
1259
+ "eval_logps/chosen": -37680.9765625,
1260
+ "eval_logps/rejected": -33080.87890625,
1261
+ "eval_loss": 0.013520442880690098,
1262
+ "eval_neglected": 256.0,
1263
+ "eval_rewards/accuracies": 0.454365074634552,
1264
+ "eval_rewards/chosen": -374.0797424316406,
1265
+ "eval_rewards/margins": -45.80255889892578,
1266
+ "eval_rewards/rejected": -328.27716064453125,
1267
+ "eval_runtime": 506.1329,
1268
+ "eval_samples_per_second": 3.952,
1269
+ "eval_selected": 0.0,
1270
+ "eval_steps_per_second": 0.124,
1271
+ "step": 800
1272
+ },
1273
+ {
1274
+ "epoch": 0.85,
1275
+ "learning_rate": 3.4336633249862084e-07,
1276
+ "logps/chosen": -34459.21484375,
1277
+ "logps/rejected": -27397.837890625,
1278
+ "loss": 0.0064,
1279
+ "neglected": 586.0,
1280
+ "rewards/accuracies": 0.375,
1281
+ "rewards/chosen": -341.8682556152344,
1282
+ "rewards/margins": -70.08294677734375,
1283
+ "rewards/rejected": -271.78533935546875,
1284
+ "selected": 0.0,
1285
+ "step": 810
1286
+ },
1287
+ {
1288
+ "epoch": 0.86,
1289
+ "learning_rate": 2.98573068519539e-07,
1290
+ "logps/chosen": -37666.25390625,
1291
+ "logps/rejected": -29870.04296875,
1292
+ "loss": 0.0225,
1293
+ "neglected": 746.0,
1294
+ "rewards/accuracies": 0.4124999940395355,
1295
+ "rewards/chosen": -373.8772888183594,
1296
+ "rewards/margins": -77.50922393798828,
1297
+ "rewards/rejected": -296.3680725097656,
1298
+ "selected": 0.0,
1299
+ "step": 820
1300
+ },
1301
+ {
1302
+ "epoch": 0.87,
1303
+ "learning_rate": 2.5672401793681854e-07,
1304
+ "logps/chosen": -35177.1796875,
1305
+ "logps/rejected": -35886.7734375,
1306
+ "loss": 0.0127,
1307
+ "neglected": 906.0,
1308
+ "rewards/accuracies": 0.550000011920929,
1309
+ "rewards/chosen": -349.24237060546875,
1310
+ "rewards/margins": 6.9106035232543945,
1311
+ "rewards/rejected": -356.1529235839844,
1312
+ "selected": 0.0,
1313
+ "step": 830
1314
+ },
1315
+ {
1316
+ "epoch": 0.88,
1317
+ "learning_rate": 2.178751501463036e-07,
1318
+ "logps/chosen": -35737.97265625,
1319
+ "logps/rejected": -33345.86328125,
1320
+ "loss": 0.0107,
1321
+ "neglected": 1066.0,
1322
+ "rewards/accuracies": 0.45625001192092896,
1323
+ "rewards/chosen": -354.8319396972656,
1324
+ "rewards/margins": -23.900835037231445,
1325
+ "rewards/rejected": -330.93109130859375,
1326
+ "selected": 0.0,
1327
+ "step": 840
1328
+ },
1329
+ {
1330
+ "epoch": 0.89,
1331
+ "learning_rate": 1.820784220652766e-07,
1332
+ "logps/chosen": -36996.6171875,
1333
+ "logps/rejected": -31543.29296875,
1334
+ "loss": 0.0169,
1335
+ "neglected": 1226.0,
1336
+ "rewards/accuracies": 0.40625,
1337
+ "rewards/chosen": -367.1126708984375,
1338
+ "rewards/margins": -54.04352951049805,
1339
+ "rewards/rejected": -313.0691833496094,
1340
+ "selected": 0.0,
1341
+ "step": 850
1342
+ },
1343
+ {
1344
+ "epoch": 0.9,
1345
+ "learning_rate": 1.4938170864468636e-07,
1346
+ "logps/chosen": -35994.3359375,
1347
+ "logps/rejected": -32652.32421875,
1348
+ "loss": 0.0158,
1349
+ "neglected": 1386.0,
1350
+ "rewards/accuracies": 0.4312500059604645,
1351
+ "rewards/chosen": -357.20001220703125,
1352
+ "rewards/margins": -33.3302001953125,
1353
+ "rewards/rejected": -323.8697814941406,
1354
+ "selected": 0.0,
1355
+ "step": 860
1356
+ },
1357
+ {
1358
+ "epoch": 0.91,
1359
+ "learning_rate": 1.1982873884064466e-07,
1360
+ "logps/chosen": -31830.9375,
1361
+ "logps/rejected": -30581.083984375,
1362
+ "loss": 0.0158,
1363
+ "neglected": 1546.0,
1364
+ "rewards/accuracies": 0.543749988079071,
1365
+ "rewards/chosen": -315.9645690917969,
1366
+ "rewards/margins": -12.488165855407715,
1367
+ "rewards/rejected": -303.4764404296875,
1368
+ "selected": 0.0,
1369
+ "step": 870
1370
+ },
1371
+ {
1372
+ "epoch": 0.92,
1373
+ "learning_rate": 9.345903713082305e-08,
1374
+ "logps/chosen": -37537.7265625,
1375
+ "logps/rejected": -32952.95703125,
1376
+ "loss": 0.0108,
1377
+ "neglected": 1706.0,
1378
+ "rewards/accuracies": 0.48124998807907104,
1379
+ "rewards/chosen": -372.6186218261719,
1380
+ "rewards/margins": -45.61473846435547,
1381
+ "rewards/rejected": -327.00390625,
1382
+ "selected": 0.0,
1383
+ "step": 880
1384
+ },
1385
+ {
1386
+ "epoch": 0.93,
1387
+ "learning_rate": 7.030787065396866e-08,
1388
+ "logps/chosen": -39424.8828125,
1389
+ "logps/rejected": -34691.7265625,
1390
+ "loss": 0.0135,
1391
+ "neglected": 1866.0,
1392
+ "rewards/accuracies": 0.4312500059604645,
1393
+ "rewards/chosen": -391.5462341308594,
1394
+ "rewards/margins": -47.24760437011719,
1395
+ "rewards/rejected": -344.29864501953125,
1396
+ "selected": 0.0,
1397
+ "step": 890
1398
+ },
1399
+ {
1400
+ "epoch": 0.94,
1401
+ "learning_rate": 5.0406202043228604e-08,
1402
+ "logps/chosen": -36448.4765625,
1403
+ "logps/rejected": -30919.54296875,
1404
+ "loss": 0.0195,
1405
+ "neglected": 2026.0,
1406
+ "rewards/accuracies": 0.44999998807907104,
1407
+ "rewards/chosen": -361.8363037109375,
1408
+ "rewards/margins": -55.14497756958008,
1409
+ "rewards/rejected": -306.69134521484375,
1410
+ "selected": 0.0,
1411
+ "step": 900
1412
+ },
1413
+ {
1414
+ "epoch": 0.94,
1415
+ "eval_logps/chosen": -37924.83984375,
1416
+ "eval_logps/rejected": -33293.47265625,
1417
+ "eval_loss": 0.013714035972952843,
1418
+ "eval_neglected": 256.0,
1419
+ "eval_rewards/accuracies": 0.454365074634552,
1420
+ "eval_rewards/chosen": -376.51837158203125,
1421
+ "eval_rewards/margins": -46.11524200439453,
1422
+ "eval_rewards/rejected": -330.4031677246094,
1423
+ "eval_runtime": 967.1825,
1424
+ "eval_samples_per_second": 2.068,
1425
+ "eval_selected": 0.0,
1426
+ "eval_steps_per_second": 0.065,
1427
+ "step": 900
1428
+ },
1429
+ {
1430
+ "epoch": 0.95,
1431
+ "learning_rate": 3.378064801637687e-08,
1432
+ "logps/chosen": -37312.3515625,
1433
+ "logps/rejected": -31717.978515625,
1434
+ "loss": 0.0123,
1435
+ "neglected": 586.0,
1436
+ "rewards/accuracies": 0.4375,
1437
+ "rewards/chosen": -370.2989196777344,
1438
+ "rewards/margins": -55.649620056152344,
1439
+ "rewards/rejected": -314.6493225097656,
1440
+ "selected": 0.0,
1441
+ "step": 910
1442
+ },
1443
+ {
1444
+ "epoch": 0.96,
1445
+ "learning_rate": 2.0453443778310766e-08,
1446
+ "logps/chosen": -38356.51953125,
1447
+ "logps/rejected": -32435.177734375,
1448
+ "loss": 0.0172,
1449
+ "neglected": 746.0,
1450
+ "rewards/accuracies": 0.36250001192092896,
1451
+ "rewards/chosen": -380.7662048339844,
1452
+ "rewards/margins": -58.904090881347656,
1453
+ "rewards/rejected": -321.86212158203125,
1454
+ "selected": 0.0,
1455
+ "step": 920
1456
+ },
1457
+ {
1458
+ "epoch": 0.97,
1459
+ "learning_rate": 1.0442413283435759e-08,
1460
+ "logps/chosen": -37030.9140625,
1461
+ "logps/rejected": -29036.62109375,
1462
+ "loss": 0.0174,
1463
+ "neglected": 906.0,
1464
+ "rewards/accuracies": 0.45625001192092896,
1465
+ "rewards/chosen": -367.7157897949219,
1466
+ "rewards/margins": -79.69428253173828,
1467
+ "rewards/rejected": -288.02154541015625,
1468
+ "selected": 0.0,
1469
+ "step": 930
1470
+ },
1471
+ {
1472
+ "epoch": 0.98,
1473
+ "learning_rate": 3.760945397705828e-09,
1474
+ "logps/chosen": -39078.89453125,
1475
+ "logps/rejected": -32606.041015625,
1476
+ "loss": 0.0123,
1477
+ "neglected": 1066.0,
1478
+ "rewards/accuracies": 0.4312500059604645,
1479
+ "rewards/chosen": -388.01318359375,
1480
+ "rewards/margins": -64.55225372314453,
1481
+ "rewards/rejected": -323.4609375,
1482
+ "selected": 0.0,
1483
+ "step": 940
1484
+ },
1485
+ {
1486
+ "epoch": 0.99,
1487
+ "learning_rate": 4.1797599220405605e-10,
1488
+ "logps/chosen": -34646.1640625,
1489
+ "logps/rejected": -31544.458984375,
1490
+ "loss": 0.0168,
1491
+ "neglected": 1226.0,
1492
+ "rewards/accuracies": 0.48750001192092896,
1493
+ "rewards/chosen": -343.90411376953125,
1494
+ "rewards/margins": -30.896869659423828,
1495
+ "rewards/rejected": -313.00726318359375,
1496
+ "selected": 0.0,
1497
+ "step": 950
1498
+ },
1499
+ {
1500
+ "epoch": 1.0,
1501
+ "step": 955,
1502
+ "total_flos": 0.0,
1503
+ "train_loss": 0.11865393293933718,
1504
+ "train_runtime": 42177.7253,
1505
+ "train_samples_per_second": 1.449,
1506
+ "train_steps_per_second": 0.023
1507
+ }
1508
+ ],
1509
+ "logging_steps": 10,
1510
+ "max_steps": 955,
1511
+ "num_train_epochs": 1,
1512
+ "save_steps": 100,
1513
+ "total_flos": 0.0,
1514
+ "trial_name": null,
1515
+ "trial_params": null
1516
+ }