Model save

- README.md +24 -25
- all_results.json +4 -4
- train_results.json +4 -4
- trainer_state.json +0 -0
README.md CHANGED

@@ -5,7 +5,6 @@ license: apache-2.0
 tags:
 - trl
 - dpo
-- alignment-handbook
 - generated_from_trainer
 model-index:
 - name: zephyr-7b-dpo-lora-r16-20k
@@ -19,15 +18,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
 It achieves the following results on the evaluation set:
-[nine removed metric lines; their previous values are truncated in the rendered diff]
+- Loss: 0.5301
+- Rewards/chosen: -0.7870
+- Rewards/rejected: -1.4645
+- Rewards/accuracies: 0.7202
+- Rewards/margins: 0.6775
+- Logps/rejected: -394.4780
+- Logps/chosen: -361.9388
+- Logits/rejected: -2.5045
+- Logits/chosen: -2.5477
 
 ## Model description
 
@@ -60,26 +59,26 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | … [previous results table (14 lines); remaining columns and values truncated in the rendered diff]
+| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+| 0.6899 | 0.08 | 100 | 0.6897 | 0.0098 | 0.0028 | 0.6667 | 0.0070 | -247.7543 | -282.2605 | -2.8468 | -2.8890 |
+| 0.6532 | 0.16 | 200 | 0.6569 | -0.0128 | -0.0950 | 0.6885 | 0.0822 | -257.5306 | -284.5143 | -2.8386 | -2.8782 |
+| 0.6372 | 0.24 | 300 | 0.6181 | -0.2381 | -0.4406 | 0.6825 | 0.2026 | -292.0921 | -307.0444 | -2.8033 | -2.8402 |
+| 0.5699 | 0.32 | 400 | 0.6034 | -0.2658 | -0.5383 | 0.6964 | 0.2725 | -301.8563 | -309.8138 | -2.7952 | -2.8319 |
+| 0.5622 | 0.4 | 500 | 0.5688 | -0.5565 | -0.9794 | 0.7143 | 0.4229 | -345.9727 | -338.8872 | -2.6913 | -2.7320 |
+| 0.5826 | 0.48 | 600 | 0.5457 | -0.5456 | -1.1188 | 0.7242 | 0.5732 | -359.9116 | -337.7992 | -2.6523 | -2.6907 |
+| 0.5313 | 0.56 | 700 | 0.5387 | -0.7142 | -1.3304 | 0.7242 | 0.6162 | -381.0734 | -354.6571 | -2.6173 | -2.6586 |
+| 0.5332 | 0.64 | 800 | 0.5386 | -0.7256 | -1.3351 | 0.7183 | 0.6096 | -381.5442 | -355.7965 | -2.5760 | -2.6167 |
+| 0.5334 | 0.72 | 900 | 0.5368 | -0.7061 | -1.3229 | 0.7163 | 0.6168 | -380.3204 | -353.8529 | -2.5574 | -2.5999 |
+| 0.5837 | 0.8 | 1000 | 0.5302 | -0.7953 | -1.4787 | 0.7163 | 0.6834 | -395.8991 | -362.7657 | -2.5273 | -2.5706 |
+| 0.5144 | 0.88 | 1100 | 0.5327 | -0.7410 | -1.4021 | 0.7123 | 0.6611 | -388.2353 | -357.3381 | -2.5162 | -2.5586 |
+| 0.5196 | 0.96 | 1200 | 0.5301 | -0.7870 | -1.4645 | 0.7202 | 0.6775 | -394.4780 | -361.9388 | -2.5045 | -2.5477 |
 
 
 ### Framework versions
 
 - PEFT 0.12.0
 - Transformers 4.44.0
-- Pytorch 2.1.2+… [build suffix truncated in the rendered diff]
+- Pytorch 2.1.2+cu121
 - Datasets 2.21.0
 - Tokenizers 0.19.1
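For context on the framework versions above: this is a PEFT (LoRA) adapter trained on top of alignment-handbook/zephyr-7b-sft-full, so inference means loading the base model and attaching the adapter. The sketch below shows the usual PEFT/Transformers pattern only; the adapter repo id is a placeholder, and the dtype/device settings are illustrative rather than taken from this training run.

```python
# Minimal usage sketch (assumptions flagged below), not taken from this repo's files.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "alignment-handbook/zephyr-7b-sft-full"
adapter_id = "<your-namespace>/zephyr-7b-dpo-lora-r16-20k"  # placeholder, not a confirmed Hub path

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",           # requires accelerate
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

# Zephyr-style models are chat-tuned, so the chat template is normally applied first.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base.device)
out = model.generate(input_ids=inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```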
all_results.json CHANGED

@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.…
-    "train_runtime": …
+    "train_loss": 0.5873338260650635,
+    "train_runtime": 15803.1996,
     "train_samples": 20000,
-    "train_samples_per_second": …
-    "train_steps_per_second": …
+    "train_samples_per_second": 1.266,
+    "train_steps_per_second": 0.079
 }
train_results.json CHANGED

@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.…
-    "train_runtime": …
+    "train_loss": 0.5873338260650635,
+    "train_runtime": 15803.1996,
     "train_samples": 20000,
-    "train_samples_per_second": …
-    "train_steps_per_second": …
+    "train_samples_per_second": 1.266,
+    "train_steps_per_second": 0.079
 }
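The reported throughput is internally consistent: 20000 samples over 15803.1996 s gives about 1.266 samples per second, and 0.079 steps per second over the same runtime corresponds to roughly 1,250 optimizer steps for the single epoch, in line with the 1,200+ steps logged in the training-results table above. A minimal sketch of that check, assuming the JSON file is available locally under the name shown here:

```python
# Illustrative consistency check of the reported throughput; assumes
# train_results.json sits in the working directory.
import json

with open("train_results.json") as f:
    r = json.load(f)

samples_per_sec = r["train_samples"] / r["train_runtime"]        # 20000 / 15803.1996 ≈ 1.266
approx_steps = r["train_steps_per_second"] * r["train_runtime"]  # 0.079 * 15803.1996 ≈ 1248
print(f"samples/sec: {samples_per_sec:.3f}")
print(f"approx. optimizer steps: {approx_steps:.0f}")
```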
trainer_state.json CHANGED
The diff for this file is too large to render.
See raw diff
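trainer_state.json is too large to render here, but the evaluation history shown in the README table can be recovered from it locally. A small sketch, assuming the standard transformers Trainer layout in which logged metrics live under a log_history list:

```python
# Sketch for inspecting trainer_state.json locally; assumes the usual
# transformers Trainer format with a "log_history" list of logged dicts.
import json

with open("trainer_state.json") as f:
    state = json.load(f)

# Evaluation entries carry an "eval_loss" key; training log entries carry "loss".
evals = [e for e in state.get("log_history", []) if "eval_loss" in e]
for e in evals:
    # DPO reward keys are assumed to be prefixed with "eval_"; .get() guards against naming differences.
    print(e["step"], e["eval_loss"], e.get("eval_rewards/margins"))
```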