LaoRay committed on
Commit
32da1fc
1 Parent(s): 669612e

Model save

Files changed (4)
  1. README.md +84 -0
  2. all_results.json +9 -0
  3. train_results.json +9 -0
  4. trainer_state.json +0 -0
README.md ADDED
@@ -0,0 +1,84 @@
+ ---
+ base_model: alignment-handbook/zephyr-7b-sft-full
+ library_name: peft
+ license: apache-2.0
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: zephyr-7b-dpo-lora-r16-20k
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-7b-dpo-lora-r16-20k
+
+ This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5370
+ - Rewards/chosen: -0.7888
+ - Rewards/rejected: -1.4748
+ - Rewards/accuracies: 0.7063
+ - Rewards/margins: 0.6860
+ - Logps/rejected: -395.5133
+ - Logps/chosen: -362.1219
+ - Logits/rejected: -2.5135
+ - Logits/chosen: -2.5568
+
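+ Because this repo contains a PEFT LoRA adapter rather than full model weights, inference
+ loads the adapter on top of the base model. A minimal sketch, assuming the adapter is
+ published as `LaoRay/zephyr-7b-dpo-lora-r16-20k` (the repo id is inferred from the
+ uploader and model name, not stated in the card):
+
+ ```python
+ import torch
+ from peft import AutoPeftModelForCausalLM
+ from transformers import AutoTokenizer
+
+ # Hypothetical repo id, inferred from the uploader and model name.
+ adapter_id = "LaoRay/zephyr-7b-dpo-lora-r16-20k"
+
+ # Loads the base model recorded in the adapter config
+ # (alignment-handbook/zephyr-7b-sft-full) and applies the LoRA weights.
+ model = AutoPeftModelForCausalLM.from_pretrained(
+     adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("alignment-handbook/zephyr-7b-sft-full")
+
+ messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ outputs = model.generate(inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+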
+ ## Model description
+
+ Per the metadata above, this is a PEFT LoRA adapter for
+ [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full)
+ trained with TRL's DPO implementation. The adapter name suggests LoRA rank 16 and a
+ 20k-example preference set, which is consistent with `train_samples: 20000` in the
+ training results below.
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ The preference dataset is not recorded in this card; the training results report
+ `train_samples: 20000`.
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a hypothetical training
+ sketch using these values follows the list):
+ - learning_rate: 5e-06
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
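+ The training script itself is not part of this repo; the following is a minimal,
+ hypothetical reconstruction with TRL's `DPOTrainer` and a rank-16 LoRA config,
+ mirroring the hyperparameters above. The dataset id, LoRA alpha/dropout, and `beta`
+ are placeholders, none of which are recorded in the card.
+
+ ```python
+ from datasets import load_dataset
+ from peft import LoraConfig
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from trl import DPOConfig, DPOTrainer
+
+ base_id = "alignment-handbook/zephyr-7b-sft-full"
+ model = AutoModelForCausalLM.from_pretrained(base_id)
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+
+ # Rank-16 LoRA, as the adapter name "lora-r16" suggests; alpha/dropout assumed.
+ peft_config = LoraConfig(r=16, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
+
+ # Hyperparameters copied from the list above.
+ args = DPOConfig(
+     output_dir="zephyr-7b-dpo-lora-r16-20k",
+     learning_rate=5e-6,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=8,
+     gradient_accumulation_steps=4,
+     num_train_epochs=1,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     seed=42,
+ )
+
+ # Placeholder dataset id; DPO expects "prompt"/"chosen"/"rejected" columns.
+ train_dataset = load_dataset("some/preference-dataset", split="train[:20000]")
+
+ # ref_model=None: with a peft_config, TRL uses the base weights
+ # (adapters disabled) as the implicit reference model.
+ trainer = DPOTrainer(
+     model,
+     ref_model=None,
+     args=args,
+     train_dataset=train_dataset,
+     tokenizer=tokenizer,
+     peft_config=peft_config,
+ )
+ trainer.train()
+ ```
+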
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.6895 | 0.08 | 100 | 0.6896 | 0.0099 | 0.0028 | 0.6627 | 0.0072 | -247.7537 | -282.2447 | -2.8481 | -2.8901 |
+ | 0.653 | 0.16 | 200 | 0.6569 | -0.0133 | -0.0954 | 0.6865 | 0.0821 | -257.5692 | -284.5635 | -2.8339 | -2.8742 |
+ | 0.6385 | 0.24 | 300 | 0.6190 | -0.2742 | -0.4752 | 0.6905 | 0.2011 | -295.5536 | -310.6566 | -2.8031 | -2.8399 |
+ | 0.5689 | 0.32 | 400 | 0.6027 | -0.2972 | -0.5719 | 0.6944 | 0.2747 | -305.2159 | -312.9573 | -2.8083 | -2.8437 |
+ | 0.5689 | 0.4 | 500 | 0.5750 | -0.6614 | -1.0704 | 0.7242 | 0.4089 | -355.0662 | -349.3812 | -2.7152 | -2.7560 |
+ | 0.5884 | 0.48 | 600 | 0.5479 | -0.6965 | -1.2708 | 0.7123 | 0.5743 | -375.1053 | -352.8877 | -2.6322 | -2.6724 |
+ | 0.5366 | 0.56 | 700 | 0.5462 | -0.7254 | -1.3351 | 0.7123 | 0.6097 | -381.5439 | -355.7809 | -2.6144 | -2.6541 |
+ | 0.542 | 0.64 | 800 | 0.5451 | -0.6920 | -1.2686 | 0.7262 | 0.5766 | -374.8915 | -352.4363 | -2.5757 | -2.6163 |
+ | 0.5282 | 0.72 | 900 | 0.5412 | -0.7969 | -1.4275 | 0.7083 | 0.6306 | -390.7825 | -362.9279 | -2.5266 | -2.5716 |
+ | 0.5873 | 0.8 | 1000 | 0.5369 | -0.8233 | -1.5128 | 0.7083 | 0.6894 | -399.3072 | -365.5720 | -2.5254 | -2.5693 |
+ | 0.5152 | 0.88 | 1100 | 0.5384 | -0.7446 | -1.4196 | 0.7143 | 0.6749 | -389.9855 | -357.7025 | -2.5188 | -2.5620 |
+ | 0.5213 | 0.96 | 1200 | 0.5370 | -0.7888 | -1.4748 | 0.7063 | 0.6860 | -395.5133 | -362.1219 | -2.5135 | -2.5568 |
+
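+ For reading the table: in TRL's DPO implementation, `rewards/chosen` and
+ `rewards/rejected` are `beta` times the policy-vs-reference log-probability
+ difference on the chosen and rejected responses, `rewards/margins` is their gap,
+ and `rewards/accuracies` is the fraction of pairs where the chosen reward exceeds
+ the rejected one. A sketch of the underlying loss (`beta` is not recorded in this
+ card; 0.1 is TRL's default):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def dpo_loss(policy_chosen_logps: torch.Tensor,
+              policy_rejected_logps: torch.Tensor,
+              ref_chosen_logps: torch.Tensor,
+              ref_rejected_logps: torch.Tensor,
+              beta: float = 0.1):
+     # Implicit rewards: beta-scaled log-prob shift away from the reference model.
+     chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
+     rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
+     margins = chosen_rewards - rejected_rewards            # rewards/margins
+     loss = -F.logsigmoid(margins).mean()                   # the loss columns
+     accuracy = (chosen_rewards > rejected_rewards).float().mean()  # rewards/accuracies
+     return loss, chosen_rewards, rejected_rewards, accuracy
+ ```
+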
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.44.0
+ - Pytorch 2.4.0+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 1.0,
+     "total_flos": 0.0,
+     "train_loss": 0.5874967678070069,
+     "train_runtime": 15864.6959,
+     "train_samples": 20000,
+     "train_samples_per_second": 1.261,
+     "train_steps_per_second": 0.079
+ }
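The throughput figures are internally consistent with the README hyperparameters; a quick check (step count assumes total_train_batch_size = 16 over one epoch):

```python
train_samples = 20_000
train_runtime = 15_864.6959              # seconds
total_train_batch_size = 16              # 4 per device x 4 grad-accum steps

steps = train_samples / total_train_batch_size   # 1250 optimizer steps
print(round(train_samples / train_runtime, 3))   # 1.261 samples/s
print(round(steps / train_runtime, 3))           # 0.079 steps/s
```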
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 1.0,
+     "total_flos": 0.0,
+     "train_loss": 0.5874967678070069,
+     "train_runtime": 15864.6959,
+     "train_samples": 20000,
+     "train_samples_per_second": 1.261,
+     "train_steps_per_second": 0.079
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff