statking committed
Commit
36e4b37
1 Parent(s): 58552be

Model save

README.md ADDED
@@ -0,0 +1,79 @@
---
license: llama3
library_name: peft
tags:
- trl
- orpo
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-70B-Instruct
model-index:
- name: Meta-Llama-3-70B-Instruct
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/statking/huggingface/runs/f61fvw8u)
# Meta-Llama-3-70B-Instruct

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2884
- Rewards/chosen: -0.0888
- Rewards/rejected: -0.1138
- Rewards/accuracies: 0.6132
- Rewards/margins: 0.0250
- Logps/rejected: -1.1382
- Logps/chosen: -0.8884
- Logits/rejected: -0.0033
- Logits/chosen: 0.2012
- Nll Loss: 1.2075
- Log Odds Ratio: -0.6278
- Log Odds Chosen: 0.3768
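
The `Nll Loss`, `Log Odds Ratio`, and `Log Odds Chosen` entries are the quantities trl's `ORPOTrainer` logs for the two terms of the ORPO objective. As a reminder of what they measure, here is a sketch of the objective as formulated in the ORPO paper (Hong et al., 2024); the weight $\lambda$ is exposed in trl as the `beta` argument of `ORPOConfig` and is not recorded in this card:

$$
\mathcal{L}_{\mathrm{ORPO}} = \mathcal{L}_{\mathrm{NLL}} + \lambda\,\mathcal{L}_{\mathrm{OR}},
\qquad
\mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right),
\qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
$$

Here $y_w$ and $y_l$ are the chosen and rejected responses. Roughly speaking, the positive `Log Odds Chosen` value (0.3768) indicates that, on average, the model assigns higher odds to chosen responses than to rejected ones on the evaluation set.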

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
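
For orientation, below is a minimal sketch of how a run with these hyperparameters could be configured with trl's `ORPOTrainer`. Only the values listed above come from this card; the dataset id, LoRA settings, and dtype are placeholders/assumptions, since they are not recorded here.

```python
# Hedged reconstruction of the training setup. Only the listed hyperparameters
# come from the card; the dataset id, LoRA settings, and dtype are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Meta-Llama-3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Placeholder: the actual preference dataset is not recorded in this card.
# ORPOTrainer expects "prompt" / "chosen" / "rejected" columns.
dataset = load_dataset("org/preference-dataset")

peft_config = LoraConfig(  # placeholder adapter settings, not taken from the card
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

args = ORPOConfig(
    output_dir="Meta-Llama-3-70B-Instruct",
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,  # x 4 GPUs x batch 1 -> effective batch of 16
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption; the training dtype is not stated in the card
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],  # split names depend on the dataset
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```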

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 1.2483 | 0.9999 | 3555 | 1.2884 | -0.0888 | -0.1138 | 0.6132 | 0.0250 | -1.1382 | -0.8884 | -0.0033 | 0.2012 | 1.2075 | -0.6278 | 0.3768 |


### Framework versions

- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.2.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
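
Because this repository ships a PEFT (LoRA) adapter (see the `adapter_model.safetensors` update below) rather than merged weights, inference requires loading the base model and attaching the adapter. A minimal sketch, with the adapter path left as a placeholder:

```python
# Minimal inference sketch: attach the LoRA adapter from this repo to the base model.
# The adapter path below is a placeholder for this repository's id or a local checkout.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Meta-Llama-3-70B-Instruct"
adapter = "path/or/repo-id-of-this-adapter"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that the 70B base model in bfloat16 needs roughly 140 GB of accelerator memory, so multi-GPU `device_map="auto"` or quantization is typically required.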
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f5d32d4ba2e938b485a104c6c41413d99b30d8307d56a949ccf0cf886db13979
+ oid sha256:30a1b0bb90dc98b89fc103bd8b6ea4a493f504492bb8806dc201b65b0e77f141
  size 4410006192
all_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 0.9999296814570002,
    "total_flos": 0.0,
    "train_loss": 1.4252014341233652,
    "train_runtime": 102484.8577,
    "train_samples": 56881,
    "train_samples_per_second": 0.555,
    "train_steps_per_second": 0.035
}
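
As a quick consistency check on these numbers: 56,881 training samples over a runtime of 102,484.86 s gives 56,881 / 102,484.86 ≈ 0.555 samples per second, and dividing by the effective batch size of 16 gives ≈ 0.035 steps per second, matching the reported values. The runtime corresponds to roughly 28.5 hours of training.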
train_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 0.9999296814570002,
    "total_flos": 0.0,
    "train_loss": 1.4252014341233652,
    "train_runtime": 102484.8577,
    "train_samples": 56881,
    "train_samples_per_second": 0.555,
    "train_steps_per_second": 0.035
}
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff