Jlonge4 committed on
Commit 08873e9
Parent: 8430d55

Jlonge4/outputs

README.md CHANGED
@@ -14,30 +14,12 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/0q2t3ek2)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/dtvgakdw)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/9hr7jcal)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/goc9hcye)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/fzw32mg0)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/fzw32mg0)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/fzw32mg0)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/dphr4egm)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/dphr4egm)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/dphr4egm)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/josh-longenecker1-groundedai/phi3.5-hallucination/runs/vn7nj2r3)
  # outputs
 
  This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.0926
+ - Loss: 1.5219
 
  ## Model description
 
@@ -65,28 +47,38 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine_with_restarts
  - lr_scheduler_warmup_steps: 20
- - training_steps: 160
+ - training_steps: 260
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-------:|:----:|:---------------:|
- | 2.2453 | 1.1429 | 10 | 2.1914 |
- | 1.8068 | 2.2857 | 20 | 1.8497 |
- | 1.5702 | 3.4286 | 30 | 1.5758 |
- | 1.5012 | 4.5714 | 40 | 1.2568 |
- | 1.2486 | 5.7143 | 50 | 1.1309 |
- | 0.948 | 6.8571 | 60 | 1.0965 |
- | 1.0246 | 8.0 | 70 | 1.0826 |
- | 0.7834 | 9.1429 | 80 | 1.0786 |
- | 0.8802 | 10.2857 | 90 | 1.0755 |
- | 0.7285 | 11.4286 | 100 | 1.0781 |
- | 0.8049 | 12.5714 | 110 | 1.0855 |
- | 0.831 | 13.7143 | 120 | 1.0920 |
- | 0.6412 | 14.8571 | 130 | 1.0900 |
- | 0.7723 | 16.0 | 140 | 1.0898 |
- | 0.8869 | 17.1429 | 150 | 1.0908 |
- | 0.6173 | 18.2857 | 160 | 1.0926 |
+ | 1.8679 | 1.1429 | 10 | 2.0237 |
+ | 1.3961 | 2.2857 | 20 | 1.5707 |
+ | 0.939 | 3.4286 | 30 | 1.1184 |
+ | 0.9957 | 4.5714 | 40 | 0.9883 |
+ | 0.8836 | 5.7143 | 50 | 0.9527 |
+ | 0.8069 | 6.8571 | 60 | 0.9431 |
+ | 0.6692 | 8.0 | 70 | 0.9416 |
+ | 0.7691 | 9.1429 | 80 | 0.9574 |
+ | 0.5804 | 10.2857 | 90 | 0.9505 |
+ | 0.395 | 11.4286 | 100 | 0.9772 |
+ | 0.3864 | 12.5714 | 110 | 1.0155 |
+ | 0.3433 | 13.7143 | 120 | 1.0573 |
+ | 0.4332 | 14.8571 | 130 | 1.0832 |
+ | 0.224 | 16.0 | 140 | 1.1592 |
+ | 0.1891 | 17.1429 | 150 | 1.2302 |
+ | 0.2235 | 18.2857 | 160 | 1.2603 |
+ | 0.1925 | 19.4286 | 170 | 1.3136 |
+ | 0.2264 | 20.5714 | 180 | 1.3556 |
+ | 0.1491 | 21.7143 | 190 | 1.4057 |
+ | 0.2421 | 22.8571 | 200 | 1.4966 |
+ | 0.1515 | 24.0 | 210 | 1.4495 |
+ | 0.1349 | 25.1429 | 220 | 1.5144 |
+ | 0.1493 | 26.2857 | 230 | 1.5340 |
+ | 0.1202 | 27.4286 | 240 | 1.5201 |
+ | 0.1154 | 28.5714 | 250 | 1.5305 |
+ | 0.1968 | 29.7143 | 260 | 1.5219 |
 
 
  ### Framework versions
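The card above documents a LoRA adapter trained on top of microsoft/Phi-3.5-mini-instruct. As a minimal, hedged sketch (not taken from this repo), the adapter could be attached to the base model for inference with transformers and peft; the adapter id `Jlonge4/outputs` is assumed from the repo name in the commit header, and the prompt is purely illustrative.

```python
# Sketch: load the base model and attach the fine-tuned LoRA adapter.
# "Jlonge4/outputs" is an assumed adapter path taken from the repo name above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3.5-mini-instruct"
adapter_id = "Jlonge4/outputs"  # assumption: substitute the actual adapter repo or local path

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Context: ...\nResponse: ...\nIs the response supported by the context?"  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```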
adapter_config.json CHANGED
@@ -10,8 +10,8 @@
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
- "lora_alpha": 64,
- "lora_dropout": 0.1,
+ "lora_alpha": 128,
+ "lora_dropout": 0.2,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
@@ -21,12 +21,12 @@
  "revision": null,
  "target_modules": [
  "up_proj",
- "v_proj",
- "q_proj",
- "o_proj",
- "down_proj",
  "k_proj",
- "gate_proj"
+ "down_proj",
+ "gate_proj",
+ "o_proj",
+ "q_proj",
+ "v_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f8c872fe43d7260fd00639758f062906f3d5a97b1db0c838fc60b914e973aee9
+ oid sha256:c47f5b555306903f26770a32ba84cdb3afdda4a92f30a5a6a0a28d0268ee956e
  size 142623480
runs/Sep08_03-30-31_111903198ea6/events.out.tfevents.1725766233.111903198ea6.3278.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91b014a21f74d49994be6a5f24ef4ae210a8bf47f20d7fe50d05847e4d52c272
+ size 69780
tokenizer.json CHANGED
@@ -1,6 +1,11 @@
  {
  "version": "1.0",
- "truncation": null,
+ "truncation": {
+ "direction": "Right",
+ "max_length": 1024,
+ "strategy": "LongestFirst",
+ "stride": 0
+ },
  "padding": null,
  "added_tokens": [
  {
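The new tokenizer.json block persists a truncation policy: right-side truncation to 1024 tokens with the LongestFirst strategy and no stride. A hedged sketch of applying the equivalent policy at encode time with a transformers fast tokenizer:

```python
# Sketch: truncate inputs to 1024 tokens, matching the policy stored in tokenizer.json.
# Right-side truncation and the "longest_first" strategy are the library defaults,
# so only max_length needs to be stated explicitly.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
encoded = tokenizer(
    "A long passage of context to be evaluated ...",
    truncation=True,
    max_length=1024,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # sequence length is capped at 1024
```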
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bb4d1b5e68a98620bd5d07cf5547d1546cb3035cdbf437944b92386c077aea6e
+ oid sha256:68156363a28ce3d23f4f6fe3990ca6cd67bf6b5e851dc870c0ea8fd2a80adf41
  size 5432
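training_args.bin is the serialized TrainingArguments for this run. Based only on the hyperparameters visible in the README diff above, a hedged reconstruction might look like the sketch below; learning rate, batch sizes, and evaluation settings are not part of this commit's diff, so those values are placeholders.

```python
# Sketch of TrainingArguments consistent with the hyperparameters listed in the README.
# Every value marked "placeholder" is NOT visible in this commit and is illustrative only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",
    max_steps=260,                             # "training_steps: 260"
    warmup_steps=20,                           # "lr_scheduler_warmup_steps: 20"
    lr_scheduler_type="cosine_with_restarts",  # "lr_scheduler_type: cosine_with_restarts"
    adam_beta1=0.9,                            # "optimizer: Adam with betas=(0.9,0.999)"
    adam_beta2=0.999,
    adam_epsilon=1e-8,                         # "epsilon=1e-08"
    learning_rate=2e-4,                        # placeholder
    per_device_train_batch_size=1,             # placeholder
    logging_steps=10,                          # placeholder; the results table reports every 10 steps
)
```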