LouayYahyaoui committed
Commit 523c87a (1 parent: 2522f0d)

add: llama for education

README.md CHANGED
@@ -7,19 +7,19 @@ tags:
 - sft
 - generated_from_trainer
 model-index:
-- name: outputs
+- name: llama-edu
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# outputs
+# llama-edu
 
 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.8215
-- Model Preparation Time: 0.0166
+- Loss: 0.7934
+- Model Preparation Time: 0.0161
 
 ## Model description
 
@@ -54,20 +54,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch  | Step | Validation Loss | Model Preparation Time |
 |:-------------:|:------:|:----:|:---------------:|:----------------------:|
-| 2.1179        | 1.1765 | 20   | 2.0786          | 0.0166                 |
-| 1.1724        | 2.3529 | 40   | 1.2198          | 0.0166                 |
-| 0.9532        | 3.5294 | 60   | 0.9806          | 0.0166                 |
-| 0.851         | 4.7059 | 80   | 0.8941          | 0.0166                 |
-| 0.8294        | 5.8824 | 100  | 0.8571          | 0.0166                 |
-| 0.7754        | 7.0588 | 120  | 0.8378          | 0.0166                 |
-| 0.748         | 8.2353 | 140  | 0.8258          | 0.0166                 |
-| 0.7357        | 9.4118 | 160  | 0.8224          | 0.0166                 |
+| 2.2418        | 1.0256 | 20   | 2.0564          | 0.0161                 |
+| 1.2511        | 2.0513 | 40   | 1.2006          | 0.0161                 |
+| 0.9906        | 3.0769 | 60   | 0.9552          | 0.0161                 |
+| 0.8046        | 4.1026 | 80   | 0.8716          | 0.0161                 |
+| 0.7893        | 5.1282 | 100  | 0.8376          | 0.0161                 |
+| 0.8012        | 6.1538 | 120  | 0.8167          | 0.0161                 |
+| 0.7734        | 7.1795 | 140  | 0.8038          | 0.0161                 |
+| 0.78          | 8.2051 | 160  | 0.7970          | 0.0161                 |
+| 0.7576        | 9.2308 | 180  | 0.7939          | 0.0161                 |
 
 
 ### Framework versions
 
 - PEFT 0.12.0
-- Transformers 4.44.0
+- Transformers 4.44.2
 - Pytorch 2.1.2
 - Datasets 2.21.0
 - Tokenizers 0.19.1
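
The updated card names the adapter `llama-edu` but does not yet include usage instructions. Below is a minimal sketch of loading this PEFT adapter on top of the base model; the adapter repo id is an assumption (substitute the actual Hub path or a local checkout of this repository), and the prompt is only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "LouayYahyaoui/llama-edu"  # assumed repo id; replace with the real Hub path or a local directory

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter weights (adapter_model.safetensors) on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Illustrative prompt; the base model is instruction-tuned, so use its chat template.
messages = [{"role": "user", "content": "Explain the Pythagorean theorem to a middle-school student."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base_model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```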
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
+        "o_proj",
         "up_proj",
         "down_proj",
-        "k_proj",
-        "o_proj",
         "v_proj",
         "q_proj",
-        "gate_proj"
+        "gate_proj",
+        "k_proj"
     ],
     "task_type": "CAUSAL_LM",
     "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d0ea907eab6d40559ce9a423f2297c8ac366c66a13d69aa42eec515d31edb59b
+oid sha256:8a5a3d7ea99698316cfe9750f9d69a4c1e3d16da3156448d4a6dfabcc5076dfa
 size 167832240
runs/Aug29_11-54-18_98b9c91667d9/events.out.tfevents.1724932510.98b9c91667d9.24.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64777f7f55a30ce0665d9c53705dd7b7207ccf108a865151972e8a898dabcb21
+size 49164
runs/Aug29_11-54-18_98b9c91667d9/events.out.tfevents.1724937021.98b9c91667d9.24.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cdbe4aa13afbfe5de62c4bcae85aa61f4a410256329f182ba298c02e99eeb2e
+size 425
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e45c6297c96123d2ce89b146263169109927e540ce178b90e34c2267c12daaa9
+oid sha256:83d76460e4773cf68a61695f04a192cec9ea8ab643f1317351e1e190ce4d24ab
 size 5432
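
Only the LFS pointer for `training_args.bin` changed here; the file itself is the `TrainingArguments` object that the `Trainer` serializes with `torch.save`, so the hyperparameters referenced in the README can be inspected as sketched below (a local copy of the file and a compatible `transformers` install are assumed).

```python
import torch

# training_args.bin is a pickled TrainingArguments object, so weights_only must stay False.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```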