ironrock committed
Commit faef757
1 Parent(s): 2ab9240

Model save

Files changed (2):
  1. README.md +81 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ license: apache-2.0
+ library_name: peft
+ tags:
+ - trl
+ - sft
+ - SFT
+ - WeniGPT
+ - generated_from_trainer
+ base_model: mistralai/Mistral-7B-Instruct-v0.2
+ datasets:
+ - generator
+ model-index:
+ - name: WeniGPT-Agents-Mistral-1.0.11-SFT
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # WeniGPT-Agents-Mistral-1.0.11-SFT
+
+ This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.1287
+
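+ As a quick orientation, here is a minimal usage sketch for loading this adapter on top of the base model with `peft`. The adapter path below is a placeholder assumption (this card does not state the published repo id); substitute the actual location of the adapter weights.
+
+ ```python
+ # Hedged sketch: attach the LoRA adapter to the base Mistral model.
+ # "adapter_id" is a placeholder; the card does not state the published repo id.
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ base_id = "mistralai/Mistral-7B-Instruct-v0.2"
+ adapter_id = "WeniGPT-Agents-Mistral-1.0.11-SFT"  # assumed adapter path
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     base_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ model = PeftModel.from_pretrained(model, adapter_id)
+
+ # Mistral-Instruct expects [INST] ... [/INST] formatting.
+ prompt = "[INST] Hello, who are you? [/INST]"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+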
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a hedged `SFTTrainer` sketch follows the list):
+ - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 8
+ - total_eval_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.03
+ - training_steps: 330
+
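+ As a rough reconstruction, these settings would map onto TRL's `SFTTrainer` roughly as below. The model and dataset objects are assumptions not recorded in this card, and the optimizer line above matches the Trainer's default AdamW, so no explicit optimizer override is shown.
+
+ ```python
+ # Hedged sketch: TrainingArguments approximating the listed hyperparameters.
+ # 1 per-device batch x 4 GPUs x 2 accumulation steps = total train batch of 8.
+ from transformers import TrainingArguments
+ from trl import SFTTrainer
+
+ args = TrainingArguments(
+     output_dir="WeniGPT-Agents-Mistral-1.0.11-SFT",
+     learning_rate=2e-4,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=1,
+     gradient_accumulation_steps=2,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.03,
+     max_steps=330,
+     seed=42,
+ )
+
+ # trainer = SFTTrainer(model=model, args=args, train_dataset=train_ds, ...)
+ # trainer.train()  # model/train_ds are assumed, not recorded in this card
+ ```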
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 1.0466 | 0.5357 | 30 | 1.1547 |
+ | 0.8783 | 1.0714 | 60 | 1.1288 |
+ | 0.6778 | 1.6071 | 90 | 1.1287 |
+ | 0.4609 | 2.1429 | 120 | 1.1453 |
+ | 0.3946 | 2.6786 | 150 | 1.1766 |
+ | 0.2669 | 3.2143 | 180 | 1.2127 |
+ | 0.2871 | 3.7500 | 210 | 1.2313 |
+ | 0.1831 | 4.2857 | 240 | 1.2560 |
+ | 0.2145 | 4.8214 | 270 | 1.2665 |
+ | 0.1670 | 5.3571 | 300 | 1.2968 |
+ | 0.1569 | 5.8929 | 330 | 1.2957 |
+
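+ The validation loss bottoms out at 1.1287 around step 90 and climbs steadily afterwards while the training loss keeps falling, which suggests overfitting past roughly epoch 2 and is consistent with the headline evaluation loss above coming from that best checkpoint rather than the final step.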
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.40.0
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.18.0
+ - Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
- oid sha256:7e2e26e410530a9f96bee0b327349cc65885db1a232c795fa23146192c810b5a
+ oid sha256:d2bee445e149d5e25674f3101fd5125f8560a30f2a60f33c4354dd3c5f16a54c
size 335604696
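
Both versions of this file are Git LFS pointers rather than the raw weights: the commit replaces the adapter weights (new sha256 oid) while the payload stays the same size, 335604696 bytes (~336 MB).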