End of training
README.md +30 -1
adapter_model.bin +3 -0
README.md
CHANGED
@@ -2,6 +2,7 @@
 license: apache-2.0
 library_name: peft
 tags:
+- axolotl
 - generated_from_trainer
 base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
 model-index:
@@ -86,7 +87,9 @@ weight_decay: 0.0
 
 # empower-functions-more-tools-diverse-data-adds-one-more-function
 
-This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on
+This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.0873
 
 ## Model description
 
@@ -119,6 +122,32 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 10
 - num_epochs: 1
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 2.2516        | 0.0   | 1    | 2.1498          |
+| 0.1342        | 0.05  | 25   | 0.1461          |
+| 0.1297        | 0.1   | 50   | 0.1167          |
+| 0.1098        | 0.15  | 75   | 0.1080          |
+| 0.0895        | 0.2   | 100  | 0.1025          |
+| 0.0985        | 0.25  | 125  | 0.1007          |
+| 0.0987        | 0.3   | 150  | 0.0984          |
+| 0.0988        | 0.35  | 175  | 0.0971          |
+| 0.0989        | 0.4   | 200  | 0.0947          |
+| 0.1109        | 0.45  | 225  | 0.0937          |
+| 0.0957        | 0.5   | 250  | 0.0934          |
+| 0.1038        | 0.55  | 275  | 0.0924          |
+| 0.0969        | 0.6   | 300  | 0.0917          |
+| 0.096         | 0.65  | 325  | 0.0901          |
+| 0.0893        | 0.7   | 350  | 0.0897          |
+| 0.0768        | 0.75  | 375  | 0.0887          |
+| 0.0848        | 0.8   | 400  | 0.0882          |
+| 0.0854        | 0.85  | 425  | 0.0878          |
+| 0.083         | 0.9   | 450  | 0.0874          |
+| 0.0868        | 0.95  | 475  | 0.0873          |
+
+
 ### Framework versions
 
 - PEFT 0.9.0
adapter_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ecf002fce2ee15eeda5fe9002c09ee878ca0bc22cfc18e34b2c2c431c46f6b90
+size 109144714
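
For context, a minimal sketch of how the PEFT adapter committed here (adapter_model.bin) might be loaded on top of the base model with `transformers` and `peft`. The repository id below is a placeholder assumption, not something stated in this commit, and the card's PEFT 0.9.0 version is assumed.

```python
# Minimal sketch (assumption): load the LoRA/PEFT adapter from this repo on top of
# mistralai/Mixtral-8x7B-Instruct-v0.1. The adapter repo id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
adapter_id = "your-org/empower-functions-more-tools-diverse-data-adds-one-more-function"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # picks up the adapter weights added in this commit

prompt = "[INST] What's the weather in Paris? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```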