Update README.md

README.md
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# LeroyDyer/Mixtral_AI_MiniTron_Chat AWQ

- Model creator: [LeroyDyer](https://huggingface.co/LeroyDyer)
- Original model: [Mixtral_AI_MiniTron_Chat](https://huggingface.co/LeroyDyer/Mixtral_AI_MiniTron_Chat)

## Model Summary
These little models are easy to train for specific tasks. They already have some training (not great), but they can take more and more, and being Mistral-based they can take LoRA modules.

Remember to add training on top of any LoRA you merge with it: load the LoRA and train for a few cycles (e.g. 20 steps) on the same data that was used for the LoRA, check that it took hold, and then merge it (see the sketch below).
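A minimal sketch of that load-train-merge loop using PEFT and transformers; the adapter path, example texts, and hyperparameters below are illustrative placeholders, not part of this model card:

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from peft import PeftModel

base_id = "LeroyDyer/Mixtral_AI_MiniTron_Chat"
adapter_path = "path/to/your-lora"  # hypothetical: the LoRA you intend to merge

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Re-attach the LoRA with trainable weights so a few extra steps can take hold.
model = PeftModel.from_pretrained(base, adapter_path, is_trainable=True)

# The same data the LoRA was originally trained on (placeholder examples here).
texts = ["<examples from the LoRA's original training data>"]
train_dataset = [tokenizer(t, truncation=True, max_length=512) for t in texts]

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-refresh",
        max_steps=20,  # the "few cycles (e.g. 20 steps)" recommended above
        per_device_train_batch_size=1,
        learning_rate=2e-4,
    ),
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# If the extra steps took hold, fold the adapter into the base weights.
merged = model.merge_and_unload()
merged.save_pretrained("minitron-chat-merged")
tokenizer.save_pretrained("minitron-chat-merged")
```

`merge_and_unload()` folds the LoRA deltas into the base weights, so the saved model no longer needs PEFT at inference time.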
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model:** LeroyDyer/Mixtral_AI_MiniTron
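
## Loading the AWQ model

Since this repository hosts the AWQ-quantized weights, here is a minimal loading sketch, assuming transformers with the `autoawq` package installed; the repo id is a placeholder for this card's repository:

```python
# Minimal AWQ loading sketch (assumes: pip install autoawq transformers).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-AWQ-repo>"  # hypothetical placeholder for this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```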
|