---
base_model: LeroyDyer/Mixtral_AI_MiniTron_Chat
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# LeroyDyer/Mixtral_AI_MiniTron_Chat AWQ

- Model creator: [LeroyDyer](https://huggingface.co/LeroyDyer)
- Original model: [Mixtral_AI_MiniTron_Chat](https://huggingface.co/LeroyDyer/Mixtral_AI_MiniTron_Chat)

## Model Summary

These small models are easy to train for specific tasks.

They already have some baseline training (not extensive), but they can absorb plenty more.

(And, being Mistral-based, they can take LoRA modules.)

Remember to add training on top of any LoRA before you merge with it: load the LoRA and train for a few cycles on the same data that was used to create it (e.g. 20 steps), check that the training took hold, and then merge it. A sketch of this workflow follows.
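
Below is a minimal sketch of that train-then-merge workflow using the PEFT library. The adapter path and training samples are placeholders, and this applies to the full-precision base model rather than this AWQ quant, since 4-bit quantized weights are not suited to further training:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "LeroyDyer/Mixtral_AI_MiniTron_Chat"
lora_path = "path/to/your-lora-adapter"  # placeholder: your trained LoRA

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Load the adapter on top of the base model, trainable so it can take
# a few more optimization steps on the data it was originally tuned on.
model = PeftModel.from_pretrained(base, lora_path, is_trainable=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
# Placeholder data: reuse the same samples the LoRA was trained on (~20 steps).
for text in ["<your training samples here>"] * 20:
    batch = tokenizer(text, return_tensors="pt").to(model.device)
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Once the refresher training has taken hold, fold the adapter weights
# into the base model and save the merged result.
merged = model.merge_and_unload()
merged.save_pretrained("minitron-chat-merged")
tokenizer.save_pretrained("minitron-chat-merged")
```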

- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model:** LeroyDyer/Mixtral_AI_MiniTron
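
## Example usage

A minimal inference sketch (an assumed typical setup, not an official recipe): recent `transformers` releases can load 4-bit AWQ checkpoints directly when the `autoawq` package is installed (`pip install autoawq`). The repo id below is a placeholder for this quantized repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-awq-repo"  # placeholder: the id of this AWQ quant

tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ quantization config is read from the checkpoint; no extra args needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a haiku about small language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```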