AWQ GEMM quant of TokenBender/pic_7B_mistral_Full_v0.2
pic_7B_mistral_Full_v0.2
PIC_7B_Mistral (First phase)
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1. It was trained on a curated, decontaminated subset of datasets, which are listed in the model card; all datasets used were public at the time of this model's release.
Collaborate with or consult me: Twitter, Discord
The recommended prompt format is ChatML; Alpaca will also work, but take care with the EOT token.
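For reference, a minimal sketch of the ChatML layout (the system and user text here are placeholders, not from this card); each turn is closed by `<|im_end|>`, which acts as the end-of-turn (EOT) marker to watch for:

```python
# Illustrative only: the raw ChatML layout recommended above.
# Every turn ends with <|im_end|>, the end-of-turn (EOT) token.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a haiku about debugging.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```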
Chat Model Inference
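A minimal inference sketch, assuming the AWQ weights load directly through Transformers (with `autoawq` and `accelerate` installed) and that the tokenizer ships a ChatML chat template; the model id comes from this repo, while the prompt and sampling settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SinanAkkoyun/pic_7B_mistral_Full_v0.2-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires `accelerate`; loading AWQ weights requires `autoawq`.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a ChatML prompt via the tokenizer's chat template (assumed to be ChatML).
messages = [
    {"role": "system", "content": "You are a helpful partner-in-crime assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Strip the prompt tokens and print only the newly generated reply.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```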
Model description
First generic model of Project PIC (Partner-in-Crime) in the 7B range. Trying a bunch of things and seeing what sticks right now.
Empathy + coding + instruction/JSON/function adherence is my game. Finding lots of challenges and insights in this effort; patience is key.
Intended uses & limitations
Should be useful in a generic capacity; it demonstrates a little bit of everything.
Basic tests so far:
- Roleplay: adherence to character is present.
- JSON/function calling: passing.
- Coding: to be evaluated.
Training procedure
SFT + DPO
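A minimal sketch of what a two-stage SFT-then-DPO run can look like with TRL; this is an assumption about tooling, not the author's actual recipe, and the dataset files, column names, and hyperparameters are placeholders (the API shown matches TRL versions contemporary with Transformers 4.35):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer, DPOTrainer

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Stage 1: supervised fine-tuning on instruction data (placeholder file/column names).
sft_data = load_dataset("json", data_files="sft_data.jsonl", split="train")
sft_trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=sft_data,
    dataset_text_field="text",  # column holding ChatML-formatted training samples
    args=TrainingArguments(output_dir="sft-out", num_train_epochs=1),
)
sft_trainer.train()

# Stage 2: DPO on preference pairs with "prompt", "chosen", "rejected" columns.
dpo_data = load_dataset("json", data_files="dpo_pairs.jsonl", split="train")
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,  # continue from the SFT checkpoint
    beta=0.1,                 # preference-loss temperature (illustrative value)
    train_dataset=dpo_data,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="dpo-out", num_train_epochs=1),
)
dpo_trainer.train()
```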
Training results
HumanEval and EvalPlus results will be shared as well.
Framework versions
- Transformers 4.35.2
- PyTorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.15.0
Model tree for SinanAkkoyun/pic_7B_mistral_Full_v0.2-awq
Base model: mistralai/Mistral-7B-v0.1