A model trained to be as helpful an assistant as possible.
## Data split
- 60% coding
- 10% conversations
- 20% instructions
- 10% roleplay
The data obviously contains other elements as well, but these categories make up the largest share.
The prompt structure can be almost anything. This model was trained on 0.6 million instructions, fewer than Dante, but on a much cleaner and better-organized dataset, and it was retrained multiple times to reach as low a training loss as possible.
## Examples
"You are an AI assistant respond to human in a helpful manner.
HM: What were the causes for world war 2?
"
"Act like a detective from the 1900s, respond to mike in a helpful manner.
HM: What were the causes for world war 2?
"
The prompt also works with the Alpaca structure; the model was purposely trained to support it.
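For reference, this is the standard Alpaca template (the wording comes from the Alpaca project and is not specific to this model; substitute your own instruction):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```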
The EOS token is `<|end|>`. For best results, remember to tell the model how it should act.
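A minimal usage sketch with the `transformers` library. The Hub ID is a placeholder (this card does not state one), and the sketch assumes `<|end|>` is present in the tokenizer's vocabulary:

```python
# Minimal sketch of prompting the model with Hugging Face transformers.
# "your-org/your-model" is a placeholder; substitute the actual Hub ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")
model = AutoModelForCausalLM.from_pretrained("your-org/your-model")

# Tell the model how to act, then pose the question (see the examples above).
prompt = (
    "You are an AI assistant respond to human in a helpful manner.\n"
    "HM: What were the causes for world war 2?\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    # Stop generation at the model's EOS token, <|end|>.
    # Assumes <|end|> is a known token in this tokenizer's vocabulary.
    eos_token_id=tokenizer.convert_tokens_to_ids("<|end|>"),
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```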
## More information
The base model is GPT-NeoX, as pretrained by RedPajama.
We reached a validation loss of 0.45 with a training loss of 0.3.
You are not allowed to use this model for commercial purposes unless you reach an agreement with the creator, @Dampish (@Dampish#3607 on Discord).
The model can easily be further fine-tuned for most languages.
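As an illustration only (not the author's training setup), a hypothetical fine-tuning sketch using the `transformers` Trainer; the model ID and the dataset file are placeholders:

```python
# Hypothetical sketch of further fine-tuning the model on a new language.
# "your-org/your-model" and "instructions_xx.json" are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")
model = AutoModelForCausalLM.from_pretrained("your-org/your-model")

# GPT-NeoX tokenizers often lack a pad token; reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Any instruction dataset in the target language with a "text" column.
dataset = load_dataset("json", data_files="instructions_xx.json")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # Causal language modeling, so no masked-LM objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```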