This is Mistral-v0.1 fine-tuned on a combination of the AIRIC dataset sprinkled into the other datasets listed. It was trained for 3 epochs at rank 128 until the loss hit about 1.37. I noticed some "it's important to remember"s in there that I may try to scrub out, but otherwise the model wasn't intentionally censored.
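
"Rank 128" suggests a LoRA-style fine-tune. Purely as an illustrative sketch of what that might look like with `peft` (only the rank and epoch count come from this card; the alpha, dropout, target modules, batch size, and learning rate below are assumptions):

```python
# Hypothetical sketch of a rank-128 LoRA fine-tuning setup.
# Only r=128 and num_train_epochs=3 are stated on this card;
# every other value is an illustrative placeholder.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=128,                      # stated: trained at rank 128
    lora_alpha=256,             # assumption: alpha = 2 * r is a common choice
    lora_dropout=0.05,          # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="airic-mistral-lora",
    num_train_epochs=3,              # stated: 3 epochs
    per_device_train_batch_size=4,   # assumption
    learning_rate=2e-4,              # assumption
    fp16=True,
)
```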

The intent was to create a robot that I could converse with as well as use as an assistant. With the right parameters, if you ask it what it's up to, it'll make something up as if it actually had a life. Before releasing it, I mixed in a lot more OpenOrca data than in the chatbot I originally put out, to make it more genuinely useful. Set top_p to 0.98 to get the most social results; a minimal sampling sketch follows.
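
Here's what that looks like with `transformers` (only `top_p=0.98` comes from the recommendation above; the prompt and the other generation settings are illustrative):

```python
# Minimal sketch: sampling from the model with top_p = 0.98.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ericpolewski/AIRIC-The-Mistral"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "What have you been up to today?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.98,          # recommended above for the most "social" results
    max_new_tokens=256,  # assumption: illustrative length cap
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```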

This was the original post: https://www.reddit.com/r/LocalLLaMA/comments/154to1w/i_trained_the_65b_model_on_my_texts_so_i_can_talk/

This is how I did the data extraction: https://www.linkedin.com/pulse/how-i-trained-ai-my-text-messages-make-robot-talks-like-eric-polewski-9nu1c/

This is an instruct model trained in the Alpaca format.
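
If you're unfamiliar with it, the standard no-input Alpaca template looks like this (substitute your own instruction before generating):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{your instruction here}

### Response:
```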

5-bit exl2 available at https://huggingface.co/ericpolewski/AIRIC-The-Mistral-5.0bpw-exl2

8-bit exl2 available at https://huggingface.co/ericpolewski/AIRIC-The-Mistral-8.0bpw-exl2
