
I recently ran into a quantized build of Noromaid with the "rpcal" tag on the end of its name.
The secret sauce is that an RP dataset, instead of the standard Llama dataset, was used to calibrate the quantization.
On a test drive, this seemed to make an enormous difference in the quality of the output, so I'm going to make a couple of my own quants to compare results.
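For reference, an EXL2 quant with a custom calibration set can be produced with exllamav2's convert script. This is only a sketch: the paths and the parquet filename are placeholders, and the exact flags can differ between exllamav2 versions, so check your copy of `convert.py` before running it.

```shell
# Sketch: quantize to EXL2 at 4.0 bpw using an RP-style calibration dataset
# instead of the default one. All paths below are illustrative placeholders.
python convert.py \
    -i /models/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss \
    -o /tmp/exl2-workdir \
    -cf /models/Noromaid-4.0bpw-rpcal \
    -b 4.0 \
    -c rp_calibration.parquet
```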

EXL2 @ 4.00bpw
All credit to the original creators: Noromaid is hot.



Disclaimer:

This model is experimental, do not expect everything to work.

This model uses the ChatML prompting format.


Beeg noromaid on steroids. Suitable for RP, ERP.

This model was trained on the Zloss fork by Charles, and should fix the issues the model had.

Use the ChatML prompt format, but without the special tokens.

The reason is that Axolotl merges the finetune into the base model at a weight of basically 1.0, which is too much, so another script (available HERE) is used to merge at a lower weight; unfortunately, that script doesn't handle the special ChatML tokens. It's the same situation as Orca2 in that respect.
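The lower-weight merge described above amounts to blending each parameter of the finetune back toward the base model. A minimal sketch of the idea, not the linked script's actual API (the function name and `alpha` parameter are illustrative):

```python
# Hypothetical sketch: merge a finetune into its base model at reduced weight.
# merged = base + alpha * (finetune - base), with alpha < 1.0 instead of the
# full 1.0 weight that Axolotl applies by default.

def merge_weighted(base, finetune, alpha=0.5):
    """Blend each parameter pair toward the finetune by a factor of alpha."""
    return [b + alpha * (f - b) for b, f in zip(base, finetune)]

base = [1.0, 2.0, 3.0]
finetune = [3.0, 2.0, 1.0]
print(merge_weighted(base, finetune, alpha=0.5))  # → [2.0, 2.0, 2.0]
```

In practice the same arithmetic would run over every weight tensor of the two checkpoints rather than a toy list.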

Credits:

  • Undi
  • IkariDev

Description

This repo contains FP16 files of Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.

FP16 - by IkariDev and Undi

GGUF - by IkariDev and Undi

Ratings:

Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!

No ratings yet!

If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC names are "ikaridev" and "undi".

Prompt format: ChatML

<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
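The template above can be assembled with a small helper. This is a sketch, not an official API; the argument names simply mirror the `{sysprompt}` and `{input}` fields of the template:

```python
# Build the ChatML prompt shown above, leaving the assistant turn open
# for the model to complete.

def chatml_prompt(sysprompt: str, user_input: str) -> str:
    return (
        f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_input}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are Noromaid.", "Hello!"))
```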

Datasets used:

Others

Undi: If you want to support me, you can here.

IkariDev: Visit my retro/neocities style website please kek
