Feedback.

#1
by EloyOn - opened

The model card said the model can write huge walls of text, but I mainly use models for chatting, so I didn't know if it would give me a huge wall of text like a lot of models do nowadays.

It didn't, and it usually replies with one paragraph, more or less the length I use, which is very natural and easy to interact with. I teased her about the idea of doing unethical things (related to visiting a bank) and she complied, even suggesting worse things of her own.

It also got pretty naughty when things hinted in that direction.

For now I'm very pleased with your model, good job. I've been using 12B models lately, but I'll keep using yours to test it further; after all, it's faster on a smartphone than a 12B, and with ARM quants the battery barely even gets warm when running an 8B.

That's great to hear!

About the 'huge walls of text': it can, when prompted 🙃

UGI's rating just dropped. 10 out of 10 uncensored model, congrats! You rank 4th when sorted by the W/10 parameter.

It could do with more internet knowledge, though. Around 20 in that metric is a good general knowledge base, but all those books had to go somewhere, I guess.

It's rated as a 3B model instead of 8B.

Ah, that explains why I couldn't see it in the 8B category 🤔

I appreciate the feedback, thank you!

Using ChatML, it keeps trying to bring up someone called "Foxtail", and it also outputs a literal "assistant\n".

I use the Llama 3 preset. No problems whatsoever, although you've probably tested the model longer than I have.

Make sure to use the recommended settings, also you can read more about it here:
https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/

That fixed it! Amazing Model!

Glad I could help.

From what I've experienced after using dozens of L3 fine-tunes, training that model on ChatML is a mistake. Llama 3 runs better with its own preset; ChatML is usually problematic.

True, but ChatML sometimes gives different responses (assistant bias and all of that).
Also, it uses a specific type of ChatML (more details are available in the model's JSON files, for those who tinker).

But yeah, llama3 preset will give the most stable outputs for sure.
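For anyone comparing the two presets, here's a minimal sketch of what each template actually sends to the model. The special tokens shown are the standard public ones for Llama 3 instruct and ChatML generally, not something taken from this particular fine-tune; a specific model may use a variant, so check its tokenizer_config.json.

```python
# Minimal sketch of the two prompt templates being discussed.
# Token strings are the commonly documented defaults, not this model's exact config.

def llama3_prompt(user_msg: str) -> str:
    """Llama 3 instruct format: header tokens around each role."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

def chatml_prompt(user_msg: str) -> str:
    """ChatML format: im_start/im_end markers around each role."""
    return (
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# If a Llama 3 fine-tune was never trained on the ChatML markers, the
# tokenizer can split "<|im_start|>assistant" into plain-text pieces,
# which is one plausible way a literal "assistant\n" leaks into output.
print(llama3_prompt("Hello!"))
print(chatml_prompt("Hello!"))
```

Matching the preset to whatever template the model was actually trained on is what makes the "literal assistant\n" artifacts go away.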
