Thanks. GGUF coming?

#1
by deleted - opened
I'm looking forward to testing Gemma-7b fine-tunes.

Gemma-7b-it is so excessively aligned that it not only refuses to respond to anything remotely contentious; even simple typos will often trigger a refusal when asking about something as innocuous as puppies.

Initially I was confused as to how their instruct version could possibly score a full 10 points lower on the HF leaderboard than the Gemma-7b foundational model, but after using it for a few hours I'm confused as to how it didn't score even lower. To say they lobotomized it is an understatement.

My concern is that they also excessively aligned the foundational models by filtering tokens and changing weights, so that no amount of fine-tuning can turn them into usable general-purpose LLMs.

Hey, thank you for the request, but this is only a small test of how to fine-tune Gemma. More models should come from the community in the next few days.

deleted changed discussion status to closed
