This is not a 7B. It's a ~9B. Please label appropriately.
Like several of the top '7B' models on the leaderboard, this is actually a (roughly) 9B model, downstream of https://huggingface.co/zyh3826/GML-Mistral-merged-v1 — a merge that stacked the first 32 layers (i.e., all of them) of one Mistral-7B finetune on top of the last 8 layers of another Mistral finetune, producing a model of about 9B parameters.
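The arithmetic checks out against the standard Mistral-7B architecture. Here's a back-of-envelope sketch (assuming the public Mistral-7B config values: hidden size 4096, grouped-query attention with a 1024-dim KV projection, 14336-dim FFN, 32000-token vocab, untied LM head) showing how 32 + 8 = 40 layers lands at ~8.99B:

```python
# Rough parameter count for a Mistral-style model. Dimensions below are the
# publicly documented Mistral-7B config; this is an estimate, not an exact
# accounting of every tensor.

HIDDEN = 4096
KV_DIM = 1024    # 8 KV heads x 128 head dim (grouped-query attention)
FFN = 14336
VOCAB = 32000

def params_per_layer() -> int:
    attn = 2 * HIDDEN * HIDDEN + 2 * HIDDEN * KV_DIM  # q/o + k/v projections
    mlp = 3 * HIDDEN * FFN                            # gate, up, down
    norms = 2 * HIDDEN                                # two RMSNorm weights
    return attn + mlp + norms

def total_params(n_layers: int) -> int:
    embeddings = 2 * VOCAB * HIDDEN  # input embeddings + untied LM head
    final_norm = HIDDEN
    return n_layers * params_per_layer() + embeddings + final_norm

print(f"32 layers: {total_params(32) / 1e9:.2f}B")  # stock Mistral-7B: ~7.24B
print(f"40 layers: {total_params(40) / 1e9:.2f}B")  # 32 + 8 merge:    ~8.99B
```

The 40-layer total matches the 8.99B the Hub reports from the safetensors metadata.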
It is helpful to label model sizes accurately. Better still would be for Hugging Face to label models based on their file size and bits per weight, rather than allowing these mistakes to occur and proliferate — one mislabeled model begets others derived from it.
Apologies for this. I'll admit I'm new to all this. I don't remember declaring the size anywhere, and the model card says it is 8.99B.
In fact, on the LLM leaderboard submission page (as well as the model card page), I don't see where I would have set it to 7B.
If you let me know where I can correct this, I'm happy to do so.
Thanks
Yeah, AFAIK you can't set it manually; they try to auto-detect it. Since 9B is closer to 7 than to 13, it gets set to 7.
Sorry, I meant on the Open LLM Leaderboard. It says 8.99B in SafeTensors.