dynamic rope_scaling on internlm2_5-7b llamafied
Hi, I converted https://huggingface.co/internlm/internlm2_5-7b to Llama safetensors so that it doesn't need trust_remote_code: https://huggingface.co/ethanc8/internlm2_5-7b-llamafied. However, when submitting it to the leaderboard, I got:
Model "ethanc8/internlm2_5-7b-llamafied" was not found or misconfigured on the hub! Error raised was rope_type
I'm not sure, but this might be related to my config.json:
"rope_scaling": {
"factor": 2.0,
"type": "dynamic"
},
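If the error really does come from this config block, my guess (an assumption, not confirmed in this thread) is that newer transformers releases validate rope_scaling against a rope_type key, while this config still uses the legacy type key. A minimal sketch of what such a patch could look like, using a hypothetical patch_rope_scaling helper on a plain config dict:

```python
import json

def patch_rope_scaling(config: dict) -> dict:
    """Rename the legacy 'type' key in rope_scaling to 'rope_type',
    which newer transformers releases expect when validating the config."""
    rope = config.get("rope_scaling")
    if rope and "type" in rope and "rope_type" not in rope:
        rope["rope_type"] = rope.pop("type")
    return config

# Example: the rope_scaling block from the config.json above.
cfg = json.loads('{"rope_scaling": {"factor": 2.0, "type": "dynamic"}}')
print(patch_rope_scaling(cfg)["rope_scaling"])
# → {'factor': 2.0, 'rope_type': 'dynamic'}
```

In practice you would load the model's config.json, apply a rename like this, and re-upload it, but whether that alone satisfies the leaderboard's transformers version is not verified here.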
Hi!
I did not manage to reproduce your error. Could you provide the full parameters you used to submit?
I don't have all the parameters, but I used https://huggingface.co/ethanc8/internlm2_5-7b-llamafied and specified bfloat16.
Can you try again, and provide me with 1) all the parameters of the form and 2) a screenshot of your screen with the full params + error message?
I will try later today.
Hi!
Thanks for the information, I managed to reproduce the error.
We are using transformers=4.43.1 on the leaderboard, did you make sure it was possible to load your model using AutoConfig (as indicated in the submit tab)?
The following call fails:
AutoConfig.from_pretrained('ethanc8/internlm2_5-7b-llamafied', revision='58482e6989cb3b09da8f20d6ab9101b922c53acb')
I will try that.
If your model loads with earlier versions of transformers, I invite you to open an issue on the GitHub repo; if not, please fix your model.
Closing as we can't do anything for the moment, but ping me once it's good!