Could you upload tokenizer.model for this and other models?
#4 opened by RonanMcGovern
Thanks, working on this.
Thanks, yeah, I saw that. I wanted the tokenizer.model to make GGUFs of fine-tunes. BTW, llama.cpp is going to support quantizing without the tokenizer.model, so we're all set.
RonanMcGovern changed discussion status to closed
GPTQ and exllama also need that tokenizer.model.