How to convert from safetensors to GGUF?
You did a great job! I'd like to learn how to do it by myself.
I tried to use llama.cpp's convert.py and got the following error:
assert 0 <= begin <= end <= len(byte_buf)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Similar problem. I attempted to convert with python convert.py --outtype f16 --outfile momo70.fp16.gguf ./models/momo70/ --vocab-type bpe --pad-vocab
and got:
Padding vocab with 213 token(s) - <dummy00001> through <dummy00213>
Traceback (most recent call last):
File "/home/llama.cpp/convert.py", line 1658, in <module>
main(sys.argv[1:]) # Exclude the first element (script name) from sys.argv
File "/home/llama.cpp/convert.py", line 1643, in main
OutputFile.write_all(
File "/home/llama.cpp/convert.py", line 1188, in write_all
check_vocab_size(params, vocab, pad_vocab=pad_vocab)
File "/home/llama.cpp/convert.py", line 1008, in check_vocab_size
vocab.added_tokens_dict[f"<dummy{i:05}>"] = -1
AttributeError: 'BpeVocab' object has no attribute 'added_tokens_dict'. Did you mean: 'added_tokens_list'?
Weird!
Where can I check which --vocab-type to use?