changing "max_seq_len" has no effect

#64 opened by giuliogalvan

It seems to me that changing "max_seq_len" in config.json has no effect. I still get warnings about the limit being set to 2048, and the output is gibberish.

Token indices sequence length is longer than the specified maximum sequence length for this model (3651 > 2048). Running this sequence through the model will result in indexing errors
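A minimal sketch of a possible workaround, assuming the model in question is `mosaicml/mpt-7b` (an assumption; the thread does not name the exact checkpoint). Instead of editing config.json on disk, the `max_seq_len` override can be applied to the loaded config object before instantiating the model. Note also that the "Token indices sequence length" warning above is emitted by the tokenizer, which enforces its own `model_max_length` independently of the model config, so both may need to be raised:

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-7b"  # assumption: the checkpoint under discussion

# Override max_seq_len on the config object rather than editing config.json.
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
config.max_seq_len = 4096  # MPT uses ALiBi, so contexts beyond 2048 are possible

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    trust_remote_code=True,
)

# The tokenizer carries a separate limit; the warning in the thread comes
# from here, not from the model config, so raise it to match.
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.model_max_length = 4096
```

Whether output quality holds at the longer context is a separate question: ALiBi allows length extrapolation in principle, but generation quality beyond the training length is not guaranteed.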
