Got error: "Unexpected end of JSON input"
Hi, did you solve this? Regards.
You need to create a `.env.local` file. Copy `MODEL_ENDPOINTS` from `.env`, but replace `hf_<access token>` with your own access token from https://huggingface.co/settings/tokens.
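For reference, a minimal sketch of what `.env.local` could look like, assuming the `MODEL_ENDPOINTS` format used in the repo's `.env` (copy the real entry from `.env`; the endpoint and field names below are illustrative, and the `hf_...` value is a placeholder for your own token):

```env
# .env.local (sketch only): values here override .env.
# Copy the actual MODEL_ENDPOINTS entry from .env and replace only the
# hf_<access token> part with a token from https://huggingface.co/settings/tokens.
MODEL_ENDPOINTS=[{"endpoint":"<endpoint copied from .env>","authorization":"Bearer hf_xxxxxxxxxxxxxxxxxxxx","weight":1}]
```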
Hi, yes, thank you. Now I have the error `Cannot read properties of undefined (reading 'special')`. I set `PUBLIC_MAX_INPUT_TOKENS=1000` like you said in the other post, but I get the same error. Also, how can I debug in VS Code? Breakpoints in `+server.ts` are not hit.
Can you pull the latest changes and run `npm install` again? You should get a more descriptive error.
Yes, it works now, thank you very much. Now I want to debug with breakpoints, but the breakpoints I set in the code are not recognized. Do I need some configuration in VS Code?
- After some messages are sent, it gives an error; maybe it is just the model:
  Error: Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 505 `inputs` tokens and 1024 `max_new_tokens`
      at parseGeneratedText (/src/routes/conversation/[id]/+server.ts:160:11)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async saveMessage (/src/routes/conversation/[id]/+server.ts:86:26)
Things are fixed; it works fine now.
I believe this is an issue with the continuous-dialogue ability: for it to remember the context, it saves all previous messages, so the token count keeps going up. Starting a new session will resolve the issue.
As I understand it, if we want to increase max_tokens, we need a GPU with more VRAM.
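To make the arithmetic concrete, here is a rough TypeScript sketch (not chat-ui's actual code; `countTokens` and `buildPrompt` are hypothetical) of why a long conversation eventually trips the `inputs` tokens + `max_new_tokens` <= 1512 check from the error above, and how trimming the oldest messages keeps the prompt within budget:

```ts
interface Message {
  from: "user" | "assistant";
  content: string;
}

// Hypothetical token counter; the real app would use the model's tokenizer.
const countTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep only as many recent messages as fit the model's combined budget:
// inputs tokens + max_new_tokens must stay <= modelLimit (1512 in the error above).
function buildPrompt(
  messages: Message[],
  maxNewTokens: number,
  modelLimit: number
): Message[] {
  const kept: Message[] = [];
  let inputTokens = 0;
  // Walk backwards so the newest messages are kept and the oldest are dropped first.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = countTokens(messages[i].content);
    if (inputTokens + cost + maxNewTokens > modelLimit) break;
    inputTokens += cost;
    kept.unshift(messages[i]);
  }
  return kept;
}
```

With a 1512-token limit and `max_new_tokens=1024`, only about 488 tokens are left for the prompt, which a few saved messages can easily exceed; that is why starting a new session (an empty history) makes the error go away.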