Extra "assistant\n\n" at the beginning of the output
I am trying to run inference with llama-3.2-3B_instruct using chat messages. In my tests I sometimes get an extra "assistant\n\n" at the beginning of the "generated_text". Does anyone have an idea what the reason could be? I am using the same setup with other HF models without any issue.
e.g.:
prompt:
prompt=[{'role': 'user', 'content': "Write a template for a chat bot that takes a user's location and gives them the weather forecast. Use the letter o as a keyword in the syntax of the template. The letter o should appear at least 6 times.. Your response should contain fewer than 6 sentences. Highlight at least 2 text sections, i.e. highlighted section."}]
output:
[{'generated_text': 'assistant\n\nWeather Forecast Template\n\nTo get the weather forecast, please ....
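One plausible explanation (an assumption on my part, not confirmed for your setup): the Llama 3 chat format ends the generation prompt with `<|start_header_id|>assistant<|end_header_id|>\n\n`. The header markers are special tokens, but the word `assistant` itself is ordinary text, so if that span ends up in the decoded output while special tokens are skipped, the literal `assistant\n\n` survives. A minimal string-level sketch of that effect:

```python
import re

# The tail of the Llama 3 generation prompt: the <|...|> markers are
# special tokens, but "assistant" is plain text between them.
generation_prompt = "<|start_header_id|>assistant<|end_header_id|>\n\n"

def decode_skipping_special(text: str) -> str:
    # Crude stand-in for decoding with skip_special_tokens=True:
    # drop the <|...|> marker tokens, keep everything else.
    return re.sub(r"<\|[^|]+\|>", "", text)

leaked = decode_skipping_special(generation_prompt)
# leaked is now exactly "assistant\n\n" - the prefix seen in the output
```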
Do you have MESSAGE_API_ENABLED set, or are you passing the prompt via inputs, i.e. as a string?
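Not a root-cause fix, but until the template handling is sorted out you can strip the leaked role header from the output before using it. `clean_generated_text` is a hypothetical helper name, not part of any library:

```python
def clean_generated_text(text: str) -> str:
    """Remove a leaked 'assistant\n\n' role header, if present."""
    prefix = "assistant\n\n"
    if text.startswith(prefix):
        return text[len(prefix):]
    return text

# Applied to the example output above, this yields the text starting
# at "Weather Forecast Template" instead of "assistant\n\n...".
```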