Ollama API setup for Qwen2

#35
by RagulMahendran - opened

I'm using Ollama to work with Qwen2. I created a POST request to my local host, but I'm getting no response. I tried with base64 and with local files; nothing works.

Body:
{
  "model": "qwen2",
  "role": "user",
  "content": [
    {"type": "image", "image": "file:///C:/Users/BharathiragulMahendr/Downloads/Screenshots/AjayBIValidator/QS.png"},
    {"type": "text", "text": "Describe this image."}
  ]
}

@RagulMahendran Qwen2 is not the same thing as Qwen2-VL. llama.cpp would need to support Qwen2-VL's vision architecture before it could work in Ollama.
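Separately from the model question, the request body above doesn't match Ollama's chat API: Ollama's `/api/chat` endpoint expects a `messages` array, with images passed as raw base64 strings in an `images` field, not as `file://` URLs inside content parts. A minimal sketch of a correctly shaped payload, assuming a vision-capable model such as `llava` (the model name and placeholder bytes here are illustrative):

```python
import base64
import json

# Placeholder bytes; in practice: image_bytes = open("QS.png", "rb").read()
image_bytes = b"\x89PNG placeholder"

payload = {
    "model": "llava",  # must be a vision-capable model; plain qwen2 is text-only
    "messages": [
        {
            "role": "user",
            "content": "Describe this image.",
            # Ollama expects raw base64 strings here, not file:// paths
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }
    ],
}

# To send it to a local Ollama server (default port 11434):
# requests.post("http://localhost:11434/api/chat", json=payload)
body = json.dumps(payload)
```

With a `file://` URL in the body, the server has nothing it can decode, which would explain getting no usable response.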

I have hosted a model using vLLM; does anyone know how to pass an image in a curl command?

Please help me with this, guys.

You have to pass the base64-encoded image as a data URL in the `image_url` field of the curl command.
