GGUF
I uploaded the GGUF Quantization of the model here: https://huggingface.co/RDson/llava-llama-3-8b-v1_1-GGUF
You are correct, I completely forgot about that 🤦
No worries. I attempted to create an mmproj but ran into issues; I posted more information here: https://huggingface.co/xtuner/llava-llama-3-8b-v1_1/discussions/3
I'll keep the files up in case anyone needs them for whatever reason.
How to run this with Ollama?
Hi, on Ollama I ran into this problem: when I ask it to interpret the first image everything is fine and the model works correctly, but when I ask it to interpret a second image it keeps interpreting the previous one. How can I work around this?
How to run this with Ollama?
Simply create a new Modelfile for Ollama; Ollama's documentation explains how to do it, it's not complicated.
ollama create llm_name -f modelfile.txt
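For reference, a minimal Modelfile might look like the sketch below. The GGUF filename and the stop token are assumptions here (check the actual file name in the GGUF repo and the Llama-3 chat format before using it):

```
# Hypothetical Modelfile sketch -- the GGUF filename is an assumption,
# adjust it to match the file you downloaded from the repo
FROM ./llava-llama-3-8b-v1_1.Q4_K_M.gguf

# Llama-3-style prompt template using Ollama's template variables
TEMPLATE """<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""

# Llama-3 end-of-turn token so generation stops cleanly
PARAMETER stop "<|eot_id|>"
```

Then build and run it with:

```
ollama create llava-llama3 -f modelfile.txt
ollama run llava-llama3
```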