ZeroGPU space
- gokaygokay/Reflection-70B-llamacpp

Working Model
- mattshumer/ref_70_e3

Quantized Models
- unsloth/Reflection-Llama-3.1-70B-GGUF
I think for a 0.22B-size model it looks amazing. I've seen some very recent 3B and even 7B models that are worse than this, and it's highly fine-tunable. I fine-tuned it with only 3,500 training samples in under 15 minutes.
They've already fine-tuned the base model, and the fine-tuned model looks better at segmentation and object detection. But its captions are shorter and less detailed. Maybe that helps with hallucinations, but sometimes the fine-tuned model gives almost no details at all. Still, for your question, it does look like a fine-tunable model.
I used this fine-tuning notebook.
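For anyone curious what a quick run on ~3,500 samples looks like, here is a minimal PyTorch sketch of the general fine-tuning loop. The model, features, and labels below are all toy stand-ins (the actual base model and notebook details aren't reproduced here); only the overall shape of the loop matches what a short fine-tuning run does.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy stand-in model: the real base model isn't named in this snippet,
# so this only illustrates the structure of a quick fine-tuning run.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# ~3,500 synthetic samples, mirroring the small training set size above.
X = torch.randn(3500, 16)
y = (X.sum(dim=1) > 0).long()  # simple learnable labels for illustration
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

losses = []
for epoch in range(2):  # a couple of passes finishes in well under a minute
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
```

With a small model and a few thousand samples, even CPU training finishes quickly, which is consistent with a sub-15-minute fine-tune on a modest dataset.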