Fine-tuning scripts for Llama3.2-Vision series
#27 · opened by 2U1
https://github.com/2U1/Llama3.2-Vision-Ft
I've written code for fine-tuning Llama3.2-Vision.
It supports
- LoRA/QLoRA
- Flexible freezing of modules
- Setting different learning rates for each module (see the sketch after this list)
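To give a feel for what this looks like in practice, here is a minimal sketch, not the repository's actual code: it combines LoRA via `peft` with per-module learning rates using optimizer parameter groups. The checkpoint id, the LoRA target module names, the `"vision"` substring used to split parameter groups, and the learning rates are all illustrative assumptions.

```python
# Minimal sketch (illustrative only): LoRA adapters via peft, plus separate
# learning rates for vision-side and language-side trainable parameters.
import torch
from transformers import MllamaForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = MllamaForConditionalGeneration.from_pretrained(
    "meta-llama/Llama-3.2-11B-Vision-Instruct",  # illustrative checkpoint
    torch_dtype=torch.bfloat16,
)

# Attach LoRA adapters to the attention projections; base weights stay frozen.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_cfg)

# Split the trainable parameters into two groups and give each its own
# learning rate. Matching on the "vision" substring is an assumption about
# how the Hugging Face Mllama implementation names its vision-tower modules.
vision_params, language_params = [], []
for name, param in model.named_parameters():
    if not param.requires_grad:
        continue
    (vision_params if "vision" in name else language_params).append(param)

optimizer = torch.optim.AdamW(
    [
        {"params": vision_params, "lr": 2e-6},    # smaller LR for the vision side
        {"params": language_params, "lr": 1e-5},  # larger LR for the language side
    ],
    weight_decay=0.01,
)
```

For full-module freezing without LoRA, you would instead set `requires_grad = False` on the submodules you want to keep fixed before building the optimizer groups.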
However, some other features still need to be developed. Feedback and issues are welcome.
PRs and help are also welcome!
Meta has released fine-tuning recipes for vision models as well. Check it out: https://github.com/meta-llama/llama-recipes/blob/main/recipes/quickstart/finetuning/finetune_vision_model.md It might help improve your recipe as well.
@doitbuildit @Sanyam Thanks, I'll take a look at it.