How to Extract LoRA from FLUX Fine Tuning / DreamBooth Training Full Tutorial and Comparison Between Fine Tuning vs Extraction vs LoRA Training
The full article is here (public post): https://www.patreon.com/posts/112335162
This post is kept short, so check out the full article for the details
The conclusions are summarized below
Conclusions
With the same training dataset (15 images), the same number of steps (all compared trainings are 150 epochs, thus 2250 steps) and almost the same training duration, Fine Tuning / DreamBooth training of FLUX yields the very best results
So yes, Fine Tuning is much better than LoRA training itself
Amazing resemblance and quality, with the fewest overfitting issues
Moreover, extracting a LoRA from the Fine Tuned full checkpoint yields way better results than LoRA training itself (a conceptual sketch of the extraction step is shown after these conclusions)
Extracting a LoRA from fully trained checkpoints yielded way better results in SD 1.5 and SDXL as well
A comparison of these 3 is made in Image 5 (check the very top of the images)
A 640 Network Dimension (Rank) FP16 LoRA takes 6.1 GB of disk space
You can also try 128 Network Dimension (Rank) FP16 and different LoRA strengths during inference to get closer to the Fine Tuned model (see the inference sketch below)
Moreover, you can try the Resize LoRA feature of Kohya GUI, but hopefully that will be another research topic and article from me later
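The extraction itself was done with the Kohya GUI. Purely for context, here is a minimal conceptual sketch of what LoRA extraction from a Fine Tuned checkpoint does: take the weight difference between the tuned and base checkpoints and keep a truncated-SVD low-rank approximation of it. This is not the Kohya implementation; the file names and key naming are illustrative assumptions, and a real extractor also handles key mapping, alpha/scaling entries and memory-efficient streaming.

```python
import torch
from safetensors.torch import load_file, save_file

# Illustrative file names (assumptions, not the actual checkpoint names)
BASE = "flux1-dev.safetensors"               # original base checkpoint
TUNED = "flux1-dev-dreambooth.safetensors"   # Fine Tuned / DreamBooth checkpoint
RANK = 128                                   # Network Dimension (Rank) to extract

base = load_file(BASE)    # dict of tensor name -> tensor (loads everything into RAM)
tuned = load_file(TUNED)

lora = {}
for key, w_base in base.items():
    w_tuned = tuned.get(key)
    # Only 2D weight matrices (linear layers) get factorized into LoRA pairs here
    if w_tuned is None or w_base.ndim != 2:
        continue
    delta = w_tuned.float() - w_base.float()  # what Fine Tuning actually changed
    if torch.count_nonzero(delta) == 0:
        continue
    # Truncated SVD: keep only the top RANK singular directions of the change
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    r = min(RANK, s.shape[0])
    lora_up = u[:, :r] @ torch.diag(s[:r])    # "up" / B matrix, shape (out, r)
    lora_down = vh[:r, :]                     # "down" / A matrix, shape (r, in)
    lora[f"{key}.lora_up.weight"] = lora_up.contiguous().to(torch.float16)
    lora[f"{key}.lora_down.weight"] = lora_down.contiguous().to(torch.float16)

save_file(lora, f"extracted_lora_rank{RANK}.safetensors")
```

A higher RANK keeps more of the weight difference (closer to the Fine Tuned model, bigger file), while a lower RANK discards more of it (smaller file, usually a bit less resemblance).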
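For trying different LoRA strengths at inference, here is a minimal sketch assuming a diffusers-based workflow; the model id, adapter name, prompt and strength value are placeholders, and depending on the extracted LoRA's key format it may need conversion before diffusers can load it. In a UI such as SwarmUI or Forge this is simply the LoRA weight/strength setting.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the extracted LoRA and apply it at a reduced strength
pipe.load_lora_weights("extracted_lora_rank128.safetensors", adapter_name="extracted")
pipe.set_adapters(["extracted"], adapter_weights=[0.8])  # try e.g. 0.6 - 1.1 and compare grids

image = pipe(
    "photo of ohwx man wearing a suit",  # placeholder prompt; "ohwx" stands in for the trained token
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("strength_0.8.png")
```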
Image Raw Links
Image 1 : MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests
Image 2 : MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests
Image 3 : MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests
Image 4 : MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests
Image 5 : MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests