Finetuning CLIP

#1
by mecha2019 - opened
  • In the notebook for finetuning the CLIP model, you load the CLIP model and create lora_model.

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
lora_model = get_peft_model(model, config)

  • Now, in the training phase, you've used the original model for training instead of the LoRA model.

model = train_model(
    model, criterion, optimizer, num_epochs=num_epochs, scheduler=None
)

  • Also, after training, you've saved lora_model instead of the finetuned model.
    lora_model.save_pretrained("lora-turkish-clip") # Save the model

Would you kindly clear up this confusion for me?

Owner

Hello, sorry for the late response; I just saw your message.
There shouldn't be any difference between training "model" and "lora_model", because get_peft_model injects the LoRA layers directly into the base model's modules. You can see this by printing the model: the new LoRA layers show up inside "model" as well. So for training they are practically the same.
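
For example, something like the following sketch shows it (the LoraConfig values here are just placeholders, since the notebook's actual config isn't shown in this thread):

from transformers import CLIPModel
from peft import LoraConfig, get_peft_model

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical config; the notebook's real LoraConfig may differ.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
lora_model = get_peft_model(model, config)

# get_peft_model injects the LoRA layers into the base model's modules,
# so both references show them and share the same parameters.
print(model)       # the targeted Linear layers now contain lora_A / lora_B
lora_model.print_trainable_parameters()  # only the LoRA weights are trainable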

However, saving is where it makes a difference: if you save "model" it will save the whole model (I am not sure whether that is original + LoRA or just the original), but if you save "lora_model" it will only save the LoRA adapter layers. So to summarize: during training you can use either one, it doesn't make a difference, but when saving it does.
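
As a rough sketch of the practical difference (only "lora-turkish-clip" comes from the notebook; the merged checkpoint path below is hypothetical):

from transformers import CLIPModel
from peft import PeftModel

# Saving the PEFT wrapper writes only the small adapter files
# (adapter config + adapter weights), not the full CLIP checkpoint.
lora_model.save_pretrained("lora-turkish-clip")

# To use the adapter later, reload the base model and attach it:
base = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
finetuned = PeftModel.from_pretrained(base, "lora-turkish-clip")

# Optionally merge the LoRA weights into the base model to get one
# standalone checkpoint (hypothetical output path):
merged = finetuned.merge_and_unload()
merged.save_pretrained("clip-turkish-merged")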
