---
library_name: transformers
base_model: openai/clip-vit-large-patch14-336
tags:
  - generated_from_trainer
model-index:
  - name: clip-finetuned-csu-p14-336-e3l55-l
    results: []
---

clip-finetuned-csu-p14-336-e3l55-l

This model is a fine-tuned version of openai/clip-vit-large-patch14-336 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9056
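
As a usage illustration (not part of the original card), the checkpoint can be loaded with the standard CLIPModel and CLIPProcessor classes from transformers and used to score an image against candidate captions. The repository id and the example image/captions below are assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Repository id is assumed from the card's model name; adjust if it differs.
model_id = "kevinoli/clip-finetuned-csu-p14-336-e3l55-l"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")                 # placeholder local image
texts = ["a photo of a cat", "a photo of a dog"]  # placeholder captions

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores, one column per caption.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```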

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough TrainingArguments sketch follows the list):

  • learning_rate: 8.810707926567202e-07
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
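
The list above translates roughly into the following Hugging Face TrainingArguments. This is a hedged reconstruction: only the listed values come from the card, while output_dir and any unlisted options are assumptions.

```python
from transformers import TrainingArguments

# Values mirror the hyperparameter list above; everything else is assumed.
training_args = TrainingArguments(
    output_dir="clip-finetuned-csu-p14-336-e3l55-l",  # assumed output directory
    learning_rate=8.810707926567202e-07,
    per_device_train_batch_size=128,  # card lists train_batch_size: 128 (single device assumed)
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```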

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.7336        | 0.0921 | 500   | 2.0781          |
| 0.6953        | 0.1842 | 1000  | 2.0783          |
| 0.6953        | 0.2763 | 1500  | 2.0777          |
| 0.6932        | 0.3684 | 2000  | 2.0722          |
| 0.6907        | 0.4605 | 2500  | 2.0792          |
| 0.7043        | 0.5526 | 3000  | 2.0205          |
| 0.7078        | 0.6447 | 3500  | 2.1256          |
| 0.7034        | 0.7368 | 4000  | 2.0000          |
| 0.6671        | 0.8289 | 4500  | 2.1608          |
| 0.6899        | 0.9210 | 5000  | 2.0940          |
| 0.6649        | 1.0131 | 5500  | 1.9776          |
| 0.6748        | 1.1052 | 6000  | 1.9852          |
| 0.6456        | 1.1973 | 6500  | 1.9981          |
| 0.6514        | 1.2894 | 7000  | 1.9637          |
| 0.6179        | 1.3815 | 7500  | 2.0034          |
| 0.6561        | 1.4736 | 8000  | 2.2865          |
| 0.6328        | 1.5657 | 8500  | 2.4808          |
| 0.7053        | 1.6578 | 9000  | 2.1610          |
| 0.6533        | 1.7499 | 9500  | 1.9845          |
| 0.6594        | 1.8420 | 10000 | 1.9895          |
| 0.6597        | 1.9341 | 10500 | 2.0103          |
| 0.6435        | 2.0262 | 11000 | 2.0589          |
| 0.6404        | 2.1183 | 11500 | 2.0517          |
| 0.6348        | 2.2104 | 12000 | 1.9749          |
| 0.6545        | 2.3024 | 12500 | 1.9681          |
| 0.6323        | 2.3945 | 13000 | 1.9056          |
| 0.6045        | 2.4866 | 13500 | 1.9470          |
| 0.6492        | 2.5787 | 14000 | 2.0177          |
| 0.6512        | 2.6708 | 14500 | 1.9483          |
| 0.6382        | 2.7629 | 15000 | 1.9499          |
| 0.6203        | 2.8550 | 15500 | 1.9529          |
| 0.6241        | 2.9471 | 16000 | 1.9817          |
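
The validation loss reported above is presumably CLIP's symmetric image-text contrastive loss, which CLIPModel computes when called with return_loss=True. A minimal sketch for a single toy batch follows; the repository id, image files, and captions are assumptions, and the actual evaluation script behind these numbers is not shown in the card.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "kevinoli/clip-finetuned-csu-p14-336-e3l55-l"  # assumed repository id
model = CLIPModel.from_pretrained(model_id).eval()
processor = CLIPProcessor.from_pretrained(model_id)

# Toy batch of paired images and captions; real evaluation iterates a dataloader.
images = [Image.open("img1.jpg"), Image.open("img2.jpg")]
captions = ["caption for image one", "caption for image two"]

inputs = processor(text=captions, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    # return_loss=True makes CLIPModel compute the contrastive loss over the
    # batch's image-text similarity matrix.
    outputs = model(**inputs, return_loss=True)

print(outputs.loss.item())
```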

Framework versions

  • Transformers 4.45.0.dev0
  • PyTorch 1.12.1
  • Datasets 2.21.0
  • Tokenizers 0.19.1