# Model Card

## Model Details

- Architecture: ViT-Large with patch size 14
- Training Data: CIFAR-10 dataset

## Training Details
Fine-tuned with the Adam optimizer at a constant learning rate of 1e-5 for 4,000 steps (batch_size=32). Only the vision encoder is fine-tuned.
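The setup above can be sketched in PyTorch. This is a minimal illustration, not the card's actual training script: `TinyCLIP` is a hypothetical stand-in for the real CLIP model, used only to show how the vision encoder alone is unfrozen and handed to Adam at the stated learning rate.

```python
import torch
import torch.nn as nn

class TinyCLIP(nn.Module):
    # Hypothetical stand-in for openai/clip-vit-large-patch14:
    # a vision encoder and a text encoder.
    def __init__(self):
        super().__init__()
        self.vision_model = nn.Linear(8, 4)
        self.text_model = nn.Linear(8, 4)

model = TinyCLIP()

# Freeze all parameters, then unfreeze the vision encoder only,
# matching the card's statement that only the vision encoder is fine-tuned.
for p in model.parameters():
    p.requires_grad = False
for p in model.vision_model.parameters():
    p.requires_grad = True

# Adam with a constant learning rate of 1e-5, as stated in the card.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-5)
```

With the real model, the same pattern applies: freeze `model.parameters()`, unfreeze the vision tower's parameters, and pass only the trainable subset to the optimizer.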
## Evaluation Results

- Pre-trained: 0.9557
- Fine-tuned: 0.9913
## Model Tree for tanganke/clip-vit-large-patch14_cifar10

- Base model: openai/clip-vit-large-patch14