Models and dataset of Safe-CLIP: https://arxiv.org/abs/2311.16254
AImageLab
university
Collections (2)
LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1
- aimagelab/LLaVA_MORE-llama_3_1-8B-pretrain (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-siglip-pretrain (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-siglip-finetuning (Image-Text-to-Text)
Models (12)
- aimagelab/LLaVA_MORE-llama_3_1-8B-S2-siglip-finetuning (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-S2-finetuning (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-siglip-finetuning (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-S2-siglip-pretrain (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-siglip-pretrain (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-S2-pretrain (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-pretrain (Image-Text-to-Text)
- aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning (Image-Text-to-Text)
- aimagelab/safeclip_vit-l_14 (Text-to-Image)
- aimagelab/safeclip_vit-l_14_336 (Text-to-Image)
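As a minimal sketch of how these checkpoints might be used: the Safe-CLIP repos above are published on the Hugging Face Hub, so if they follow the standard CLIP checkpoint format they can be loaded with `transformers`. The repo ids come from the list above; the assumption that `CLIPModel`/`CLIPProcessor` accept them directly is not confirmed here, so treat this as a sketch, not the authors' official loading recipe.

```python
# Hedged sketch: loading a Safe-CLIP checkpoint via Hugging Face Transformers.
# Assumes the repos are standard CLIP-format checkpoints; downloading weights
# requires network access and the `transformers` package.

SAFECLIP_REPOS = {
    "base": "aimagelab/safeclip_vit-l_14",        # ViT-L/14 backbone
    "336px": "aimagelab/safeclip_vit-l_14_336",   # 336-pixel input variant
}

def load_safeclip(variant: str = "base"):
    """Return (model, processor) for the chosen Safe-CLIP variant."""
    # Deferred import: transformers is a heavy optional dependency.
    from transformers import CLIPModel, CLIPProcessor

    repo_id = SAFECLIP_REPOS[variant]
    model = CLIPModel.from_pretrained(repo_id)
    processor = CLIPProcessor.from_pretrained(repo_id)
    return model, processor
```

The returned model and processor would then be used like any other CLIP pair (encode images and text, compare embeddings), with Safe-CLIP's safety-tuned embedding space in place of the original CLIP weights.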