---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Description

This repo contains ONNX exports of the CLIP model [M-CLIP/LABSE-Vit-L-14](https://huggingface.co/M-CLIP/LABSE-Vit-L-14).
The visual and textual encoders are exported as separate models so that image and text embeddings can be generated independently.
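The exported encoders can be run directly with ONNX Runtime. The sketch below is illustrative only: the file paths (`visual/model.onnx`, `textual/model.onnx`) and input names/shapes are assumptions, so check the repo files and each session's `get_inputs()` for the names this export actually uses.

```python
# Minimal sketch: run the separate visual and textual encoders with ONNX Runtime.
# Paths, input names, and shapes below are assumptions for illustration.
import numpy as np
import onnxruntime as ort

visual = ort.InferenceSession("visual/model.onnx")
textual = ort.InferenceSession("textual/model.onnx")

# Image embedding from a preprocessed image tensor (assumed 1x3x224x224 float32).
pixel_values = np.zeros((1, 3, 224, 224), dtype=np.float32)
image_emb = visual.run(None, {visual.get_inputs()[0].name: pixel_values})[0]

# Text embedding from tokenized text (assumed input_ids + attention_mask, int64).
tokens = {
    "input_ids": np.zeros((1, 64), dtype=np.int64),
    "attention_mask": np.ones((1, 64), dtype=np.int64),
}
text_emb = textual.run(None, tokens)[0]
```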

This repo is specifically intended for use with [Immich](https://immich.app/), a self-hosted photo library.