tags:
- immich
- clip
# Model Description
This repo contains ONNX exports for the corresponding ViT-based CLIP model by OpenCLIP. See the [OpenCLIP](https://github.com/mlfoundations/open_clip) repo for more info.
The visual and textual encoders are exported as separate models so that image and text embeddings can be generated independently.
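As a minimal sketch of how the two exports can be used, the snippet below loads each encoder with ONNX Runtime and runs the visual encoder on a placeholder image tensor. The file names (`visual/model.onnx`, `textual/model.onnx`), the 224×224 input size, and the single-output assumption are illustrative and depend on the specific export; the real preprocessing and tokenization steps are not shown.

```python
# Sketch only: file names, input shapes, and preprocessing are assumptions.
import numpy as np
import onnxruntime as ort

# Load each encoder as its own ONNX Runtime session.
visual = ort.InferenceSession("visual/model.onnx")
textual = ort.InferenceSession("textual/model.onnx")

# Inspect the expected input names and shapes rather than hard-coding them.
print([(i.name, i.shape) for i in visual.get_inputs()])
print([(i.name, i.shape) for i in textual.get_inputs()])

# Run the visual encoder on a preprocessed image tensor
# (batch, channels, height, width) to obtain an image embedding.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
image_name = visual.get_inputs()[0].name
image_embedding = visual.run(None, {image_name: image})[0]
print(image_embedding.shape)
```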
This repo is specifically intended for use with [Immich](https://immich.app/), a self-hosted photo library.