---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
  value: 84.5
  pretty_name: Explained Variance %
  range:
    min: 0
    max: 100
- type: l0
  value: 396.448
  pretty_name: L0
---

# CLIP-ViT-B-32 Sparse Autoencoder (x64, vanilla), L1 coefficient 8e-05

A sparse autoencoder (SAE) trained on the layer-3 residual stream (`hook_resid_post`) of CLIP-ViT-B-32.

![Explained Variance](https://img.shields.io/badge/Explained%20Variance-84.5%25-blue)
![Sparsity](https://img.shields.io/badge/Active%20Features-396.4-green)

### Training Details

- Base Model: CLIP-ViT-B-32 (`laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K`)
- Layer: 3
- Component: `hook_resid_post`

### Model Architecture

- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
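The architecture above can be sketched as a minimal vanilla ReLU SAE in PyTorch. The class and parameter names below are illustrative, not this repository's actual API, and the decoder-bias subtraction in the encoder is a common SAE convention assumed here rather than stated on this card.

```python
import torch
import torch.nn as nn


class VanillaSAE(nn.Module):
    """Minimal vanilla (ReLU) sparse autoencoder sketch.

    Dimensions follow this card: d_in=768 (CLIP-B/32 residual stream),
    expansion factor x64 -> d_sae=49,152.
    """

    def __init__(self, d_in: int = 768, expansion: int = 64):
        super().__init__()
        d_sae = d_in * expansion
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        nn.init.kaiming_uniform_(self.W_enc)
        # "encoder_transpose_decoder": the decoder weights start as the
        # transpose of the encoder weights.
        self.W_dec = nn.Parameter(self.W_enc.data.T.clone())
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def forward(self, x: torch.Tensor):
        # Encode, apply ReLU for nonnegative sparse activations, reconstruct.
        acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = acts @ self.W_dec + self.b_dec
        return recon, acts
```

A vanilla SAE of this kind is typically trained on MSE reconstruction loss plus the L1 coefficient (here 8e-05) times the L1 norm of `acts`.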

### Performance Metrics

- L1 Coefficient: 8e-05
- L0 Sparsity: 396.4483 (mean number of active features per token, out of 49,152)
- Explained Variance: 0.8450 (84.50%)
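Both metrics can be computed from a batch of inputs, reconstructions, and SAE activations as below. This uses a common definition of explained variance; the exact normalisation used in the training run may differ slightly.

```python
import torch


def l0_sparsity(acts: torch.Tensor) -> float:
    """Mean number of nonzero SAE features per token (L0)."""
    return (acts != 0).float().sum(dim=-1).mean().item()


def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> float:
    """1 - (residual sum of squares) / (total sum of squares about the mean)."""
    rss = (x - recon).pow(2).sum()
    tss = (x - x.mean(dim=0, keepdim=True)).pow(2).sum()
    return (1.0 - rss / tss).item()
```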

### Training Configuration

- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
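The schedule above can be reproduced with PyTorch's built-in schedulers (an assumption; the run may use a custom warmup). `total_steps` and the parameter container are illustrative, not taken from this card.

```python
import torch

# Illustrative parameters; in the actual run these are the SAE's weights.
params = [torch.nn.Parameter(torch.randn(16, 16))]
opt = torch.optim.Adam(params, lr=4e-4)

warmup_steps = 200    # from this card
total_steps = 10_000  # illustrative; depends on dataset size and epochs
scheduler = torch.optim.lr_scheduler.SequentialLR(
    opt,
    schedulers=[
        # Linear warmup up to the base LR over the first 200 steps...
        torch.optim.lr_scheduler.LinearLR(
            opt, start_factor=1e-2, total_iters=warmup_steps
        ),
        # ...then cosine annealing for the remainder of training.
        torch.optim.lr_scheduler.CosineAnnealingLR(
            opt, T_max=total_steps - warmup_steps
        ),
    ],
    milestones=[warmup_steps],
)

# Per training step: clip gradients to norm 1.0, then step.
# loss.backward()
torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
opt.step()
scheduler.step()
```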

**Experiment Tracking:**

- Weights & Biases Run ID: b5hrmjiy
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/b5hrmjiy/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa

## Citation

```bibtex
@misc{2024josephsparseautoencoders,
  title={Sparse Autoencoders for CLIP-ViT-B-32},
  author={Joseph, Sonia},
  year={2024},
  publisher={Prisma-Multimodal},
  url={https://huggingface.co/Prisma-Multimodal},
  note={Layer 3, hook_resid_post, Run ID: b5hrmjiy}
}
```