---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
  value: 88.4
  pretty_name: Explained Variance %
  range:
    min: 0
    max: 100
- type: l0
  value: 695.850
  pretty_name: L0
---

# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:5e-05

![Explained Variance](https://img.shields.io/badge/Explained%20Variance-88.4%25-blue)
![Sparsity](https://img.shields.io/badge/L0%20Sparsity-695.85-green)

### Training Details

- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 6
- Component: hook_resid_post
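
The component name follows TransformerLens-style hook naming: `hook_resid_post` is the residual stream at the output of a transformer block. Below is a minimal sketch of capturing that activation from the vision tower with `transformers`; the Hugging Face checkpoint id and the 0- vs 1-indexing of "Layer 6" are assumptions to verify against the training config.

```python
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

# Assumed HF id for the LAION DataComp.XL-s13B-b90K base model.
BASE = "laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"

model = CLIPVisionModel.from_pretrained(BASE).eval()
processor = CLIPImageProcessor.from_pretrained(BASE)

pixels = processor(images=Image.open("example.jpg"), return_tensors="pt")
with torch.no_grad():
    out = model(**pixels, output_hidden_states=True)

# hidden_states[0] is the embedding output, so hidden_states[i + 1] is the
# residual stream after block i (hook_resid_post). Treating "Layer 6" as
# 0-indexed gives index 7; check this against the run config.
resid_post = out.hidden_states[7]  # shape: (batch, 50 tokens, 768)
```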

### Model Architecture

- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
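
A minimal sketch of the vanilla architecture described above: an untied ReLU autoencoder whose decoder is initialized as the transpose of the encoder. The class below is illustrative, not the training repo's actual implementation, and the exact direction of the `encoder_transpose_decoder` initialization is an assumption.

```python
import torch
import torch.nn as nn

class VanillaSAE(nn.Module):
    """Illustrative vanilla SAE: 768 -> 49,152 (x64 expansion) -> 768."""

    def __init__(self, d_in: int = 768, expansion: int = 64):
        super().__init__()
        d_sae = d_in * expansion  # 49,152
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        nn.init.kaiming_uniform_(self.W_enc)
        with torch.no_grad():
            # encoder_transpose_decoder: decoder starts as the encoder's
            # transpose (assumed interpretation of the init name).
            self.W_dec.copy_(self.W_enc.t())

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU gives non-negative, sparse feature codes.
        return torch.relu(x @ self.W_enc + self.b_enc)

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        return f @ self.W_dec + self.b_dec

    def forward(self, x: torch.Tensor):
        f = self.encode(x)
        return self.decode(f), f

sae = VanillaSAE()
```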

### Performance Metrics

- L1 Coefficient: 5e-05
- L0 Sparsity: 695.8499
- Explained Variance: 0.8843 (88.43%)
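
Both numbers can be reproduced roughly as below, reusing `sae` and `resid_post` from the sketches above. L0 is the mean count of active (nonzero) features per token; for explained variance, one common definition is assumed here: 1 minus the reconstruction MSE over the input variance.

```python
x = resid_post.reshape(-1, 768)  # flatten tokens to (n_tokens, d_in)
x_hat, f = sae(x)

# L0 sparsity: mean number of nonzero features per token (~695.85 for
# this SAE, i.e. ~1.4% of the 49,152 features fire on average).
l0 = (f > 0).float().sum(dim=-1).mean()

# Explained variance (~0.8843 reported for this SAE).
explained_variance = 1 - (x - x_hat).pow(2).sum() / (x - x.mean(0)).pow(2).sum()
```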

### Training Configuration

- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
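
The configuration above might be wired up as in the following sketch. The optimizer (Adam), the warmup shape, the step count, and the synthetic activation loader are assumptions; only the learning rate, the 200 warmup steps, the L1 coefficient, and the clipping norm come from this card.

```python
import torch

total_steps = 10_000  # placeholder: epochs x batches in the real run
activation_batches = [torch.randn(4096, 768) for _ in range(10)]  # stand-in data
optimizer = torch.optim.Adam(sae.parameters(), lr=4e-4)  # assumed optimizer

# Cosine annealing preceded by a 200-step linear warmup.
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=1e-3, total_iters=200)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps - 200)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[200]
)

for x in activation_batches:
    x_hat, f = sae(x)
    mse = (x - x_hat).pow(2).mean()
    l1 = f.abs().sum(dim=-1).mean()
    loss = mse + 5e-5 * l1  # L1 coefficient from this card
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(sae.parameters(), 1.0)  # clip at 1.0
    optimizer.step()
    scheduler.step()
```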

**Experiment Tracking:**
- Weights & Biases Run ID: jffduqsa
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/jffduqsa/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa

## Citation

```bibtex
@misc{2024josephsparseautoencoders,
  title={Sparse Autoencoders for CLIP-ViT-B-32},
  author={Joseph, Sonia},
  year={2024},
  publisher={Prisma-Multimodal},
  url={https://huggingface.co/Prisma-Multimodal},
  note={Layer 6, hook_resid_post, Run ID: jffduqsa}
}
```