---
license: apache-2.0
---

# 1. GemmaScope

GemmaScope is TODO

# 2. What Is `gemmascope-9b-pt-att`?

- `gemmascope-`: See 1.
- `9b-pt-`: These SAEs were trained on the Gemma v2 9B base model (TODO link).
- `att`: These SAEs were trained on the attention layer outputs, before the final linear projection (TODO link ckkissane post); a toy sketch of this hook point follows below.

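To make the hook point concrete, here is a toy sketch (hypothetical module and attribute names, not the actual Gemma 2 implementation): the SAE input is the concatenated per-head attention output, captured before the output projection `W_O` is applied.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAttention(nn.Module):
    """Minimal attention block, only to illustrate the `att` hook point."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.W_O = nn.Linear(d_model, d_model)  # the final linear projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        q, k, v = (
            t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
            for t in self.qkv(x).chunk(3, dim=-1)
        )
        z = F.scaled_dot_product_attention(q, k, v)
        # `hook_z` is what the `att` SAEs see: the per-head attention outputs,
        # concatenated back to width D, *before* the W_O projection.
        self.hook_z = z.transpose(1, 2).reshape(B, T, D)
        return self.W_O(self.hook_z)

attn = ToyAttention(d_model=4096, n_heads=16)  # illustrative sizes
out = attn(torch.randn(2, 10, 4096))
sae_input = attn.hook_z  # this tensor is what the SAE reconstructs
```
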
## 3. GTM FAQ (TODO(conmy): delete for main rollout)

Q1: Why does this model exist in `gg-hf`?

A1: See https://docs.google.com/document/d/1bKaOw2mJPJDYhgFQGGVOyBB3M4Bm_Q3PMrfQeqeYi0M (Google internal only).

Q2: What does "SAE" mean?

A2: Sparse Autoencoder. See https://docs.google.com/document/d/1roMgCPMPEQgaNbCu15CGo966xRLToulCBQUVKVGvcfM (should be available to trusted HuggingFace collaborators, and Google too).

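For intuition, here is a minimal sparse autoencoder sketch (plain ReLU and illustrative sizes; an assumption, not the exact GemmaScope architecture): activations are encoded into a much wider, mostly-zero feature vector, then linearly decoded back.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_in, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_in) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def forward(self, x: torch.Tensor):
        # Encode: the ReLU leaves only a sparse set of active features.
        f = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: reconstruct the activation from those features.
        return f @ self.W_dec + self.b_dec, f

sae = SparseAutoencoder(d_in=4096, d_sae=16384)  # illustrative sizes
x = torch.randn(8, 4096)                         # a batch of activations
x_hat, f = sae(x)
loss = (x_hat - x).pow(2).mean() + 1e-3 * f.abs().sum(-1).mean()  # recon + L1 sparsity
```
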
TODO(conmy): remove this when making the main repo.

## 4. Point of Contact

Point of contact: Arthur Conmy

Contact by email:

```python
# Reverse the obfuscated string to recover the email address.
''.join(list('moc.elgoog@ymnoc')[::-1])
```

HuggingFace account:
https://huggingface.co/ArthurConmyGDM