Sparse Autoencoders
Collection
Sparse autoencoders (SAEs) are tools for understanding the internal representations of neural networks. The SAEs in this collection can be loaded with the library at https://github.com/EleutherAI/sae
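As a minimal loading sketch with that library (this assumes the `Sae.load_from_hub` entry point described in the library's README; the repository id and hookpoint string below are illustrative, not the exact names used in this collection):

```python
# Minimal loading sketch -- assumes the EleutherAI/sae library exposes
# Sae.load_from_hub as in its README; the repo id and hookpoint name
# below are illustrative placeholders, not verified identifiers.
from sae import Sae

# Load the SAE trained on a single MLP's output.
sae = Sae.load_from_hub("EleutherAI/sae-pythia-70m-32k", hookpoint="layers.3.mlp")
```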
These SAEs were trained on the output of each MLP in EleutherAI/pythia-70m. We used 8.2 billion tokens from the Pile training set at a context length of 2049. Each SAE has 32,768 latents.
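For intuition about the sizes involved, here is a minimal PyTorch sketch of the encode/decode step an SAE of this shape performs (pythia-70m's MLP output is 512-dimensional; the TopK sparsity and the value k=32 are assumptions for illustration, not the exact training configuration, and this is not the library's implementation):

```python
# Illustrative SAE encode/decode shape sketch -- not the library's code.
# d_in = 512 matches pythia-70m's hidden size; num_latents = 32,768 as stated above.
# TopK sparsity with k = 32 is an assumption for illustration only.
import torch
import torch.nn as nn

d_in, num_latents, k = 512, 32_768, 32

encoder = nn.Linear(d_in, num_latents)
decoder = nn.Linear(num_latents, d_in)

mlp_out = torch.randn(8, d_in)                    # a batch of MLP output activations
pre_acts = encoder(mlp_out)                       # (8, 32768) latent pre-activations
top_vals, top_idx = pre_acts.topk(k, dim=-1)      # keep only the k largest latents
latents = torch.zeros_like(pre_acts).scatter_(-1, top_idx, top_vals.relu())
reconstruction = decoder(latents)                 # (8, 512) approximate MLP output
```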