This Llama 🦙 is stored in 🇪🇺
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
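Because this checkpoint is converted to the Transformers format, it can be loaded directly with the `transformers` library. Below is a minimal sketch; the repository ID `meta-llama/Llama-2-7b-hf` is assumed for this 7B pretrained model, and access to the gated repository must already be configured locally.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ID for the 7B pretrained checkpoint in Transformers format.
model_id = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Simple text completion with the pretrained (not chat-tuned) model.
inputs = tokenizer("The llama is a domesticated", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```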
The files in this repository are stored in the EU region on Hugging Face, thanks to our new multi-region support.