---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - unsloth
  - transformers
  - tinyllama
---

Fine-tune Mistral, Gemma, and Llama 2–5× faster with 70% less memory via Unsloth!

This is a re-upload of https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T

We have a Google Colab (Tesla T4) notebook for TinyLlama, with RoPE scaling to a 4096-token max sequence length, here: https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing