danielhanchen committed · Commit 0480a64 · 1 Parent(s): 63b9ca5
Update README.md
README.md CHANGED

@@ -12,7 +12,7 @@ tags:
 
 # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
 
-Directly quantized 4bit model with `bitsandbytes`.
+Directly quantized 4bit model with `bitsandbytes`. Built with Meta Llama 3
 
 We have a Google Colab Tesla T4 notebook for Llama-3 8b here: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing
 
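The README this commit edits describes a bitsandbytes 4bit quantization of Llama 3 intended for Unsloth finetuning. A minimal sketch of loading such a checkpoint with Unsloth's `FastLanguageModel` is shown below; the repo id `unsloth/llama-3-8b-bnb-4bit` is an assumption for illustration and is not stated in this diff.

```python
# Minimal sketch: load a bitsandbytes 4bit checkpoint with Unsloth.
# The repo id below is assumed; substitute the actual model repository.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",  # assumed 4bit repo id
    max_seq_length = 2048,
    dtype = None,          # auto-detect: float16 on a Tesla T4, bfloat16 on newer GPUs
    load_in_4bit = True,   # weights are already quantized to 4bit with bitsandbytes
)
```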