Update README.md
README.md CHANGED
@@ -4,14 +4,18 @@ datasets:
   - timdettmers/openassistant-guanaco
 pipeline_tag: text-generation
 ---
-Model fine-tuned in 4-bit precision using QLoRA on [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) with weights merged after training.
-
-
+
+📝 [Article](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32) |
+💻 [Colab](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)
+
+This is a Llama 2 7B model fine-tuned with QLoRA (4-bit precision) on the [`mlabonne/guanaco-llama2-1k`](https://huggingface.co/datasets/mlabonne/guanaco-llama2) dataset.
+
+It was trained in a Google Colab notebook with a T4 GPU and high RAM. It is mainly intended for educational purposes, not for inference.
+
+You can easily import it using the `AutoModelForCausalLM` class from `transformers`:
 
 ```
 from transformers import AutoModelForCausalLM
 
-model = AutoModelForCausalLM("mlabonne/llama-2-7b-
+model = AutoModelForCausalLM.from_pretrained("mlabonne/llama-2-7b-miniguanaco")
 ```
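Since the card describes QLoRA fine-tuning in 4-bit precision, the merged weights can also be reloaded in 4-bit for inference on a small GPU such as the T4 used for training. Below is a minimal sketch, assuming `bitsandbytes` and `accelerate` are installed; the NF4/float16 settings are assumptions, since the card only says "4-bit precision":

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization config (assumed settings; the card only states
# that training used 4-bit precision via QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mlabonne/llama-2-7b-miniguanaco",
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across the available device(s)
)
tokenizer = AutoTokenizer.from_pretrained("mlabonne/llama-2-7b-miniguanaco")
```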
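For a quick generation test on the loaded model, the standard `generate` API applies. The `[INST]` prompt template below is an assumption based on the Llama 2 chat format that the `guanaco-llama2` datasets typically follow, not something stated on the card:

```
# The [INST] template is an assumed prompt format, not taken from the card.
prompt = "[INST] What is a large language model? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```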