---
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
pipeline_tag: text-generation
---
πŸ“ [Article](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32) |
πŸ’» [Colab](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)

This is a Llama 2-7b model QLoRA fine-tuned (4-bit precision) on the [`mlabonne/guanaco-llama2-1k`](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) dataset.

It was trained in a Google Colab notebook on a single T4 GPU with high RAM. It is intended mainly for educational purposes, not for production inference.
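
For reference, a minimal QLoRA sketch along these lines (4-bit NF4 quantization of the base model plus LoRA adapters) is shown below; the base checkpoint name and hyperparameter values are illustrative, not the exact settings used for this model (see the article for those).

```python
# Illustrative QLoRA setup (assumed values; see the linked article for the actual configuration)
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the Llama 2-7b base model in 4-bit NF4 precision
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach trainable LoRA adapters on top of the frozen 4-bit weights
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, peft_config)
```
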
You can easily load it using the `AutoModelForCausalLM` class from `transformers`:
```python
from transformers import AutoModelForCausalLM

# Load the fine-tuned weights from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("mlabonne/llama-2-7b-guanaco")
```
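
Once loaded, a minimal generation sketch (assuming a GPU runtime and the Llama 2 `[INST] ... [/INST]` prompt format used by the training data) could look like this:

```python
from transformers import AutoTokenizer, pipeline

# Tokenizer is shared with the Llama 2 base model
tokenizer = AutoTokenizer.from_pretrained("mlabonne/llama-2-7b-guanaco")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "[INST] What is a large language model? [/INST]"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```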