---
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
pipeline_tag: text-generation
---
# Llama-2-7b-guanaco

📝 [Article](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32) |
💻 [Colab](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)

<center><img src="https://i.imgur.com/C2x7n2a.png" width="300"></center>

This is a Llama 2 (7B) model fine-tuned with QLoRA (4-bit precision) on the [`mlabonne/guanaco-llama2`](https://huggingface.co/datasets/mlabonne/guanaco-llama2) dataset.

## 🔧 Training

It was trained in a Google Colab notebook with a T4 GPU and a high-RAM runtime.
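
The card does not list the exact training hyperparameters, but a minimal QLoRA setup along these lines is typical for this kind of fine-tune (the values here, such as `r=64` and `lora_alpha=16`, are illustrative assumptions, not the confirmed settings used for this model):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the base model, as in the QLoRA paper
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapters trained on top of the frozen, quantized weights
peft_config = LoraConfig(
    r=64,                 # adapter rank (assumed value)
    lora_alpha=16,        # scaling factor (assumed value)
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = get_peft_model(model, peft_config)
```

Only the small LoRA adapter weights are updated during training, which is what makes fine-tuning a 7B model feasible on a single T4.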

## 💻 Usage

```python
# pip install transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/llama-2-7b-miniguanaco"
prompt = "What is a large language model?"

# Load the tokenizer and build a fp16 text-generation pipeline
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Wrap the prompt in Llama 2's [INST] instruction template before generating
sequences = pipeline(
    f"<s>[INST] {prompt} [/INST]",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
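
The `<s>[INST] ... [/INST]` string in the snippet follows Llama 2's instruction template, which this model expects since it was fine-tuned on instruction data. A small helper makes the format explicit (the name `build_prompt` is ours for illustration, not part of the model's API):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in Llama 2's [INST] template.

    The model generates its answer after the closing [/INST] tag.
    """
    return f"<s>[INST] {instruction} [/INST]"

print(build_prompt("What is a large language model?"))
# <s>[INST] What is a large language model? [/INST]
```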

Output:
>A large language model is a type of artificial intelligence (AI) model that is trained to generate human-like language. The models can be trained on text from a specific genre, such as news articles, or on a large corpus of text, such as the internet. They can then be used to generate text, such as articles, stories or even entire books. These models are often used in applications such as chatbots, language translation and content generation. They have been used to write books such as: "The Last Days of New Paris" by China Miéville.
>
>The large models are also used for many other applications such as:
>
>- Translation
>- Summarization
>- Sentiment Analysis
>- Text classification
>- Generative writing (creates articles, stories, and more.)
>- Conversational language generation.