Upload README.md with huggingface_hub
README.md
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.5
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
license: apache-2.0
model_creator: TinyLlama
model_name: TinyLlama-1.1B-Chat-v0.5
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- q6_k
- q8_0
---
# TinyLlama/TinyLlama-1.1B-Chat-v0.5-GGUF

Quantized GGUF model files for [TinyLlama-1.1B-Chat-v0.5](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.5) from [TinyLlama](https://huggingface.co/TinyLlama).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-1.1b-chat-v0.5.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.5-GGUF/resolve/main/tinyllama-1.1b-chat-v0.5.q2_k.gguf) | q2_k | 482.15 MB |
| [tinyllama-1.1b-chat-v0.5.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.5-GGUF/resolve/main/tinyllama-1.1b-chat-v0.5.q3_k_m.gguf) | q3_k_m | 549.85 MB |
| [tinyllama-1.1b-chat-v0.5.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.5-GGUF/resolve/main/tinyllama-1.1b-chat-v0.5.q4_k_m.gguf) | q4_k_m | 667.82 MB |
| [tinyllama-1.1b-chat-v0.5.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.5-GGUF/resolve/main/tinyllama-1.1b-chat-v0.5.q5_k_m.gguf) | q5_k_m | 782.05 MB |
| [tinyllama-1.1b-chat-v0.5.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.5-GGUF/resolve/main/tinyllama-1.1b-chat-v0.5.q6_k.gguf) | q6_k | 903.42 MB |
| [tinyllama-1.1b-chat-v0.5.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.5-GGUF/resolve/main/tinyllama-1.1b-chat-v0.5.q8_0.gguf) | q8_0 | 1.17 GB |
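The card does not prescribe a runtime for these files. As a minimal sketch (my assumption, not part of the original card), one of the quants from the table above can be fetched with huggingface_hub and run locally with llama-cpp-python:

```
# Sketch, not from the original card: download one quant from the table
# above and run it with llama-cpp-python.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the q4_k_m file listed in the table.
gguf_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-1.1B-Chat-v0.5-GGUF",
    filename="tinyllama-1.1b-chat-v0.5.q4_k_m.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)

# The model expects chatml-style prompts (see the usage section below).
out = llm(
    "<|im_start|>user\nHow to get in a good university?<|im_end|>\n"
    "<|im_start|>assistant\n",
    max_tokens=256,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```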
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**.

We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. Besides, with only 1.1B parameters, TinyLlama is compact enough for applications with tight computation and memory budgets.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-955k-2T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T).
The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25), following the [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format.
#### How to use
You will need transformers>=4.31. Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```
from transformers import AutoTokenizer
import transformers
import torch

# Load the chat model and build a text-generation pipeline.
model = "PY007/TinyLlama-1.1B-Chat-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Token id the chat finetune uses to end a turn; generation stops here.
CHAT_EOS_TOKEN_ID = 32002

# Wrap the question in the chatml layout the model was finetuned on.
prompt = "How to get in a good university?"
formatted_prompt = (
    f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
)

sequences = pipeline(
    formatted_prompt,
    do_sample=True,
    top_k=50,
    top_p=0.9,
    num_return_sequences=1,
    repetition_penalty=1.1,
    max_new_tokens=1024,
    eos_token_id=CHAT_EOS_TOKEN_ID,
)

for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
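The pipeline returns the prompt together with the completion. A small follow-up (not part of the original card) can trim the output down to just the assistant's reply:

```
# Keep only the assistant's reply: drop the prompt prefix and cut at the
# chat end-of-turn marker in case it is emitted as literal text.
reply = sequences[0]["generated_text"][len(formatted_prompt):]
reply = reply.split("<|im_end|>")[0].strip()
print(reply)
```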