---
license: other
license_name: qwen
license_link: >-
  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- llama
- qwen
- qwen1.5
- qwen2
---
This is a LLaMAfied version of the [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) model by Alibaba Cloud.
The original conversion script can be found at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py.
I have modified it to be compatible with Qwen1.5.
This model was converted with https://github.com/Minami-su/character_AI_open/blob/main/llamafy_qwen_v2.py.
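
In practice, "LLaMAfied" means the checkpoint is published in the standard LLaMA layout, so the regular LLaMA code path in `transformers` loads it. A minimal sanity check (a sketch; the expected values in the comments are assumptions based on the conversion, not verified output):

```python
from transformers import AutoConfig

# The converted repo should expose a LLaMA-style config rather than a Qwen2 one.
config = AutoConfig.from_pretrained("Minami-su/Qwen1.5-7B-Chat_llamafy")
print(config.model_type)     # expected: "llama"
print(config.architectures)  # expected: ["LlamaForCausalLM"]
```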
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-7B-Chat_llamafy")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-7B-Chat_llamafy", torch_dtype="auto", device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Build a chat prompt with the tokenizer's chat template and stream the reply.
messages = [
    {"role": "user", "content": "Who are you?"}
]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, max_length=2048, streamer=streamer)
```
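
The evaluation below loads both models in 4-bit. A minimal sketch of 4-bit loading with `transformers` and `bitsandbytes` (the quantization settings here are illustrative assumptions, not necessarily the exact configuration used for the results below):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization; requires the bitsandbytes package.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-7B-Chat_llamafy")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-7B-Chat_llamafy",
    quantization_config=bnb_config,
    device_map="auto",
)
```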
## Test
Original Qwen1.5-7B-Chat, loaded in 4-bit:
```
hf-causal (pretrained=Qwen1.5-7B-Chat), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.4155|±  |0.0144|
|             |       |acc_norm|0.4480|±  |0.0145|
|truthfulqa_mc|      1|mc1     |0.3513|±  |0.0167|
|             |       |mc2     |0.5165|±  |0.0159|
|winogrande   |      0|acc     |0.6330|±  |0.0135|
```
LLaMAfied Qwen1.5-7B-Chat_llamafy, loaded in 4-bit:
```
hf-causal (pretrained=Qwen1.5-7B-Chat_llamafy), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.4172|±  |0.0144|
|             |       |acc_norm|0.4488|±  |0.0145|
|truthfulqa_mc|      1|mc1     |0.3501|±  |0.0167|
|             |       |mc2     |0.5164|±  |0.0159|
|winogrande   |      0|acc     |0.6306|±  |0.0136|
```
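
The tables above are in the output format of EleutherAI's lm-evaluation-harness. As a rough sketch, a run along these lines reproduces the setup (argument names follow older harness releases and may differ in newer ones; how 4-bit loading is passed depends on the harness version and is omitted here):

```python
from lm_eval import evaluator

# Hypothetical reproduction of the evaluation setup above.
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=Minami-su/Qwen1.5-7B-Chat_llamafy",
    tasks=["arc_challenge", "truthfulqa_mc", "winogrande"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```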