Minami-su commited on
Commit
0a60f61
1 Parent(s): 992635f

Update README.md

---
license: other
license_name: qwen
license_link: >-
  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
language:
  - en
  - zh
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
  - llama
  - qwen
  - qwen1.5
  - qwen2
---
This is the Mistral-format version of the [Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) model by Alibaba Cloud.
The original conversion script can be found at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py; I modified it to make it compatible with Qwen1.5.
This model was converted with https://github.com/Minami-su/character_AI_open/blob/main/mistral_qwen2.py.

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-0.5B-Chat_mistral")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-0.5B-Chat_mistral", torch_dtype="auto", device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]
# Build the prompt with the chat template and move it to the model's device
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
inputs = inputs.to(model.device)
generate_ids = model.generate(inputs, max_length=2048, streamer=streamer)
```

## Test
Load in 4-bit:
```
hf-causal (pretrained=Qwen1.5-0.5B-Chat), limit: None, provide_description: False, num_fewshot: 0, batch_size: 32
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.2389|±  |0.0125|
|             |       |acc_norm|0.2688|±  |0.0130|
|truthfulqa_mc|      1|mc1     |0.2534|±  |0.0152|
|             |       |mc2     |0.4322|±  |0.0151|
|winogrande   |      0|acc     |0.5564|±  |0.0140|
```
Load in 4-bit:
```
hf-causal (pretrained=Qwen1.5-0.5B-Chat_mistral), limit: None, provide_description: False, num_fewshot: 0, batch_size: 32
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.2398|±  |0.0125|
|             |       |acc_norm|0.2705|±  |0.0130|
|truthfulqa_mc|      1|mc1     |0.2534|±  |0.0152|
|             |       |mc2     |0.4322|±  |0.0151|
|winogrande   |      0|acc     |0.5549|±  |0.0140|
```