Xfgll commited on
Commit
241bf4c
1 Parent(s): 0a4c066

Upload 19 files

Browse files
README.md ADDED
@@ -0,0 +1,656 @@
1
+ ---
2
+ language:
3
+ - zh
4
+ - en
5
+ tags:
6
+ - qwen
7
+ pipeline_tag: text-generation
8
+ inference: false
9
+ ---
10
+
11
+ # Qwen-7B-Chat
12
+
13
+ <p align="center">
14
+ <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/>
15
+ </p>
16
+ <br>
17
+
18
+ <p align="center">
19
+ 🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a> &nbsp&nbsp | &nbsp&nbsp🖥️ <a href="https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary">Demo</a>
20
+ <br>
21
+ <a href="assets/wechat.png">WeChat (微信)</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>&nbsp&nbsp | &nbsp&nbsp<a href="https://dashscope.aliyun.com">API</a>
22
+ </p>
23
+ <br>
24
+
25
+
26
+ ## 介绍(Introduction)
27
+
28
+ **通义千问-7B(Qwen-7B)**是阿里云研发的通义千问大模型系列的70亿参数规模的模型。Qwen-7B是基于Transformer的大语言模型, 在超大规模的预训练数据上进行训练得到。预训练数据类型多样,覆盖广泛,包括大量网络文本、专业书籍、代码等。同时,在Qwen-7B的基础上,我们使用对齐机制打造了基于大语言模型的AI助手Qwen-7B-Chat。相较于最初开源的Qwen-7B模型,我们现已将预训练模型和Chat模型更新到效果更优的版本。本仓库为Qwen-7B-Chat的仓库。
29
+
30
+ 如果您想了解更多关于通义千问-7B开源模型的细节,我们建议您参阅[GitHub代码库](https://github.com/QwenLM/Qwen)。
31
+
32
+ **Qwen-7B** is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. Now we have updated both our pretrained and chat models with better performances. This repository is the one for Qwen-7B-Chat.
33
+
34
+ For more details about Qwen, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.
35
+ <br>
36
+
37
+ ## 要求(Requirements)
38
+
39
+ * python 3.8及以上版本
40
+ * pytorch 1.12及以上版本,推荐2.0及以上版本
41
+ * 建议使用CUDA 11.4及以上(GPU用户、flash-attention用户等需考虑此选项)
42
+ * python 3.8 and above
43
+ * pytorch 1.12 and above, 2.0 and above are recommended
44
+ * CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.)
45
+ <br>
46
+
47
+ ## 依赖项(Dependency)
48
+
49
+ 运行Qwen-7B-Chat,请确保满足上述要求,再执行以下pip命令安装依赖库
50
+
51
+ To run Qwen-7B-Chat, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries.
52
+
53
+ ```bash
54
+ pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed
55
+ ```
56
+
57
+ 另外,推荐安装`flash-attention`库(**当前已支持flash attention 2**),以实现更高的效率和更低的显存占用。
58
+
59
+ In addition, it is recommended to install the `flash-attention` library (**we support flash attention 2 now.**) for higher efficiency and lower memory usage.
60
+
61
+ ```bash
62
+ git clone https://github.com/Dao-AILab/flash-attention
63
+ cd flash-attention && pip install .
64
+ # 下方安装可选,安装可能比较缓慢。
+ # The two installs below are optional and may be slow.
65
+ # pip install csrc/layer_norm
66
+ # pip install csrc/rotary
67
+ ```
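+
+ Whether flash-attention is actually used at runtime is controlled by the `use_flash_attn` field of this repo's `config.json`. A minimal sketch of toggling it (field name taken from this repo's `config.json`; the snippet loads via `transformers` purely for illustration):
+
+ ```python
+ from transformers import AutoConfig, AutoModelForCausalLM
+
+ # Load the config first, flip the flag, then pass it to from_pretrained.
+ config = AutoConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
+ config.use_flash_attn = True  # set to False or "auto" if flash-attention is not installed
+ model = AutoModelForCausalLM.from_pretrained(
+     "Qwen/Qwen-7B-Chat", config=config, device_map="auto", trust_remote_code=True
+ ).eval()
+ ```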
68
+ <br>
69
+
70
+ ## 快速使用(Quickstart)
71
+
72
+ 下面我们展示了一个使用Qwen-7B-Chat模型,进行多轮对话交互的样例:
73
+
74
+ We show an example of multi-turn interaction with Qwen-7B-Chat in the following code:
75
+
76
+ ```python
77
+ from modelscope import AutoModelForCausalLM, AutoTokenizer
78
+ from modelscope import GenerationConfig
79
+
80
+ # Note: The default behavior now has injection attack prevention off.
81
+ tokenizer = AutoTokenizer.from_pretrained("qwen/Qwen-7B-Chat", trust_remote_code=True)
82
+
83
+ # use bf16
84
+ # model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
85
+ # use fp16
86
+ # model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
87
+ # use cpu only
88
+ # model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", device_map="cpu", trust_remote_code=True).eval()
89
+ # use auto mode, automatically select precision based on the device.
90
+ model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True).eval()
91
+
92
+ # Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this.
93
+ # model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参
94
+
95
+ # 第一轮对话 1st dialogue turn
96
+ response, history = model.chat(tokenizer, "你好", history=None)
97
+ print(response)
98
+ # 你好!很高兴为你提供帮助。
99
+
100
+ # 第二轮对话 2nd dialogue turn
101
+ response, history = model.chat(tokenizer, "给我讲一个年轻人奋斗创业最终取得成功的故事。", history=history)
102
+ print(response)
103
+ # 这是一个关于一个年轻人奋斗创业最终取得成功的故事。
104
+ # 故事的主人公叫李明,他来自一个普通的家庭,父母都是普通的工人。从小,李明就立下了一个目标:要成为一名成功的企业家。
105
+ # 为了实现这个目标,李明勤奋学习,考上了大学。在大学期间,他积极参加各种创业比赛,获得了不少奖项。他还利用课余时间去实习,积累了宝贵的经验。
106
+ # 毕业后,李明决定开始自己的创业之路。他开始寻找投资机会,但多次都被拒绝了。然而,他并没有放弃。他继续努力,不断改进自己的创业计划,并寻找新的投资机会。
107
+ # 最终,李明成功地获得了一笔投资,开始了自己的创业之路。他成立了一家科技公司,专注于开发新型软件。在他的领导下,公司迅速发展起来,成为了一家成功的科技企业。
108
+ # 李明的成功并不是偶然的。他勤奋、坚韧、勇于冒险,不断学习和改进自己。他的成功也证明了,只要努力奋斗,任何人都有可能取得成功。
109
+
110
+ # 第三轮对话 3rd dialogue turn
111
+ response, history = model.chat(tokenizer, "给这个故事起一个标题", history=history)
112
+ print(response)
113
+ # 《奋斗创业:一个年轻人的成功之路》
114
+ ```
115
+
116
+ 更多使用说明,请参考我们的[GitHub repo](https://github.com/QwenLM/Qwen)。
+
+ For more usage instructions, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen).
119
+ <br>
120
+
121
+ ## Tokenizer
122
+
123
+ > 注:作为术语的“tokenization”在中文中尚无共识的概念对应,本文档采用英文表达以利说明。
124
+
125
+ 基于tiktoken的分词器有别于其他分词器,比如sentencepiece分词器。尤其在微调阶段,需要特别注意特殊token的使用。关于tokenizer的更多信息,以及微调时涉及的相关使用,请参阅[文档](https://github.com/QwenLM/Qwen/blob/main/tokenization_note_zh.md)。
126
+
127
+ Our tokenizer based on tiktoken is different from other tokenizers, e.g., sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and related use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen/blob/main/tokenization_note.md).
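+
+ As a quick sanity check, the tokenizer loads and behaves like any other `transformers` tokenizer; a minimal sketch (plain-text round trip only — special tokens such as the ChatML markers matter mainly for finetuning, as the documents above explain):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
+
+ text = "你好,Qwen!"
+ ids = tokenizer.encode(text)   # plain text -> token ids
+ print(ids)
+ print(tokenizer.decode(ids))   # round-trips back to the original text
+ ```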
128
+ <br>
129
+
130
+ ## 量化 (Quantization)
131
+
132
+ ### 用法 (Usage)
133
+
134
+ **请注意:我们更新量化方案为基于[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)的量化,提供Qwen-7B-Chat的Int4量化模型[点击这里](https://huggingface.co/Qwen/Qwen-7B-Chat-Int4)。相比此前方案,该方案在模型评测效果几乎无损,且存储需求更低,推理速度更优。**
135
+
136
+ **Note: we provide a new solution based on [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and release an Int4 quantized model for Qwen-7B-Chat [here](https://huggingface.co/Qwen/Qwen-7B-Chat-Int4), which achieves nearly lossless accuracy while lowering memory cost and improving inference speed compared with the previous solution.**
137
+
138
+ 以下我们提供示例说明如何使用Int4量化模型。在开始使用前,请先保证满足要求(如torch 2.0及以上,transformers版本为4.32.0及以上,等等),并安装所需安装包:
139
+
140
+ Here we demonstrate how to use our provided quantized models for inference. Before you start, make sure you meet the requirements of auto-gptq (e.g., torch 2.0 and above, transformers 4.32.0 and above, etc.) and install the required packages:
141
+
142
+ ```bash
143
+ pip install auto-gptq optimum
144
+ ```
145
+
146
+ 如安装`auto-gptq`遇到问题,我们建议您到官方[repo](https://github.com/PanQiWei/AutoGPTQ)搜索合适的预编译wheel。
147
+
148
+ 随后即可使用和上述一致的用法调用量化模型:
149
+
150
+ If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a pre-built wheel.
+
+ Then you can load the quantized model and run inference in the same way as usual:
153
+
154
+ ```python
155
+ model = AutoModelForCausalLM.from_pretrained(
156
+ "Qwen/Qwen-7B-Chat-Int4",
157
+ device_map="auto",
158
+ trust_remote_code=True
159
+ ).eval()
160
+ response, history = model.chat(tokenizer, "你好", history=None)
161
+ ```
162
+
163
+
164
+
165
+ ### 效果评测
166
+
167
+ 我们对BF16,Int8和Int4模型在基准评测上做了测试(使用zero-shot设置),发现量化模型效果损失较小,结果如下所示:
168
+
169
+ We illustrate the zero-shot performance of the BF16, Int8 and Int4 models on the benchmarks, and we find that the quantized models do not suffer from significant performance degradation. Results are shown below:
170
+
171
+ | Quantization | MMLU | C-Eval (val) | GSM8K | HumanEval |
172
+ | ------------- | :--------: | :----------: | :----: | :--------: |
173
+ | BF16 | 55.8 | 59.7 | 50.3 | 37.2 |
174
+ | Int8 | 55.4 | 59.4 | 48.3 | 34.8 |
175
+ | Int4 | 55.1 | 59.2 | 49.7 | 29.9 |
176
+
177
+ ### 推理速度 (Inference Speed)
178
+
179
+ 我们测算了不同精度模型以及不同FlashAttn库版本下模型生成2048和8192个token的平均推理速度(tokens/s)。结果如下所示:
+
+ We measured the average inference speed (tokens/s) of generating 2048 and 8192 tokens under different quantization levels and different versions of flash-attention. The results are shown below:
182
+
183
+ | Quantization | FlashAttn | Speed (2048 tokens) | Speed (8192 tokens) |
184
+ | ------------- | :-------: | :------------------:| :------------------:|
185
+ | BF16 | v2 | 40.93 | 36.14 |
186
+ | Int8 | v2 | 37.47 | 32.54 |
187
+ | Int4 | v2 | 50.09 | 38.61 |
188
+ | BF16 | v1 | 40.75 | 35.34 |
189
+ | Int8 | v1 | 37.51 | 32.39 |
190
+ | Int4 | v1 | 45.98 | 36.47 |
191
+ | BF16 | Disabled | 37.55 | 33.56 |
192
+ | Int8 | Disabled | 37.84 | 32.65 |
193
+ | Int4 | Disabled | 48.12 | 36.70 |
194
+
195
+ 具体而言,我们记录在长度为1的上下文的条件下生成8192个token的性能。评测运行于单张A100-SXM4-80G GPU,使用PyTorch 2.0.1和CUDA 11.8。推理速度是生成8192个token的速度均值。
196
+
197
+ In detail, the setting of profiling is generating 8192 new tokens with 1 context token. The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.8. The inference speed is averaged over the generated 8192 tokens.
198
+
199
+ 注意:以上Int4/Int8模型生成速度使用autogptq库给出,当前``AutoModelForCausalLM.from_pretrained``载入的模型生成速度会慢大约20%。我们已经将该问题汇报给HuggingFace团队,若有解决方案将即时更新。
200
+
201
+ Note: The generation speed of the Int4/Int8 models mentioned above is measured with models loaded via the autogptq library. The current speed of the model loaded using `AutoModelForCausalLM.from_pretrained` will be approximately 20% slower. We have reported this issue to the HuggingFace team and will update promptly if a solution is available.
202
+
203
+ ### 显存使用 (GPU Memory Usage)
204
+
205
+ 我们还测算了不同模型精度编码2048个token及生成8192个token的峰值显存占用情况。(显存消耗在是否使用FlashAttn的情况下均类似。)结果如下所示:
206
+
207
+ We also profile the peak GPU memory usage for encoding 2048 tokens as context (and generating a single token) and for generating 8192 tokens (with a single token as context) under different quantization levels. (GPU memory usage is similar whether or not flash-attention is used.) The results are shown below.
208
+
209
+ | Quantization Level | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
210
+ | ------------------ | :---------------------------------: | :-----------------------------------: |
211
+ | BF16 | 16.99GB | 22.53GB |
212
+ | Int8 | 11.20GB | 16.62GB |
213
+ | Int4 | 8.21GB | 13.63GB |
214
+
215
+ 上述性能测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py)完成。
216
+
217
+ The above speed and memory profiling are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py).
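+
+ If you want to reproduce rough numbers of this kind yourself, peak memory is typically read with `torch.cuda.max_memory_allocated`; a simplified sketch of the idea (not the exact script linked above):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     "Qwen/Qwen-7B-Chat", device_map="cuda:0", trust_remote_code=True
+ ).eval()
+
+ torch.cuda.reset_peak_memory_stats()
+ inputs = tokenizer("你好" * 1024, return_tensors="pt").to("cuda:0")  # a long dummy context
+ with torch.no_grad():
+     model.generate(**inputs, max_new_tokens=1)
+ print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024 ** 3:.2f} GB")
+ ```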
218
+ <br>
219
+
220
+ ## 模型细节(Model)
221
+
222
+ 与Qwen-7B预训练模型相同,Qwen-7B-Chat模型规模基本情况如下所示:
223
+
224
+ The details of the model architecture of Qwen-7B-Chat are listed as follows:
225
+
226
+ | Hyperparameter | Value |
227
+ |:----------------|:------:|
228
+ | n_layers | 32 |
229
+ | n_heads | 32 |
230
+ | d_model | 4096 |
231
+ | vocab size | 151851 |
232
+ | sequence length | 8192 |
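+
+ These values can be cross-checked against the model config; a minimal sketch (field names taken from `config.json` in this repository):
+
+ ```python
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
+ print(config.num_hidden_layers)    # n_layers
+ print(config.num_attention_heads)  # n_heads
+ print(config.hidden_size)          # d_model
+ print(config.seq_length)           # sequence length
+ print(config.vocab_size)           # 151936 in config.json (slightly larger than the tokenizer vocabulary quoted above)
+ ```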
233
+
234
+ 在位置编码、FFN激活函数和normalization的实现方式上,我们也采用了目前最流行的做法,
235
+ 即RoPE相对位置编码、SwiGLU激活函数、RMSNorm(可选安装flash-attention加速)。
236
+
237
+ 在分词器方面,相比目前主流开源模型以中英词表为主,Qwen-7B-Chat使用了约15万token大小的词表。
238
+ 该词表在GPT-4使用的BPE词表`cl100k_base`基础上,对中文、多语言进行了优化,在对中、英、代码数据的高效编解码的基础上,对部分多语言更加友好,方便用户在不扩展词表的情况下对部分语种进行能力增强。
239
+ 词表对数字按单个数字位切分。调用较为高效的[tiktoken分词库](https://github.com/openai/tiktoken)进行分词。
240
+
241
+ For position encoding, FFN activation function, and normalization calculation methods, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU for activation function, and RMSNorm for normalization (optional installation of flash-attention for acceleration).
242
+
243
+ For tokenization, compared to the current mainstream open-source models that mainly use Chinese and English vocabularies, Qwen-7B-Chat uses a vocabulary of over 150K tokens.
+ It first ensures efficient encoding of Chinese, English, and code data, and is also friendlier to many other languages, enabling users to directly enhance the capability for some of them without expanding the vocabulary.
+ It splits numbers into individual digits and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization.
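+
+ The per-digit splitting described above can be observed directly; a tiny sketch (exact ids depend on the tokenizer, but one token per digit is expected):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
+ ids = tokenizer.encode("1234567890")
+ print(len(ids))  # expected to equal 10 if each digit is a separate token, as described above
+ ```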
246
+ <br>
247
+
248
+ ## 评测效果(Evaluation)
249
+
250
+ 对于Qwen-7B-Chat模型,我们同样评测了常规的中文理解(C-Eval)、英文理解(MMLU)、代码(HumanEval)和数学(GSM8K)等权威任务,同时包含了长序列任务的评测结果。由于Qwen-7B-Chat模型经过对齐后,激发了较强的外部系统调用能力,我们还进行了工具使用能力方面的评测。
251
+
252
+ 提示:由于硬件和框架造成的舍入误差,复现结果如有波动属于正常现象。
253
+
254
+ For Qwen-7B-Chat, we also evaluate the model on C-Eval (Chinese understanding), MMLU (English understanding), HumanEval (code), and GSM8K (math), as well as benchmarks for long-context understanding and tool usage.
255
+
256
+ Note: Due to rounding errors caused by hardware and framework, differences in reproduced results are possible.
257
+
258
+ ### 中文评测(Chinese Evaluation)
259
+
260
+ #### C-Eval
261
+
262
+ 在[C-Eval](https://arxiv.org/abs/2305.08322)验证集上,我们评价了Qwen-7B-Chat模型的0-shot & 5-shot准确率
263
+
264
+ We demonstrate the 0-shot & 5-shot accuracy of Qwen-7B-Chat on the C-Eval validation set below.
265
+
266
+ | Model | Avg. Acc. |
267
+ |:--------------------------------:|:---------:|
268
+ | LLaMA2-7B-Chat | 31.9 |
269
+ | LLaMA2-13B-Chat | 36.2 |
270
+ | LLaMA2-70B-Chat | 44.3 |
271
+ | ChatGLM2-6B-Chat | 52.6 |
272
+ | InternLM-7B-Chat | 53.6 |
273
+ | Baichuan2-7B-Chat | 55.6 |
274
+ | Baichuan2-13B-Chat | 56.7 |
275
+ | Qwen-7B-Chat (original) (0-shot) | 54.2 |
276
+ | **Qwen-7B-Chat (0-shot)** | 59.7 |
277
+ | **Qwen-7B-Chat (5-shot)** | 59.3 |
278
+ | **Qwen-14B-Chat (0-shot)** | 69.8 |
279
+ | **Qwen-14B-Chat (5-shot)** | **71.7** |
280
+
281
+ C-Eval测试集上,Qwen-7B-Chat模型的zero-shot准确率结果如下:
282
+
283
+ The zero-shot accuracy of Qwen-7B-Chat on the C-Eval test set is provided below:
284
+
285
+ | Model | Avg. | STEM | Social Sciences | Humanities | Others |
286
+ | :---------------------- | :------: | :--: | :-------------: | :--------: | :----: |
287
+ | Chinese-Alpaca-Plus-13B | 41.5 | 36.6 | 49.7 | 43.1 | 41.2 |
288
+ | Chinese-Alpaca-2-7B | 40.3 | - | - | - | - |
289
+ | ChatGLM2-6B-Chat | 50.1 | 46.4 | 60.4 | 50.6 | 46.9 |
290
+ | Baichuan-13B-Chat | 51.5 | 43.7 | 64.6 | 56.2 | 49.2 |
291
+ | Qwen-7B-Chat (original) | 54.6 | 47.8 | 67.6 | 59.3 | 50.6 |
292
+ | **Qwen-7B-Chat** | 58.6 | 53.3 | 72.1 | 62.8 | 52.0 |
293
+ | **Qwen-14B-Chat** | **69.1** | 65.1 | 80.9 | 71.2 | 63.4 |
294
+
295
+ 在7B规模模型上,经过人类指令对齐的Qwen-7B-Chat模型,准确率在同类相近规模模型中仍然处于前列。
296
+
297
+ Compared with other pretrained models with comparable model size, the human-aligned Qwen-7B-Chat performs well in C-Eval accuracy.
298
+
299
+ ### 英文评测(English Evaluation)
300
+
301
+ #### MMLU
302
+
303
+ [MMLU](https://arxiv.org/abs/2009.03300)评测集上,Qwen-7B-Chat模型的 0-shot & 5-shot 准确率如下,效果在同类对齐模型中同样表现较优。
304
+
305
+ The 0-shot & 5-shot accuracy of Qwen-7B-Chat on MMLU is provided below.
+ Qwen-7B-Chat still ranks among the top human-aligned models of comparable size.
307
+
308
+ | Model | Avg. Acc. |
309
+ |:--------------------------------:|:---------:|
310
+ | ChatGLM2-6B-Chat | 46.0 |
311
+ | LLaMA2-7B-Chat | 46.2 |
312
+ | InternLM-7B-Chat | 51.1 |
313
+ | Baichuan2-7B-Chat | 52.9 |
314
+ | LLaMA2-13B-Chat | 54.6 |
315
+ | Baichuan2-13B-Chat | 57.3 |
316
+ | LLaMA2-70B-Chat | 63.8 |
317
+ | Qwen-7B-Chat (original) (0-shot) | 53.9 |
318
+ | **Qwen-7B-Chat (0-shot)** | 55.8 |
319
+ | **Qwen-7B-Chat (5-shot)** | 57.0 |
320
+ | **Qwen-14B-Chat (0-shot)** | 64.6 |
321
+ | **Qwen-14B-Chat (5-shot)** | **66.5** |
322
+
323
+ ### 代码评测(Coding Evaluation)
324
+
325
+ Qwen-7B-Chat在[HumanEval](https://github.com/openai/human-eval)的zero-shot Pass@1效果如下
326
+
327
+ The zero-shot Pass@1 of Qwen-7B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below
328
+
329
+ | Model | Pass@1 |
330
+ |:-----------------------:|:--------:|
331
+ | ChatGLM2-6B-Chat | 11.0 |
332
+ | LLaMA2-7B-Chat | 12.2 |
333
+ | Baichuan2-7B-Chat | 13.4 |
334
+ | InternLM-7B-Chat | 14.6 |
335
+ | Baichuan2-13B-Chat | 17.7 |
336
+ | LLaMA2-13B-Chat | 18.9 |
337
+ | LLaMA2-70B-Chat | 32.3 |
338
+ | Qwen-7B-Chat (original) | 24.4 |
339
+ | **Qwen-7B-Chat** | 37.2 |
340
+ | **Qwen-14B-Chat** | **43.9** |
341
+
342
+ ### 数学评测(Mathematics Evaluation)
343
+
344
+ 在评测数学能力的[GSM8K](https://github.com/openai/grade-school-math)上,Qwen-7B-Chat的准确率结果如下
345
+
346
+ The accuracy of Qwen-7B-Chat on GSM8K is shown below
347
+
348
+ | Model | Acc. |
349
+ |:------------------------------------:|:--------:|
350
+ | LLaMA2-7B-Chat | 26.3 |
351
+ | ChatGLM2-6B-Chat | 28.8 |
352
+ | Baichuan2-7B-Chat | 32.8 |
353
+ | InternLM-7B-Chat | 33.0 |
354
+ | LLaMA2-13B-Chat | 37.1 |
355
+ | Baichuan2-13B-Chat | 55.3 |
356
+ | LLaMA2-70B-Chat | 59.3 |
357
+ | **Qwen-7B-Chat (original) (0-shot)** | 41.1 |
358
+ | **Qwen-7B-Chat (0-shot)** | 50.3 |
359
+ | **Qwen-7B-Chat (8-shot)** | 54.1 |
360
+ | **Qwen-14B-Chat (0-shot)** | **60.1** |
361
+ | **Qwen-14B-Chat (8-shot)** | 59.3 |
362
+
363
+ ### 长序列评测(Long-Context Understanding)
364
+
365
+ 通过NTK插值和LogN注意力缩放,可以扩展Qwen-7B-Chat的上下文长度。在长文本摘要数据集[VCSUM](https://arxiv.org/abs/2305.05280)上(文本平均长度在15K左右),Qwen-7B-Chat的Rouge-L结果如下:
366
+
367
+ **(若要启用这些技巧,请将config.json里的`use_dynamic_ntk`和`use_logn_attn`设置为true)**
368
+
369
+ We introduce NTK-aware interpolation and LogN attention scaling to extend the context length of Qwen-7B-Chat. The Rouge-L results of Qwen-7B-Chat on the long-text summarization dataset [VCSUM](https://arxiv.org/abs/2305.05280) (whose average text length is around 15K tokens) are shown below:
370
+
371
+ **(To use these tricks, please set `use_dynamic_ntk` and `use_logn_attn` to true in config.json.)**
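+
+ A minimal sketch of enabling these flags programmatically instead of editing `config.json` by hand (field names per this repo's `config.json`):
+
+ ```python
+ from transformers import AutoConfig, AutoModelForCausalLM
+
+ config = AutoConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
+ config.use_dynamic_ntk = True  # NTK-aware interpolation
+ config.use_logn_attn = True    # LogN attention scaling
+ model = AutoModelForCausalLM.from_pretrained(
+     "Qwen/Qwen-7B-Chat", config=config, device_map="auto", trust_remote_code=True
+ ).eval()
+ ```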
372
+
373
+ | Model | VCSUM (zh) |
374
+ |:------------------|:----------:|
375
+ | GPT-3.5-Turbo-16k | 16.0 |
376
+ | LLama2-7B-Chat | 0.2 |
377
+ | InternLM-7B-Chat | 13.0 |
378
+ | ChatGLM2-6B-Chat | 16.3 |
379
+ | **Qwen-7B-Chat** | **16.6** |
380
+
381
+ ### 工具使用能力的评测(Tool Usage)
382
+
383
+ #### ReAct Prompting
384
+
385
+ 千问支持通过 [ReAct Prompting](https://arxiv.org/abs/2210.03629) 调用插件/工具/API。ReAct 也是 [LangChain](https://python.langchain.com/) 框架采用的主要方式之一。在我们开源的、用于评估工具使用能力的评测基准上,千问的表现如下:
386
+
387
+ Qwen-Chat supports calling plugins/tools/APIs through [ReAct Prompting](https://arxiv.org/abs/2210.03629). ReAct is also one of the main approaches used by the [LangChain](https://python.langchain.com/) framework. In our evaluation benchmark for assessing tool usage capabilities, Qwen-Chat's performance is as follows:
388
+
389
+ <table>
390
+ <tr>
391
+ <th colspan="4" align="center">Chinese Tool-Use Benchmark</th>
392
+ </tr>
393
+ <tr>
394
+ <th align="center">Model</th><th align="center">Tool Selection (Acc.↑)</th><th align="center">Tool Input (Rouge-L↑)</th><th align="center">False Positive Error↓</th>
395
+ </tr>
396
+ <tr>
397
+ <td>GPT-4</td><td align="center">95%</td><td align="center">0.90</td><td align="center">15.0%</td>
398
+ </tr>
399
+ <tr>
400
+ <td>GPT-3.5</td><td align="center">85%</td><td align="center">0.88</td><td align="center">75.0%</td>
401
+ </tr>
402
+ <tr>
403
+ <td>Qwen-7B-Chat</td><td align="center">98%</td><td align="center">0.91</td><td align="center">7.3%</td>
404
+ </tr>
405
+ <tr>
406
+ <td>Qwen-14B-Chat</td><td align="center">98%</td><td align="center">0.93</td><td align="center">2.4%</td>
407
+ </tr>
408
+ </table>
409
+
410
+ > 评测基准中出现的插件均没有出现在千问的训练集中。该基准评估了模型在多个候选插件中选择正确插件的准确率、传入插件的参数的合理性、以及假阳率。假阳率(False Positive)定义:在处理不该调用插件的请求时,错误地调用了插件。
411
+
412
+ > The plugins that appear in the evaluation set do not appear in the training set of Qwen. This benchmark evaluates the accuracy of the model in selecting the correct plugin from multiple candidate plugins, the rationality of the parameters passed into the plugin, and the false positive rate. False Positive: incorrectly invoking a plugin when responding to a query that should not require one.
413
+
414
+ ![](assets/react_showcase_001.png)
415
+ ![](assets/react_showcase_002.png)
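+
+ For illustration only, a generic ReAct-style prompt has roughly the following shape. This is a sketch, not necessarily the exact template used in the benchmark above; the tool name and wording are hypothetical (see the GitHub repo linked above for more usage details):
+
+ ```python
+ # A generic ReAct-style prompt skeleton (illustrative; the "search" tool is hypothetical).
+ REACT_PROMPT = """Answer the following question. You have access to the following tool:
+
+ search: useful for looking up facts. Input: a search query.
+
+ Use the following format:
+ Question: the input question
+ Thought: reasoning about what to do next
+ Action: the tool to use, one of [search]
+ Action Input: the input to the tool
+ Observation: the result returned by the tool
+ ... (Thought/Action/Action Input/Observation can repeat)
+ Thought: I now know the final answer
+ Final Answer: the final answer to the question
+
+ Question: {question}"""
+
+ print(REACT_PROMPT.format(question="谁是现任法国总统?"))
+ ```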
416
+
417
+ #### Code Interpreter
418
+
419
+ 为了考察Qwen使用Python Code Interpreter完成数学解题、数据可视化、及文件处理与爬虫等任务的能力,我们专门建设并开源了一个评测这方面能力的[评测基准](https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark)。
420
+
421
+ 我们发现Qwen在生成代码的可执行率、结果正确性上均表现较好:
422
+
423
+ To assess Qwen's ability to use the Python Code Interpreter for tasks such as mathematical problem solving, data visualization, and other general-purpose tasks such as file handling and web scraping, we have created and open-sourced a benchmark specifically designed for evaluating these capabilities. You can find the benchmark at this [link](https://github.com/QwenLM/Qwen-Agent/tree/main/benchmark).
424
+
425
+ We have observed that Qwen performs well in terms of code executability and result accuracy when generating code:
426
+
427
+ <table>
428
+ <tr>
429
+ <th colspan="4" align="center">Executable Rate of Generated Code (%)</th>
430
+ </tr>
431
+ <tr>
432
+ <th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization↑</th><th align="center">General↑</th>
433
+ </tr>
434
+ <tr>
435
+ <td>GPT-4</td><td align="center">91.9</td><td align="center">85.9</td><td align="center">82.8</td>
436
+ </tr>
437
+ <tr>
438
+ <td>GPT-3.5</td><td align="center">89.2</td><td align="center">65.0</td><td align="center">74.1</td>
439
+ </tr>
440
+ <tr>
441
+ <td>LLaMA2-7B-Chat</td>
442
+ <td align="center">41.9</td>
443
+ <td align="center">33.1</td>
444
+ <td align="center">24.1 </td>
445
+ </tr>
446
+ <tr>
447
+ <td>LLaMA2-13B-Chat</td>
448
+ <td align="center">50.0</td>
449
+ <td align="center">40.5</td>
450
+ <td align="center">48.3 </td>
451
+ </tr>
452
+ <tr>
453
+ <td>CodeLLaMA-7B-Instruct</td>
454
+ <td align="center">85.1</td>
455
+ <td align="center">54.0</td>
456
+ <td align="center">70.7 </td>
457
+ </tr>
458
+ <tr>
459
+ <td>CodeLLaMA-13B-Instruct</td>
460
+ <td align="center">93.2</td>
461
+ <td align="center">55.8</td>
462
+ <td align="center">74.1 </td>
463
+ </tr>
464
+ <tr>
465
+ <td>InternLM-7B-Chat-v1.1</td>
466
+ <td align="center">78.4</td>
467
+ <td align="center">44.2</td>
468
+ <td align="center">62.1 </td>
469
+ </tr>
470
+ <tr>
471
+ <td>InternLM-20B-Chat</td>
472
+ <td align="center">70.3</td>
473
+ <td align="center">44.2</td>
474
+ <td align="center">65.5 </td>
475
+ </tr>
476
+ <tr>
477
+ <td>Qwen-7B-Chat</td>
478
+ <td align="center">82.4</td>
479
+ <td align="center">64.4</td>
480
+ <td align="center">67.2 </td>
481
+ </tr>
482
+ <tr>
483
+ <td>Qwen-14B-Chat</td>
484
+ <td align="center">89.2</td>
485
+ <td align="center">84.1</td>
486
+ <td align="center">65.5</td>
487
+ </tr>
488
+ </table>
489
+
490
+ <table>
491
+ <tr>
492
+ <th colspan="4" align="center">Accuracy of Code Execution Results (%)</th>
493
+ </tr>
494
+ <tr>
495
+ <th align="center">Model</th><th align="center">Math↑</th><th align="center">Visualization-Hard↑</th><th align="center">Visualization-Easy↑</th>
496
+ </tr>
497
+ <tr>
498
+ <td>GPT-4</td><td align="center">82.8</td><td align="center">66.7</td><td align="center">60.8</td>
499
+ </tr>
500
+ <tr>
501
+ <td>GPT-3.5</td><td align="center">47.3</td><td align="center">33.3</td><td align="center">55.7</td>
502
+ </tr>
503
+ <tr>
504
+ <td>LLaMA2-7B-Chat</td>
505
+ <td align="center">3.9</td>
506
+ <td align="center">14.3</td>
507
+ <td align="center">39.2 </td>
508
+ </tr>
509
+ <tr>
510
+ <td>LLaMA2-13B-Chat</td>
511
+ <td align="center">8.3</td>
512
+ <td align="center">8.3</td>
513
+ <td align="center">40.5 </td>
514
+ </tr>
515
+ <tr>
516
+ <td>CodeLLaMA-7B-Instruct</td>
517
+ <td align="center">14.3</td>
518
+ <td align="center">26.2</td>
519
+ <td align="center">60.8 </td>
520
+ </tr>
521
+ <tr>
522
+ <td>CodeLLaMA-13B-Instruct</td>
523
+ <td align="center">28.2</td>
524
+ <td align="center">27.4</td>
525
+ <td align="center">62.0 </td>
526
+ </tr>
527
+ <tr>
528
+ <td>InternLM-7B-Chat-v1.1</td>
529
+ <td align="center">28.5</td>
530
+ <td align="center">4.8</td>
531
+ <td align="center">40.5 </td>
532
+ </tr>
533
+ <tr>
534
+ <td>InternLM-20B-Chat</td>
535
+ <td align="center">34.6</td>
536
+ <td align="center">21.4</td>
537
+ <td align="center">45.6 </td>
538
+ </tr>
539
+ <tr>
540
+ <td>Qwen-7B-Chat</td>
541
+ <td align="center">41.9</td>
542
+ <td align="center">40.5</td>
543
+ <td align="center">54.4 </td>
544
+ </tr>
545
+ <tr>
546
+ <td>Qwen-14B-Chat</td>
547
+ <td align="center">58.4</td>
548
+ <td align="center">53.6</td>
549
+ <td align="center">59.5</td>
550
+ </tr>
551
+ </table>
552
+
553
+ <p align="center">
554
+ <br>
555
+ <img src="assets/code_interpreter_showcase_001.jpg" />
556
+ <br>
557
+ </p>
558
+
559
+ #### Huggingface Agent
560
+
561
+ 千问还具备作为 [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents) 的能力。它在 Huggingface 提供的run模式评测基准上的表现如下:
562
+
563
+ Qwen-Chat also has the capability to be used as a [HuggingFace Agent](https://huggingface.co/docs/transformers/transformers_agents). Its performance on the run-mode benchmark provided by HuggingFace is as follows:
564
+
565
+ <table>
566
+ <tr>
567
+ <th colspan="4" align="center">HuggingFace Agent Benchmark- Run Mode</th>
568
+ </tr>
569
+ <tr>
570
+ <th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th>
571
+ </tr>
572
+ <tr>
573
+ <td>GPT-4</td><td align="center">100</td><td align="center">100</td><td align="center">97.4</td>
574
+ </tr>
575
+ <tr>
576
+ <td>GPT-3.5</td><td align="center">95.4</td><td align="center">96.3</td><td align="center">87.0</td>
577
+ </tr>
578
+ <tr>
579
+ <td>StarCoder-Base-15B</td><td align="center">86.1</td><td align="center">87.0</td><td align="center">68.9</td>
580
+ </tr>
581
+ <tr>
582
+ <td>StarCoder-15B</td><td align="center">87.0</td><td align="center">88.0</td><td align="center">68.9</td>
583
+ </tr>
584
+ <tr>
585
+ <td>Qwen-7B-Chat</td><td align="center">87.0</td><td align="center">87.0</td><td align="center">71.5</td>
586
+ </tr>
587
+ <tr>
588
+ <td>Qwen-14B-Chat</td><td align="center">93.5</td><td align="center">94.4</td><td align="center">87.0</td>
589
+ </tr>
590
+ </table>
591
+
592
+ <table>
593
+ <tr>
594
+ <th colspan="4" align="center">HuggingFace Agent Benchmark - Chat Mode</th>
595
+ </tr>
596
+ <tr>
597
+ <th align="center">Model</th><th align="center">Tool Selection↑</th><th align="center">Tool Used↑</th><th align="center">Code↑</th>
598
+ </tr>
599
+ <tr>
600
+ <td>GPT-4</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">98.5</td>
601
+ </tr>
602
+ <tr>
603
+ <td>GPT-3.5</td><td align="center">97.3</td><td align="center">96.8</td><td align="center">89.6</td>
604
+ </tr>
605
+ <tr>
606
+ <td>StarCoder-Base-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">91.1</td>
607
+ </tr>
608
+ <tr>
609
+ <td>StarCoder-15B</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">89.6</td>
610
+ </tr>
611
+ <tr>
612
+ <td>Qwen-7B-Chat</td><td align="center">94.7</td><td align="center">94.7</td><td align="center">85.1</td>
613
+ </tr>
614
+ <tr>
615
+ <td>Qwen-14B-Chat</td><td align="center">97.9</td><td align="center">97.9</td><td align="center">95.5</td>
616
+ </tr>
617
+ </table>
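+
+ For context, the two tables above correspond to the `run` and `chat` entry points of the `transformers` agents API. A rough sketch of those calls is shown below; the endpoint is hypothetical, and how to serve Qwen-7B-Chat behind it is outside the scope of this card:
+
+ ```python
+ from transformers import HfAgent
+
+ # Hypothetical endpoint: assumes Qwen-7B-Chat is already served behind a text-generation API.
+ agent = HfAgent(url_endpoint="http://localhost:8080/generate")
+
+ # "Run mode": a single instruction executed end-to-end with the built-in tools.
+ agent.run("Draw me a picture of rivers and lakes.")
+
+ # "Chat mode": multi-turn, keeping state between instructions.
+ agent.chat("Draw me a picture of rivers and lakes.")
+ agent.chat("Transform the picture so that there is a rock in there.")
+ ```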
618
+
619
+ <br>
620
+
621
+ ## FAQ
622
+
623
+ 如遇到问题,敬请查阅[FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。
624
+
625
+ If you meet problems, please check the [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and existing issues for a solution before opening a new issue.
626
+ <br>
627
+
628
+ ## 引用 (Citation)
629
+
630
+ 如果你觉得我们的工作对你有帮助,欢迎引用!
631
+
632
+ If you find our work helpful, feel free to cite it.
633
+
634
+ ```
635
+ @article{qwen,
636
+ title={Qwen Technical Report},
637
+ author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
638
+ journal={arXiv preprint arXiv:2309.16609},
639
+ year={2023}
640
+ }
641
+ ```
642
+ <br>
643
+
644
+ ## 使用协议(License Agreement)
645
+
646
+ 我们的代码和模型权重对学术研究完全开放,并支持商用。请查看[LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT)了解具体的开源协议细节。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
647
+
648
+ Our code and checkpoints are open for research purposes, and commercial use is allowed. Check the [LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) for more details about the license. If you have requirements for commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/qianwen) to apply.
649
+ <br>
650
+
651
+ ## 联系我们(Contact Us)
652
+
653
+ 如果你想给我们的研发团队和产品团队留言,欢迎加入我们的微信群、钉钉群以及Discord!同时,也欢迎通过邮件([email protected])联系我们。
654
+
655
+ If you are interested in leaving a message for either our research team or product team, join our Discord or WeChat groups! Also, feel free to send an email to [email protected].
656
+
config.json ADDED
@@ -0,0 +1,42 @@
1
+ {
2
+ "_name_or_path": "/data9/syt/qwen/Qwen-7B-Chat/",
3
+ "architectures": [
4
+ "QWenLMHeadModel"
5
+ ],
6
+ "attn_dropout_prob": 0.0,
7
+ "auto_map": {
8
+ "AutoConfig": "configuration_qwen.QWenConfig",
9
+ "AutoModelForCausalLM": "modeling_qwen.QWenLMHeadModel"
10
+ },
11
+ "bf16": true,
12
+ "emb_dropout_prob": 0.0,
13
+ "fp16": false,
14
+ "fp32": false,
15
+ "hidden_size": 4096,
16
+ "initializer_range": 0.02,
17
+ "intermediate_size": 22016,
18
+ "kv_channels": 128,
19
+ "layer_norm_epsilon": 1e-06,
20
+ "max_position_embeddings": 8192,
21
+ "model_type": "qwen",
22
+ "no_bias": true,
23
+ "num_attention_heads": 32,
24
+ "num_hidden_layers": 32,
25
+ "onnx_safe": null,
26
+ "rotary_emb_base": 10000,
27
+ "rotary_pct": 1.0,
28
+ "scale_attn_weights": true,
29
+ "seq_length": 8192,
30
+ "softmax_in_fp32": false,
31
+ "tie_word_embeddings": false,
32
+ "tokenizer_class": "QWenTokenizer",
33
+ "torch_dtype": "bfloat16",
34
+ "transformers_version": "4.36.2",
35
+ "use_cache": true,
36
+ "use_cache_kernel": false,
37
+ "use_cache_quantization": false,
38
+ "use_dynamic_ntk": true,
39
+ "use_flash_attn": true,
40
+ "use_logn_attn": true,
41
+ "vocab_size": 151936
42
+ }
configuration_qwen.py ADDED
@@ -0,0 +1,71 @@
1
+ # Copyright (c) Alibaba Cloud.
2
+ #
3
+ # This source code is licensed under the license found in the
4
+ # LICENSE file in the root directory of this source tree.
5
+
6
+ from transformers import PretrainedConfig
7
+
8
+
9
+ class QWenConfig(PretrainedConfig):
10
+ model_type = "qwen"
11
+ keys_to_ignore_at_inference = ["past_key_values"]
12
+
13
+ def __init__(
14
+ self,
15
+ vocab_size=151936,
16
+ hidden_size=4096,
17
+ num_hidden_layers=32,
18
+ num_attention_heads=32,
19
+ emb_dropout_prob=0.0,
20
+ attn_dropout_prob=0.0,
21
+ layer_norm_epsilon=1e-6,
22
+ initializer_range=0.02,
23
+ max_position_embeddings=8192,
24
+ scale_attn_weights=True,
25
+ use_cache=True,
26
+ bf16=False,
27
+ fp16=False,
28
+ fp32=False,
29
+ kv_channels=128,
30
+ rotary_pct=1.0,
31
+ rotary_emb_base=10000,
32
+ use_dynamic_ntk=True,
33
+ use_logn_attn=True,
34
+ use_flash_attn="auto",
35
+ intermediate_size=22016,
36
+ no_bias=True,
37
+ tie_word_embeddings=False,
38
+ use_cache_quantization=False,
39
+ use_cache_kernel=False,
40
+ softmax_in_fp32=False,
41
+ **kwargs,
42
+ ):
43
+ self.vocab_size = vocab_size
44
+ self.hidden_size = hidden_size
45
+ self.intermediate_size = intermediate_size
46
+ self.num_hidden_layers = num_hidden_layers
47
+ self.num_attention_heads = num_attention_heads
48
+ self.emb_dropout_prob = emb_dropout_prob
49
+ self.attn_dropout_prob = attn_dropout_prob
50
+ self.layer_norm_epsilon = layer_norm_epsilon
51
+ self.initializer_range = initializer_range
52
+ self.scale_attn_weights = scale_attn_weights
53
+ self.use_cache = use_cache
54
+ self.max_position_embeddings = max_position_embeddings
55
+ self.bf16 = bf16
56
+ self.fp16 = fp16
57
+ self.fp32 = fp32
58
+ self.kv_channels = kv_channels
59
+ self.rotary_pct = rotary_pct
60
+ self.rotary_emb_base = rotary_emb_base
61
+ self.use_dynamic_ntk = use_dynamic_ntk
62
+ self.use_logn_attn = use_logn_attn
63
+ self.use_flash_attn = use_flash_attn
64
+ self.no_bias = no_bias
65
+ self.use_cache_quantization = use_cache_quantization
66
+ self.use_cache_kernel = use_cache_kernel
67
+ self.softmax_in_fp32 = softmax_in_fp32
68
+ super().__init__(
69
+ tie_word_embeddings=tie_word_embeddings,
70
+ **kwargs
71
+ )
cpp_kernels.py ADDED
@@ -0,0 +1,55 @@
1
+ from torch.utils import cpp_extension
2
+ import pathlib
3
+ import os
4
+ import subprocess
5
+
6
+ def _get_cuda_bare_metal_version(cuda_dir):
7
+ raw_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"],
8
+ universal_newlines=True)
9
+ output = raw_output.split()
10
+ release_idx = output.index("release") + 1
11
+ release = output[release_idx].split(".")
12
+ bare_metal_major = release[0]
13
+ bare_metal_minor = release[1][0]
14
+
15
+ return raw_output, bare_metal_major, bare_metal_minor
16
+
17
+ def _create_build_dir(buildpath):
18
+ try:
19
+ os.mkdir(buildpath)
20
+ except OSError:
21
+ if not os.path.isdir(buildpath):
22
+ print(f"Creation of the build directory {buildpath} failed")
23
+
24
+ # Check if cuda 11 is installed for compute capability 8.0
25
+ cc_flag = []
26
+ _, bare_metal_major, bare_metal_minor = _get_cuda_bare_metal_version(cpp_extension.CUDA_HOME)
27
+ if int(bare_metal_major) >= 11:
28
+ cc_flag.append('-gencode')
29
+ cc_flag.append('arch=compute_80,code=sm_80')
30
+ if int(bare_metal_minor) >= 7:
31
+ cc_flag.append('-gencode')
32
+ cc_flag.append('arch=compute_90,code=sm_90')
33
+
34
+ # Build path
35
+ srcpath = pathlib.Path(__file__).parent.absolute()
36
+ buildpath = srcpath / 'build'
37
+ _create_build_dir(buildpath)
38
+
39
+ def _cpp_extention_load_helper(name, sources, extra_cuda_flags):
40
+ return cpp_extension.load(
41
+ name=name,
42
+ sources=sources,
43
+ build_directory=buildpath,
44
+ extra_cflags=['-O3', ],
45
+ extra_cuda_cflags=['-O3',
46
+ '-gencode', 'arch=compute_70,code=sm_70',
47
+ '--use_fast_math'] + extra_cuda_flags + cc_flag,
48
+ verbose=1
49
+ )
50
+
51
+ extra_flags = []
52
+
53
+ cache_autogptq_cuda_256_sources = ["./cache_autogptq_cuda_256.cpp",
54
+ "./cache_autogptq_cuda_kernel_256.cu"]
55
+ cache_autogptq_cuda_256 = _cpp_extention_load_helper("cache_autogptq_cuda_256", cache_autogptq_cuda_256_sources, extra_flags)
generation_config.json ADDED
@@ -0,0 +1,12 @@
1
+ {
2
+ "chat_format": "chatml",
3
+ "do_sample": true,
4
+ "eos_token_id": 151643,
5
+ "max_new_tokens": 512,
6
+ "max_window_size": 6144,
7
+ "pad_token_id": 151643,
8
+ "repetition_penalty": 1.1,
9
+ "top_k": 0,
10
+ "top_p": 0.8,
11
+ "transformers_version": "4.36.2"
12
+ }
model-00001-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7775484b9b38a70a731f59bcefd582aa408d5e0e258c10437e98dca4fd4f632a
3
+ size 1964066488
model-00002-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3d60ceb227e024b6640d92e2dd2b71c38352ee39c3228b997c5ecc20176c3747
3
+ size 2023960808
model-00003-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8610f2780afe5c753e3f9e2f9e0889ef025a8a8eb44da6cfd03ddad881e9afe
3
+ size 2023960816
model-00004-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:778dfc0d00694f063de05040627ff626f15fb22b3920530a0463bc6e14c6b74e
3
+ size 2023960848
model-00005-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4570c9a97fa88f9381c2193302b553566c0a9a022022e8f25ab71dd862df308f
3
+ size 2023960848
model-00006-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6a438a54e5b42a5d35b74f54ce3449c8c4817cbbc50988dd2ecb32d8b8603285
3
+ size 2023960848
model-00007-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b941d8b419cc8131cb7aa54aa06d297c091a964532d311f639f47b156062101d
3
+ size 2023960848
model-00008-of-00008.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:78a8add8ea154ee6b48a78f6da85cde6078a4c9e1b0931b028f349283124edfa
3
+ size 1334845784
model.safetensors.index.json ADDED
@@ -0,0 +1,266 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 15442649088
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "model-00008-of-00008.safetensors",
7
+ "transformer.h.0.attn.c_attn.bias": "model-00001-of-00008.safetensors",
8
+ "transformer.h.0.attn.c_attn.weight": "model-00001-of-00008.safetensors",
9
+ "transformer.h.0.attn.c_proj.weight": "model-00001-of-00008.safetensors",
10
+ "transformer.h.0.ln_1.weight": "model-00001-of-00008.safetensors",
11
+ "transformer.h.0.ln_2.weight": "model-00001-of-00008.safetensors",
12
+ "transformer.h.0.mlp.c_proj.weight": "model-00001-of-00008.safetensors",
13
+ "transformer.h.0.mlp.w1.weight": "model-00001-of-00008.safetensors",
14
+ "transformer.h.0.mlp.w2.weight": "model-00001-of-00008.safetensors",
15
+ "transformer.h.1.attn.c_attn.bias": "model-00001-of-00008.safetensors",
16
+ "transformer.h.1.attn.c_attn.weight": "model-00001-of-00008.safetensors",
17
+ "transformer.h.1.attn.c_proj.weight": "model-00001-of-00008.safetensors",
18
+ "transformer.h.1.ln_1.weight": "model-00001-of-00008.safetensors",
19
+ "transformer.h.1.ln_2.weight": "model-00001-of-00008.safetensors",
20
+ "transformer.h.1.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
21
+ "transformer.h.1.mlp.w1.weight": "model-00001-of-00008.safetensors",
22
+ "transformer.h.1.mlp.w2.weight": "model-00001-of-00008.safetensors",
23
+ "transformer.h.10.attn.c_attn.bias": "model-00003-of-00008.safetensors",
24
+ "transformer.h.10.attn.c_attn.weight": "model-00003-of-00008.safetensors",
25
+ "transformer.h.10.attn.c_proj.weight": "model-00003-of-00008.safetensors",
26
+ "transformer.h.10.ln_1.weight": "model-00003-of-00008.safetensors",
27
+ "transformer.h.10.ln_2.weight": "model-00003-of-00008.safetensors",
28
+ "transformer.h.10.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
29
+ "transformer.h.10.mlp.w1.weight": "model-00003-of-00008.safetensors",
30
+ "transformer.h.10.mlp.w2.weight": "model-00003-of-00008.safetensors",
31
+ "transformer.h.11.attn.c_attn.bias": "model-00003-of-00008.safetensors",
32
+ "transformer.h.11.attn.c_attn.weight": "model-00003-of-00008.safetensors",
33
+ "transformer.h.11.attn.c_proj.weight": "model-00003-of-00008.safetensors",
34
+ "transformer.h.11.ln_1.weight": "model-00003-of-00008.safetensors",
35
+ "transformer.h.11.ln_2.weight": "model-00003-of-00008.safetensors",
36
+ "transformer.h.11.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
37
+ "transformer.h.11.mlp.w1.weight": "model-00003-of-00008.safetensors",
38
+ "transformer.h.11.mlp.w2.weight": "model-00003-of-00008.safetensors",
39
+ "transformer.h.12.attn.c_attn.bias": "model-00004-of-00008.safetensors",
40
+ "transformer.h.12.attn.c_attn.weight": "model-00004-of-00008.safetensors",
41
+ "transformer.h.12.attn.c_proj.weight": "model-00004-of-00008.safetensors",
42
+ "transformer.h.12.ln_1.weight": "model-00004-of-00008.safetensors",
43
+ "transformer.h.12.ln_2.weight": "model-00004-of-00008.safetensors",
44
+ "transformer.h.12.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
45
+ "transformer.h.12.mlp.w1.weight": "model-00004-of-00008.safetensors",
46
+ "transformer.h.12.mlp.w2.weight": "model-00004-of-00008.safetensors",
47
+ "transformer.h.13.attn.c_attn.bias": "model-00004-of-00008.safetensors",
48
+ "transformer.h.13.attn.c_attn.weight": "model-00004-of-00008.safetensors",
49
+ "transformer.h.13.attn.c_proj.weight": "model-00004-of-00008.safetensors",
50
+ "transformer.h.13.ln_1.weight": "model-00004-of-00008.safetensors",
51
+ "transformer.h.13.ln_2.weight": "model-00004-of-00008.safetensors",
52
+ "transformer.h.13.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
53
+ "transformer.h.13.mlp.w1.weight": "model-00004-of-00008.safetensors",
54
+ "transformer.h.13.mlp.w2.weight": "model-00004-of-00008.safetensors",
55
+ "transformer.h.14.attn.c_attn.bias": "model-00004-of-00008.safetensors",
56
+ "transformer.h.14.attn.c_attn.weight": "model-00004-of-00008.safetensors",
57
+ "transformer.h.14.attn.c_proj.weight": "model-00004-of-00008.safetensors",
58
+ "transformer.h.14.ln_1.weight": "model-00004-of-00008.safetensors",
59
+ "transformer.h.14.ln_2.weight": "model-00004-of-00008.safetensors",
60
+ "transformer.h.14.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
61
+ "transformer.h.14.mlp.w1.weight": "model-00004-of-00008.safetensors",
62
+ "transformer.h.14.mlp.w2.weight": "model-00004-of-00008.safetensors",
63
+ "transformer.h.15.attn.c_attn.bias": "model-00004-of-00008.safetensors",
64
+ "transformer.h.15.attn.c_attn.weight": "model-00004-of-00008.safetensors",
65
+ "transformer.h.15.attn.c_proj.weight": "model-00004-of-00008.safetensors",
66
+ "transformer.h.15.ln_1.weight": "model-00004-of-00008.safetensors",
67
+ "transformer.h.15.ln_2.weight": "model-00004-of-00008.safetensors",
68
+ "transformer.h.15.mlp.c_proj.weight": "model-00004-of-00008.safetensors",
69
+ "transformer.h.15.mlp.w1.weight": "model-00004-of-00008.safetensors",
70
+ "transformer.h.15.mlp.w2.weight": "model-00004-of-00008.safetensors",
71
+ "transformer.h.16.attn.c_attn.bias": "model-00004-of-00008.safetensors",
72
+ "transformer.h.16.attn.c_attn.weight": "model-00004-of-00008.safetensors",
73
+ "transformer.h.16.attn.c_proj.weight": "model-00004-of-00008.safetensors",
74
+ "transformer.h.16.ln_1.weight": "model-00004-of-00008.safetensors",
75
+ "transformer.h.16.ln_2.weight": "model-00004-of-00008.safetensors",
76
+ "transformer.h.16.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
77
+ "transformer.h.16.mlp.w1.weight": "model-00004-of-00008.safetensors",
78
+ "transformer.h.16.mlp.w2.weight": "model-00004-of-00008.safetensors",
79
+ "transformer.h.17.attn.c_attn.bias": "model-00005-of-00008.safetensors",
80
+ "transformer.h.17.attn.c_attn.weight": "model-00005-of-00008.safetensors",
81
+ "transformer.h.17.attn.c_proj.weight": "model-00005-of-00008.safetensors",
82
+ "transformer.h.17.ln_1.weight": "model-00005-of-00008.safetensors",
83
+ "transformer.h.17.ln_2.weight": "model-00005-of-00008.safetensors",
84
+ "transformer.h.17.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
85
+ "transformer.h.17.mlp.w1.weight": "model-00005-of-00008.safetensors",
86
+ "transformer.h.17.mlp.w2.weight": "model-00005-of-00008.safetensors",
87
+ "transformer.h.18.attn.c_attn.bias": "model-00005-of-00008.safetensors",
88
+ "transformer.h.18.attn.c_attn.weight": "model-00005-of-00008.safetensors",
89
+ "transformer.h.18.attn.c_proj.weight": "model-00005-of-00008.safetensors",
90
+ "transformer.h.18.ln_1.weight": "model-00005-of-00008.safetensors",
91
+ "transformer.h.18.ln_2.weight": "model-00005-of-00008.safetensors",
92
+ "transformer.h.18.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
93
+ "transformer.h.18.mlp.w1.weight": "model-00005-of-00008.safetensors",
94
+ "transformer.h.18.mlp.w2.weight": "model-00005-of-00008.safetensors",
95
+ "transformer.h.19.attn.c_attn.bias": "model-00005-of-00008.safetensors",
96
+ "transformer.h.19.attn.c_attn.weight": "model-00005-of-00008.safetensors",
97
+ "transformer.h.19.attn.c_proj.weight": "model-00005-of-00008.safetensors",
98
+ "transformer.h.19.ln_1.weight": "model-00005-of-00008.safetensors",
99
+ "transformer.h.19.ln_2.weight": "model-00005-of-00008.safetensors",
100
+ "transformer.h.19.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
101
+ "transformer.h.19.mlp.w1.weight": "model-00005-of-00008.safetensors",
102
+ "transformer.h.19.mlp.w2.weight": "model-00005-of-00008.safetensors",
103
+ "transformer.h.2.attn.c_attn.bias": "model-00002-of-00008.safetensors",
104
+ "transformer.h.2.attn.c_attn.weight": "model-00002-of-00008.safetensors",
105
+ "transformer.h.2.attn.c_proj.weight": "model-00002-of-00008.safetensors",
106
+ "transformer.h.2.ln_1.weight": "model-00002-of-00008.safetensors",
107
+ "transformer.h.2.ln_2.weight": "model-00002-of-00008.safetensors",
108
+ "transformer.h.2.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
109
+ "transformer.h.2.mlp.w1.weight": "model-00002-of-00008.safetensors",
110
+ "transformer.h.2.mlp.w2.weight": "model-00002-of-00008.safetensors",
111
+ "transformer.h.20.attn.c_attn.bias": "model-00005-of-00008.safetensors",
112
+ "transformer.h.20.attn.c_attn.weight": "model-00005-of-00008.safetensors",
113
+ "transformer.h.20.attn.c_proj.weight": "model-00005-of-00008.safetensors",
114
+ "transformer.h.20.ln_1.weight": "model-00005-of-00008.safetensors",
115
+ "transformer.h.20.ln_2.weight": "model-00005-of-00008.safetensors",
116
+ "transformer.h.20.mlp.c_proj.weight": "model-00005-of-00008.safetensors",
117
+ "transformer.h.20.mlp.w1.weight": "model-00005-of-00008.safetensors",
118
+ "transformer.h.20.mlp.w2.weight": "model-00005-of-00008.safetensors",
119
+ "transformer.h.21.attn.c_attn.bias": "model-00005-of-00008.safetensors",
120
+ "transformer.h.21.attn.c_attn.weight": "model-00005-of-00008.safetensors",
121
+ "transformer.h.21.attn.c_proj.weight": "model-00005-of-00008.safetensors",
122
+ "transformer.h.21.ln_1.weight": "model-00005-of-00008.safetensors",
123
+ "transformer.h.21.ln_2.weight": "model-00005-of-00008.safetensors",
124
+ "transformer.h.21.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
125
+ "transformer.h.21.mlp.w1.weight": "model-00005-of-00008.safetensors",
126
+ "transformer.h.21.mlp.w2.weight": "model-00005-of-00008.safetensors",
127
+ "transformer.h.22.attn.c_attn.bias": "model-00006-of-00008.safetensors",
128
+ "transformer.h.22.attn.c_attn.weight": "model-00006-of-00008.safetensors",
129
+ "transformer.h.22.attn.c_proj.weight": "model-00006-of-00008.safetensors",
130
+ "transformer.h.22.ln_1.weight": "model-00006-of-00008.safetensors",
131
+ "transformer.h.22.ln_2.weight": "model-00006-of-00008.safetensors",
132
+ "transformer.h.22.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
133
+ "transformer.h.22.mlp.w1.weight": "model-00006-of-00008.safetensors",
134
+ "transformer.h.22.mlp.w2.weight": "model-00006-of-00008.safetensors",
135
+ "transformer.h.23.attn.c_attn.bias": "model-00006-of-00008.safetensors",
136
+ "transformer.h.23.attn.c_attn.weight": "model-00006-of-00008.safetensors",
137
+ "transformer.h.23.attn.c_proj.weight": "model-00006-of-00008.safetensors",
138
+ "transformer.h.23.ln_1.weight": "model-00006-of-00008.safetensors",
139
+ "transformer.h.23.ln_2.weight": "model-00006-of-00008.safetensors",
140
+ "transformer.h.23.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
141
+ "transformer.h.23.mlp.w1.weight": "model-00006-of-00008.safetensors",
142
+ "transformer.h.23.mlp.w2.weight": "model-00006-of-00008.safetensors",
143
+ "transformer.h.24.attn.c_attn.bias": "model-00006-of-00008.safetensors",
144
+ "transformer.h.24.attn.c_attn.weight": "model-00006-of-00008.safetensors",
145
+ "transformer.h.24.attn.c_proj.weight": "model-00006-of-00008.safetensors",
146
+ "transformer.h.24.ln_1.weight": "model-00006-of-00008.safetensors",
147
+ "transformer.h.24.ln_2.weight": "model-00006-of-00008.safetensors",
148
+ "transformer.h.24.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
149
+ "transformer.h.24.mlp.w1.weight": "model-00006-of-00008.safetensors",
150
+ "transformer.h.24.mlp.w2.weight": "model-00006-of-00008.safetensors",
151
+ "transformer.h.25.attn.c_attn.bias": "model-00006-of-00008.safetensors",
152
+ "transformer.h.25.attn.c_attn.weight": "model-00006-of-00008.safetensors",
153
+ "transformer.h.25.attn.c_proj.weight": "model-00006-of-00008.safetensors",
154
+ "transformer.h.25.ln_1.weight": "model-00006-of-00008.safetensors",
155
+ "transformer.h.25.ln_2.weight": "model-00006-of-00008.safetensors",
156
+ "transformer.h.25.mlp.c_proj.weight": "model-00006-of-00008.safetensors",
157
+ "transformer.h.25.mlp.w1.weight": "model-00006-of-00008.safetensors",
158
+ "transformer.h.25.mlp.w2.weight": "model-00006-of-00008.safetensors",
159
+ "transformer.h.26.attn.c_attn.bias": "model-00006-of-00008.safetensors",
160
+ "transformer.h.26.attn.c_attn.weight": "model-00006-of-00008.safetensors",
161
+ "transformer.h.26.attn.c_proj.weight": "model-00006-of-00008.safetensors",
162
+ "transformer.h.26.ln_1.weight": "model-00006-of-00008.safetensors",
163
+ "transformer.h.26.ln_2.weight": "model-00006-of-00008.safetensors",
164
+ "transformer.h.26.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
165
+ "transformer.h.26.mlp.w1.weight": "model-00006-of-00008.safetensors",
166
+ "transformer.h.26.mlp.w2.weight": "model-00006-of-00008.safetensors",
167
+ "transformer.h.27.attn.c_attn.bias": "model-00007-of-00008.safetensors",
168
+ "transformer.h.27.attn.c_attn.weight": "model-00007-of-00008.safetensors",
169
+ "transformer.h.27.attn.c_proj.weight": "model-00007-of-00008.safetensors",
170
+ "transformer.h.27.ln_1.weight": "model-00007-of-00008.safetensors",
171
+ "transformer.h.27.ln_2.weight": "model-00007-of-00008.safetensors",
172
+ "transformer.h.27.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
173
+ "transformer.h.27.mlp.w1.weight": "model-00007-of-00008.safetensors",
174
+ "transformer.h.27.mlp.w2.weight": "model-00007-of-00008.safetensors",
175
+ "transformer.h.28.attn.c_attn.bias": "model-00007-of-00008.safetensors",
176
+ "transformer.h.28.attn.c_attn.weight": "model-00007-of-00008.safetensors",
177
+ "transformer.h.28.attn.c_proj.weight": "model-00007-of-00008.safetensors",
178
+ "transformer.h.28.ln_1.weight": "model-00007-of-00008.safetensors",
179
+ "transformer.h.28.ln_2.weight": "model-00007-of-00008.safetensors",
180
+ "transformer.h.28.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
181
+ "transformer.h.28.mlp.w1.weight": "model-00007-of-00008.safetensors",
182
+ "transformer.h.28.mlp.w2.weight": "model-00007-of-00008.safetensors",
183
+ "transformer.h.29.attn.c_attn.bias": "model-00007-of-00008.safetensors",
184
+ "transformer.h.29.attn.c_attn.weight": "model-00007-of-00008.safetensors",
185
+ "transformer.h.29.attn.c_proj.weight": "model-00007-of-00008.safetensors",
186
+ "transformer.h.29.ln_1.weight": "model-00007-of-00008.safetensors",
187
+ "transformer.h.29.ln_2.weight": "model-00007-of-00008.safetensors",
188
+ "transformer.h.29.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
189
+ "transformer.h.29.mlp.w1.weight": "model-00007-of-00008.safetensors",
190
+ "transformer.h.29.mlp.w2.weight": "model-00007-of-00008.safetensors",
191
+ "transformer.h.3.attn.c_attn.bias": "model-00002-of-00008.safetensors",
192
+ "transformer.h.3.attn.c_attn.weight": "model-00002-of-00008.safetensors",
193
+ "transformer.h.3.attn.c_proj.weight": "model-00002-of-00008.safetensors",
194
+ "transformer.h.3.ln_1.weight": "model-00002-of-00008.safetensors",
195
+ "transformer.h.3.ln_2.weight": "model-00002-of-00008.safetensors",
196
+ "transformer.h.3.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
197
+ "transformer.h.3.mlp.w1.weight": "model-00002-of-00008.safetensors",
198
+ "transformer.h.3.mlp.w2.weight": "model-00002-of-00008.safetensors",
199
+ "transformer.h.30.attn.c_attn.bias": "model-00007-of-00008.safetensors",
200
+ "transformer.h.30.attn.c_attn.weight": "model-00007-of-00008.safetensors",
201
+ "transformer.h.30.attn.c_proj.weight": "model-00007-of-00008.safetensors",
202
+ "transformer.h.30.ln_1.weight": "model-00007-of-00008.safetensors",
203
+ "transformer.h.30.ln_2.weight": "model-00007-of-00008.safetensors",
204
+ "transformer.h.30.mlp.c_proj.weight": "model-00007-of-00008.safetensors",
205
+ "transformer.h.30.mlp.w1.weight": "model-00007-of-00008.safetensors",
206
+ "transformer.h.30.mlp.w2.weight": "model-00007-of-00008.safetensors",
207
+ "transformer.h.31.attn.c_attn.bias": "model-00007-of-00008.safetensors",
208
+ "transformer.h.31.attn.c_attn.weight": "model-00007-of-00008.safetensors",
209
+ "transformer.h.31.attn.c_proj.weight": "model-00007-of-00008.safetensors",
210
+ "transformer.h.31.ln_1.weight": "model-00007-of-00008.safetensors",
211
+ "transformer.h.31.ln_2.weight": "model-00007-of-00008.safetensors",
212
+ "transformer.h.31.mlp.c_proj.weight": "model-00008-of-00008.safetensors",
213
+ "transformer.h.31.mlp.w1.weight": "model-00007-of-00008.safetensors",
214
+ "transformer.h.31.mlp.w2.weight": "model-00007-of-00008.safetensors",
215
+ "transformer.h.4.attn.c_attn.bias": "model-00002-of-00008.safetensors",
216
+ "transformer.h.4.attn.c_attn.weight": "model-00002-of-00008.safetensors",
217
+ "transformer.h.4.attn.c_proj.weight": "model-00002-of-00008.safetensors",
218
+ "transformer.h.4.ln_1.weight": "model-00002-of-00008.safetensors",
219
+ "transformer.h.4.ln_2.weight": "model-00002-of-00008.safetensors",
220
+ "transformer.h.4.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
221
+ "transformer.h.4.mlp.w1.weight": "model-00002-of-00008.safetensors",
222
+ "transformer.h.4.mlp.w2.weight": "model-00002-of-00008.safetensors",
223
+ "transformer.h.5.attn.c_attn.bias": "model-00002-of-00008.safetensors",
224
+ "transformer.h.5.attn.c_attn.weight": "model-00002-of-00008.safetensors",
225
+ "transformer.h.5.attn.c_proj.weight": "model-00002-of-00008.safetensors",
226
+ "transformer.h.5.ln_1.weight": "model-00002-of-00008.safetensors",
227
+ "transformer.h.5.ln_2.weight": "model-00002-of-00008.safetensors",
228
+ "transformer.h.5.mlp.c_proj.weight": "model-00002-of-00008.safetensors",
229
+ "transformer.h.5.mlp.w1.weight": "model-00002-of-00008.safetensors",
230
+ "transformer.h.5.mlp.w2.weight": "model-00002-of-00008.safetensors",
231
+ "transformer.h.6.attn.c_attn.bias": "model-00002-of-00008.safetensors",
232
+ "transformer.h.6.attn.c_attn.weight": "model-00002-of-00008.safetensors",
233
+ "transformer.h.6.attn.c_proj.weight": "model-00002-of-00008.safetensors",
234
+ "transformer.h.6.ln_1.weight": "model-00002-of-00008.safetensors",
235
+ "transformer.h.6.ln_2.weight": "model-00002-of-00008.safetensors",
236
+ "transformer.h.6.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
237
+ "transformer.h.6.mlp.w1.weight": "model-00002-of-00008.safetensors",
238
+ "transformer.h.6.mlp.w2.weight": "model-00002-of-00008.safetensors",
239
+ "transformer.h.7.attn.c_attn.bias": "model-00003-of-00008.safetensors",
240
+ "transformer.h.7.attn.c_attn.weight": "model-00003-of-00008.safetensors",
241
+ "transformer.h.7.attn.c_proj.weight": "model-00003-of-00008.safetensors",
242
+ "transformer.h.7.ln_1.weight": "model-00003-of-00008.safetensors",
243
+ "transformer.h.7.ln_2.weight": "model-00003-of-00008.safetensors",
244
+ "transformer.h.7.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
245
+ "transformer.h.7.mlp.w1.weight": "model-00003-of-00008.safetensors",
246
+ "transformer.h.7.mlp.w2.weight": "model-00003-of-00008.safetensors",
247
+ "transformer.h.8.attn.c_attn.bias": "model-00003-of-00008.safetensors",
248
+ "transformer.h.8.attn.c_attn.weight": "model-00003-of-00008.safetensors",
249
+ "transformer.h.8.attn.c_proj.weight": "model-00003-of-00008.safetensors",
250
+ "transformer.h.8.ln_1.weight": "model-00003-of-00008.safetensors",
251
+ "transformer.h.8.ln_2.weight": "model-00003-of-00008.safetensors",
252
+ "transformer.h.8.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
253
+ "transformer.h.8.mlp.w1.weight": "model-00003-of-00008.safetensors",
254
+ "transformer.h.8.mlp.w2.weight": "model-00003-of-00008.safetensors",
255
+ "transformer.h.9.attn.c_attn.bias": "model-00003-of-00008.safetensors",
256
+ "transformer.h.9.attn.c_attn.weight": "model-00003-of-00008.safetensors",
257
+ "transformer.h.9.attn.c_proj.weight": "model-00003-of-00008.safetensors",
258
+ "transformer.h.9.ln_1.weight": "model-00003-of-00008.safetensors",
259
+ "transformer.h.9.ln_2.weight": "model-00003-of-00008.safetensors",
260
+ "transformer.h.9.mlp.c_proj.weight": "model-00003-of-00008.safetensors",
261
+ "transformer.h.9.mlp.w1.weight": "model-00003-of-00008.safetensors",
262
+ "transformer.h.9.mlp.w2.weight": "model-00003-of-00008.safetensors",
263
+ "transformer.ln_f.weight": "model-00008-of-00008.safetensors",
264
+ "transformer.wte.weight": "model-00001-of-00008.safetensors"
265
+ }
266
+ }
modeling_qwen.py ADDED
@@ -0,0 +1,1365 @@
1
+ # Copyright (c) Alibaba Cloud.
2
+ #
3
+ # This source code is licensed under the license found in the
4
+ # LICENSE file in the root directory of this source tree.
5
+
6
+ import copy
7
+ import importlib
8
+ import math
9
+ import pathlib
10
+ from typing import TYPE_CHECKING, Optional, Tuple, Union, Callable, List, Any, Generator
11
+
12
+ import torch
13
+ import torch.nn.functional as F
14
+ import torch.utils.checkpoint
15
+ import warnings
16
+
17
+ from torch.nn import CrossEntropyLoss
18
+ from transformers import PreTrainedTokenizer, GenerationConfig, StoppingCriteriaList
19
+ from transformers.generation.logits_process import LogitsProcessorList
20
+
21
+ if TYPE_CHECKING:
22
+ from transformers.generation.streamers import BaseStreamer
23
+ from transformers.generation.utils import GenerateOutput
24
+ from transformers.modeling_outputs import (
25
+ BaseModelOutputWithPast,
26
+ CausalLMOutputWithPast,
27
+ )
28
+ from transformers.modeling_utils import PreTrainedModel
29
+ from transformers.utils import logging
30
+
31
+ try:
32
+ from einops import rearrange
33
+ except ImportError:
34
+ rearrange = None
35
+ from torch import nn
36
+
37
+ SUPPORT_CUDA = torch.cuda.is_available()
38
+ SUPPORT_BF16 = SUPPORT_CUDA and torch.cuda.is_bf16_supported()
39
+ SUPPORT_FP16 = SUPPORT_CUDA and torch.cuda.get_device_capability(0)[0] >= 7
40
+ SUPPORT_TORCH2 = hasattr(torch, '__version__') and int(torch.__version__.split(".")[0]) >= 2
41
+
42
+
43
+ from .configuration_qwen import QWenConfig
44
+ from .qwen_generation_utils import (
45
+ HistoryType,
46
+ make_context,
47
+ decode_tokens,
48
+ get_stop_words_ids,
49
+ StopWordsLogitsProcessor,
50
+ )
51
+
52
+
53
+ logger = logging.get_logger(__name__)
54
+
55
+ _CHECKPOINT_FOR_DOC = "qwen"
56
+ _CONFIG_FOR_DOC = "QWenConfig"
57
+
58
+ QWen_PRETRAINED_MODEL_ARCHIVE_LIST = ["qwen-7b"]
59
+
60
+ _ERROR_BAD_CHAT_FORMAT = """\
61
+ We detect that you are probably using the pretrained model (rather than the chat model) for chatting, since the chat_format in generation_config is not "chatml".
62
+ If you are directly using the model downloaded from Huggingface, please make sure you are using our "Qwen/Qwen-7B-Chat" Huggingface model (rather than "Qwen/Qwen-7B") when you call model.chat().
63
+ 我们检测到您可能在使用预训练模型(而非chat模型)进行多轮chat,因为您当前在generation_config指定的chat_format,并未设置为我们在对话中所支持的"chatml"格式。
64
+ 如果您在直接使用我们从Huggingface提供的模型,请确保您在调用model.chat()时,使用的是"Qwen/Qwen-7B-Chat"模型(而非"Qwen/Qwen-7B"预训练模型)。
65
+ """
66
+
67
+ _SENTINEL = object()
68
+ _ERROR_STREAM_IN_CHAT = """\
69
+ Passing the argument `stream` to model.chat() is buggy, deprecated, and marked for removal. Please use model.chat_stream(...) instead of model.chat(..., stream=True).
70
+ 向model.chat()传入参数stream的用法可能存在Bug,该用法已被废弃,将在未来被移除。请使用model.chat_stream(...)代替model.chat(..., stream=True)。
71
+ """
72
+
73
+ _ERROR_INPUT_CPU_QUERY_WITH_FLASH_ATTN_ACTIVATED = """\
74
+ We detect that you have activated flash attention support, but are running model computation on CPU. Please make sure that your input data has been placed on the GPU. If you actually want to run CPU computation, please follow the readme and set device_map="cpu" to disable flash attention when loading the model (calling AutoModelForCausalLM.from_pretrained).
75
+ 检测到您的模型已激活了flash attention支持,但正在执行CPU运算任务。如使用flash attention,请您确认模型输入已经传到GPU上。如果您确认要执行CPU运算,请您在载入模型(调用AutoModelForCausalLM.from_pretrained)时,按照readme说法,指定device_map="cpu"以禁用flash attention。
76
+ """
77
+
78
+ apply_rotary_emb_func = None
79
+ rms_norm = None
80
+ flash_attn_unpadded_func = None
81
+ flash_attn_func = None
82
+
83
+ def _import_flash_attn():
84
+ global apply_rotary_emb_func, rms_norm, flash_attn_unpadded_func, flash_attn_func
85
+ try:
86
+ from flash_attn.layers.rotary import apply_rotary_emb_func as __apply_rotary_emb_func
87
+ apply_rotary_emb_func = __apply_rotary_emb_func
88
+ except ImportError:
89
+ logger.warn(
90
+ "Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency "
91
+ "https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary"
92
+ )
93
+
94
+ try:
95
+ from flash_attn.ops.rms_norm import rms_norm as __rms_norm
96
+ rms_norm = __rms_norm
97
+ except ImportError:
98
+ logger.warn(
99
+ "Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency "
100
+ "https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm"
101
+ )
102
+
103
+ try:
104
+ import flash_attn
105
+ _flash_attn_func = None
106
+ if not hasattr(flash_attn, '__version__'):
107
+ from flash_attn.flash_attn_interface import flash_attn_unpadded_func as __flash_attn_unpadded_func
108
+ else:
109
+ if int(flash_attn.__version__.split(".")[0]) >= 2:
110
+ if int(flash_attn.__version__.split(".")[1]) >= 1:
111
+ from flash_attn.flash_attn_interface import flash_attn_func as _flash_attn_func
112
+ from flash_attn.flash_attn_interface import flash_attn_varlen_func as __flash_attn_unpadded_func
113
+ else:
114
+ from flash_attn.flash_attn_interface import flash_attn_unpadded_func as __flash_attn_unpadded_func
115
+ flash_attn_unpadded_func = __flash_attn_unpadded_func
116
+ flash_attn_func = _flash_attn_func
117
+ except ImportError:
118
+ logger.warn(
119
+ "Warning: import flash_attn fail, please install FlashAttention to get higher efficiency "
120
+ "https://github.com/Dao-AILab/flash-attention"
121
+ )
122
+
123
+ def quantize_cache_v(fdata, bits, qmax, qmin):
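+ # Asymmetric affine quantization of a KV-cache tensor to uint8: per-(batch, head)
+ # min/max determine the scale and zero point, so that fdata ~= scale * (qdata - zero);
+ # dequantize_cache_torch below is the exact inverse of this mapping.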
124
+ # b, s, head, h-dim->b, head, s, h-dim
125
+ qtype = torch.uint8
126
+ device = fdata.device
127
+ shape = fdata.shape
128
+
129
+ fdata_cal = torch.flatten(fdata, 2)
130
+ fmax = torch.amax(fdata_cal, dim=-1, keepdim=True)
131
+ fmin = torch.amin(fdata_cal, dim=-1, keepdim=True)
132
+ # Compute params
133
+ if qmax.device != fmax.device:
134
+ qmax = qmax.to(device)
135
+ qmin = qmin.to(device)
136
+ scale = (fmax - fmin) / (qmax - qmin)
137
+ zero = qmin - fmin / scale
138
+ scale = scale.unsqueeze(-1).repeat(1,1,shape[2],1).contiguous()
139
+ zero = zero.unsqueeze(-1).repeat(1,1,shape[2],1).contiguous()
140
+ # Quantize
141
+ res_data = fdata / scale + zero
142
+ qdata = torch.clamp(res_data, qmin, qmax).to(qtype)
143
+ return qdata.contiguous(), scale, zero
144
+
145
+ def dequantize_cache_torch(qdata, scale, zero):
146
+ data = scale * (qdata - zero)
147
+ return data
148
+
149
+ class FlashSelfAttention(torch.nn.Module):
150
+ def __init__(
151
+ self,
152
+ causal=False,
153
+ softmax_scale=None,
154
+ attention_dropout=0.0,
155
+ ):
156
+ super().__init__()
157
+ assert flash_attn_unpadded_func is not None, (
158
+ "Please install FlashAttention first, " "e.g., with pip install flash-attn"
159
+ )
160
+ assert (
161
+ rearrange is not None
162
+ ), "Please install einops first, e.g., with pip install einops"
163
+ self.causal = causal
164
+ self.softmax_scale = softmax_scale
165
+ self.dropout_p = attention_dropout
166
+
167
+ def unpad_input(self, hidden_states, attention_mask):
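+ # Remove padded positions so the variable-length (unpadded) flash-attention kernel
+ # only sees real tokens; returns the packed tokens, their flat indices, the
+ # cumulative sequence lengths, and the longest sequence length in the batch.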
168
+ valid_mask = attention_mask.squeeze(1).squeeze(1).eq(0)
169
+ seqlens_in_batch = valid_mask.sum(dim=-1, dtype=torch.int32)
170
+ indices = torch.nonzero(valid_mask.flatten(), as_tuple=False).flatten()
171
+ max_seqlen_in_batch = seqlens_in_batch.max().item()
172
+ cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))
173
+ hidden_states = hidden_states[indices]
174
+ return hidden_states, indices, cu_seqlens, max_seqlen_in_batch
175
+
176
+ def pad_input(self, hidden_states, indices, batch, seqlen):
177
+ output = torch.zeros(batch * seqlen, *hidden_states.shape[1:], device=hidden_states.device,
178
+ dtype=hidden_states.dtype)
179
+ output[indices] = hidden_states
180
+ return rearrange(output, '(b s) ... -> b s ...', b=batch)
181
+
182
+ def forward(self, q, k, v, attention_mask=None):
183
+ assert all((i.dtype in [torch.float16, torch.bfloat16] for i in (q, k, v)))
184
+ assert all((i.is_cuda for i in (q, k, v)))
185
+ batch_size, seqlen_q = q.shape[0], q.shape[1]
186
+ seqlen_k = k.shape[1]
187
+ seqlen_out = seqlen_q
188
+
189
+ if flash_attn_func is not None and batch_size == 1:
190
+ dropout_p = self.dropout_p if self.training else 0
191
+ output = flash_attn_func(q, k, v, dropout_p, softmax_scale=self.softmax_scale, causal=self.causal)
192
+ return output
193
+
194
+ q, k, v = [rearrange(x, "b s ... -> (b s) ...") for x in [q, k, v]]
195
+ cu_seqlens_q = torch.arange(
196
+ 0,
197
+ (batch_size + 1) * seqlen_q,
198
+ step=seqlen_q,
199
+ dtype=torch.int32,
200
+ device=q.device,
201
+ )
202
+
203
+ if batch_size > 1 and attention_mask is not None:
204
+ k, indices_k, cu_seqlens_k, seqlen_k = self.unpad_input(k, attention_mask)
205
+ if q.size(0) == v.size(0):
206
+ q = q[indices_k]
207
+ cu_seqlens_q = cu_seqlens_k
208
+ seqlen_q = seqlen_k
209
+ v = v[indices_k]
210
+ else:
211
+ cu_seqlens_k = torch.arange(
212
+ 0,
213
+ (batch_size + 1) * seqlen_k,
214
+ step=seqlen_k,
215
+ dtype=torch.int32,
216
+ device=q.device,
217
+ )
218
+
219
+ if self.training:
220
+ assert seqlen_k == seqlen_q
221
+ is_causal = self.causal
222
+ dropout_p = self.dropout_p
223
+ else:
224
+ is_causal = seqlen_q == seqlen_k
225
+ dropout_p = 0
226
+
227
+ output = flash_attn_unpadded_func(
228
+ q,
229
+ k,
230
+ v,
231
+ cu_seqlens_q,
232
+ cu_seqlens_k,
233
+ seqlen_q,
234
+ seqlen_k,
235
+ dropout_p,
236
+ softmax_scale=self.softmax_scale,
237
+ causal=is_causal,
238
+ )
239
+ if batch_size > 1 and attention_mask is not None and seqlen_q == seqlen_k:
240
+ output = self.pad_input(output, indices_k, batch_size, seqlen_out)
241
+ else:
242
+ new_shape = (batch_size, output.shape[0] // batch_size) + output.shape[1:]
243
+ output = output.view(new_shape)
244
+ return output
245
+
246
+
247
+ class QWenAttention(nn.Module):
248
+ def __init__(self, config):
249
+ super().__init__()
250
+
251
+ self.register_buffer("masked_bias", torch.tensor(-1e4), persistent=False)
252
+ self.seq_length = config.seq_length
253
+
254
+ self.hidden_size = config.hidden_size
255
+ self.split_size = config.hidden_size
256
+ self.num_heads = config.num_attention_heads
257
+ self.head_dim = self.hidden_size // self.num_heads
258
+
259
+ self.use_flash_attn = config.use_flash_attn
260
+ self.scale_attn_weights = True
261
+
262
+ self.projection_size = config.kv_channels * config.num_attention_heads
263
+
264
+ assert self.projection_size % config.num_attention_heads == 0
265
+ self.hidden_size_per_attention_head = (
266
+ self.projection_size // config.num_attention_heads
267
+ )
268
+
269
+ self.c_attn = nn.Linear(config.hidden_size, 3 * self.projection_size)
270
+
271
+ self.c_proj = nn.Linear(
272
+ config.hidden_size, self.projection_size, bias=not config.no_bias
273
+ )
274
+
275
+ self.is_fp32 = not (config.bf16 or config.fp16)
276
+ if (
277
+ self.use_flash_attn
278
+ and flash_attn_unpadded_func is not None
279
+ and not self.is_fp32
280
+ ):
281
+ self.core_attention_flash = FlashSelfAttention(
282
+ causal=True, attention_dropout=config.attn_dropout_prob
283
+ )
284
+ self.bf16 = config.bf16
285
+
286
+ self.use_dynamic_ntk = config.use_dynamic_ntk
287
+ self.use_logn_attn = config.use_logn_attn
288
+
289
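+ # log-n attention scaling: positions i beyond the training length seq_length get a
+ # factor of log_{seq_length}(i) > 1, positions within it a factor of 1; the factor is
+ # applied to the query at inference time when use_logn_attn is enabled.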
+ logn_list = [
290
+ math.log(i, self.seq_length) if i > self.seq_length else 1
291
+ for i in range(1, 32768)
292
+ ]
293
+ logn_tensor = torch.tensor(logn_list)[None, :, None, None]
294
+ self.register_buffer("logn_tensor", logn_tensor, persistent=False)
295
+
296
+ self.attn_dropout = nn.Dropout(config.attn_dropout_prob)
297
+ self.softmax_in_fp32 = config.softmax_in_fp32 if hasattr(config, 'softmax_in_fp32') else False
298
+ self.use_cache_quantization = config.use_cache_quantization if hasattr(config, 'use_cache_quantization') else False
299
+ self.use_cache_kernel = config.use_cache_kernel if hasattr(config,'use_cache_kernel') else False
300
+ cache_dtype = torch.float
301
+ if self.bf16:
302
+ cache_dtype=torch.bfloat16
303
+ elif config.fp16:
304
+ cache_dtype = torch.float16
305
+ self.cache_qmax = torch.tensor(torch.iinfo(torch.uint8).max, dtype=cache_dtype)
306
+ self.cache_qmin = torch.tensor(torch.iinfo(torch.uint8).min, dtype=cache_dtype)
307
+
308
+ if config.use_cache_quantization and config.use_cache_kernel:
309
+ # pre-check that the supporting kernel source files exist
310
+ module_root = pathlib.Path(__file__).parent
311
+ src_files = ("cache_autogptq_cuda_256.cpp", "cache_autogptq_cuda_kernel_256.cu")
312
+ if any(not (module_root/src).is_file() for src in src_files):
313
+ warnings.warn("KV cache kernel source files (.cpp and .cu) not found.")
314
+ self.cache_kernels = None
315
+ else:
316
+ try:
317
+ from .cpp_kernels import cache_autogptq_cuda_256
318
+ self.cache_kernels = cache_autogptq_cuda_256
319
+ except ImportError:
320
+ warnings.warn("Failed to import KV cache kernels.")
321
+ self.cache_kernels = None
322
+
323
+ def _attn(self, query, key, value, causal_mask=None, attention_mask=None, head_mask=None):
324
+ device = query.device
325
+ if self.use_cache_quantization:
326
+ qk, qk_scale, qk_zero = key
327
+ if self.use_cache_kernel and self.cache_kernels is not None:
328
+ shape = query.shape[:-1] + (qk.shape[-2],)
329
+ attn_weights = torch.zeros(shape, dtype=torch.float16, device=device)
330
+ self.cache_kernels.vecquant8matmul_batched_faster_old(
331
+ query.contiguous() if query.dtype == torch.float16 else query.to(torch.float16).contiguous(),
332
+ qk.transpose(-1, -2).contiguous(),
333
+ attn_weights,
334
+ qk_scale.contiguous() if qk_scale.dtype == torch.float16 else qk_scale.to(torch.float16).contiguous(),
335
+ qk_zero.contiguous()if qk_zero.dtype == torch.float16 else qk_zero.to(torch.float16).contiguous())
336
+ # attn_weights = attn_weights.to(query.dtype).contiguous()
337
+ else:
338
+ key = dequantize_cache_torch(qk, qk_scale, qk_zero)
339
+ attn_weights = torch.matmul(query, key.transpose(-1, -2))
340
+ else:
341
+ attn_weights = torch.matmul(query, key.transpose(-1, -2))
342
+
343
+ if self.scale_attn_weights:
344
+ if self.use_cache_quantization:
345
+ size_temp = value[0].size(-1)
346
+ else:
347
+ size_temp = value.size(-1)
348
+ attn_weights = attn_weights / (size_temp ** 0.5)
349
+
350
+ mask_value = torch.finfo(attn_weights.dtype).min
351
+ if causal_mask is not None:
352
+ attn_weights = torch.where(
353
+ causal_mask, attn_weights.to(attn_weights.dtype), mask_value
354
+ )
355
+
356
+ if attention_mask is not None:
357
+ attn_weights = attn_weights + attention_mask
358
+
359
+ if self.softmax_in_fp32:
360
+ attn_weights = nn.functional.softmax(attn_weights.float(), dim=-1)
361
+ else:
362
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1)
363
+
364
+ attn_weights = attn_weights.type(query.dtype)
365
+ attn_weights = self.attn_dropout(attn_weights)
366
+
367
+ if head_mask is not None:
368
+ attn_weights = attn_weights * head_mask
369
+
370
+ if self.use_cache_quantization:
371
+ qv, qv_scale, qv_zero = value
372
+ if self.use_cache_kernel and self.cache_kernels is not None:
373
+ shape = attn_weights.shape[:-1] + (query.shape[-1],)
374
+ attn_output = torch.zeros(shape, dtype=torch.float16, device=device)
375
+ self.cache_kernels.vecquant8matmul_batched_column_compression_faster_old(
376
+ attn_weights.contiguous() if attn_weights.dtype == torch.float16 else attn_weights.to(torch.float16).contiguous(),
377
+ qv.contiguous(), # dtype: int32
378
+ attn_output,
379
+ qv_scale.contiguous() if qv_scale.dtype == torch.float16 else qv_scale.to(torch.float16).contiguous(),
380
+ qv_zero.contiguous() if qv_zero.dtype == torch.float16 else qv_zero.to(torch.float16).contiguous())
381
+ if attn_output.dtype != query.dtype:
382
+ attn_output = attn_output.to(query.dtype)
383
+ attn_weights = attn_weights.to(query.dtype)
384
+ else:
385
+ value = dequantize_cache_torch(qv, qv_scale, qv_zero)
386
+ attn_output = torch.matmul(attn_weights, value)
387
+ else:
388
+ attn_output = torch.matmul(attn_weights, value)
389
+
390
+ attn_output = attn_output.transpose(1, 2)
391
+
392
+ return attn_output, attn_weights
393
+
394
+ def _split_heads(self, tensor, num_heads, attn_head_size):
395
+ new_shape = tensor.size()[:-1] + (num_heads, attn_head_size)
396
+ tensor = tensor.view(new_shape)
397
+ return tensor
398
+
399
+ def _merge_heads(self, tensor, num_heads, attn_head_size):
400
+ tensor = tensor.contiguous()
401
+ new_shape = tensor.size()[:-2] + (num_heads * attn_head_size,)
402
+ return tensor.view(new_shape)
403
+
404
+ def forward(
405
+ self,
406
+ hidden_states: Optional[Tuple[torch.FloatTensor]],
407
+ rotary_pos_emb_list: Optional[List[List[torch.Tensor]]] = None,
408
+ layer_past: Optional[Tuple[torch.Tensor]] = None,
409
+ attention_mask: Optional[torch.FloatTensor] = None,
410
+ head_mask: Optional[torch.FloatTensor] = None,
411
+ encoder_hidden_states: Optional[torch.Tensor] = None,
412
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
413
+ output_attentions: Optional[bool] = False,
414
+ use_cache: Optional[bool] = False,
415
+ ):
416
+ mixed_x_layer = self.c_attn(hidden_states)
417
+
418
+ query, key, value = mixed_x_layer.split(self.split_size, dim=2)
419
+
420
+ query = self._split_heads(query, self.num_heads, self.head_dim)
421
+ key = self._split_heads(key, self.num_heads, self.head_dim)
422
+ value = self._split_heads(value, self.num_heads, self.head_dim)
423
+
424
+ if rotary_pos_emb_list is not None:
425
+ cur_len = query.shape[1]
426
+ if len(rotary_pos_emb_list) == 1:
427
+ rotary_pos_emb = rotary_pos_emb_list[0]
428
+ rotary_pos_emb = [i[:, -cur_len:, :, :] for i in rotary_pos_emb]
429
+ rotary_pos_emb = (rotary_pos_emb,) * 2
430
+ q_pos_emb, k_pos_emb = rotary_pos_emb
431
+ # Slice the pos emb for current inference
432
+ query = apply_rotary_pos_emb(query, q_pos_emb)
433
+ key = apply_rotary_pos_emb(key, k_pos_emb)
434
+ else:
435
+ query_list = []
436
+ key_list = []
437
+ for i, rotary_pos_emb in enumerate(rotary_pos_emb_list):
438
+ rotary_pos_emb = [i[:, -cur_len:, :, :] for i in rotary_pos_emb]
439
+ rotary_pos_emb = (rotary_pos_emb,) * 2
440
+ q_pos_emb, k_pos_emb = rotary_pos_emb
441
+ # Slice the pos emb for current inference
442
+ query_list += [apply_rotary_pos_emb(query[i:i+1, :, :], q_pos_emb)]
443
+ key_list += [apply_rotary_pos_emb(key[i:i+1, :, :], k_pos_emb)]
444
+ query = torch.cat(query_list, dim=0)
445
+ key = torch.cat(key_list, dim=0)
446
+
447
+ if self.use_cache_quantization:
448
+ key = quantize_cache_v(key.permute(0, 2, 1, 3),
449
+ bits=8,
450
+ qmin=self.cache_qmin,
451
+ qmax=self.cache_qmax)
452
+ value = quantize_cache_v(value.permute(0, 2, 1, 3),
453
+ bits=8,
454
+ qmin=self.cache_qmin,
455
+ qmax=self.cache_qmax)
456
+
457
+
458
+ if layer_past is not None:
459
+ past_key, past_value = layer_past[0], layer_past[1]
460
+ if self.use_cache_quantization:
461
+ # use_cache_quantization:
462
+ # present=((q_key,key_scale,key_zero_point),
463
+ # (q_value,value_scale,value_zero_point))
464
+ key = (torch.cat((past_key[0], key[0]), dim=2),
465
+ torch.cat((past_key[1], key[1]), dim=2),
466
+ torch.cat((past_key[2], key[2]), dim=2))
467
+ value = (torch.cat((past_value[0], value[0]), dim=2),
468
+ torch.cat((past_value[1], value[1]), dim=2),
469
+ torch.cat((past_value[2], value[2]), dim=2))
470
+ else:
471
+ # not use_cache_quantization:
472
+ # present=(key,value)
473
+ key = torch.cat((past_key, key), dim=1)
474
+ value = torch.cat((past_value, value), dim=1)
475
+
476
+ if use_cache:
477
+ present = (key, value)
478
+ else:
479
+ present = None
480
+
481
+ key_size = key[0].size(2) if self.use_cache_quantization else key.size(1)
482
+ if key_size > self.seq_length and self.use_logn_attn and not self.training:
483
+ if self.use_cache_quantization:
484
+ seq_start = key[0].size(2) - query.size(1)
485
+ seq_end = key[0].size(2)
486
+ else:
487
+ seq_start = key.size(1) - query.size(1)
488
+ seq_end = key.size(1)
489
+ logn_tensor = self.logn_tensor[:, seq_start:seq_end, :, :].type_as(query)
490
+ query = query * logn_tensor.expand_as(query)
491
+
492
+ if (
493
+ self.use_flash_attn
494
+ and flash_attn_unpadded_func is not None
495
+ and not self.is_fp32
496
+ and query.is_cuda
497
+ ):
498
+ q, k, v = query, key, value
499
+ attn_output = self.core_attention_flash(q, k, v, attention_mask=attention_mask)
500
+ else:
501
+ key_size = key[0].size(2) if self.use_cache_quantization else key.size(1)
502
+ if query.size(1) == key_size:
503
+ causal_mask = torch.tril(
504
+ torch.ones((key_size, key_size), dtype=torch.bool, device=query.device)
505
+ ).view(1, 1, key_size, key_size)
506
+ else:
507
+ causal_mask = None
508
+ query = query.permute(0, 2, 1, 3)
509
+ if not self.use_cache_quantization:
510
+ key = key.permute(0, 2, 1, 3)
511
+ value = value.permute(0, 2, 1, 3)
512
+ if (
513
+ causal_mask is None
514
+ and self.use_flash_attn
515
+ and flash_attn_unpadded_func is not None
516
+ and not self.is_fp32
517
+ and not query.is_cuda
518
+ ):
519
+ raise Exception(_ERROR_INPUT_CPU_QUERY_WITH_FLASH_ATTN_ACTIVATED)
520
+
521
+ if not self.use_cache_quantization and SUPPORT_TORCH2:
522
+ if attention_mask is not None:
523
+ attention_mask = attention_mask.expand(
524
+ -1, -1, causal_mask.size(2), -1
525
+ )
526
+ if causal_mask is not None:
527
+ attention_mask = attention_mask.masked_fill(~causal_mask, torch.finfo(query.dtype).min)
528
+ else:
529
+ attention_mask = causal_mask
530
+ attn_output = F.scaled_dot_product_attention(
531
+ query, key, value, attn_mask=attention_mask
532
+ ).transpose(1, 2)
533
+ attn_weight = None
534
+ else:
535
+ attn_output, attn_weight = self._attn(
536
+ query, key, value, causal_mask, attention_mask, head_mask
537
+ )
538
+ context_layer = self._merge_heads(
539
+ attn_output, self.num_heads, self.head_dim
540
+ )
541
+
542
+ attn_output = self.c_proj(context_layer)
543
+
544
+ outputs = (attn_output, present)
545
+ if output_attentions:
546
+ if (
547
+ self.use_flash_attn
548
+ and flash_attn_unpadded_func is not None
549
+ and not self.is_fp32
550
+ ):
551
+ raise ValueError("Cannot output attentions while using flash-attn")
552
+ elif not self.use_cache_quantization and SUPPORT_TORCH2:
553
+ raise ValueError("Cannot output attentions while using scaled_dot_product_attention")
554
+ else:
555
+ outputs += (attn_weight,)
556
+
557
+ return outputs
558
+
559
+
560
+ class QWenMLP(nn.Module):
561
+ def __init__(self, config):
562
+ super().__init__()
563
+ self.w1 = nn.Linear(
564
+ config.hidden_size, config.intermediate_size // 2, bias=not config.no_bias
565
+ )
566
+ self.w2 = nn.Linear(
567
+ config.hidden_size, config.intermediate_size // 2, bias=not config.no_bias
568
+ )
569
+ ff_dim_in = config.intermediate_size // 2
570
+ self.c_proj = nn.Linear(ff_dim_in, config.hidden_size, bias=not config.no_bias)
571
+
572
+ def forward(self, hidden_states):
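+ # SwiGLU-style gated MLP: the w1 projection is gated by SiLU(w2 projection)
+ # before being mapped back to the hidden size by c_proj.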
573
+ a1 = self.w1(hidden_states)
574
+ a2 = self.w2(hidden_states)
575
+ intermediate_parallel = a1 * F.silu(a2)
576
+ output = self.c_proj(intermediate_parallel)
577
+ return output
578
+
579
+
580
+ class QWenBlock(nn.Module):
581
+ def __init__(self, config):
582
+ super().__init__()
583
+ hidden_size = config.hidden_size
584
+ self.bf16 = config.bf16
585
+
586
+ self.ln_1 = RMSNorm(
587
+ hidden_size,
588
+ eps=config.layer_norm_epsilon,
589
+ )
590
+ self.attn = QWenAttention(config)
591
+ self.ln_2 = RMSNorm(
592
+ hidden_size,
593
+ eps=config.layer_norm_epsilon,
594
+ )
595
+
596
+ self.mlp = QWenMLP(config)
597
+
598
+ def forward(
599
+ self,
600
+ hidden_states: Optional[Tuple[torch.FloatTensor]],
601
+ rotary_pos_emb_list: Optional[List[List[torch.Tensor]]] = None,
602
+ layer_past: Optional[Tuple[torch.Tensor]] = None,
603
+ attention_mask: Optional[torch.FloatTensor] = None,
604
+ head_mask: Optional[torch.FloatTensor] = None,
605
+ encoder_hidden_states: Optional[torch.Tensor] = None,
606
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
607
+ use_cache: Optional[bool] = False,
608
+ output_attentions: Optional[bool] = False,
609
+ ):
610
+ layernorm_output = self.ln_1(hidden_states)
611
+
612
+ attn_outputs = self.attn(
613
+ layernorm_output,
614
+ rotary_pos_emb_list,
615
+ layer_past=layer_past,
616
+ attention_mask=attention_mask,
617
+ head_mask=head_mask,
618
+ use_cache=use_cache,
619
+ output_attentions=output_attentions,
620
+ )
621
+ attn_output = attn_outputs[0]
622
+
623
+ outputs = attn_outputs[1:]
624
+
625
+ residual = hidden_states
626
+ layernorm_input = attn_output + residual
627
+
628
+ layernorm_output = self.ln_2(layernorm_input)
629
+
630
+ residual = layernorm_input
631
+ mlp_output = self.mlp(layernorm_output)
632
+ hidden_states = residual + mlp_output
633
+
634
+ if use_cache:
635
+ outputs = (hidden_states,) + outputs
636
+ else:
637
+ outputs = (hidden_states,) + outputs[1:]
638
+
639
+ return outputs
640
+
641
+
642
+ class QWenPreTrainedModel(PreTrainedModel):
643
+ config_class = QWenConfig
644
+ base_model_prefix = "transformer"
645
+ is_parallelizable = False
646
+ supports_gradient_checkpointing = True
647
+ _no_split_modules = ["QWenBlock"]
648
+ _skip_keys_device_placement = "past_key_values"
649
+
650
+ def __init__(self, *inputs, **kwargs):
651
+ super().__init__(*inputs, **kwargs)
652
+
653
+ def _init_weights(self, module):
654
+ """Initialize the weights."""
655
+ if isinstance(module, nn.Linear):
656
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
657
+ if module.bias is not None:
658
+ module.bias.data.zero_()
659
+ elif isinstance(module, nn.Embedding):
660
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
661
+ if module.padding_idx is not None:
662
+ module.weight.data[module.padding_idx].zero_()
663
+ elif isinstance(module, RMSNorm):
664
+ module.weight.data.fill_(1.0)
665
+
666
+ for name, p in module.named_parameters():
667
+ if name == "c_proj.weight":
668
+ p.data.normal_(
669
+ mean=0.0,
670
+ std=(
671
+ self.config.initializer_range
672
+ / math.sqrt(2 * self.config.num_hidden_layers)
673
+ ),
674
+ )
675
+
676
+ def _set_gradient_checkpointing(self, module, value=False):
677
+ if isinstance(module, QWenModel):
678
+ module.gradient_checkpointing = value
679
+
680
+
681
+ class QWenModel(QWenPreTrainedModel):
682
+ _keys_to_ignore_on_load_missing = ["attn.masked_bias"]
683
+
684
+ def __init__(self, config):
685
+ super().__init__(config)
686
+ self.vocab_size = config.vocab_size
687
+ self.num_hidden_layers = config.num_hidden_layers
688
+ self.embed_dim = config.hidden_size
689
+ self.use_cache_quantization = self.config.use_cache_quantization if hasattr(self.config, 'use_cache_quantization') else False
690
+
691
+ self.gradient_checkpointing = False
692
+ self.use_dynamic_ntk = config.use_dynamic_ntk
693
+ self.seq_length = config.seq_length
694
+
695
+ self.wte = nn.Embedding(self.vocab_size, self.embed_dim)
696
+
697
+ self.drop = nn.Dropout(config.emb_dropout_prob)
698
+
699
+ if config.rotary_pct == 1.0:
700
+ self.rotary_ndims = None
701
+ else:
702
+ assert config.rotary_pct < 1
703
+ self.rotary_ndims = int(
704
+ config.kv_channels * config.rotary_pct
705
+ )
706
+ dim = (
707
+ self.rotary_ndims
708
+ if self.rotary_ndims is not None
709
+ else config.kv_channels
710
+ )
711
+ self.rotary_emb = RotaryEmbedding(dim, base=config.rotary_emb_base)
712
+
713
+ self.use_flash_attn = config.use_flash_attn
714
+ self.is_fp32 = not (config.bf16 or config.fp16)
715
+
716
+ self.h = nn.ModuleList(
717
+ [
718
+ QWenBlock(
719
+ config
720
+ )
721
+ for i in range(config.num_hidden_layers)
722
+ ]
723
+ )
724
+ self.ln_f = RMSNorm(
725
+ self.embed_dim,
726
+ eps=config.layer_norm_epsilon,
727
+ )
728
+
729
+ self.post_init()
730
+
731
+ def get_input_embeddings(self):
732
+ return self.wte
733
+
734
+ def set_input_embeddings(self, new_embeddings):
735
+ self.wte = new_embeddings
736
+
737
+ def get_ntk_alpha(self, true_seq_len):
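+ # Dynamic NTK: ntk_alpha = 2**ceil(log2(true_seq_len / seq_length) + 1) - 1,
+ # clamped to at least 1; RotaryEmbedding uses it to enlarge the RoPE base
+ # when the context exceeds the training length.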
738
+ context_value = math.log(true_seq_len / self.seq_length, 2) + 1
739
+ ntk_alpha = 2 ** math.ceil(context_value) - 1
740
+ ntk_alpha = max(ntk_alpha, 1)
741
+ return ntk_alpha
742
+
743
+ def forward(
744
+ self,
745
+ input_ids: Optional[torch.LongTensor] = None,
746
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
747
+ attention_mask: Optional[torch.FloatTensor] = None,
748
+ token_type_ids: Optional[torch.LongTensor] = None,
749
+ position_ids: Optional[torch.LongTensor] = None,
750
+ head_mask: Optional[torch.FloatTensor] = None,
751
+ inputs_embeds: Optional[torch.FloatTensor] = None,
752
+ encoder_hidden_states: Optional[torch.Tensor] = None,
753
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
754
+ use_cache: Optional[bool] = None,
755
+ output_attentions: Optional[bool] = None,
756
+ output_hidden_states: Optional[bool] = None,
757
+ return_dict: Optional[bool] = None,
758
+ ):
759
+ output_attentions = (
760
+ output_attentions
761
+ if output_attentions is not None
762
+ else self.config.output_attentions
763
+ )
764
+ output_hidden_states = (
765
+ output_hidden_states
766
+ if output_hidden_states is not None
767
+ else self.config.output_hidden_states
768
+ )
769
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
770
+ return_dict = (
771
+ return_dict if return_dict is not None else self.config.use_return_dict
772
+ )
773
+
774
+ if input_ids is not None and inputs_embeds is not None:
775
+ raise ValueError(
776
+ "You cannot specify both input_ids and inputs_embeds at the same time"
777
+ )
778
+ elif input_ids is not None:
779
+ input_shape = input_ids.size()
780
+ input_ids = input_ids.view(-1, input_shape[-1])
781
+ batch_size = input_ids.shape[0]
782
+ elif inputs_embeds is not None:
783
+ input_shape = inputs_embeds.size()[:-1]
784
+ batch_size = inputs_embeds.shape[0]
785
+ else:
786
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
787
+
788
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
789
+
790
+ if token_type_ids is not None:
791
+ token_type_ids = token_type_ids.view(-1, input_shape[-1])
792
+ if position_ids is not None:
793
+ position_ids = position_ids.view(-1, input_shape[-1])
794
+
795
+ if past_key_values is None:
796
+ past_length = 0
797
+ past_key_values = tuple([None] * len(self.h))
798
+ else:
799
+ if self.use_cache_quantization:
800
+ past_length = past_key_values[0][0][0].size(2)
801
+ else:
802
+ past_length = past_key_values[0][0].size(-2)
803
+ if position_ids is None:
804
+ position_ids = torch.arange(
805
+ past_length,
806
+ input_shape[-1] + past_length,
807
+ dtype=torch.long,
808
+ device=device,
809
+ )
810
+ position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
811
+
812
+ if attention_mask is not None:
813
+ if batch_size <= 0:
814
+ raise ValueError("batch_size has to be defined and > 0")
815
+ attention_mask = attention_mask.view(batch_size, -1)
816
+ attention_mask = attention_mask[:, None, None, :]
817
+ attention_mask = attention_mask.to(dtype=self.dtype)
818
+ attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min
819
+
820
+ encoder_attention_mask = None
821
+ head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
822
+
823
+ if inputs_embeds is None:
824
+ inputs_embeds = self.wte(input_ids)
825
+ hidden_states = inputs_embeds
826
+
827
+ kv_seq_len = hidden_states.size()[1]
828
+ if past_key_values[0] is not None:
829
+ # past key values[0][0] shape: bs * seq_len * head_num * dim
830
+ if self.use_cache_quantization:
831
+ kv_seq_len += past_key_values[0][0][0].shape[2]
832
+ else:
833
+ kv_seq_len += past_key_values[0][0].shape[1]
834
+
835
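+ # Choose the ntk_alpha value(s) for RoPE: 1.0 during training or when dynamic NTK is
+ # off; reuse the cached values during incremental decoding; otherwise recompute from
+ # the (unpadded) per-sample sequence lengths when processing a new long prompt.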
+ if self.training or not self.use_dynamic_ntk:
836
+ ntk_alpha_list = [1.0]
837
+ elif kv_seq_len != hidden_states.size()[1]:
838
+ ntk_alpha_list = self.rotary_emb._ntk_alpha_cached_list
839
+ else:
840
+ ntk_alpha_list = []
841
+ if attention_mask is not None and kv_seq_len > self.seq_length:
842
+ true_seq_lens = attention_mask.squeeze(1).squeeze(1).eq(0).sum(dim=-1, dtype=torch.int32)
843
+ for i in range(hidden_states.size()[0]):
844
+ true_seq_len = true_seq_lens[i].item()
845
+ ntk_alpha = self.get_ntk_alpha(true_seq_len)
846
+ ntk_alpha_list.append(ntk_alpha)
847
+ else:
848
+ ntk_alpha = self.get_ntk_alpha(kv_seq_len)
849
+ ntk_alpha_list.append(ntk_alpha)
850
+ self.rotary_emb._ntk_alpha_cached_list = ntk_alpha_list
851
+ rotary_pos_emb_list = [
852
+ self.rotary_emb(kv_seq_len, ntk_alpha=ntk_alpha) for ntk_alpha in ntk_alpha_list
853
+ ]
854
+
855
+ hidden_states = self.drop(hidden_states)
856
+ output_shape = input_shape + (hidden_states.size(-1),)
857
+
858
+ if self.gradient_checkpointing and self.training:
859
+ if use_cache:
860
+ logger.warning_once(
861
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
862
+ )
863
+ use_cache = False
864
+
865
+ presents = () if use_cache else None
866
+ all_self_attentions = () if output_attentions else None
867
+ all_hidden_states = () if output_hidden_states else None
868
+ for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)):
869
+
870
+ if output_hidden_states:
871
+ all_hidden_states = all_hidden_states + (hidden_states,)
872
+
873
+ if self.gradient_checkpointing and self.training:
874
+
875
+ def create_custom_forward(module):
876
+ def custom_forward(*inputs):
877
+ # None for past_key_value
878
+ return module(*inputs, use_cache, output_attentions)
879
+
880
+ return custom_forward
881
+
882
+ outputs = torch.utils.checkpoint.checkpoint(
883
+ create_custom_forward(block),
884
+ hidden_states,
885
+ rotary_pos_emb_list,
886
+ None,
887
+ attention_mask,
888
+ head_mask[i],
889
+ encoder_hidden_states,
890
+ encoder_attention_mask,
891
+ )
892
+ else:
893
+ outputs = block(
894
+ hidden_states,
895
+ layer_past=layer_past,
896
+ rotary_pos_emb_list=rotary_pos_emb_list,
897
+ attention_mask=attention_mask,
898
+ head_mask=head_mask[i],
899
+ encoder_hidden_states=encoder_hidden_states,
900
+ encoder_attention_mask=encoder_attention_mask,
901
+ use_cache=use_cache,
902
+ output_attentions=output_attentions,
903
+ )
904
+
905
+ hidden_states = outputs[0]
906
+ if use_cache is True:
907
+ presents = presents + (outputs[1],)
908
+
909
+ if output_attentions:
910
+ all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],)
911
+
912
+ hidden_states = self.ln_f(hidden_states)
913
+ hidden_states = hidden_states.view(output_shape)
914
+ # Add last hidden state
915
+ if output_hidden_states:
916
+ all_hidden_states = all_hidden_states + (hidden_states,)
917
+
918
+ if not return_dict:
919
+ return tuple(
920
+ v for v in [hidden_states, presents, all_hidden_states] if v is not None
921
+ )
922
+
923
+ return BaseModelOutputWithPast(
924
+ last_hidden_state=hidden_states,
925
+ past_key_values=presents,
926
+ hidden_states=all_hidden_states,
927
+ attentions=all_self_attentions,
928
+ )
929
+
930
+
931
+ class QWenLMHeadModel(QWenPreTrainedModel):
932
+ _keys_to_ignore_on_load_missing = [r"h\.\d+\.attn\.rotary_emb\.inv_freq"]
933
+ _keys_to_ignore_on_load_unexpected = [r"h\.\d+\.attn\.masked_bias"]
934
+
935
+ def __init__(self, config):
936
+ super().__init__(config)
937
+ assert (
938
+ config.bf16 + config.fp16 + config.fp32 <= 1
939
+ ), "Only one of \"bf16\", \"fp16\", \"fp32\" can be true"
940
+
941
+ autoset_precision = config.bf16 + config.fp16 + config.fp32 == 0
942
+
943
+ if autoset_precision:
944
+ if SUPPORT_BF16:
945
+ logger.warn(
946
+ "The model is automatically converting to bf16 for faster inference. "
947
+ "If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to \"AutoModelForCausalLM.from_pretrained\"."
948
+ )
949
+ config.bf16 = True
950
+ elif SUPPORT_FP16:
951
+ logger.warn(
952
+ "The model is automatically converting to fp16 for faster inference. "
953
+ "If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to \"AutoModelForCausalLM.from_pretrained\"."
954
+ )
955
+ config.fp16 = True
956
+ else:
957
+ config.fp32 = True
958
+
959
+ if config.bf16 and SUPPORT_CUDA and not SUPPORT_BF16:
960
+ logger.warn("Your device does NOT seem to support bf16, you can switch to fp16 or fp32 by by passing fp16/fp32=True in \"AutoModelForCausalLM.from_pretrained\".")
961
+ if config.fp16 and SUPPORT_CUDA and not SUPPORT_FP16:
962
+ logger.warn("Your device does NOT support faster inference with fp16, please switch to fp32 which is likely to be faster")
963
+ if config.fp32:
964
+ if SUPPORT_BF16:
965
+ logger.warn("Your device support faster inference by passing bf16=True in \"AutoModelForCausalLM.from_pretrained\".")
966
+ elif SUPPORT_FP16:
967
+ logger.warn("Your device support faster inference by passing fp16=True in \"AutoModelForCausalLM.from_pretrained\".")
968
+
969
+ if config.use_flash_attn == "auto":
970
+ if config.bf16 or config.fp16:
971
+ logger.warn("Try importing flash-attention for faster inference...")
972
+ config.use_flash_attn = True
973
+ else:
974
+ config.use_flash_attn = False
975
+ if config.use_flash_attn and config.fp32:
976
+ logger.warn("Flash attention will be disabled because it does NOT support fp32.")
977
+
978
+ if config.use_flash_attn:
979
+ _import_flash_attn()
980
+
981
+ self.transformer = QWenModel(config)
982
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
983
+
984
+ if config.bf16:
985
+ self.transformer.bfloat16()
986
+ self.lm_head.bfloat16()
987
+ if config.fp16:
988
+ self.transformer.half()
989
+ self.lm_head.half()
990
+ self.post_init()
991
+
992
+ def get_output_embeddings(self):
993
+ return self.lm_head
994
+
995
+ def set_output_embeddings(self, new_embeddings):
996
+ self.lm_head = new_embeddings
997
+
998
+ def prepare_inputs_for_generation(
999
+ self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs
1000
+ ):
1001
+ if past_key_values:
1002
+ input_ids = input_ids[:, -1].unsqueeze(-1)
1003
+
1004
+ if input_ids.size(0) == 1:
1005
+ attention_mask = None
1006
+ else:
1007
+ attention_mask = kwargs.get("attention_mask", None)
1008
+
1009
+ if inputs_embeds is not None and past_key_values is None:
1010
+ model_inputs = {"inputs_embeds": inputs_embeds}
1011
+ else:
1012
+ model_inputs = {"input_ids": input_ids}
1013
+
1014
+ model_inputs.update(
1015
+ {
1016
+ "past_key_values": past_key_values,
1017
+ "use_cache": kwargs.get("use_cache"),
1018
+ "attention_mask": attention_mask,
1019
+ }
1020
+ )
1021
+ return model_inputs
1022
+
1023
+ def forward(
1024
+ self,
1025
+ input_ids: Optional[torch.LongTensor] = None,
1026
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1027
+ attention_mask: Optional[torch.FloatTensor] = None,
1028
+ token_type_ids: Optional[torch.LongTensor] = None,
1029
+ position_ids: Optional[torch.LongTensor] = None,
1030
+ head_mask: Optional[torch.FloatTensor] = None,
1031
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1032
+ encoder_hidden_states: Optional[torch.Tensor] = None,
1033
+ encoder_attention_mask: Optional[torch.FloatTensor] = None,
1034
+ labels: Optional[torch.LongTensor] = None,
1035
+ use_cache: Optional[bool] = None,
1036
+ output_attentions: Optional[bool] = None,
1037
+ output_hidden_states: Optional[bool] = None,
1038
+ return_dict: Optional[bool] = None,
1039
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
1040
+
1041
+ return_dict = (
1042
+ return_dict if return_dict is not None else self.config.use_return_dict
1043
+ )
1044
+
1045
+ transformer_outputs = self.transformer(
1046
+ input_ids,
1047
+ past_key_values=past_key_values,
1048
+ attention_mask=attention_mask,
1049
+ token_type_ids=token_type_ids,
1050
+ position_ids=position_ids,
1051
+ head_mask=head_mask,
1052
+ inputs_embeds=inputs_embeds,
1053
+ encoder_hidden_states=encoder_hidden_states,
1054
+ encoder_attention_mask=encoder_attention_mask,
1055
+ use_cache=use_cache,
1056
+ output_attentions=output_attentions,
1057
+ output_hidden_states=output_hidden_states,
1058
+ return_dict=return_dict,
1059
+ )
1060
+ hidden_states = transformer_outputs[0]
1061
+
1062
+ lm_logits = self.lm_head(hidden_states)
1063
+
1064
+ loss = None
1065
+ if labels is not None:
1066
+ labels = labels.to(lm_logits.device)
1067
+ shift_logits = lm_logits[..., :-1, :].contiguous()
1068
+ shift_labels = labels[..., 1:].contiguous()
1069
+ loss_fct = CrossEntropyLoss()
1070
+ loss = loss_fct(
1071
+ shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
1072
+ )
1073
+
1074
+ if not return_dict:
1075
+ output = (lm_logits,) + transformer_outputs[1:]
1076
+ return ((loss,) + output) if loss is not None else output
1077
+
1078
+ return CausalLMOutputWithPast(
1079
+ loss=loss,
1080
+ logits=lm_logits,
1081
+ past_key_values=transformer_outputs.past_key_values,
1082
+ hidden_states=transformer_outputs.hidden_states,
1083
+ attentions=transformer_outputs.attentions,
1084
+ )
1085
+
1086
+ @staticmethod
1087
+ def _reorder_cache(
1088
+ past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
1089
+ ) -> Tuple[Tuple[torch.Tensor]]:
1090
+
1091
+ return tuple(
1092
+ tuple(
1093
+ past_state.index_select(0, beam_idx.to(past_state.device))
1094
+ for past_state in layer_past
1095
+ )
1096
+ for layer_past in past_key_values
1097
+ )
1098
+
1099
+ def chat(
1100
+ self,
1101
+ tokenizer: PreTrainedTokenizer,
1102
+ query: str,
1103
+ history: Optional[HistoryType],
1104
+ system: str = "You are a helpful assistant.",
1105
+ stream: Optional[bool] = _SENTINEL,
1106
+ stop_words_ids: Optional[List[List[int]]] = None,
1107
+ generation_config: Optional[GenerationConfig] = None,
1108
+ **kwargs,
1109
+ ) -> Tuple[str, HistoryType]:
1110
+ generation_config = generation_config if generation_config is not None else self.generation_config
1111
+
1112
+ assert stream is _SENTINEL, _ERROR_STREAM_IN_CHAT
1113
+ assert generation_config.chat_format == 'chatml', _ERROR_BAD_CHAT_FORMAT
1114
+ if history is None:
1115
+ history = []
1116
+ else:
1117
+ # make a copy of the user's input so that it is left untouched
1118
+ history = copy.deepcopy(history)
1119
+
1120
+ if stop_words_ids is None:
1121
+ stop_words_ids = []
1122
+
1123
+ max_window_size = kwargs.get('max_window_size', None)
1124
+ if max_window_size is None:
1125
+ max_window_size = generation_config.max_window_size
1126
+ raw_text, context_tokens = make_context(
1127
+ tokenizer,
1128
+ query,
1129
+ history=history,
1130
+ system=system,
1131
+ max_window_size=max_window_size,
1132
+ chat_format=generation_config.chat_format,
1133
+ )
1134
+
1135
+ stop_words_ids.extend(get_stop_words_ids(
1136
+ generation_config.chat_format, tokenizer
1137
+ ))
1138
+ input_ids = torch.tensor([context_tokens]).to(self.device)
1139
+ outputs = self.generate(
1140
+ input_ids,
1141
+ stop_words_ids=stop_words_ids,
1142
+ return_dict_in_generate=False,
1143
+ generation_config=generation_config,
1144
+ **kwargs,
1145
+ )
1146
+
1147
+ response = decode_tokens(
1148
+ outputs[0],
1149
+ tokenizer,
1150
+ raw_text_len=len(raw_text),
1151
+ context_length=len(context_tokens),
1152
+ chat_format=generation_config.chat_format,
1153
+ verbose=False,
1154
+ errors='replace'
1155
+ )
1156
+
1157
+ # as history is a copy of the user inputs,
1158
+ # we can always return the new turn to the user.
1159
+ # separating input history and output history also enables the user
1160
+ # to implement more complex history management
1161
+ history.append((query, response))
1162
+
1163
+ return response, history
1164
+
1165
+ def chat_stream(
1166
+ self,
1167
+ tokenizer: PreTrainedTokenizer,
1168
+ query: str,
1169
+ history: Optional[HistoryType],
1170
+ system: str = "You are a helpful assistant.",
1171
+ stop_words_ids: Optional[List[List[int]]] = None,
1172
+ logits_processor: Optional[LogitsProcessorList] = None,
1173
+ generation_config: Optional[GenerationConfig] = None,
1174
+ **kwargs,
1175
+ ) -> Generator[str, Any, None]:
1176
+ generation_config = generation_config if generation_config is not None else self.generation_config
1177
+ assert generation_config.chat_format == 'chatml', _ERROR_BAD_CHAT_FORMAT
1178
+ if history is None:
1179
+ history = []
1180
+ if stop_words_ids is None:
1181
+ stop_words_ids = []
1182
+
1183
+ max_window_size = kwargs.get('max_window_size', None)
1184
+ if max_window_size is None:
1185
+ max_window_size = generation_config.max_window_size
1186
+ raw_text, context_tokens = make_context(
1187
+ tokenizer,
1188
+ query,
1189
+ history=history,
1190
+ system=system,
1191
+ max_window_size=max_window_size,
1192
+ chat_format=generation_config.chat_format,
1193
+ )
1194
+
1195
+ stop_words_ids.extend(get_stop_words_ids(
1196
+ generation_config.chat_format, tokenizer
1197
+ ))
1198
+ if stop_words_ids is not None:
1199
+ stop_words_logits_processor = StopWordsLogitsProcessor(
1200
+ stop_words_ids=stop_words_ids,
1201
+ eos_token_id=generation_config.eos_token_id,
1202
+ )
1203
+ if logits_processor is None:
1204
+ logits_processor = LogitsProcessorList([stop_words_logits_processor])
1205
+ else:
1206
+ logits_processor.append(stop_words_logits_processor)
1207
+ input_ids = torch.tensor([context_tokens]).to(self.device)
1208
+
1209
+ from transformers_stream_generator.main import NewGenerationMixin, StreamGenerationConfig
1210
+ self.__class__.generate_stream = NewGenerationMixin.generate
1211
+ self.__class__.sample_stream = NewGenerationMixin.sample_stream
1212
+ stream_config = StreamGenerationConfig(**generation_config.to_dict(), do_stream=True)
1213
+
1214
+ def stream_generator():
1215
+ outputs = []
1216
+ for token in self.generate_stream(
1217
+ input_ids,
1218
+ return_dict_in_generate=False,
1219
+ generation_config=stream_config,
1220
+ logits_processor=logits_processor,
1221
+ seed=-1,
1222
+ **kwargs):
1223
+ outputs.append(token.item())
1224
+ yield tokenizer.decode(outputs, skip_special_tokens=True, errors='ignore')
1225
+
1226
+ return stream_generator()
1227
+
1228
+ def generate(
1229
+ self,
1230
+ inputs: Optional[torch.Tensor] = None,
1231
+ generation_config: Optional[GenerationConfig] = None,
1232
+ logits_processor: Optional[LogitsProcessorList] = None,
1233
+ stopping_criteria: Optional[StoppingCriteriaList] = None,
1234
+ prefix_allowed_tokens_fn: Optional[
1235
+ Callable[[int, torch.Tensor], List[int]]
1236
+ ] = None,
1237
+ synced_gpus: Optional[bool] = None,
1238
+ assistant_model: Optional["PreTrainedModel"] = None,
1239
+ streamer: Optional["BaseStreamer"] = None,
1240
+ **kwargs,
1241
+ ) -> Union[GenerateOutput, torch.LongTensor]:
1242
+ generation_config = generation_config if generation_config is not None else self.generation_config
1243
+
1244
+ # Process stop_words_ids.
1245
+ stop_words_ids = kwargs.pop("stop_words_ids", None)
1246
+ if stop_words_ids is None and generation_config is not None:
1247
+ stop_words_ids = getattr(generation_config, "stop_words_ids", None)
1248
+ if stop_words_ids is None:
1249
+ stop_words_ids = getattr(generation_config, "stop_words_ids", None)
1250
+
1251
+ if stop_words_ids is not None:
1252
+ stop_words_logits_processor = StopWordsLogitsProcessor(
1253
+ stop_words_ids=stop_words_ids,
1254
+ eos_token_id=generation_config.eos_token_id,
1255
+ )
1256
+ if logits_processor is None:
1257
+ logits_processor = LogitsProcessorList([stop_words_logits_processor])
1258
+ else:
1259
+ logits_processor.append(stop_words_logits_processor)
1260
+
1261
+ return super().generate(
1262
+ inputs,
1263
+ generation_config=generation_config,
1264
+ logits_processor=logits_processor,
1265
+ stopping_criteria=stopping_criteria,
1266
+ prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
1267
+ synced_gpus=synced_gpus,
1268
+ assistant_model=assistant_model,
1269
+ streamer=streamer,
1270
+ **kwargs,
1271
+ )
1272
+
1273
+
1274
+ class RotaryEmbedding(torch.nn.Module):
1275
+ def __init__(self, dim, base=10000):
1276
+ super().__init__()
1277
+ self.dim = dim
1278
+ self.base = base
1279
+ inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
1280
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
1281
+ if importlib.util.find_spec("einops") is None:
1282
+ raise RuntimeError("einops is required for Rotary Embedding")
1283
+
1284
+ self._rotary_pos_emb_cache = None
1285
+ self._seq_len_cached = 0
1286
+ self._ntk_alpha_cached = 1.0
1287
+ self._ntk_alpha_cached_list = [1.0]
1288
+
1289
+ def update_rotary_pos_emb_cache(self, seqlen, ntk_alpha=1.0):
1290
+ if seqlen > self._seq_len_cached or ntk_alpha != self._ntk_alpha_cached:
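+ # NTK-aware RoPE: the base is enlarged by ntk_alpha ** (dim / (dim - 2)), which
+ # lowers the rotary frequencies for long-context extrapolation; the cos/sin cache
+ # is then rebuilt with 2x headroom over the requested sequence length.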
1291
+ base = self.base * ntk_alpha ** (self.dim / (self.dim - 2))
1292
+ self.inv_freq = 1.0 / (
1293
+ base
1294
+ ** (
1295
+ torch.arange(0, self.dim, 2, device=self.inv_freq.device).float()
1296
+ / self.dim
1297
+ )
1298
+ )
1299
+ self._seq_len_cached = max(2 * seqlen, 16)
1300
+ self._ntk_alpha_cached = ntk_alpha
1301
+ seq = torch.arange(self._seq_len_cached, device=self.inv_freq.device)
1302
+ freqs = torch.outer(seq.type_as(self.inv_freq), self.inv_freq)
1303
+
1304
+ emb = torch.cat((freqs, freqs), dim=-1)
1305
+ from einops import rearrange
1306
+
1307
+ emb = rearrange(emb, "n d -> 1 n 1 d")
1308
+
1309
+ cos, sin = emb.cos(), emb.sin()
1310
+ self._rotary_pos_emb_cache = [cos, sin]
1311
+
1312
+ def forward(self, max_seq_len, ntk_alpha=1.0):
1313
+ self.update_rotary_pos_emb_cache(max_seq_len, ntk_alpha)
1314
+ cos, sin = self._rotary_pos_emb_cache
1315
+ return [cos[:, :max_seq_len], sin[:, :max_seq_len]]
1316
+
1317
+
1318
+ def _rotate_half(x):
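+ # Split the last dimension into two halves (x1, x2) and return (-x2, x1),
+ # the 90-degree rotation used by the rotary position embedding.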
1319
+ from einops import rearrange
1320
+
1321
+ x = rearrange(x, "... (j d) -> ... j d", j=2)
1322
+ x1, x2 = x.unbind(dim=-2)
1323
+ return torch.cat((-x2, x1), dim=-1)
1324
+
1325
+
1326
+ def apply_rotary_pos_emb(t, freqs):
1327
+ """ Apply rotary embedding to the first rotary_dim of the iput
1328
+
1329
+ Arguments:
1330
+ t (tensor(batch_size, seq_len, n_head, head_dim)):
1331
+ the input embedding/hidden states
1332
+ freqs (list[tensor(1, seq_len, 1, rotary_dim), tensor(1, seq_len, 1, rotary_dim)]):
1333
+ the cached cos/sin position embeddings
1334
+ """
1335
+ rot_dim = freqs[0].shape[-1]
1336
+ cos, sin = freqs
1337
+ t_float = t.float()
1338
+ if apply_rotary_emb_func is not None and t.is_cuda:
1339
+ # apply_rotary_emb in flash_attn requires cos/sin to be of
1340
+ # shape (seqlen, rotary_dim / 2) and apply rotary embedding
1341
+ # to the first rotary_dim of the input
1342
+ cos = cos.squeeze(0).squeeze(1)[:, : rot_dim // 2]
1343
+ sin = sin.squeeze(0).squeeze(1)[:, : rot_dim // 2]
1344
+ return apply_rotary_emb_func(t_float, cos, sin).type_as(t)
1345
+ else:
1346
+ t_rot, t_pass = t_float[..., :rot_dim], t_float[..., rot_dim:]
1347
+ t_rot = (t_rot * cos) + (_rotate_half(t_rot) * sin)
1348
+ return torch.cat((t_rot, t_pass), dim=-1).type_as(t)
1349
+
1350
+
1351
+ class RMSNorm(torch.nn.Module):
1352
+ def __init__(self, dim: int, eps: float = 1e-6):
1353
+ super().__init__()
1354
+ self.eps = eps
1355
+ self.weight = nn.Parameter(torch.ones(dim))
1356
+
1357
+ def _norm(self, x):
1358
+ return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
1359
+
1360
+ def forward(self, x):
1361
+ if rms_norm is not None and x.is_cuda:
1362
+ return rms_norm(x, self.weight, self.eps)
1363
+ else:
1364
+ output = self._norm(x.float()).type_as(x)
1365
+ return output * self.weight
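As an aside for readers skimming the diff: the two helpers above cache cos/sin tables shaped `1 n 1 d` and rotate the first `rotary_dim` channels of the query/key states. A minimal, self-contained sketch (not part of the uploaded files; sizes are made up) that mirrors the non-flash-attention fallback path of `apply_rotary_pos_emb` in plain PyTorch:

```python
import torch

def rotate_half(x):
    # split the last dim into two halves and rotate: (x1, x2) -> (-x2, x1)
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

dim, seq_len = 64, 8
inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
freqs = torch.outer(torch.arange(seq_len).float(), inv_freq)
emb = torch.cat((freqs, freqs), dim=-1).view(1, seq_len, 1, dim)  # "1 n 1 d"
cos, sin = emb.cos(), emb.sin()

q = torch.randn(2, seq_len, 4, dim)      # (batch, seq_len, n_head, head_dim)
q_rot = q * cos + rotate_half(q) * sin   # same math as the fallback branch above
print(q_rot.shape)                        # torch.Size([2, 8, 4, 64])
```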
qwen.tiktoken ADDED
The diff for this file is too large to render. See raw diff
 
qwen_generation_utils.py ADDED
@@ -0,0 +1,416 @@
+# Copyright (c) Alibaba Cloud.
+#
+# This source code is licensed under the license found in the
+# LICENSE file in the root directory of this source tree.
+
+"""Generation support."""
+
+from typing import Tuple, List, Union, Iterable
+
+import numpy as np
+import torch
+import torch.nn.functional as F
+from transformers import PreTrainedTokenizer
+from transformers import logging
+from transformers.generation import LogitsProcessor
+
+logger = logging.get_logger(__name__)
+
+# Types.
+HistoryType = List[Tuple[str, str]]
+TokensType = List[int]
+BatchTokensType = List[List[int]]
+
+
+def pad_batch(batch: BatchTokensType, pad_id: int, seq_length: int) -> BatchTokensType:
+    for tokens in batch:
+        context_length = len(tokens)
+        if context_length < seq_length:
+            tokens.extend([pad_id] * (seq_length - context_length))
+    return batch
+
+
+def get_ltor_masks_and_position_ids(
+    data,
+    eod_token,
+    reset_position_ids,
+    reset_attention_mask,
+    eod_mask_loss,
+):
+    """Build masks and position id for left to right model."""
+
+    # Extract batch size and sequence length.
+    micro_batch_size, seq_length = data.size()
+
+    # Attention mask (lower triangular).
+    if reset_attention_mask:
+        att_mask_batch = micro_batch_size
+    else:
+        att_mask_batch = 1
+    attention_mask = torch.tril(
+        torch.ones((att_mask_batch, seq_length, seq_length), device=data.device)
+    ).view(att_mask_batch, 1, seq_length, seq_length)
+
+    # Loss mask.
+    loss_mask = torch.ones(data.size(), dtype=torch.float, device=data.device)
+    if eod_mask_loss:
+        loss_mask[data == eod_token] = 0.0
+
+    # Position ids.
+    position_ids = torch.arange(seq_length, dtype=torch.long, device=data.device)
+    position_ids = position_ids.unsqueeze(0).expand_as(data)
+    # We need to clone as the ids will be modified based on batch index.
+    if reset_position_ids:
+        position_ids = position_ids.clone()
+
+    if reset_position_ids or reset_attention_mask:
+        # Loop through the batches:
+        for b in range(micro_batch_size):
+
+            # Find indices where EOD token is.
+            eod_index = position_ids[b, data[b] == eod_token]
+            # Detach indices from positions if going to modify positions.
+            if reset_position_ids:
+                eod_index = eod_index.clone()
+
+            # Loop through EOD indices:
+            prev_index = 0
+            for j in range(eod_index.size()[0]):
+                i = eod_index[j]
+                # Mask attention loss.
+                if reset_attention_mask:
+                    attention_mask[b, 0, (i + 1) :, : (i + 1)] = 0
+                # Reset positions.
+                if reset_position_ids:
+                    position_ids[b, (i + 1) :] -= i + 1 - prev_index
+                    prev_index = i + 1
+
+    # Convert attention mask to binary:
+    attention_mask = attention_mask < 0.5
+
+    return attention_mask, loss_mask, position_ids
+
+
+def get_batch(context_tokens: torch.LongTensor, eod_id: int):
+    """Generate batch from context tokens."""
+    # Move to GPU.
+    tokens = context_tokens.contiguous().to(context_tokens.device)
+    # Get the attention mask and position ids.
+    attention_mask, _, position_ids = get_ltor_masks_and_position_ids(
+        tokens,
+        eod_id,
+        reset_position_ids=False,
+        reset_attention_mask=False,
+        eod_mask_loss=False,
+    )
+    return tokens, attention_mask, position_ids
+
+
+def get_stop_words_ids(chat_format, tokenizer):
+    if chat_format == "raw":
+        stop_words_ids = [tokenizer.encode("Human:"), [tokenizer.eod_id]]
+    elif chat_format == "chatml":
+        stop_words_ids = [[tokenizer.im_end_id], [tokenizer.im_start_id]]
+    else:
+        raise NotImplementedError(f"Unknown chat format {chat_format!r}")
+    return stop_words_ids
+
+
+def make_context(
+    tokenizer: PreTrainedTokenizer,
+    query: str,
+    history: List[Tuple[str, str]] = None,
+    system: str = "",
+    max_window_size: int = 6144,
+    chat_format: str = "chatml",
+):
+    if history is None:
+        history = []
+
+    if chat_format == "chatml":
+        im_start, im_end = "<|im_start|>", "<|im_end|>"
+        im_start_tokens = [tokenizer.im_start_id]
+        im_end_tokens = [tokenizer.im_end_id]
+        nl_tokens = tokenizer.encode("\n")
+
+        def _tokenize_str(role, content):
+            return f"{role}\n{content}", tokenizer.encode(
+                role, allowed_special=set()
+            ) + nl_tokens + tokenizer.encode(content, allowed_special=set())
+
+        system_text, system_tokens_part = _tokenize_str("system", system)
+        system_tokens = im_start_tokens + system_tokens_part + im_end_tokens
+
+        raw_text = ""
+        context_tokens = []
+
+        for turn_query, turn_response in reversed(history):
+            query_text, query_tokens_part = _tokenize_str("user", turn_query)
+            query_tokens = im_start_tokens + query_tokens_part + im_end_tokens
+            response_text, response_tokens_part = _tokenize_str(
+                "assistant", turn_response
+            )
+            response_tokens = im_start_tokens + response_tokens_part + im_end_tokens
+
+            next_context_tokens = nl_tokens + query_tokens + nl_tokens + response_tokens
+            prev_chat = (
+                f"\n{im_start}{query_text}{im_end}\n{im_start}{response_text}{im_end}"
+            )
+
+            current_context_size = (
+                len(system_tokens) + len(next_context_tokens) + len(context_tokens)
+            )
+            if current_context_size < max_window_size:
+                context_tokens = next_context_tokens + context_tokens
+                raw_text = prev_chat + raw_text
+            else:
+                break
+
+        context_tokens = system_tokens + context_tokens
+        raw_text = f"{im_start}{system_text}{im_end}" + raw_text
+        context_tokens += (
+            nl_tokens
+            + im_start_tokens
+            + _tokenize_str("user", query)[1]
+            + im_end_tokens
+            + nl_tokens
+            + im_start_tokens
+            + tokenizer.encode("assistant")
+            + nl_tokens
+        )
+        raw_text += f"\n{im_start}user\n{query}{im_end}\n{im_start}assistant\n"
+
+    elif chat_format == "raw":
+        raw_text = query
+        context_tokens = tokenizer.encode(raw_text)
+    else:
+        raise NotImplementedError(f"Unknown chat format {chat_format!r}")
+
+    return raw_text, context_tokens
+
+
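For orientation (illustrative, not part of the diff): `make_context` assembles the ChatML prompt Qwen-7B-Chat expects — system block, as much history as fits in `max_window_size`, the new user turn, and an opening assistant header — and returns both the raw text and the token ids. A hedged sketch, assuming the repo id `Qwen/Qwen-7B-Chat` and a `transformers` version compatible with this remote code:

```python
from transformers import AutoTokenizer
from qwen_generation_utils import make_context, get_stop_words_ids

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)

raw_text, context_tokens = make_context(
    tokenizer,
    query="What is the capital of France?",
    history=[("Hello", "Hi, how can I help you?")],  # prior (query, response) turns
    system="You are a helpful assistant.",
    chat_format="chatml",
)
print(raw_text)  # <|im_start|>system ... <|im_start|>assistant\n
stop_words_ids = get_stop_words_ids("chatml", tokenizer)  # [[im_end_id], [im_start_id]]
```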
+def _decode_default(
+    tokens: List[int],
+    *,
+    stop_words: List[str],
+    eod_words: List[str],
+    tokenizer: PreTrainedTokenizer,
+    raw_text_len: int,
+    verbose: bool = False,
+    return_end_reason: bool = False,
+    errors: str = 'replace',
+):
+    trim_decode_tokens = tokenizer.decode(tokens, errors=errors)[raw_text_len:]
+    if verbose:
+        print("\nRaw Generate: ", trim_decode_tokens)
+
+    end_reason = f"Gen length {len(tokens)}"
+    for stop_word in stop_words:
+        trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
+    for eod_word in eod_words:
+        if eod_word in trim_decode_tokens:
+            end_reason = f"Gen {eod_word!r}"
+        trim_decode_tokens = trim_decode_tokens.split(eod_word)[0]
+    trim_decode_tokens = trim_decode_tokens.strip()
+    if verbose:
+        print("\nEnd Reason:", end_reason)
+        print("\nGenerate: ", trim_decode_tokens)
+
+    if return_end_reason:
+        return trim_decode_tokens, end_reason
+    else:
+        return trim_decode_tokens
+
+
+def _decode_chatml(
+    tokens: List[int],
+    *,
+    stop_words: List[str],
+    eod_token_ids: List[int],
+    tokenizer: PreTrainedTokenizer,
+    raw_text_len: int,
+    context_length: int,
+    verbose: bool = False,
+    return_end_reason: bool = False,
+    errors: str = 'replace'
+):
+    end_reason = f"Gen length {len(tokens)}"
+    eod_token_idx = context_length
+    for eod_token_idx in range(context_length, len(tokens)):
+        if tokens[eod_token_idx] in eod_token_ids:
+            end_reason = f"Gen {tokenizer.decode([tokens[eod_token_idx]])!r}"
+            break
+
+    trim_decode_tokens = tokenizer.decode(tokens[:eod_token_idx], errors=errors)[raw_text_len:]
+    if verbose:
+        print("\nRaw Generate w/o EOD:", tokenizer.decode(tokens, errors=errors)[raw_text_len:])
+        print("\nRaw Generate:", trim_decode_tokens)
+        print("\nEnd Reason:", end_reason)
+    for stop_word in stop_words:
+        trim_decode_tokens = trim_decode_tokens.replace(stop_word, "").strip()
+    trim_decode_tokens = trim_decode_tokens.strip()
+    if verbose:
+        print("\nGenerate:", trim_decode_tokens)
+
+    if return_end_reason:
+        return trim_decode_tokens, end_reason
+    else:
+        return trim_decode_tokens
+
+
+def decode_tokens(
+    tokens: Union[torch.LongTensor, TokensType],
+    tokenizer: PreTrainedTokenizer,
+    raw_text_len: int,
+    context_length: int,
+    chat_format: str,
+    verbose: bool = False,
+    return_end_reason: bool = False,
+    errors: str = "replace",
+) -> str:
+    if torch.is_tensor(tokens):
+        tokens = tokens.cpu().numpy().tolist()
+
+    if chat_format == "chatml":
+        return _decode_chatml(
+            tokens,
+            stop_words=[],
+            eod_token_ids=[tokenizer.im_start_id, tokenizer.im_end_id],
+            tokenizer=tokenizer,
+            raw_text_len=raw_text_len,
+            context_length=context_length,
+            verbose=verbose,
+            return_end_reason=return_end_reason,
+            errors=errors,
+        )
+    elif chat_format == "raw":
+        return _decode_default(
+            tokens,
+            stop_words=["<|endoftext|>"],
+            eod_words=["<|endoftext|>"],
+            tokenizer=tokenizer,
+            raw_text_len=raw_text_len,
+            verbose=verbose,
+            return_end_reason=return_end_reason,
+            errors=errors,
+        )
+    else:
+        raise NotImplementedError(f"Unknown chat format {chat_format!r}")
+
+
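And the matching decode step (again illustrative, not part of the diff): once `model.generate` has produced `outputs` for the `context_tokens` from the previous sketch, `decode_tokens` strips the prompt and the ChatML markers. `outputs`, `raw_text`, and `context_tokens` are assumed to carry over from that sketch.

```python
from qwen_generation_utils import decode_tokens

# outputs is assumed to be the tensor returned by model.generate(...)
response = decode_tokens(
    outputs[0],
    tokenizer,
    raw_text_len=len(raw_text),
    context_length=len(context_tokens),
    chat_format="chatml",
    verbose=False,
    errors="replace",
)
print(response)
```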
+class StopWordsLogitsProcessor(LogitsProcessor):
+    """
+    :class:`transformers.LogitsProcessor` that enforces that when specified sequences appear, generation stops.
+
+    Args:
+        stop_words_ids (:obj:`List[List[int]]`):
+            List of list of token ids of stop ids. In order to get the tokens of the words
+            that should not appear in the generated text, use :obj:`tokenizer(bad_word,
+            add_prefix_space=True).input_ids`.
+        eos_token_id (:obj:`int`):
+            The id of the `end-of-sequence` token.
+    """
+
+    def __init__(self, stop_words_ids: Iterable[Iterable[int]], eos_token_id: int):
+
+        if not isinstance(stop_words_ids, List) or len(stop_words_ids) == 0:
+            raise ValueError(
+                f"`stop_words_ids` has to be a non-empty list, but is {stop_words_ids}."
+            )
+        if any(not isinstance(bad_word_ids, list) for bad_word_ids in stop_words_ids):
+            raise ValueError(
+                f"`stop_words_ids` has to be a list of lists, but is {stop_words_ids}."
+            )
+        if any(
+            any(
+                (not isinstance(token_id, (int, np.integer)) or token_id < 0)
+                for token_id in stop_word_ids
+            )
+            for stop_word_ids in stop_words_ids
+        ):
+            raise ValueError(
+                f"Each list in `stop_words_ids` has to be a list of positive integers, but is {stop_words_ids}."
+            )
+
+        self.stop_words_ids = list(
+            filter(
+                lambda bad_token_seq: bad_token_seq != [eos_token_id], stop_words_ids
+            )
+        )
+        self.eos_token_id = eos_token_id
+        for stop_token_seq in self.stop_words_ids:
+            assert (
+                len(stop_token_seq) > 0
+            ), "Stop words token sequences {} cannot have an empty list".format(
+                stop_words_ids
+            )
+
+    def __call__(
+        self, input_ids: torch.LongTensor, scores: torch.FloatTensor
+    ) -> torch.FloatTensor:
+        stopped_samples = self._calc_stopped_samples(input_ids)
+        for i, should_stop in enumerate(stopped_samples):
+            if should_stop:
+                scores[i, self.eos_token_id] = float(2**15)
+        return scores
+
+    def _tokens_match(self, prev_tokens: torch.LongTensor, tokens: List[int]) -> bool:
+        if len(tokens) == 0:
+            # if bad word tokens is just one token always ban it
+            return True
+        elif len(tokens) > len(prev_tokens):
+            # if bad word tokens are longer than prev input_ids they can't be equal
+            return False
+        elif prev_tokens[-len(tokens) :].tolist() == tokens:
+            # if tokens match
+            return True
+        else:
+            return False
+
+    def _calc_stopped_samples(self, prev_input_ids: Iterable[int]) -> Iterable[int]:
+        stopped_samples = []
+        for prev_input_ids_slice in prev_input_ids:
+            match = False
+            for stop_token_seq in self.stop_words_ids:
+                if self._tokens_match(prev_input_ids_slice, stop_token_seq):
+                    # if tokens do not match continue
+                    match = True
+                    break
+            stopped_samples.append(match)
+
+        return stopped_samples
+
+
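`StopWordsLogitsProcessor` checks whether any stop sequence matches the tail of each sample and, if so, forces the end-of-sequence logit high so generation terminates. A minimal sketch (not part of the diff) of how it could be wired into `generate`; the `tokenizer` and `model` variables are assumed from the earlier sketches:

```python
from transformers.generation import LogitsProcessorList
from qwen_generation_utils import StopWordsLogitsProcessor, get_stop_words_ids

stop_words_ids = get_stop_words_ids("chatml", tokenizer)
processor = StopWordsLogitsProcessor(
    stop_words_ids=stop_words_ids,   # e.g. [[tokenizer.im_end_id], [tokenizer.im_start_id]]
    eos_token_id=tokenizer.eod_id,
)
# outputs = model.generate(
#     input_ids,
#     logits_processor=LogitsProcessorList([processor]),
# )
```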
+def top_k_logits(logits, top_k=0, top_p=0.0, filter_value=-float("Inf")):
+    """This function has been mostly taken from huggingface conversational
+    ai code at
+    https://medium.com/huggingface/how-to-build-a-state-of-the-art-
+    conversational-ai-with-transfer-learning-2d818ac26313"""
+
+    if top_k > 0:
+        # Remove all tokens with a probability less than the
+        # last token of the top-k
+        indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
+        logits[indices_to_remove] = filter_value
+
+    if top_p > 0.0:
+        # Convert to 1D
+        sorted_logits, sorted_indices = torch.sort(logits, descending=True, dim=-1)
+        cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
+
+        # Remove tokens with cumulative probability above the threshold
+        sorted_indices_to_remove = cumulative_probs > top_p
+        # Shift the indices to the right to keep also the first token
+        # above the threshold
+        sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
+        sorted_indices_to_remove[..., 0] = 0
+        for i in range(sorted_indices.size(0)):
+            indices_to_remove = sorted_indices[i][sorted_indices_to_remove[i]]
+            logits[i][indices_to_remove] = filter_value
+
+    return logits
+
+
+def switch(val1, val2, boolean):
+    boolean = boolean.type_as(val1)
+    return (1 - boolean) * val1 + boolean * val2
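A tiny numeric illustration (not from the diff) of the top-k branch of `top_k_logits`: everything below the k-th largest logit is pushed to `-inf`; the top-p branch does the same on the cumulative softmax.

```python
import torch
from qwen_generation_utils import top_k_logits

logits = torch.tensor([[2.0, 1.0, 0.5, -1.0]])
filtered = top_k_logits(logits.clone(), top_k=2)
print(filtered)  # tensor([[2., 1., -inf, -inf]])
```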
tokenization_qwen.py ADDED
@@ -0,0 +1,276 @@
+# Copyright (c) Alibaba Cloud.
+#
+# This source code is licensed under the license found in the
+# LICENSE file in the root directory of this source tree.
+
+"""Tokenization classes for QWen."""
+
+import base64
+import logging
+import os
+import unicodedata
+from typing import Collection, Dict, List, Set, Tuple, Union
+
+import tiktoken
+from transformers import PreTrainedTokenizer, AddedToken
+
+logger = logging.getLogger(__name__)
+
+
+VOCAB_FILES_NAMES = {"vocab_file": "qwen.tiktoken"}
+
+PAT_STR = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
+ENDOFTEXT = "<|endoftext|>"
+IMSTART = "<|im_start|>"
+IMEND = "<|im_end|>"
+# as the default behavior is changed to allow special tokens in
+# regular texts, the surface forms of special tokens need to be
+# as different as possible to minimize the impact
+EXTRAS = tuple((f"<|extra_{i}|>" for i in range(205)))
+# changed to use actual index to avoid misconfiguration with vocabulary expansion
+SPECIAL_START_ID = 151643
+SPECIAL_TOKENS = tuple(
+    enumerate(
+        (
+            (
+                ENDOFTEXT,
+                IMSTART,
+                IMEND,
+            )
+            + EXTRAS
+        ),
+        start=SPECIAL_START_ID,
+    )
+)
+SPECIAL_TOKENS_SET = set(t for i, t in SPECIAL_TOKENS)
+
+
+def _load_tiktoken_bpe(tiktoken_bpe_file: str) -> Dict[bytes, int]:
+    with open(tiktoken_bpe_file, "rb") as f:
+        contents = f.read()
+    return {
+        base64.b64decode(token): int(rank)
+        for token, rank in (line.split() for line in contents.splitlines() if line)
+    }
+
+
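For context (illustrative, not part of the upload): each line of `qwen.tiktoken` is a base64-encoded token followed by its integer rank, which is exactly what `_load_tiktoken_bpe` parses. A sketch with a made-up two-line vocabulary:

```python
import base64

# two hypothetical vocabulary lines: b"hello" -> rank 0, b" world" -> rank 1
contents = b"aGVsbG8= 0\nIHdvcmxk 1\n"
ranks = {
    base64.b64decode(token): int(rank)
    for token, rank in (line.split() for line in contents.splitlines() if line)
}
print(ranks)  # {b'hello': 0, b' world': 1}
```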
+class QWenTokenizer(PreTrainedTokenizer):
+    """QWen tokenizer."""
+
+    vocab_files_names = VOCAB_FILES_NAMES
+
+    def __init__(
+        self,
+        vocab_file,
+        errors="replace",
+        extra_vocab_file=None,
+        **kwargs,
+    ):
+        super().__init__(**kwargs)
+
+        # how to handle errors in decoding UTF-8 byte sequences
+        # use ignore if you are in streaming inference
+        self.errors = errors
+
+        self.mergeable_ranks = _load_tiktoken_bpe(vocab_file)  # type: Dict[bytes, int]
+        self.special_tokens = {
+            token: index
+            for index, token in SPECIAL_TOKENS
+        }
+
+        # try load extra vocab from file
+        if extra_vocab_file is not None:
+            used_ids = set(self.mergeable_ranks.values()) | set(self.special_tokens.values())
+            extra_mergeable_ranks = _load_tiktoken_bpe(extra_vocab_file)
+            for token, index in extra_mergeable_ranks.items():
+                if token in self.mergeable_ranks:
+                    logger.info(f"extra token {token} exists, skipping")
+                    continue
+                if index in used_ids:
+                    logger.info(f'the index {index} for extra token {token} exists, skipping')
+                    continue
+                self.mergeable_ranks[token] = index
+            # the index may be sparse after this, but don't worry tiktoken.Encoding will handle this
+
+        enc = tiktoken.Encoding(
+            "Qwen",
+            pat_str=PAT_STR,
+            mergeable_ranks=self.mergeable_ranks,
+            special_tokens=self.special_tokens,
+        )
+        assert (
+            len(self.mergeable_ranks) + len(self.special_tokens) == enc.n_vocab
+        ), f"{len(self.mergeable_ranks) + len(self.special_tokens)} != {enc.n_vocab} in encoding"
+
+        self.decoder = {
+            v: k for k, v in self.mergeable_ranks.items()
+        }  # type: dict[int, bytes|str]
+        self.decoder.update({v: k for k, v in self.special_tokens.items()})
+
+        self.tokenizer = enc  # type: tiktoken.Encoding
+
+        self.eod_id = self.tokenizer.eot_token
+        self.im_start_id = self.special_tokens[IMSTART]
+        self.im_end_id = self.special_tokens[IMEND]
+
+    def __getstate__(self):
+        # for pickle lovers
+        state = self.__dict__.copy()
+        del state["tokenizer"]
+        return state
+
+    def __setstate__(self, state):
+        # tokenizer is not python native; don't pass it; rebuild it
+        self.__dict__.update(state)
+        enc = tiktoken.Encoding(
+            "Qwen",
+            pat_str=PAT_STR,
+            mergeable_ranks=self.mergeable_ranks,
+            special_tokens=self.special_tokens,
+        )
+        self.tokenizer = enc
+
+    def __len__(self) -> int:
+        return self.tokenizer.n_vocab
+
+    def get_vocab(self) -> Dict[bytes, int]:
+        return self.mergeable_ranks
+
+    def convert_tokens_to_ids(
+        self, tokens: Union[bytes, str, List[Union[bytes, str]]]
+    ) -> List[int]:
+        ids = []
+        if isinstance(tokens, (str, bytes)):
+            if tokens in self.special_tokens:
+                return self.special_tokens[tokens]
+            else:
+                return self.mergeable_ranks.get(tokens)
+        for token in tokens:
+            if token in self.special_tokens:
+                ids.append(self.special_tokens[token])
+            else:
+                ids.append(self.mergeable_ranks.get(token))
+        return ids
+
+    def _add_tokens(
+        self,
+        new_tokens: Union[List[str], List[AddedToken]],
+        special_tokens: bool = False,
+    ) -> int:
+        if not special_tokens and new_tokens:
+            raise ValueError("Adding regular tokens is not supported")
+        for token in new_tokens:
+            surface_form = token.content if isinstance(token, AddedToken) else token
+            if surface_form not in SPECIAL_TOKENS_SET:
+                raise ValueError("Adding unknown special tokens is not supported")
+        return 0
+
+    def save_vocabulary(self, save_directory: str, **kwargs) -> Tuple[str]:
+        """
+        Save only the vocabulary of the tokenizer (vocabulary).
+
+        Returns:
+            `Tuple(str)`: Paths to the files saved.
+        """
+        file_path = os.path.join(save_directory, "qwen.tiktoken")
+        with open(file_path, "w", encoding="utf8") as w:
+            for k, v in self.mergeable_ranks.items():
+                line = base64.b64encode(k).decode("utf8") + " " + str(v) + "\n"
+                w.write(line)
+        return (file_path,)
+
+    def tokenize(
+        self,
+        text: str,
+        allowed_special: Union[Set, str] = "all",
+        disallowed_special: Union[Collection, str] = (),
+        **kwargs,
+    ) -> List[Union[bytes, str]]:
+        """
+        Converts a string into a sequence of tokens.
+
+        Args:
+            text (`str`):
+                The sequence to be encoded.
+            allowed_special (`Literal["all"]` or `set`):
+                The surface forms of the tokens to be encoded as special tokens in regular texts.
+                Defaults to "all".
+            disallowed_special (`Literal["all"]` or `Collection`):
+                The surface forms of the tokens that should not be in regular texts and trigger errors.
+                Defaults to an empty tuple.
+
+            kwargs (additional keyword arguments, *optional*):
+                Will be passed to the underlying model specific encode method.
+
+        Returns:
+            `List[bytes|str]`: The list of tokens.
+        """
+        tokens = []
+        text = unicodedata.normalize("NFC", text)
+
+        # this implementation takes a detour: text -> token id -> token surface forms
+        for t in self.tokenizer.encode(
+            text, allowed_special=allowed_special, disallowed_special=disallowed_special
+        ):
+            tokens.append(self.decoder[t])
+        return tokens
+
+    def convert_tokens_to_string(self, tokens: List[Union[bytes, str]]) -> str:
+        """
+        Converts a sequence of tokens into a single string.
+        """
+        text = ""
+        temp = b""
+        for t in tokens:
+            if isinstance(t, str):
+                if temp:
+                    text += temp.decode("utf-8", errors=self.errors)
+                    temp = b""
+                text += t
+            elif isinstance(t, bytes):
+                temp += t
+            else:
+                raise TypeError("token should only be of type bytes or str")
+        if temp:
+            text += temp.decode("utf-8", errors=self.errors)
+        return text
+
+    @property
+    def vocab_size(self):
+        return self.tokenizer.n_vocab
+
+    def _convert_id_to_token(self, index: int) -> Union[bytes, str]:
+        """Converts an id to a token, special tokens included"""
+        if index in self.decoder:
+            return self.decoder[index]
+        raise ValueError("unknown ids")
+
+    def _convert_token_to_id(self, token: Union[bytes, str]) -> int:
+        """Converts a token to an id using the vocab, special tokens included"""
+        if token in self.special_tokens:
+            return self.special_tokens[token]
+        if token in self.mergeable_ranks:
+            return self.mergeable_ranks[token]
+        raise ValueError("unknown token")
+
+    def _tokenize(self, text: str, **kwargs):
+        """
+        Converts a string into a sequence of tokens (string), using the tokenizer. Split in words for word-based
+        vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces).
+
+        Do NOT take care of added tokens.
+        """
+        raise NotImplementedError
+
+    def _decode(
+        self,
+        token_ids: Union[int, List[int]],
+        skip_special_tokens: bool = False,
+        errors: str = None,
+        **kwargs,
+    ) -> str:
+        if isinstance(token_ids, int):
+            token_ids = [token_ids]
+        if skip_special_tokens:
+            token_ids = [i for i in token_ids if i < self.eod_id]
+        return self.tokenizer.decode(token_ids, errors=errors or self.errors)
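A hedged usage sketch (not part of the diff), assuming the files from this upload sit in the working directory and `transformers`/`tiktoken` versions compatible with this remote code: `tokenize` returns byte-level surface forms, and the ChatML special-token ids used by the generation utilities are exposed as attributes.

```python
from tokenization_qwen import QWenTokenizer

tokenizer = QWenTokenizer("qwen.tiktoken")
tokens = tokenizer.tokenize("你好, Qwen!")          # list of bytes/str surface forms
ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokenizer.convert_tokens_to_string(tokens))  # round-trips to the original text
print(tokenizer.im_start_id, tokenizer.im_end_id, tokenizer.eod_id)
```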
tokenizer_config.json ADDED
@@ -0,0 +1,10 @@
+{
+  "model_max_length": 8192,
+  "tokenizer_class": "QWenTokenizer",
+  "auto_map": {
+    "AutoTokenizer": [
+      "tokenization_qwen.QWenTokenizer",
+      null
+    ]
+  }
+}
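The `auto_map` entry is what lets `AutoTokenizer` resolve to the custom tokenizer class shipped in this repository when remote code is trusted, and `model_max_length` caps inputs at 8192 tokens. A short sketch, assuming the `Qwen/Qwen-7B-Chat` repo id:

```python
from transformers import AutoTokenizer

# auto_map routes this call to tokenization_qwen.QWenTokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
print(tokenizer.model_max_length)  # 8192
```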