Upload 2 files

- README.md: +9 -0
- README_en.md: +19 -9

README.md
CHANGED
@@ -11,6 +11,7 @@ pipeline_tag: text-generation
 ---
 **Read this in other languages: [English](README_en.md), [中文](README.md).**
 
+* Update 2023.12.23: released the evaluation results for passage_retrieval_en in LongBench
 * Update 2023.12.16: released the [paper (Chinese version)](https://cloud.tsinghua.edu.cn/d/5894ec4442e54a6aac96/) and the [paper (English version)](https://arxiv.org/abs/2312.11193)
 * Update 2023.12.14: released the fine-tuned Qwen-14b-chat-yarn-32k. The fine-tuned model handles Chinese and English question answering over contexts of up to 32k tokens (about 40,000 Chinese characters). Compared with the earlier 32k model obtained through position interpolation, it almost completely resolves the low-recall problem in multi-document question answering (the "lost in the middle" phenomenon).
 <br>

@@ -31,6 +32,14 @@ pipeline_tag: text-generation
 | LongAlpaca-7b-32k-chinese-v2 | 0.12 |
 | CausalLM-14b | 0.086 |
 
+### Evaluation results for passage_retrieval_en in LongBench
+| Model | Score (acc) |
+|------------------------|----------|
+| **Qwen-14b-chat-yarn-32k** | **0.945** |
+| Qwen-14b-chat | 0.24 |
+| chatglm3-32k | 0.815 |
+| gpt-3.5-turbo-16k | 0.88 |
+
 After fine-tuning, Qwen-14b-chat-yarn-32k improves very significantly on multi-document question-answering (or retrieval) tasks, far outperforming other models of the same scale.
 
 <br>
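A note on usage: the README itself carries no inference snippet, so the following is a minimal sketch of how a long-context Qwen chat fine-tune like this one is typically loaded and queried with `transformers`. The Hub repo id below is a hypothetical placeholder, and the `model.chat()` call assumes the checkpoint keeps Qwen-14b-chat's remote-code chat interface; neither detail is confirmed by the README.

```python
# Minimal inference sketch for a long-context Qwen chat fine-tune.
# Assumptions (not stated in the README): the Hub repo id is hypothetical,
# and the model keeps Qwen-14b-chat's remote-code chat() interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yuyijiong/Qwen-14b-chat-yarn-32k"  # hypothetical; substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # shard the 14B weights across available GPUs
    torch_dtype="auto",
    trust_remote_code=True,  # Qwen chat models ship custom modeling/chat code
).eval()

# Build a multi-document prompt that approaches the 32k-token window, with the
# relevant passage buried in the middle (the "lost in the middle" setting).
documents = [f"[Document {i}] ..." for i in range(1, 31)]
question = "Which document states the launch date?"
prompt = "\n\n".join(documents) + "\n\nQuestion: " + question

response, _history = model.chat(tokenizer, prompt, history=None)
print(response)
```

Whether the YaRN scaling needs extra flags depends on the checkpoint's bundled config; this sketch assumes the released config already encodes it.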
README_en.md
CHANGED

@@ -11,6 +11,7 @@ pipeline_tag: text-generation
 ---
 **Read this in other languages: [English](README_en.md), [中文](README.md).**
 
+* Updated on December 23, 2023: Released the evaluation results of passage_retrieval_en in LongBench
 * Updated on December 16, 2023: Released the [paper](https://arxiv.org/abs/2312.11193)
 * Updated on December 14, 2023: We have released the Qwen-14b-chat-yarn-32k model, fine-tuned to handle Chinese and English question answering at lengths of up to 32k tokens (approximately 40,000 Chinese characters). The model largely resolves the low-recall issue in multi-document question-answering tasks (the "lost in the middle" phenomenon) that affected the previous 32k model obtained through position interpolation. <br>
 <br>

@@ -20,15 +21,24 @@ pipeline_tag: text-generation
 # Evaluation results in LongBench
 ### Evaluation results for passage_retrieval_zh in LongBench
 
-| Models | Accuracy |
-|---|---|
-| **Qwen-14b-chat-yarn-32k** | **0.94** |
-| gpt-3.5-turbo-16k | 0.81 |
-| chatglm3-32k | 0.725 |
-| Qwen-14b-chat | 0.525 |
-| Qwen-14b-chat-32k-lora | 0.34 |
-| LongAlpaca-7b-32k-chinese-v2 | 0.12 |
-| CausalLM-14b | 0.086 |
+| Models                       | Accuracy |
+|------------------------------|----------|
+| **Qwen-14b-chat-yarn-32k**   | **0.94** |
+| gpt-3.5-turbo-16k            | 0.81     |
+| chatglm3-32k                 | 0.725    |
+| Qwen-14b-chat                | 0.525    |
+| Qwen-14b-chat-32k-lora       | 0.34     |
+| LongAlpaca-7b-32k-chinese-v2 | 0.12     |
+| CausalLM-14b                 | 0.086    |
+
+### Evaluation results for passage_retrieval_en in LongBench
+| Models | Accuracy |
+|------------------------|----------|
+| **Qwen-14b-chat-yarn-32k** | **0.945** |
+| Qwen-14b-chat | 0.24 |
+| chatglm3-32k | 0.815 |
+| gpt-3.5-turbo-16k | 0.88 |
+
 
 Qwen-14b-chat-yarn-32k has shown significant improvement in multi-document question-answering (or retrieval) tasks and outperforms other models of similar scale.
 <br>
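For readers who want to reproduce numbers like those in the tables above: LongBench distributes passage_retrieval_en and passage_retrieval_zh on the Hugging Face Hub, and scoring amounts to checking whether the model names the gold paragraph. Below is a simplified scoring sketch, not LongBench's official scorer, and `generate()` is a placeholder for whichever model is being evaluated.

```python
# Simplified LongBench passage_retrieval scoring sketch.
# Real pieces: the THUDM/LongBench dataset and its context/input/answers fields.
# Assumptions: generate() stands in for the model under test, and this scorer
# is a simplification of LongBench's official retrieval metric.
import re
from datasets import load_dataset

def generate(prompt: str) -> str:
    """Placeholder for the evaluated model; replace with e.g. model.chat(...)."""
    return "Paragraph 1"  # dummy output so the script runs end to end

def retrieval_acc(prediction: str, answers: list[str]) -> float:
    """Score 1.0 if the prediction names the gold paragraph, e.g. 'Paragraph 7'.
    The zh variant labels paragraphs in Chinese ('段落1'), so the pattern
    would need adjusting for passage_retrieval_zh."""
    match = re.search(r"Paragraph \d+", prediction)
    return float(match is not None and match.group(0) in answers)

data = load_dataset("THUDM/LongBench", "passage_retrieval_en", split="test")

total = 0.0
for sample in data:
    # context holds the shuffled candidate paragraphs; input is the query text;
    # the gold label, e.g. "Paragraph 3", is in the answers list.
    prompt = sample["context"] + "\n\n" + sample["input"]
    total += retrieval_acc(generate(prompt), sample["answers"])

print(f"passage_retrieval_en acc = {total / len(data):.3f}")
```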