---
language:
- zh
tags:
- chatglm
- pytorch
- zh
- Text2Text-Generation
license: "apache-2.0"
widget:
- text: "对下面中文拼写纠错:\n少先队员因该为老人让坐。\n答:"
---
# Chinese Spelling Correction LoRA Model
A LoRA model for Chinese spelling correction, fine-tuned from ChatGLM-6B.

Evaluation of `chatglm-6b-csc-zh-lora` on the CSC **test** set:
|prefix|input_text|target_text|pred|
|:-- |:--- |:--- |:-- |
|对下面中文拼写纠错:|少先队员因该为老人让坐。|少先队员应该为老人让座。|少先队员应该为老人让座。\n错误字:因,坐|
The model achieves high correction accuracy on the CSC test set. Because it is built on a large language model, its outputs can be pleasantly surprising: beyond fixing spelling errors, it can also polish and rewrite sentences.
## Usage
This model is released as part of the [textgen](https://github.com/shibing624/textgen) project, which supports both the original ChatGLM model and LoRA fine-tuned variants. Call it as follows:
Install package:
```shell
pip install -U textgen
```
```python
from textgen import ChatGlmModel

# Load the ChatGLM-6B base model and apply the CSC LoRA adapter.
model = ChatGlmModel("chatglm", "THUDM/chatglm-6b", peft_name="shibing624/chatglm-6b-csc-zh-lora")
r = model.predict(["对下面中文拼写纠错:\n少先队员因该为老人让坐。\n答:"])
print(r)  # ['少先队员应该为老人让座。\n错误字:因,坐']
```
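For batch correction, `predict` accepts a list of prompts, so raw sentences can be wrapped in the instruction template shown above. A minimal sketch (the `make_prompt` helper is a hypothetical convenience, not part of textgen):

```python
from textgen import ChatGlmModel

# Hypothetical helper: wraps a raw sentence in the correction
# instruction template used by this model card.
def make_prompt(sentence: str) -> str:
    return f"对下面中文拼写纠错:\n{sentence}\n答:"

model = ChatGlmModel("chatglm", "THUDM/chatglm-6b", peft_name="shibing624/chatglm-6b-csc-zh-lora")
sentences = ["少先队员因该为老人让坐。", "下个星期,我跟我朋唷打算去法国玩儿。"]
results = model.predict([make_prompt(s) for s in sentences])
for s, r in zip(sentences, results):
    print(s, "->", r)
```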
## Usage (HuggingFace Transformers)
Without [textgen](https://github.com/shibing624/textgen), you can use the model directly: load the ChatGLM-6B base model with `transformers`, apply the LoRA adapter with `peft`, then generate the corrected sentence.
Install package:
```shell
pip install transformers peft
```
```python
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

# Load the ChatGLM-6B base model, then apply the CSC LoRA adapter.
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, device_map='auto')
model = PeftModel.from_pretrained(model, "shibing624/chatglm-6b-csc-zh-lora")
model = model.half().cuda()  # fp16
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

sents = ['对下面中文拼写纠错:\n少先队员因该为老人让坐。\n答:',
         '对下面中文拼写纠错:\n下个星期,我跟我朋唷打算去法国玩儿。\n答:']
for s in sents:
    # chat() returns a (response, history) tuple.
    response = model.chat(tokenizer, s, max_length=128, eos_token_id=tokenizer.eos_token_id)
    print(response)
```
Output:
```shell
('少先队员应该为老人让座。\n错误字:因,坐', [('对下面中文拼写纠错:\n少先队员因该为老人让坐。\n答:', '少先队员应该为老人让座。\n错误字:因,坐')])
('下个星期,我跟我朋友打算去法国玩儿。\n错误字:唷', [('对下面中文拼写纠错:\n下个星期,我跟我朋唷打算去法国玩儿。\n答:', '下个星期,我跟我朋友打算去法国玩儿。\n错误字:唷')])
```
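The response follows the pattern `corrected sentence\n错误字:chars`, so the corrected text and the flagged characters can be separated with plain string handling. A minimal sketch, assuming the model always follows this output format:

```python
# Minimal sketch: split a response of the form "纠正句\n错误字:因,坐"
# into the corrected sentence and the list of wrong characters.
# Assumes the output always matches this format.
def parse_response(text: str):
    corrected, _, errors = text.partition("\n错误字:")
    wrong_chars = [c for c in errors.split(",") if c] if errors else []
    return corrected, wrong_chars

corrected, wrong = parse_response("少先队员应该为老人让座。\n错误字:因,坐")
print(corrected)  # 少先队员应该为老人让座。
print(wrong)      # ['因', '坐']
```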
Model files:
```
chatglm-6b-csc-zh-lora
├── adapter_config.json
└── adapter_model.bin
```
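If you download these two files into a local directory, `PeftModel.from_pretrained` can load the adapter from that path instead of the Hub. A minimal sketch (the local path below is a placeholder):

```python
from peft import PeftModel
from transformers import AutoModel

# Placeholder local path containing adapter_config.json and adapter_model.bin.
adapter_dir = "./chatglm-6b-csc-zh-lora"

base = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_dir)  # reads the two adapter files above
```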
#### Training parameters
- num_epochs: 2
- batch_size: 4
- steps: 125600
- train_loss: 0.1055
- base model: THUDM/chatglm-6b
- train data: [shibing624/CSC](https://huggingface.co/datasets/shibing624/CSC)
### Training dataset
#### Chinese spelling correction dataset
- Data: [shibing624/CSC](https://huggingface.co/datasets/shibing624/CSC)

To train your own ChatGLM correction model, see [https://github.com/shibing624/textgen](https://github.com/shibing624/textgen).
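As a starting point, training pairs can be built from the CSC dataset by wrapping each wrong sentence in the instruction template above. A hedged sketch: the field names `original_text` and `correct_text` are assumptions about the dataset schema, so check the dataset card before use:

```python
from datasets import load_dataset

# Load the Chinese spelling correction dataset from the Hub.
ds = load_dataset("shibing624/CSC", split="train")

# Assumed field names: original_text (wrong sentence), correct_text (corrected).
pairs = [
    {
        "prompt": f"对下面中文拼写纠错:\n{ex['original_text']}\n答:",
        "response": ex["correct_text"],
    }
    for ex in ds
]
print(pairs[0])
```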
## Citation
```latex
@software{textgen,
author = {Xu Ming},
title = {textgen: Implementation of language model finetune},
year = {2021},
url = {https://github.com/shibing624/textgen},
}
```