|
---
library_name: peft
base_model: Qwen/Qwen2.5-7B-Instruct
license: apache-2.0
datasets:
- shibing624/chinese_text_correction
language:
- zh
metrics:
- f1
tags:
- text-generation-inference
widget:
- text: "文本纠错:\n少先队员因该为老人让坐。"
---
|
|
|
|
|
|
|
# Chinese Text Correction Model |
|
`chinese-text-correction-7b-lora` is a Chinese text correction model for spelling and grammar error correction.
|
|
|
Evaluation of `shibing624/chinese-text-correction-7b-lora` on the test data:

Example prediction on the CSC (Chinese Spelling Correction) **test** set:
|
|
|
|input_text|predict_text|
|:--- |:--- |
|文本纠错:\n少先队员因该为老人让坐。|少先队员应该为老人让座。|
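
The metadata above lists `f1` as the evaluation metric. As a reference, here is a minimal sketch of sentence-level correction precision/recall/F1, one common way CSC systems are scored; the official evaluation script and scores for this checkpoint are not included in this card, so treat the function below as illustrative only.

```python
# Illustrative sentence-level correction P/R/F1 for CSC (not the official eval script).
def sentence_correction_f1(sources, predictions, targets):
    """P = correctly fixed / sentences the model changed; R = correctly fixed / sentences with errors."""
    changed = corrected = with_errors = 0
    for src, pred, tgt in zip(sources, predictions, targets):
        if src != tgt:
            with_errors += 1
        if pred != src:
            changed += 1
            if pred == tgt:
                corrected += 1
    precision = corrected / changed if changed else 0.0
    recall = corrected / with_errors if with_errors else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


print(sentence_correction_f1(
    ["少先队员因该为老人让坐。"],
    ["少先队员应该为老人让座。"],
    ["少先队员应该为老人让座。"],
))  # (1.0, 1.0, 1.0)
```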
|
|
|
# Models

| Name | Base Model | Download |
|-----------------|-------------------|-----------------------------------------------------------------------|
| chinese-text-correction-1.5b | Qwen/Qwen2.5-1.5B-Instruct | [🤗 Hugging Face](https://huggingface.co/shibing624/chinese-text-correction-1.5b) |
| chinese-text-correction-1.5b-lora | Qwen/Qwen2.5-1.5B-Instruct | [🤗 Hugging Face](https://huggingface.co/shibing624/chinese-text-correction-1.5b-lora) |
| chinese-text-correction-7b | Qwen/Qwen2.5-7B-Instruct | [🤗 Hugging Face](https://huggingface.co/shibing624/chinese-text-correction-7b) |
| chinese-text-correction-7b-lora | Qwen/Qwen2.5-7B-Instruct | [🤗 Hugging Face](https://huggingface.co/shibing624/chinese-text-correction-7b-lora) |
|
|
|
|
|
|
|
## Usage (pycorrector) |
|
|
|
This model is open-sourced as part of the [pycorrector](https://github.com/shibing624/pycorrector) project, which supports fine-tuning large language models for text correction. You can use it as follows:
|
|
|
Install package: |
|
```shell
pip install -U pycorrector
```
|
|
|
```python
from pycorrector.gpt.gpt_corrector import GptCorrector

if __name__ == '__main__':
    error_sentences = [
        '真麻烦你了。希望你们好好的跳无',
        '少先队员因该为老人让坐',
        '机七学习是人工智能领遇最能体现智能的一个分知',
        '一只小鱼船浮在平净的河面上',
        '我的家乡是有明的渔米之乡',
    ]
    # Load the correction model from the Hugging Face Hub
    m = GptCorrector("shibing624/chinese-text-correction-7b")

    # Correct the sentences in a batch and print each result
    batch_res = m.correct_batch(error_sentences)
    for i in batch_res:
        print(i)
        print()
```
|
|
|
## Usage (HuggingFace Transformers) |
|
Without [pycorrector](https://github.com/shibing624/pycorrector), you can use the model like this: |
|
|
|
First, pass your input through the model; then decode the generated sentence.
|
|
|
Install package: |
|
```shell
pip install transformers
```
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "shibing624/chinese-text-correction-7b"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

input_content = "文本纠错:\n少先队员因该为老人让坐。"

# Wrap the prompt in the chat template expected by Qwen2.5-Instruct models
messages = [{"role": "user", "content": input_content}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(input_text)

inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=False, repetition_penalty=1.08)

# Decode only the newly generated tokens, skipping the prompt and special tokens
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
|
|
|
output: |
|
```shell
少先队员应该为老人让座。
```
|
|
|
|
|
Model files:
```
shibing624/chinese-text-correction-7b-lora
├── adapter_config.json
└── adapter_model.safetensors
```
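
Since this repository only contains a LoRA adapter, it has to be applied on top of the base model. Below is a minimal sketch with `peft` and `transformers`, assuming the adapter targets `Qwen/Qwen2.5-7B-Instruct` as listed above; merging the adapter is optional.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "Qwen/Qwen2.5-7B-Instruct"
adapter_name = "shibing624/chinese-text-correction-7b-lora"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
# device_map="auto" requires the `accelerate` package
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(base_model, adapter_name)
# Optionally merge the adapter weights into the base model for faster inference
model = model.merge_and_unload()
```

After loading the adapter (or the merged model), generation works the same way as in the Transformers example above.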
|
|
|
#### Training parameters
|
|
|
- num_epochs: 8 |
|
- batch_size: 2 |
|
- steps: 36000 |
|
- eval_loss: 0.12 |
|
- base model: Qwen/Qwen2.5-7B-Instruct |
|
- train data: [shibing624/chinese_text_correction](https://huggingface.co/datasets/shibing624/chinese_text_correction) |
|
- train time: 9 days 8 hours |
|
- eval loss curve: ![](https://huggingface.co/shibing624/chinese-text-correction-7b-lora/resolve/main/eval_loss_7b.png)

- train loss curve: ![](https://huggingface.co/shibing624/chinese-text-correction-7b-lora/resolve/main/train_loss_7b.png)
|
|
|
### Training dataset

#### Chinese text correction dataset

- Data: [shibing624/chinese_text_correction](https://huggingface.co/datasets/shibing624/chinese_text_correction)

If you want to train a Qwen-based correction model yourself, refer to [https://github.com/shibing624/pycorrector](https://github.com/shibing624/pycorrector) or [https://github.com/shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT); a minimal LoRA setup sketch follows.
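
As a rough illustration of what such a LoRA fine-tune looks like with `peft` (the actual training scripts live in the repositories above), here is a minimal sketch. The rank, alpha, dropout, and target modules below are illustrative assumptions, not the values used to train this checkpoint.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Base model; this adapter was trained from Qwen/Qwen2.5-7B-Instruct
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", torch_dtype="auto")

# Hypothetical LoRA hyperparameters -- not the ones used for this checkpoint
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Train with your preferred trainer (e.g. transformers.Trainer) on
# "文本纠错:\n<wrong text>" -> "<corrected text>" pairs from the dataset above,
# then push only the adapter with model.push_to_hub(...).
```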
|
|
|
### Framework versions |
|
|
|
- PEFT 0.11.1 |
|
|
|
## Citation |
|
|
|
```latex
@software{pycorrector,
  author = {Xu Ming},
  title = {pycorrector: Implementation of language model finetune},
  year = {2024},
  url = {https://github.com/shibing624/pycorrector},
}
```
|
|