Model Card for Finetuned-Xunzi-Qwen2-1.5B-for-ancient-text-generation
Input a modern Chinese sentence and generate a sentence in ancient (classical) Chinese style.
Model Details
Model Description
Based on the Xunzi-Qwen2-1.5B base model, fine-tuned with LoRA on a subset of the "Classical Chinese (Ancient Chinese) - Modern Chinese Parallel Corpus". The model converts modern Chinese sentences into classical Chinese, giving them a more literary tone. For the fine-tuning code and process, see the GitHub page of this model.
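The actual fine-tuning code is on the GitHub page; for orientation only, a LoRA setup with the Hugging Face peft library typically looks like the sketch below. The rank, alpha, target modules, and base-model path here are illustrative assumptions, not the real training configuration.

```python
# Minimal LoRA setup sketch (assumed hyperparameters, not the actual config).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_path = "path/to/Xunzi-Qwen2-1.5B"  # adjust to your copy of the base model
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
model = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype="auto")

lora_config = LoraConfig(
    r=8,                                  # assumed LoRA rank
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```

Training then proceeds as ordinary causal language modeling on text in the "现代文:……。 古文:……" format, with only the adapter weights updated.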
- Developed by: cofeg
- Model type: Text Generation
- Language(s) (NLP): Simplified Chinese
- Finetuned from model: Xunzi-Qwen2-1.5B
Model Sources
- Repository: https://huggingface.co/cofeg/Finetuned-Xunzi-Qwen2-1.5B-for-ancient-text-generation/
- Demo: https://huggingface.co/spaces/cofeg/ancient_Chinese_text_generator_1.5B
Uses
You can visit my Space and try the model out. It may take more than two minutes before generation starts. If you want to run the model locally or fine-tune it further, please refer to the GitHub page of this model.
Direct Use
This model is fine-tuned from a base model with no chat capability, so it can only be used for text generation. The fine-tuning data has the following format: "现代文:……。 古文:", where the 现代文 part contains exactly one sentence. When using the model directly, make sure the input follows this format.
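As a minimal sketch of a single direct-use call (the sentence is made up, and `tokenizer`/`model` are loaded as shown in the next section):

```python
# Illustrative prompt in the exact fine-tuning format.
modern_sentence = "今天天气很好。"  # any single modern Chinese sentence
prompt = "现代文:" + modern_sentence + " 古文:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
# Decode only the newly generated tokens, i.e. the classical rendering.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```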
How to Get Started with the Model
First download the model to a local path:
```bash
git lfs install
git clone https://huggingface.co/cofeg/Finetuned-Xunzi-Qwen2-1.5B-for-ancient-text-generation/
```
Set the path and run model inference locally:
```python
import re

import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer

from utils.generate import generate_answer  # helper from this model's GitHub repository

fine_tuned_model_path = 'path/to/the/downloaded/model'
DEVICE = 'cuda'

tokenizer = AutoTokenizer.from_pretrained(fine_tuned_model_path)
model = AutoModelForCausalLM.from_pretrained(fine_tuned_model_path, torch_dtype="auto", device_map=DEVICE)
model.generation_config.pad_token_id = tokenizer.pad_token_id  # to avoid warnings

def split_and_generate(modern_text, progress=gr.Progress()):
    progress(0, desc="开始处理")
    # Split the input into sentences, since the model was trained on sentence pairs
    sentences = re.findall(r'[^。!?]*[。!?]', modern_text)
    responses = ""
    for sentence in progress.tqdm(sentences, desc="生成中……"):
        prompt = "现代文:" + sentence + " 古文:"
        response = generate_answer(prompt, tokenizer, DEVICE, model)
        responses += response
    return responses

demo = gr.Interface(fn=split_and_generate,
                    inputs=[gr.Textbox(label="现代文", lines=10)],
                    outputs=[gr.Textbox(label="古文", lines=10)])
demo.launch()
```
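`generate_answer` comes from the `utils` package in the GitHub repository. If you only have the model weights, a minimal stand-in with the same call signature could look like the following; this is an assumption about its behavior, not the repository's actual implementation:

```python
# Hypothetical stand-in for utils.generate.generate_answer: generates a
# completion and returns only the newly generated text.
def generate_answer(prompt, tokenizer, device, model, max_new_tokens=128):
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```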
Training Details
See the GitHub page of this model.