
intro

This model takes a long text as input and returns question-answer pairs generated from it. I built this tool to produce training data for RAG: feed it chunked text, and it emits a QA dataset for training a retrieval pipeline. The advantage is that a dataset generated with a large-parameter model can be used to build a RAG system around a much smaller model. The tool is implemented by LoRA fine-tuning Qwen2-1.5B-Instruct.
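Since the model consumes one passage at a time, a long document needs to be chunked first. The chunking strategy is up to you; below is a minimal character-based sketch (the helper name and the 600-character size are my own choices, the size simply mirroring the cutoff length used in fine-tuning later on):

def chunk_text(text: str, chunk_size: int = 600) -> list[str]:
    """Split a long document into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]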

The dataset is Chinese-SQuAD: https://github.com/pluto-junzeng/ChineseSquad, converted to the alpaca format. The code that builds the prompts is as follows:

import json

file_path = '/root/dev-zen-v1.0.json'
SEP_TOKEN = "<sep>"

data_loader = []

# Flatten the SQuAD-style JSON into (title, context, question, answer) records
with open(file_path, 'r', encoding='utf-8') as f:
    data = json.load(f)
    for content in data['data']:
        title = content['title']
        paragraphs = content['paragraphs']
        for paragraph in paragraphs:
            context = paragraph['context']
            qas = paragraph['qas']
            for qa_pair in qas:
                question = qa_pair.get('question')
                answers = qa_pair.get('answers', [])
                for answer in answers:
                    answer_text = answer.get('text')
                    if answer_text and question is not None:
                        data_loader.append({'title': title, 'context': context,
                                            'question': question, 'answer': answer_text})


# The prompt is kept in Chinese, since the model is fine-tuned on Chinese text.
# It reads: "Based on the context in input, generate QA pairs related to the
# context, and write them to output."
prompt_template = """根据下面input的上下文,生成和上下文有关的问答对,并输出到output中。"""

prompt_chunk = []

for i in data_loader:
    # For training, fill `output` with the reference QA pair, e.g.
    # f"question:{i['question']} {SEP_TOKEN} answer:{i['answer']}";
    # for prediction, leave it empty.
    prompt_chunk.append({"instruction": prompt_template, "input": i['context'], "output": ''})


with open('prompt_chunk_predict.json', 'w', encoding='utf-8') as f:
    json.dump(prompt_chunk, f, ensure_ascii=False, indent=4)

The converted dataset is prompt_chunk_1.json.
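To sanity-check the conversion, you can load the file back and inspect it (a minimal sketch; the expected counts are illustrative):

import json

with open('prompt_chunk_1.json', 'r', encoding='utf-8') as f:
    records = json.load(f)

# Each record follows the alpaca schema: instruction / input / output
print(records[0].keys())  # dict_keys(['instruction', 'input', 'output'])
print(len(records))       # total number of prompts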

Fine-tuning was done with LLaMA-Factory: cutoff length 600, bf16 precision, batch size 32 (lower it if your GPU has less memory), trained on a single L20 GPU.
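As a rough sketch, the LLaMA-Factory invocation could look like the following. Exact flag names vary across LLaMA-Factory versions, the dataset must first be registered in its dataset_info.json, and the dataset name and output path here are placeholders of mine; only the cutoff length, precision, and batch size come from the setup above:

python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path qwen2-1.5b-instruct \
    --dataset prompt_chunk \
    --template qwen \
    --finetuning_type lora \
    --cutoff_len 600 \
    --per_device_train_batch_size 32 \
    --bf16 \
    --output_dir ./checkpoint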

quickstart

Install the dependencies:

pip install -r requirements.txt

Download the Qwen2-1.5B-Instruct model files:

git lfs install # make sure git-lfs is installed first
git clone https://www.modelscope.cn/qwen/qwen2-1.5b-instruct.git

Use the model by loading the fine-tuned checkpoint on top of the base model. Below is an example script, predict.py:

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

checkpoint_path = ''  # path to the LoRA checkpoint
base_model_name = "Qwen2-1.5B-Instruct"  # base model
SEP_TOKEN = '<sep>'

# Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)
# Load the fine-tuned checkpoint weights on top of the base model
model = PeftModel.from_pretrained(model, checkpoint_path)

context = """"""  # put the context to generate QA pairs from here

# Same instruction as used for fine-tuning (in Chinese): "Based on the context
# in input, generate QA pairs related to the context, and write them to output."
instruction = """根据下面input的上下文,生成和上下文有关的问答对,并输出到output中。"""
input_prompt = f"instruction: {instruction} input: {context} output:"

input_ids = tokenizer(input_prompt, return_tensors="pt")['input_ids']

output = model.generate(
    input_ids=input_ids,
    max_new_tokens=64,        # length budget per QA pair; could be set lower
    do_sample=True,           # required for temperature and multiple sequences
    num_return_sequences=5,   # number of QA pairs to return
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.8,          # sampling randomness
)

for i in range(len(output)):
    # Decode only the newly generated tokens, skipping the prompt
    output_text = tokenizer.decode(output[i][input_ids.shape[1]:], skip_special_tokens=True)
    print(output_text)
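The generations follow the "question: ... <sep> answer: ..." format from the training data, so they can be split back into structured pairs. A minimal sketch, assuming well-formed generations (the helper name is my own; malformed outputs return None and can be skipped):

def parse_qa(generation: str, sep: str = SEP_TOKEN):
    """Split a 'question: ... <sep> answer: ...' string into a (q, a) tuple."""
    if sep not in generation:
        return None  # malformed generation
    question_part, answer_part = generation.split(sep, 1)
    question = question_part.replace('question:', '', 1).strip()
    answer = answer_part.replace('answer:', '', 1).strip()
    return question, answer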

Example output: ![[010E10A8-AA3F-4BFB-B06C-5D2254D04F3A_1_201_a.jpeg]]
