Text Generation
Transformers
qwen
roleplay
self_instruct
custom_code
Edit model card

language: - zh tags: - roleplay - multiturn_chat

介绍

基于self-instruct生成的多轮对话roleplay数据在qwen 7b chat上训练的模型,约1k条不同的人格数据和对话和约3k alpaca指令

存在问题:

1.roleplay数据基于模型自身生成,所以roleplay存在模型本身价值观融入情况,导致roleplay不够真实,不够准确。

使用方法:

可以参考https://github.com/PanQiWei/AutoGPTQ

prompt:

tokenizer = AutoTokenizer.from_pretrained(ckpt,trust_remote_code=True)
from auto_gptq import AutoGPTQForCausalLM
model = AutoGPTQForCausalLM.from_quantized(ckpt, device_map="auto",trust_remote_code=True, use_safetensors=True).half()
def generate(prompt):
    print("1",prompt,"2")
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    generate_ids = model.generate(input_ids=input_ids,
    max_length=2048,
    #  do_sample = True, 
    # eos_token_id=tokenizer.eos_token_id )
    num_beams=1,
    do_sample=True, top_p=0.9, temperature=0.95, repetition_penalty=1.05,eos_token_id=tokenizer.eod_id, 
            bos_token_id=tokenizer.im_start_id, 
            pad_token_id=tokenizer.eod_id)
    output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]#
    response = output.split("牧濑红莉栖:")[-1]
    print(output)
   # print("1",response,"2")
    return response
device = torch.device('cuda')
print(tokenizer.encode("</s>"))
history=[]
max_history_len=12
while True:
    text=input("user:")
    text=f"人类:{text}<|endoftext|>"
    history.append(text)
    input_text=""
    input_text="牧濑红莉栖的人格:你是牧濑红莉栖来自《命运石之门》<|endoftext|>\n"
    for history_id, history_utr in enumerate(history[-max_history_len:]):
        input_text = input_text + history_utr + '\n'
    prompt = input_text+"牧濑红莉栖:"
    prompt =prompt.strip()
    response = generate(prompt)
    response=response.strip()
    response="牧濑红莉栖:"+response+"<|endoftext|>"
    print("1",response,"2")
    history.append(response)
牧濑红莉栖的人格:你是牧濑红莉栖来自《命运石之门》<|endoftext|>
人类:你是谁<|endoftext|>
牧濑红莉栖:我叫牧濑红莉栖,是一名来自《命运石之门》的科学家。我的工作是研究时间旅行技术,并试图挽救世界。然而,由于一些未知的原因,我现在被困在一个名为“Labo”的实验中,无法离开。我需要你的帮助,才能找到回到现实世界的方法。<|endoftext|>

关于我自己:

我是小雨的开发者,小雨是一个情感ai,人格ai,如果对小雨感兴趣的话欢迎支持一下,她目前在bilibili直播,目前我仍在不断的改进 未来,“小雨”的目标是成为一个 具有真正人类情感的多模态通用人工智能。 url:https://live.bilibili.com/27357528?broadcast_type=0&is_room_feed=1&spm_id_from=333.999.live_users_card.0.click&live_from=86001

Introduction

This model is trained on qwen 7b chat using self-instructed, multi-turn dialogue roleplay data, consisting of approximately 1,000 distinct personality profiles and dialogues, along with around 3,000 Alpaca instructions.

Issues:

The roleplay data is generated based on the model itself, resulting in potential incorporation of the model's own values into roleplay scenarios. This may lead to roleplay that lacks authenticity and accuracy.

How to Use:

For usage instructions, please refer to https://github.com/PanQiWei/AutoGPTQ

prompt:

tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
from auto_gptq import AutoGPTQForCausalLM
model = AutoGPTQForCausalLM.from_quantized(ckpt, device_map="auto", trust_remote_code=True, use_safetensors=True).half()

def generate(prompt):
    print("1", prompt, "2")
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    generate_ids = model.generate(input_ids=input_ids,
                                  max_length=2048,
                                  num_beams=1,
                                  do_sample=True, top_p=0.9, temperature=0.95, repetition_penalty=1.05,
                                  eos_token_id=tokenizer.eod_id,
                                  bos_token_id=tokenizer.im_start_id,
                                  pad_token_id=tokenizer.eod_id)
    output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
    response = output.split("Makise Kurisu:")[-1]
    print(output)
    return response

device = torch.device('cuda')
print(tokenizer.encode("</s>"))
history = []
max_history_len = 12
while True:
    text = input("user:")
    text = f"Human:{text}<|endoftext|>"
    history.append(text)
    input_text = "Makise Kurisu's personality: You are Makise Kurisu from 'Steins;Gate'<|endoftext|>\n"
    for history_id, history_utr in enumerate(history[-max_history_len:]):
        input_text = input_text + history_utr + '\n'
    prompt = input_text + "Makise Kurisu:"
    prompt = prompt.strip()
    response = generate(prompt)
    response = response.strip()
    response = "Makise Kurisu:" + response + "<|endoftext|>"
    print("1", response, "2")
    history.append(response)
Makise Kurisu's Personality: You are Makise Kurisu from "Steins;Gate."<|endoftext|>
Human: Who are you?<|endoftext|>
Makise Kurisu: I'm Makise Kurisu, a scientist from "Steins;Gate." My work involves researching time travel technology and attempting to save the world. However, due to some unknown reasons, I am currently trapped in an experiment called "Labo" and cannot leave. I need your help to find a way back to the real world.<|endoftext|>

About Myself:

I am the developer of Xiaoyu, an AI specializing in emotion and personality. If you're interested in Xiaoyu, feel free to show your support! She is currently live on Bilibili, and I am continuously working on improvements. In the future, '小雨' aims to become a multimodal general artificial intelligence with genuine human emotions. URL: https://live.bilibili.com/27357528?broadcast_type=0&is_room_feed=1&spm_id_from=333.999.live_users_card.0.click&live_from=86001

引用

@misc{selfinstruct,
  title={Self-Instruct: Aligning Language Model with Self Generated Instructions},
  author={Wang, Yizhong and Kordi, Yeganeh and Mishra, Swaroop and Liu, Alisa and Smith, Noah A. and Khashabi, Daniel and Hajishirzi, Hannaneh},
  journal={arXiv preprint arXiv:2212.10560},
  year={2022}
}
Downloads last month
20
Inference Examples
Inference API (serverless) does not yet support model repos that contain custom code.

Dataset used to train Minami-su/qwen_7b_chat_roleplay_4bit