Model Card for "calm2-7b-chat-dpo-experimental"

This model was trained from cyberagent/calm2-7b-chat with Direct Preference Optimization (DPO) on the cyberagent/chatbot-arena-ja-calm2-7b-chat-experimental dataset. Low-Rank Adaptation (LoRA) was used for the DPO training.
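DPO optimizes the policy directly on preference pairs, with no separate reward model. As an illustrative sketch (not the training code used for this model), the per-example DPO loss over summed log-probabilities of a chosen and a rejected response can be written as:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * margin), where the margin
    measures how much more the policy prefers the chosen response over the
    rejected one, relative to the frozen reference model."""
    margin = ((policy_chosen_logp - policy_rejected_logp)
              - (ref_chosen_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Positive margin (policy agrees with the preference) -> low loss;
# negative margin (policy prefers the rejected response) -> high loss.
print(dpo_loss(-10.0, -20.0, -12.0, -15.0))
print(dpo_loss(-15.0, -12.0, -12.0, -15.0))
```

The log-probability arguments here are hypothetical values for illustration; in practice they come from the model's token log-likelihoods over each response.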

Requirements, Usage, Chat Template

Same as cyberagent/calm2-7b-chat: the model can be run with the same code and prompt format.

import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# The chat template requires transformers >= 4.34.1.
assert transformers.__version__ >= "4.34.1"

# device_map="auto" places weights on available GPU(s);
# torch_dtype="auto" uses the checkpoint's native dtype.
model = AutoModelForCausalLM.from_pretrained("cyberagent/calm2-7b-chat-dpo-experimental", device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b-chat-dpo-experimental")
# Stream generated tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = """USER: AIによって私達の暮らしはどのように変わりますか?
ASSISTANT: """

token_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(
    input_ids=token_ids.to(model.device),
    max_new_tokens=300,
    do_sample=True,
    temperature=0.8,
    streamer=streamer,
)
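The USER:/ASSISTANT: prompt format above extends to multi-turn conversations by concatenating turns, ending with a trailing "ASSISTANT: " so the model generates the next reply. `build_prompt` below is an illustrative helper, not part of the released code:

```python
def build_prompt(turns):
    """Build a calm2-7b-chat style prompt from (role, text) turns.

    The trailing 'ASSISTANT: ' asks the model to produce the next reply.
    """
    lines = [f"{role}: {text}" for role, text in turns]
    return "\n".join(lines) + "\nASSISTANT: "

# Single-turn prompt, equivalent to the example above.
prompt = build_prompt([
    ("USER", "AIによって私達の暮らしはどのように変わりますか?"),
])
print(prompt)
```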

Experimental Results

ELYZA-tasks-100 (GPT-4 eval)

To avoid randomness in the results, outputs were generated with greedy search.

| calm2-7b-chat | calm2-7b-chat-dpo |
|---|---|
| 2.67 | 2.85 |
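Greedy search corresponds to passing `do_sample=False` to `generate` (instead of `do_sample=True` as in the usage example above): the highest-probability token is taken at every step, so repeated runs give identical outputs. Conceptually:

```python
def greedy_decode(step_logits):
    """Pick the argmax token id at every step; deterministic, no sampling."""
    return [max(range(len(logits)), key=logits.__getitem__)
            for logits in step_logits]

# Two decoding steps over a toy 3-token vocabulary.
print(greedy_decode([[0.1, 0.7, 0.2], [0.6, 0.3, 0.1]]))  # [1, 0]
```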

Japanese MT-Bench

calm2-7b-chat-dpo and calm2-7b-chat were evaluated using the following sentence as the system prompt (system_message):

"以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
(English: "Below is a combination of an instruction describing a task and input providing context. Write a response that appropriately fulfills the request.")

This system prompt is the same one used when evaluating stabilityai/japanese-stablelm-instruct-alpha-7b. All other decoding parameters were left at their defaults (so the results include randomness).

| | calm2-7b-chat | calm2-7b-chat-dpo |
|---|---|---|
| average | 6.1 | 6.7 |
| extraction | 4.1 | 5.4 |
| humanities | 8.2 | 8.4 |
| reasoning | 3.9 | 4.3 |
| roleplay | 6.4 | 7.0 |
| stem | 6.3 | 6.2 |
| writing | 7.7 | 9.1 |

Releases

1.0: v1 release (Jan 24, 2024)

Author

Yuu Jinnai ([email protected]), Standing on the shoulders of giants

Reference

For details of this model, please refer to the following paper.

Yuu Jinnai. 2024. Does Cross-Cultural Alignment Change the Commonsense Morality of Language Models?. In Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP, pages 48–64, Bangkok, Thailand. Association for Computational Linguistics.

@inproceedings{jinnai-2024-cross,
    title = "Does Cross-Cultural Alignment Change the Commonsense Morality of Language Models?",
    author = "Jinnai, Yuu",
    editor = "Prabhakaran, Vinodkumar  and
      Dev, Sunipa  and
      Benotti, Luciana  and
      Hershcovich, Daniel  and
      Cabello, Laura  and
      Cao, Yong  and
      Adebara, Ife  and
      Zhou, Li",
    booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.c3nlp-1.5",
    pages = "48--64",
}