
COMET-T5 ja

A T5 model fine-tuned on a Japanese version of ATOMIC (ATOMIC ja) with a text-to-text language modeling objective. It was introduced in the paper cited in the BibTeX entry below.

How to use

You can use this model directly with a pipeline for text2text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text2text-generation', model='nlp-waseda/comet-t5-base-japanese')
>>> set_seed(42)
>>> # The prompt reads: "What is likely to happen after the following event: X calls a friend"
>>> generator("次の出来事の後に起こりうることは何ですか: Xが友人に電話する", max_length=30, num_return_sequences=5, do_sample=True)

[{'generated_text': 'Xが友人から返事を得る'},
 {'generated_text': 'Xが会話する'},
 {'generated_text': 'Xが友人に怒られる'},
 {'generated_text': 'Xが退屈しそうな雰囲気になる'},
 {'generated_text': 'Xが友人と会う'}]

(The generations read roughly: "X gets a reply from the friend", "X has a conversation", "X gets scolded by the friend", "the mood turns to one where X seems bored", "X meets the friend".)
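
If you need more control than the pipeline offers, the checkpoint can also be loaded through the lower-level seq2seq classes. A minimal sketch, assuming the standard Hugging Face T5 interface applies to this checkpoint:

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/comet-t5-base-japanese")
model = AutoModelForSeq2SeqLM.from_pretrained("nlp-waseda/comet-t5-base-japanese")

prompt = "次の出来事の後に起こりうることは何ですか: Xが友人に電話する"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample five continuations, mirroring the pipeline call above.
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=30, do_sample=True, num_return_sequences=5)

for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)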

Preprocessing

The prompts are different for each relation:

| Relation | Prompt | English gloss |
|----------|--------|---------------|
| xNeed | 次の出来事に必要な前提条件は何ですか: | "What preconditions are necessary for the following event?" |
| xEffect | 次の出来事の後に起こりうることは何ですか: | "What is likely to happen after the following event?" |
| xIntent | 次の出来事が起こった動機は何ですか: | "What was the motivation for the following event?" |
| xReact | 次の出来事の後に感じることは何ですか: | "How does one feel after the following event?" |
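
Since each relation maps to a fixed prompt prefix, query construction can be wrapped in a small helper. The following is a minimal sketch; RELATION_PROMPTS and build_prompt are illustrative names, not part of the released code:

from transformers import pipeline, set_seed

# Prompt prefixes from the table above, keyed by ATOMIC relation name.
RELATION_PROMPTS = {
    "xNeed":   "次の出来事に必要な前提条件は何ですか: ",
    "xEffect": "次の出来事の後に起こりうることは何ですか: ",
    "xIntent": "次の出来事が起こった動機は何ですか: ",
    "xReact":  "次の出来事の後に感じることは何ですか: ",
}

def build_prompt(relation: str, event: str) -> str:
    """Prepend the relation-specific prompt to an event description."""
    return RELATION_PROMPTS[relation] + event

generator = pipeline("text2text-generation", model="nlp-waseda/comet-t5-base-japanese")
set_seed(42)
print(generator(build_prompt("xNeed", "Xが友人に電話する"), max_length=30))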

Evaluation results

The model achieves the following results:

| BLEU | BERTScore |
|------|-----------|
| 39.85 | 82.37 |
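
The card does not specify the evaluation data or settings behind these numbers. As a hedged illustration only, metrics of this kind can be computed with the third-party sacrebleu and bert-score packages; the hypothesis and reference lists below are placeholders:

import sacrebleu
from bert_score import score

# Placeholder generations and gold references; the data actually used for
# the scores above is not described in this card.
hypotheses = ["Xが友人から返事を得る"]
references = ["Xが友人と話す"]

# Japanese BLEU typically needs morphological tokenization; "ja-mecab"
# requires installing sacrebleu's Japanese extra (MeCab).
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="ja-mecab")

# BERTScore with a Japanese-capable backbone selected via lang="ja".
P, R, F1 = score(hypotheses, references, lang="ja")

print(f"BLEU: {bleu.score:.2f}  BERTScore F1: {F1.mean().item() * 100:.2f}")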

BibTeX entry and citation info

@InProceedings{ide_nlp2023_event,
    author =    "井手竜也 and 村田栄樹 and 堀尾海斗 and 河原大輔 and 山崎天 and 李聖哲 and 新里顕大 and 佐藤敏紀",
    title =     "人間と言語モデルに対するプロンプトを用いたゼロからのイベント常識知識グラフ構築",
    booktitle = "言語処理学会第29回年次大会",
    year =      "2023",
    url =       "https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B2-5.pdf"
}

(The title translates to "Building an Event Commonsense Knowledge Graph from Scratch with Prompts for Humans and Language Models"; the venue is the 29th Annual Meeting of the Association for Natural Language Processing, 2023.)