---
license: mit
language:
  - en
pipeline_tag: text-classification
---

# Model Card for NegBLEURT

This model is a negation-aware version of the BLEURT metric for evaluating generated text.

## Direct Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "tum-nlp/NegBLEURT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

references = ["Ray Charles is legendary.", "Ray Charles is legendary."]
candidates = ["Ray Charles is a legend.", "Ray Charles isn’t legendary."]

# Score each candidate against its reference sentence.
tokenized = tokenizer(references, candidates, return_tensors='pt', padding=True)
print(model(**tokenized).logits)
# returns scores 0.8409 and 0.2601 for the two candidates
```
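
The raw logits above are the metric scores themselves. For plain inference you can wrap the forward pass in `torch.no_grad()` and flatten the logits into Python floats; a minimal sketch (the `torch` import and the flattening step are additions to the snippet above, not part of the card):

```python
import torch

# No gradients are needed for scoring; flatten the (batch, 1) logits to floats.
with torch.no_grad():
    scores = model(**tokenized).logits.squeeze(-1).tolist()

print(scores)  # roughly [0.84, 0.26] for the two candidates above
```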

## Use with pipeline

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="tum-nlp/NegBLEURT")

# Each inner [reference, candidate] list is passed to the model as a text pair.
pairwise_input = [
  [["Ray Charles is legendary.", "Ray Charles is a legend."]],
  [["Ray Charles is legendary.", "Ray Charles isn’t legendary."]]
]
print(pipe(pairwise_input, function_to_apply="none"))
# returns [{'label': 'NegBLEURT_score', 'score': 0.8408917784690857}, {'label': 'NegBLEURT_score', 'score': 0.26007288694381714}]
```
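
If you only need the numbers, for example to average NegBLEURT over a whole test set, you can pull the scores out of the pipeline output. A small sketch assuming `references` and `candidates` are equal-length lists of strings; the helper name `corpus_negbleurt` is made up for illustration:

```python
def corpus_negbleurt(references, candidates):
    """Mean NegBLEURT score over aligned reference/candidate pairs (illustrative helper)."""
    pairs = [[[ref, cand]] for ref, cand in zip(references, candidates)]
    outputs = pipe(pairs, function_to_apply="none")
    return sum(out["score"] for out in outputs) / len(outputs)

print(corpus_negbleurt(
    ["Ray Charles is legendary.", "Ray Charles is legendary."],
    ["Ray Charles is a legend.", "Ray Charles isn’t legendary."],
))  # mean of the two scores shown above
```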

## Training Details

The model is a fine-tuned version of the bleurt-tiny checkpoint from the official BLEURT repository. It was fine-tuned on the train split of the CANNOT dataset for 500 steps, using the fine-tuning script provided by BLEURT.
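
For orientation only, the sketch below shows what a roughly equivalent fine-tuning setup could look like with `transformers.Trainer`. The released model was trained with the official BLEURT (TensorFlow) script instead, and every identifier below that is not mentioned in the card (the checkpoint path, the dataset path, the column names `reference`, `candidate`, `score`, and the batch size and learning rate) is a placeholder assumption:

```python
# Illustrative only: the released model was trained with the official BLEURT
# fine-tuning script, not with this code. Paths and column names are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_checkpoint = "path/to/bleurt-tiny"  # converted bleurt-tiny checkpoint (placeholder)
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    base_checkpoint, num_labels=1, problem_type="regression"
)

dataset = load_dataset("path/to/cannot", split="train")  # CANNOT train split (placeholder)

def preprocess(batch):
    # Tokenize reference/candidate pairs and attach the float target score.
    enc = tokenizer(batch["reference"], batch["candidate"],
                    truncation=True, padding="max_length", max_length=128)
    enc["labels"] = [float(x) for x in batch["score"]]
    return enc

tokenized_ds = dataset.map(preprocess, batched=True,
                           remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="negbleurt-finetune",
    max_steps=500,                    # the card reports 500 fine-tuning steps
    per_device_train_batch_size=32,   # placeholder hyperparameters
    learning_rate=1e-5,
    logging_steps=50,
)

Trainer(model=model, args=args, train_dataset=tokenized_ds,
        tokenizer=tokenizer).train()
```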

## Citation

Please cite our INLG 2023 paper if you use our model. BibTeX: tba