A question generation model trained on the SQuAD dataset.
Example usage:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "alinet/bart-base-squad-qg"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

def run_model(input_string, **generator_args):
    # Tokenize the context passage, generate a question, and decode it.
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    print(output)

run_model(
    "Stanford Question Answering Dataset (SQuAD) is a reading comprehension "
    "dataset, consisting of questions posed by crowdworkers on a set of "
    "Wikipedia articles, where the answer to every question is a segment of "
    "text, or span, from the corresponding reading passage, or the question "
    "might be unanswerable.",
    max_length=32,
    num_beams=4,
)
# ['What is the Stanford Question Answering Dataset?']
```
Dataset used to train alinet/bart-base-squad-qg: SQuAD
Evaluation results (self-reported)

| Dataset      | BERTScore F1 | BERTScore Precision | BERTScore Recall |
|--------------|--------------|---------------------|------------------|
| MRQA         | 0.682        | 0.692               | 0.676            |
| Spoken-SQuAD | 0.604        | 0.596               | 0.615            |