# Model Card

## Example Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained('jeffbritts/abstracts_to_post_model', revision=None)  # Load tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('jeffbritts/abstracts_to_post_model', revision=None)  # Load model
pipe = pipeline('text2text-generation', model=model, tokenizer=tokenizer, pad_token_id=tokenizer.pad_token_id)

inputs = ["Acknowledgments\n\nThank you to all our donors. Their input was invaluable, and many of them have kept this program active. I really appreciate some privacy concerns with these papers and the paper itself. However, thank you to my research team for helping get the entire research protocol up and running since 2010. It's been absolutely stunning for me to be a part of such a small organization, but when something like this happens, it is such a huge deal. It means it's hard not to get involved.\n\nYou will also get a new Open Science Foundation letter if you donate and support NLP. I know I am more than qualified to help you in any way you get involved. Thank you in advance.\n\nAs an additional thanks-good-ness, at the risk of repeating some of a large list, I will do an accompanying Google Hangout. The Hangout is where you can send an email at nlp-doc@umass-edu. In my time as a speaker, we'll do an ongoing Hangout video series and maybe even a live talk. The original YouTube channel is hosted here.\n\nIf you have any questions or concerns or would like to talk to a team member, write to my Open Science Committee through this website below or send your comments directly to me. Thanks."]
print(pipe(inputs, max_length=512, do_sample=False))
```
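For a list of inputs, the pipeline returns one dict per input string, with the model's output stored under the `generated_text` key. A minimal sketch of unpacking it, continuing from the snippet above:

```python
# Each element of the pipeline's output is a dict like
# {'generated_text': '...'}; print only the generated posts.
for result in pipe(inputs, max_length=512, do_sample=False):
    print(result['generated_text'])
```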
This model was trained on a synthetic dataset generated with DataDreamer 🤖💤. The synthetic dataset card and model card can be found here. The training arguments can be found here.
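For context, DataDreamer pipelines of this kind typically synthesize input/output pairs with a teacher LLM and then fine-tune a seq2seq model on those pairs. The sketch below is modeled on DataDreamer's documented abstracts-to-tweet example; the teacher model, prompts, dataset size, and hyperparameters are illustrative assumptions, not a record of how this particular checkpoint was produced:

```python
# A minimal sketch, modeled on DataDreamer's documented
# abstracts -> social post example. The teacher LLM, prompts,
# dataset size, and hyperparameters are illustrative assumptions.
from datadreamer import DataDreamer
from datadreamer.llms import OpenAI
from datadreamer.steps import DataFromPrompt, ProcessWithPrompt
from datadreamer.trainers import TrainHFFineTune

with DataDreamer('./output'):
    llm = OpenAI(model_name='gpt-4')  # Assumed teacher model

    # Synthesize paper abstracts with the teacher LLM.
    abstracts = DataFromPrompt(
        'Generate Research Paper Abstracts',
        args={
            'llm': llm,
            'n': 1000,
            'instruction': 'Generate an arXiv abstract of an NLP research paper.',
        },
        outputs={'generations': 'abstracts'},
    )

    # Pair each abstract with a short social media post.
    abstracts_and_posts = ProcessWithPrompt(
        'Generate Posts from Abstracts',
        inputs={'inputs': abstracts.output['abstracts']},
        args={
            'llm': llm,
            'instruction': 'Given the abstract, write a social media post summarizing the work.',
        },
        outputs={'inputs': 'abstracts', 'generations': 'posts'},
    )

    # Split the synthetic pairs and fine-tune a seq2seq model on them.
    splits = abstracts_and_posts.splits(train_size=0.90, validation_size=0.10)
    trainer = TrainHFFineTune('Train Abstracts => Post Model', model_name='google/t5-v1_1-base')
    trainer.train(
        train_input=splits['train'].output['abstracts'],
        train_output=splits['train'].output['posts'],
        validation_input=splits['validation'].output['abstracts'],
        validation_output=splits['validation'].output['posts'],
        epochs=30,
        batch_size=8,
    )
```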
## Base Model

This model is fine-tuned from google/t5-v1_1-base.