---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Set the device (use GPU if available)
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load the model and tokenizer from Hugging Face
tokenizer = T5Tokenizer.from_pretrained("Vijayendra/T5-base-ddg")
model = T5ForConditionalGeneration.from_pretrained("Vijayendra/T5-base-ddg").to(device)
model.eval()

# Define your prompts
input_prompts = [
    "I am having a bad day at work",
    "What should I do about my stress?",
    "How can I improve my productivity?",
    "I'm feeling very anxious today",
    "What is the best way to learn new skills?",
    "How do I deal with failure?",
    "What do you think about the future of technology?",
    "I want to improve my communication skills",
    "How can I stay motivated at work?",
    "What is the meaning of life?"
]

# Generate a response for each prompt
generated_responses = {}
for prompt in input_prompts:
    inputs = tokenizer(prompt, return_tensors="pt", max_length=400,
                       truncation=True, padding="max_length").to(device)

    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs['input_ids'],
            attention_mask=inputs['attention_mask'],
            max_length=40,
            num_beams=7,
            repetition_penalty=2.5,
            length_penalty=2.0,
            early_stopping=True
        )

    # Decode the generated response
    generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True,
                                      clean_up_tokenization_spaces=True)
    generated_responses[prompt] = generated_text

# Display the input prompts and the generated responses
for prompt, response in generated_responses.items():
    print(f"Prompt: {prompt}")
    print(f"Response: {response}\n")
```
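
For many prompts, batching them into a single `generate` call is usually faster than looping one prompt at a time. The sketch below is a minimal variant of the example above, assuming dynamic padding (`padding=True`) is acceptable in place of the fixed `max_length=400` padding; the generation settings are copied from the example and are not separately tuned.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = T5Tokenizer.from_pretrained("Vijayendra/T5-base-ddg")
model = T5ForConditionalGeneration.from_pretrained("Vijayendra/T5-base-ddg").to(device)
model.eval()

prompts = [
    "I am having a bad day at work",
    "How do I deal with failure?",
]

# Tokenize all prompts at once; dynamic padding pads only to the longest
# prompt in the batch (an assumption on our part, not the card's fixed
# max_length=400 padding).
inputs = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True).to(device)

with torch.no_grad():
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_length=40,
        num_beams=7,
        repetition_penalty=2.5,
        length_penalty=2.0,
        early_stopping=True,
    )

# Decode the whole batch in one call
responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
for prompt, response in zip(prompts, responses):
    print(f"Prompt: {prompt}")
    print(f"Response: {response}\n")
```

Note that beam search with `num_beams=7` keeps seven candidate sequences per prompt, so memory use scales with batch size times beam count; on a small GPU, reduce the batch or the number of beams.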