---
license: mit
language:
  - en
base_model:
  - google-t5/t5-base
datasets:
  - abisee/cnn_dailymail
metrics:
  - rouge
---

# T5-Base-Sum

This model is a fine-tuned version of T5-base for summarization. It was trained on news articles from the CNN/DailyMail dataset and is hosted on Hugging Face for easy access and use.

## Model Usage

Below is an example of how to load and use this model for summarization:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Set the device (use GPU if available)
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the model and tokenizer from Hugging Face
tokenizer = T5Tokenizer.from_pretrained("Vijayendra/T5-base-ddg")
model = T5ForConditionalGeneration.from_pretrained("Vijayendra/T5-base-ddg").to(device)
model.eval()  # inference mode: disable dropout

# Define your prompts
input_prompts = [
    "I am having a bad day at work",
    "What should I do about my stress?",
    "How can I improve my productivity?",
    "I'm feeling very anxious today",
    "What is the best way to learn new skills?",
    "How do I deal with failure?",
    "What do you think about the future of technology?",
    "I want to improve my communication skills",
    "How can I stay motivated at work?",
    "What is the meaning of life?"
]

# Generate a response for each prompt
generated_responses = {}
for prompt in input_prompts:
    inputs = tokenizer(
        prompt, return_tensors="pt", max_length=400,
        truncation=True, padding="max_length"
    ).to(device)

    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            max_length=40,
            num_beams=7,
            repetition_penalty=2.5,
            length_penalty=2.0,
            early_stopping=True,
        )

    # Decode the generated token IDs back into text
    generated_text = tokenizer.decode(
        generated_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True
    )
    generated_responses[prompt] = generated_text

# Display the input prompts and the generated responses
for prompt, response in generated_responses.items():
    print(f"Prompt: {prompt}")
    print(f"Response: {response}\n")
```
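Note that T5 checkpoints are conventionally conditioned on a task prefix (for example, `"summarize: "` for summarization). Whether this particular fine-tune expects a prefix depends on how it was trained, so the helper below is only an illustrative sketch with a hypothetical function name:

```python
def build_t5_input(text: str, prefix: str = "summarize: ") -> str:
    """Prepend a T5-style task prefix to the raw input text.

    Hypothetical helper for illustration; the prefix (if any) that this
    checkpoint expects depends on its fine-tuning setup.
    """
    return prefix + text.strip()

# Example: wrap an article before tokenizing it
article = "The quick brown fox jumps over the lazy dog."
print(build_t5_input(article))  # summarize: The quick brown fox jumps over the lazy dog.
```

If your generations look off, try both with and without the prefix and compare outputs.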
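The metadata lists ROUGE as the evaluation metric. In practice you would use a maintained implementation (such as Hugging Face's `evaluate` library), but to make the metric concrete, here is a minimal, simplified sketch of ROUGE-1 F1 (unigram overlap between a candidate summary and a reference):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap, no stemming
    or tokenization beyond whitespace splitting."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Count each candidate word at most as often as it appears in the reference
    overlap = sum(min(count, ref[word]) for word, count in cand.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
```

Real ROUGE scoring also handles stemming and ROUGE-2/ROUGE-L variants, so use a library implementation when reporting numbers.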