Vijayendra committed on
Commit
502a2a0
1 Parent(s): d532d72

Update README.md

Files changed (1)
  1. README.md +14 -34
README.md CHANGED
@@ -1,40 +1,23 @@
 ---
-library_name: transformers
-tags: []
+license: mit
+language:
+- en
+base_model:
+- google-t5/t5-base
+datasets:
+- abisee/cnn_dailymail
+metrics:
+- rouge
 ---
+# T5-Base-Sum
 
-# Model Card for Model ID
+This model is a fine-tuned version of `T5` for summarization tasks. It was trained on various articles and is hosted on Hugging Face for easy access and use.
 
-<!-- Provide a quick summary of what the model is/does. -->
+## Model Usage
 
+Below is an example of how to load and use this model for summarization:
 
-
-## Model Details
-
-### Model Description
-
-<!-- Provide a longer summary of what this model is. -->
-
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
-
-### Model Sources [optional]
-
-<!-- Provide the basic links for the model. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
-
-## Uses
-''' Python
+```python
 import torch
 from transformers import T5Tokenizer, T5ForConditionalGeneration
 
@@ -60,7 +43,6 @@ input_prompts = [
 ]
 
 # Generate responses
-
 generated_responses = {}
 for prompt in input_prompts:
     inputs = tokenizer(prompt, return_tensors="pt", max_length=400, truncation=True, padding="max_length").to(device)
@@ -82,8 +64,6 @@ for prompt in input_prompts:
     generated_responses[prompt] = generated_text
 
 # Display the input prompts and the generated responses
-
 for prompt, response in generated_responses.items():
     print(f"Prompt: {prompt}")
     print(f"Response: {response}\n")
-
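
The diff shows the README's usage snippet only in fragments: the hunks skip the lines that load the checkpoint and call `generate`. The following is a minimal self-contained sketch of the full flow, not the author's exact code. The `google-t5/t5-base` id is taken from the front matter's `base_model` field and stands in for the fine-tuned repo id; the `summarize:` task prefix, the beam-search settings, and the sample prompt are assumptions that do not appear in the diff.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Assumption: the base checkpoint named in the front matter; substitute the
# fine-tuned repo id when using the actual model.
CHECKPOINT = "google-t5/t5-base"

def build_prompt(article: str) -> str:
    # T5 checkpoints trained for summarization conventionally expect a task prefix.
    return "summarize: " + article.strip()

def summarize(model, tokenizer, device, article: str, max_new_tokens: int = 120) -> str:
    # Mirrors the tokenizer call shown in the diff hunks.
    inputs = tokenizer(build_prompt(article), return_tensors="pt",
                       max_length=400, truncation=True,
                       padding="max_length").to(device)
    with torch.no_grad():
        # Beam-search settings are assumptions, not taken from the diff.
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens,
                                    num_beams=4, early_stopping=True)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    tokenizer = T5Tokenizer.from_pretrained(CHECKPOINT)
    model = T5ForConditionalGeneration.from_pretrained(CHECKPOINT).to(device)

    input_prompts = ["Long news article text goes here ..."]  # placeholder prompt
    generated_responses = {}
    for prompt in input_prompts:
        generated_responses[prompt] = summarize(model, tokenizer, device, prompt)
    for prompt, response in generated_responses.items():
        print(f"Prompt: {prompt}")
        print(f"Response: {response}\n")
```

Keeping the generation logic in a `summarize` helper makes it easy to swap in the fine-tuned checkpoint id once it is known, without touching the loop that mirrors the README's `generated_responses` structure.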