Update README.md
README.md CHANGED
```diff
@@ -25,7 +25,25 @@ Fine-tuned on a synthetic dataset of curated long-context text and `GPT-3.5-turb
 
 Try it in [gradio demo](https://huggingface.co/spaces/pszemraj/document-summarization) | [.md with example outputs](evals-outputs/GAUNTLET.md) (gauntlet)
 
-##
+## Usage
+
+It's recommended to use this model with [beam search decoding](https://huggingface.co/docs/transformers/generation_strategies#beamsearch-decoding). If interested, you can also use the `textsum` [util repo](https://github.com/pszemraj/textsum) to have most of this abstracted out for you:
+
+```bash
+pip install -U textsum
+```
+
+```python
+from textsum.summarize import Summarizer
+
+model_name = "pszemraj/long-t5-tglobal-base-synthsumm_direct"
+summarizer = Summarizer(model_name)  # GPU auto-detected
+text = "put the text you don't want to read here"
+summary = summarizer.summarize_string(text)
+print(summary)
+```
+
+## Details
 
 This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the None dataset.
 It achieves the following results on the evaluation set:
```
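The new Usage section recommends beam search decoding but only demonstrates the `textsum` wrapper. As a complement, here is a minimal sketch of calling the checkpoint directly through `transformers` with beam search; the specific generation settings (`num_beams=4`, `max_new_tokens=256`, `no_repeat_ngram_size=3`) are illustrative assumptions rather than values taken from the model card.

```python
# Minimal sketch: beam search decoding with the checkpoint via transformers.
# Generation parameters are illustrative, not values from the model card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "pszemraj/long-t5-tglobal-base-synthsumm_direct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "put the text you don't want to read here"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Beam search: keep num_beams candidate sequences and return the best-scoring one.
summary_ids = model.generate(
    **inputs,
    num_beams=4,
    max_new_tokens=256,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```

Raising `num_beams` trades decoding speed for (usually) slightly better summaries; `textsum` abstracts this kind of `generate` call behind its `Summarizer` class.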