---
languages: en
license:
- apache-2.0
- bsd-3-clause
datasets:
- kmfoda/booksum
tags:
- summarization
- summary
- booksum
- long-document
- long-form
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
    because it takes time to accumulate the strain energy for the rupture. The rates
    at which tectonic plates move and accumulate strain at their boundaries are approximately
    uniform. Therefore, in first approximation, one may expect that large ruptures
    of the same fault segment will occur at approximately constant time intervals.
    If subsequent main shocks have different amounts of slip across the fault, then
    the recurrence time may vary, and the basic idea of periodic mainshocks must be
    modified. For great plate boundary ruptures the length and slip often vary by
    a factor of 2. Along the southern segment of the San Andreas fault the recurrence
    interval is 145 years with variations of several decades. The smaller the standard
    deviation of the average recurrence interval, the more specific could be the long
    term prediction of a future mainshock.
  example_title: earthquakes
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\ | |
\ are fed into a neural network that predicts values in the reconstructed domain.\ | |
\ Then, this domain is mapped to the sensor domain where sensor measurements are\ | |
\ available as supervision. Class and Section Problems Addressed Generalization\ | |
\ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\ | |
\ Representations (Section 3) Computation & memory efficiency, representation\ | |
\ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\ | |
\ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\ | |
\ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\ | |
\ of techniques in the neural field toolbox each addresses problems that arise\ | |
\ in learning, inference, and control. (Section 3). We can supervise reconstruction\ | |
\ via differentiable forward maps that transform Or project our domain (e.g, 3D\ | |
\ reconstruction via 2D images; Section 4) With appropriate network architecture\ | |
\ choices, we can overcome neural network spectral biases (blurriness) and efficiently\ | |
\ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\ | |
\ fields to add constraints and regularizations, and to achieve editable representations\ | |
\ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\ | |
\ to help solve problems with neural fields There are three components in a conditional\ | |
\ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\ | |
\ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\ | |
\ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\ | |
\ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\ | |
\ field itself $. The encoder \u20AC finds the most probable z given the observations\ | |
\ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\ | |
\ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\ | |
\ schemes with different optimality guarantees (Section 2.1.1), both global and\ | |
\ local conditioning (Section 2.1.2), and different mapping functions Y (Section\ | |
\ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\ | |
\ shape given a partial or noisy point cloud. We need a suitable prior over the\ | |
\ sur- face in its reconstruction domain to generalize to the partial observations.\ | |
\ A neural network expresses a prior via the function space of its architecture\ | |
\ and parameters 0, and generalization is influenced by the inductive bias of\ | |
\ this function space (Section 5)." | |
example_title: scientific paper | |
- text: 'Is a else or outside the cob and tree written being of early client rope
    and you have is for good reasons. On to the ocean in Orange for time. By''s the
    aggregate we can bed it yet. Why this please pick up on a sort is do and also
    M Getoi''s nerocos and do rain become you to let so is his brother is made in
    use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
    Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
    be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
    As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
    task for this class and you might have already seen me in the first lecture where
    I made a quick appearance. I''m also going to give the tortillas in the last third
    of this course. So to give you a little bit about me, I''m a old student here
    with better Bulman and my research centres on casual inference applied to biomedical
    disasters, so that could be genomics or that could be hospital data. If any of
    you is interested in writing a bachelor thesis, a semester paper may be mastathesis
    about this topic feel for reach out to me. you have my name on models and my email
    address you can find in the directory I''d Be very happy to talk about it. you
    do not need to be sure about it, we can just have a chat. So with that said, let''s
    get on with the lecture. There''s an exciting topic today I''m going to start
    by sharing some slides with you and later on during the lecture we''ll move to
    the paper. So bear with me for a few seconds. Well, the projector is starting
    up. Okay, so let''s get started. Today''s topic is a very important one. It''s
    about a technique which really forms one of the fundamentals of data science,
    machine learning, and any sort of modern statistics. It''s called cross validation.
    I know you really want to understand this topic I Want you to understand this
    and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
    cross validation. So to set the stage for this, I Want to introduce you to the
    validation problem in computational statistics. So the problem is the following:
    You trained a model on available data. You fitted your model, but you know the
    training data you got could always have been different and some data from the
    environment. Maybe it''s a random process. You do not really know what it is,
    but you know that somebody else who gets a different batch of data from the same
    environment they would get slightly different training data and you do not care
    that your method performs as well. On this training data. you want to to perform
    well on other data that you have not seen other data from the same environment.
    So in other words, the validation problem is you want to quantify the performance
    of your model on data that you have not seen. So how is this even possible? How
    could you possibly measure the performance on data that you do not know The solution
    to? This is the following realization is that given that you have a bunch of data,
    you were in charge. You get to control how much that your model sees. It works
    in the following way: You can hide data firms model. Let''s say you have a training
    data set which is a bunch of doubtless so X eyes are the features those are typically
    hide and national vector. It''s got more than one dimension for sure. And the
    why why eyes. Those are the labels for supervised learning. As you''ve seen before,
    it''s the same set up as we have in regression. And so you have this training
    data and now you choose that you only use some of those data to fit your model.
    You''re not going to use everything, you only use some of it the other part you
    hide from your model. And then you can use this hidden data to do validation from
    the point of you of your model. This hidden data is complete by unseen. In other
    words, we solve our problem of validation.'
  example_title: transcribed audio - lecture
- text: "Transformer-based models have shown to be very useful for many NLP tasks.\ | |
\ However, a major limitation of transformers-based models is its O(n^2)O(n 2)\ | |
\ time & memory complexity (where nn is sequence length). Hence, it's computationally\ | |
\ very expensive to apply transformer-based models on long sequences n > 512n>512.\ | |
\ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention\ | |
\ try to remedy this problem by approximating the full attention matrix. You can\ | |
\ checkout \U0001F917's recent blog post in case you are unfamiliar with these\ | |
\ models.\nBigBird (introduced in paper) is one of such recent models to address\ | |
\ this issue. BigBird relies on block sparse attention instead of normal attention\ | |
\ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a\ | |
\ much lower computational cost compared to BERT. It has achieved SOTA on various\ | |
\ tasks involving very long sequences such as long documents summarization, question-answering\ | |
\ with long contexts.\nBigBird RoBERTa-like model is now available in \U0001F917\ | |
Transformers. The goal of this post is to give the reader an in-depth understanding\ | |
\ of big bird implementation & ease one's life in using BigBird with \U0001F917\ | |
Transformers. But, before going into more depth, it is important to remember that\ | |
\ the BigBird's attention is an approximation of BERT's full attention and therefore\ | |
\ does not strive to be better than BERT's full attention, but rather to be more\ | |
\ efficient. It simply allows to apply transformer-based models to much longer\ | |
\ sequences since BERT's quadratic memory requirement quickly becomes unbearable.\ | |
\ Simply put, if we would have \u221E compute & \u221E time, BERT's attention\ | |
\ would be preferred over block sparse attention (which we are going to discuss\ | |
\ in this post).\nIf you wonder why we need more compute when working with longer\ | |
\ sequences, this blog post is just right for you!\nSome of the main questions\ | |
\ one might have when working with standard BERT-like attention include:\nDo all\ | |
\ tokens really have to attend to all other tokens? Why not compute attention\ | |
\ only over important tokens? How to decide what tokens are important? How to\ | |
\ attend to just a few tokens in a very efficient way? In this blog post, we will\ | |
\ try to answer those questions.\nWhat tokens should be attended to? We will give\ | |
\ a practical example of how attention works by considering the sentence 'BigBird\ | |
\ is now available in HuggingFace for extractive question answering'. In BERT-like\ | |
\ attention, every word would simply attend to all other tokens.\nLet's think\ | |
\ about a sensible choice of key tokens that a queried token actually only should\ | |
\ attend to by writing some pseudo-code. Will will assume that the token available\ | |
\ is queried and build a sensible list of key tokens to attend to.\n>>> # let's\ | |
\ consider following sentence as an example >>> example = ['BigBird', 'is', 'now',\ | |
\ 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']\n\ | |
>>> # further let's assume, we're trying to understand the representation of 'available'\ | |
\ i.e. >>> query_token = 'available' >>> # We will initialize an empty `set` and\ | |
\ fill up the tokens of our interest as we proceed in this section. >>> key_tokens\ | |
\ = [] # => currently 'available' token doesn't have anything to attend Nearby\ | |
\ tokens should be important because, in a sentence (sequence of words), the current\ | |
\ word is highly dependent on neighboring past & future tokens. This intuition\ | |
\ is the idea behind the concept of sliding attention." | |
example_title: bigbird blog intro | |
- text: "To be fair, you have to have a very high IQ to understand Rick and Morty.\ | |
\ The humour is extremely subtle, and without a solid grasp of theoretical physics\ | |
\ most of the jokes will go over a typical viewer's head. There's also Rick's\ | |
\ nihilistic outlook, which is deftly woven into his characterisation- his personal\ | |
\ philosophy draws heavily from Narodnaya Volya literature, for instance. The\ | |
\ fans understand this stuff; they have the intellectual capacity to truly appreciate\ | |
\ the depths of these jokes, to realise that they're not just funny- they say\ | |
\ something deep about LIFE. As a consequence people who dislike Rick & Morty\ | |
\ truly ARE idiots- of course they wouldn't appreciate, for instance, the humour\ | |
\ in Rick's existential catchphrase 'Wubba Lubba Dub Dub,' which itself is a cryptic\ | |
\ reference to Turgenev's Russian epic Fathers and Sons. I'm smirking right now\ | |
\ just imagining one of those addlepated simpletons scratching their heads in\ | |
\ confusion as Dan Harmon's genius wit unfolds itself on their television screens.\ | |
\ What fools.. how I pity them. \U0001F602\nAnd yes, by the way, i DO have a Rick\ | |
\ & Morty tattoo. And no, you cannot see it. It's for the ladies' eyes only- and\ | |
\ even then they have to demonstrate that they're within 5 IQ points of my own\ | |
\ (preferably lower) beforehand. Nothin personnel kid \U0001F60E" | |
example_title: Richard & Mortimer | |
parameters:
  max_length: 64
  min_length: 4
  no_repeat_ngram_size: 3
  early_stopping: true
  length_penalty: 0.3
  repetition_penalty: 3.5
  encoder_no_repeat_ngram_size: 3
  num_beams: 1
model-index:
- name: pszemraj/pegasus-x-large-book-summary
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 33.1401
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 9.3095
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 24.8552
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 29.0391
      verified: true
    - name: loss
      type: loss
      value: 2.288182497024536
      verified: true
    - name: gen_len
      type: gen_len
      value: 45.2173
      verified: true
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: launch/gov_report
      type: launch/gov_report
      config: plain_text
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 39.7279
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 10.8944
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 19.7018
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 36.5634
      verified: true
    - name: loss
      type: loss
      value: 2.473011016845703
      verified: true
    - name: gen_len
      type: gen_len
      value: 212.8243
      verified: true
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: billsum
      type: billsum
      config: default
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 42.1065
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 15.4079
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 24.8814
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 36.0375
      verified: true
    - name: loss
      type: loss
      value: 1.9130958318710327
      verified: true
    - name: gen_len
      type: gen_len
      value: 179.2184
      verified: true
---
# pszemraj/pegasus-x-large-book-summary | |
[![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/pszemraj/6c326c0649233ab017d63adc36958d1a/pegasus-x-large-booksum-demo.ipynb) | |
Get SparkNotes-esque summaries of arbitrary text! Due to the model size, it's recommended to try it out in Colab (linked above) as the API textbox may time out. | |
This model is a fine-tuned version of [google/pegasus-x-large](https://huggingface.co/google/pegasus-x-large) on the `kmfoda/booksum` dataset for approximately eight epochs.
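
Below is a minimal usage sketch with the `transformers` `pipeline`. The generation keyword arguments mirror the inference parameters listed in the metadata above; the input text, device choice, and installed packages are assumptions to adapt to your own setup rather than fixed requirements.

```python
# pip install transformers
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/pegasus-x-large-book-summary",
    device=0,  # set to -1 to run on CPU (slow for a model this size)
)

long_text = "..."  # placeholder: replace with the document you want summarized

result = summarizer(
    long_text,
    max_length=64,
    min_length=4,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    length_penalty=0.3,
    num_beams=1,
    early_stopping=True,
)
print(result[0]["summary_text"])
```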
## Model description | |
More information needed | |
## Intended uses & limitations | |
- This appears to be one of the most GPU-intensive summarization models released so far.
## Training and evaluation data | |
More information needed | |
## Training procedure | |
### Training hyperparameters | |
#### Epochs 1-4 | |
TODO | |
#### Epochs 5 & 6 | |
The following hyperparameters were used during training: | |
- learning_rate: 6e-05 | |
- train_batch_size: 4 | |
- eval_batch_size: 1 | |
- seed: 42 | |
- distributed_type: multi-GPU | |
- gradient_accumulation_steps: 32 | |
- total_train_batch_size: 128 | |
- optimizer: _ADAN_ using lucidrains' `adan-pytorch` with default betas (see the sketch after this list)
- lr_scheduler_type: constant_with_warmup | |
- data type: TF32 | |
- num_epochs: 2 | |
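
Since the stock `Trainer` optimizers do not include ADAN, the snippet below is a rough sketch (not the original training script) of how the optimizer from lucidrains' `adan-pytorch` package can be constructed with its default betas and the learning rate above; variable names are illustrative.

```python
# pip install adan-pytorch
from adan_pytorch import Adan
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-x-large")

# ADAN with the package's default betas; only the learning rate is set explicitly
optimizer = Adan(model.parameters(), lr=6e-5)

# can then be passed to e.g. Seq2SeqTrainer via optimizers=(optimizer, lr_scheduler)
```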
#### Epochs 7 & 8 | |
- epochs 5 & 6 were trained with 12288 tokens input | |
- this fixes that with 2 epochs at 16384 tokens input | |
The following hyperparameters were used during training: | |
- learning_rate: 0.0004 | |
- train_batch_size: 4 | |
- eval_batch_size: 1 | |
- seed: 42 | |
- distributed_type: multi-GPU | |
- gradient_accumulation_steps: 16 | |
- total_train_batch_size: 64 | |
- optimizer: _ADAN_ using lucidrains' `adan-pytorch` with default betas | |
- lr_scheduler_type: cosine | |
- lr_scheduler_warmup_ratio: 0.03 (see the scheduler sketch after this list)
- num_epochs: 2 | |
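
For reference, a cosine schedule with this warmup ratio can be built with the standard `transformers` helper; the total step count below is a placeholder, since the real value depends on dataset size, batch size, and gradient accumulation.

```python
from transformers import get_cosine_schedule_with_warmup

num_training_steps = 1_000  # placeholder: total optimizer steps for the run
num_warmup_steps = int(0.03 * num_training_steps)  # lr_scheduler_warmup_ratio: 0.03

lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer,  # e.g. the ADAN optimizer from the sketch above
    num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps,
)
```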
### Framework versions | |
- Transformers 4.22.0 | |
- Pytorch 1.11.0a0+17540c5 | |
- Datasets 2.4.0 | |
- Tokenizers 0.12.1 | |