Models
The base classes [PreTrainedModel], [TFPreTrainedModel], and
[FlaxPreTrainedModel] implement the common methods for loading/saving a model either from a local
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
S3 repository).
[PreTrainedModel] and [TFPreTrainedModel] also implement a few methods which
are common among all the models to:
- resize the input token embeddings when new tokens are added to the vocabulary
- prune the attention heads of the model.
The other methods that are common to each model are defined in [~modeling_utils.ModuleUtilsMixin]
(for the PyTorch models) and [~modeling_tf_utils.TFModelUtilsMixin] (for the TensorFlow models) or,
for text generation, [~generation.GenerationMixin] (for the PyTorch models),
[~generation.TFGenerationMixin] (for the TensorFlow models) and
[~generation.FlaxGenerationMixin] (for the Flax/JAX models).
PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all
Large model loading
In Transformers 4.20.0, the [~PreTrainedModel.from_pretrained] method has been reworked to accommodate large models using Accelerate. This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model, then loading the pretrained weights inside it (which takes twice the size of the model in RAM, one for the randomly initialized model, one for the weights), there is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded.
This option can be activated with low_cpu_mem_usage=True. The model is first created on the Meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only.
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (only works for inference for now). With device_map="auto", Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
When passing a device_map, low_cpu_mem_usage is automatically set to True, so you don't need to specify it:
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
You can inspect how the model was split across devices by looking at its hf_device_map attribute:
t0pp.hf_device_map
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
'lm_head': 'cpu'}
You can also write your own device map following the same format (a dictionary layer name to device). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory):
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like torch.float16) or use direct quantization techniques as described below.
Model Instantiation dtype
Under PyTorch, a model is normally instantiated with torch.float32 precision. This can be an issue if you try to
load a model whose weights are in fp16, since it would require twice as much memory. To overcome this limitation, you can
either explicitly pass the desired dtype using the torch_dtype argument:
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
or, if you want the model to always load in the most optimal memory pattern, you can use the special value "auto",
and then dtype will be automatically derived from the model's weights:
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
Models instantiated from scratch can also be told which dtype to use with:
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
Due to PyTorch design, this functionality is only available for floating-point dtypes.
ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint
Perplexity of fixed-length models
[[open-in-colab]]
Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note
that the metric applies specifically to classical language models (sometimes called autoregressive or causal language
models) and is not well defined for masked language models like BERT (see summary of the models).
Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized
sequence \(X = (x_0, x_1, \dots, x_t)\), then the perplexity of \(X\) is,
$$\text{PPL}(X) = \exp \left\{ -\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) \right\}$$
where \(\log p_\theta (x_i|x_{<i})\) is the log-likelihood of the ith token conditioned on the preceding tokens \(x_{<i}\) according to our model. Intuitively, it can be thought of as an evaluation of the model's ability to predict uniformly among the set of specified tokens in a corpus. Importantly, this means that the tokenization procedure has a direct impact on a model's perplexity which should always be taken into consideration when comparing different models.
This is also equivalent to the exponentiation of the cross-entropy between the data and model predictions. For more
intuition about perplexity and its relationship to Bits Per Character (BPC) and data compression, check out this
fantastic blog post on The Gradient.
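As a quick illustration of this equivalence, here is a minimal sketch with made-up per-token probabilities rather than a real model:

```python
import math

# Hypothetical probabilities a model assigns to each token of a 4-token sequence.
token_probs = [0.25, 0.1, 0.5, 0.05]

# Perplexity is the exponential of the average negative log-likelihood.
avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
ppl = math.exp(avg_nll)
print(round(ppl, 2))  # ~6.32: as "confused" as a uniform choice over roughly 6 tokens
```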
Calculating PPL with fixed-length models
If we weren't limited by a model's context size, we would evaluate the model's perplexity by autoregressively
factorizing a sequence and conditioning on the entire preceding subsequence at each step, as shown below.
When working with approximate models, however, we typically have a constraint on the number of tokens the model can
process. The largest version of GPT-2, for example, has a fixed length of 1024 tokens, so we
cannot calculate \(p_\theta(x_t|x_{<t})\) directly when \(t\) is greater than 1024.
Instead, the sequence is typically broken into subsequences equal to the model's maximum input size. If a model's max
input size is \(k\), we then approximate the likelihood of a token \(x_t\) by conditioning only on the
\(k-1\) tokens that precede it rather than the entire context. When evaluating the model's perplexity of a
sequence, a tempting but suboptimal approach is to break the sequence into disjoint chunks and add up the decomposed
log-likelihoods of each segment independently.
This is quick to compute since the perplexity of each segment can be computed in one forward pass, but serves as a poor
approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will
have less context at most of the prediction steps.
Instead, the PPL of fixed-length models should be evaluated with a sliding-window strategy. This involves repeatedly
sliding the context window so that the model has more context when making each prediction.
This is a closer approximation to the true decomposition of the sequence probability and will typically yield a more
favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good
practical compromise is to employ a strided sliding window, moving the context by larger strides rather than sliding by
1 token a time. This allows computation to proceed much faster while still giving the model a large context to make
predictions at each step.
Example: Calculating perplexity with GPT-2 in 🤗 Transformers
Let's demonstrate this process with GPT-2.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
device = "cuda"
model_id = "gpt2-large"
model = GPT2LMHeadModel.from_pretrained(model_id).to(device)
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
We'll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since
this dataset is small and we're just doing one forward pass over the set, we can just load and encode the entire
dataset in memory.
from datasets import load_dataset
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")
With 🤗 Transformers, we can simply pass the input_ids as the labels to our model, and the average negative
log-likelihood for each token is returned as the loss. With our sliding window approach, however, there is overlap in
the tokens we pass to the model at each iteration. We don't want the log-likelihood for the tokens we're just treating
as context to be included in our loss, so we can set these targets to -100 so that they are ignored. The following
is an example of how we could do this with a stride of 512. This means that the model will have at least 512 tokens
for context when calculating the conditional likelihood of any one token (provided there are 512 preceding tokens
available to condition on).
import torch
from tqdm import tqdm
max_length = model.config.n_positions
stride = 512
seq_len = encodings.input_ids.size(1)
nlls = []
prev_end_loc = 0
for begin_loc in tqdm(range(0, seq_len, stride)):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc  # may be different from stride on last loop
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)

        # loss is calculated using CrossEntropyLoss which averages over valid labels
        # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels
        # to the left by 1.
        neg_log_likelihood = outputs.loss

    nlls.append(neg_log_likelihood)
    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).mean())
Running this with the stride length equal to the max input length is equivalent to the suboptimal, non-sliding-window
strategy we discussed above. The smaller the stride, the more context the model will have in making each prediction,
and the better the reported perplexity will typically be.
When we run the above with stride = 1024, i.e. no overlap, the resulting PPL is 19.44, which is about the same
as the 19.93 reported in the GPT-2 paper. By using stride = 512 and thereby employing our striding window
strategy, this jumps down to 16.45. This is not only a more favorable score, but is calculated in a way that is
closer to the true autoregressive decomposition of a sequence likelihood.
Semantic segmentation
Before you begin, make sure you have all the necessary libraries installed:
pip install -q datasets transformers evaluate
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
Load SceneParse150 dataset
Start by loading a smaller subset of the SceneParse150 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
from datasets import load_dataset
ds = load_dataset("scene_parse_150", split="train[:50]")
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
ds = ds.train_test_split(test_size=0.2)
train_ds = ds["train"]
test_ds = ds["test"]
Then take a look at an example:
train_ds[0]
{'image': ,
'annotation': ,
'scene_category': 368}
- image: a PIL image of the scene.
- annotation: a PIL image of the segmentation map, which is also the model's target.
- scene_category: a category id that describes the image scene, like "kitchen" or "office".
In this guide, you'll only need image and annotation, both of which are PIL images.
You'll also want to create a dictionary that maps a label id to a label class which will be useful when you set up the model later. Download the mappings from the Hub and create the id2label and label2id dictionaries:
import json
from huggingface_hub import cached_download, hf_hub_url
repo_id = "huggingface/label-files"
filename = "ade20k-id2label.json"
id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
num_labels = len(id2label)
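As a quick sanity check, you can confirm that the mapping covers the 150 ADE20K classes (a minimal sketch; the exact label names come from the JSON file you just downloaded):

```python
print(num_labels)                   # 150
print(list(id2label.items())[:3])   # first few (id, label) pairs from the downloaded mapping
```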
Preprocess
The next step is to load a SegFormer image processor to prepare the images and annotations for the model. Some datasets, like this one, use the zero-index as the background class. However, the background class isn't actually included in the 150 classes, so you'll need to set reduce_labels=True to subtract one from all the labels. The zero-index is replaced by 255 so it's ignored by SegFormer's loss function:
from transformers import AutoImageProcessor
checkpoint = "nvidia/mit-b0"
image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use the ColorJitter function from torchvision to randomly change the color properties of an image, but you can also use any image library you like.
from torchvision.transforms import ColorJitter
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
Now create two preprocessing functions to prepare the images and annotations for the model. These functions convert the images into pixel_values and annotations to labels. For the training set, jitter is applied before providing the images to the image processor. For the test set, the image processor crops and normalizes the images, and only crops the labels because no data augmentation is applied during testing.
def train_transforms(example_batch):
    images = [jitter(x) for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs

def val_transforms(example_batch):
    images = [x for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs
To apply the jitter over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.set_transform] function. The transform is applied on the fly which is faster and consumes less disk space:
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting.
In this guide, you'll use tf.image to randomly change the color properties of an image, but you can also use any image
library you like.
Define two separate transformation functions:
- training data transformations that include image augmentation
- validation data transformations that only transpose the images, since computer vision models in 🤗 Transformers expect channels-first layout
import tensorflow as tf
def aug_transforms(image):
    image = tf.keras.utils.img_to_array(image)
    image = tf.image.random_brightness(image, 0.25)
    image = tf.image.random_contrast(image, 0.5, 2.0)
    image = tf.image.random_saturation(image, 0.75, 1.25)
    image = tf.image.random_hue(image, 0.1)
    image = tf.transpose(image, (2, 0, 1))
    return image

def transforms(image):
    image = tf.keras.utils.img_to_array(image)
    image = tf.transpose(image, (2, 0, 1))
    return image
Next, create two preprocessing functions to prepare batches of images and annotations for the model. These functions apply
the image transformations and use the earlier loaded image_processor to convert the images into pixel_values and
annotations to labels. ImageProcessor also takes care of resizing and normalizing the images.
def train_transforms(example_batch):
    images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs

def val_transforms(example_batch):
    images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs
To apply the preprocessing transformations over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.set_transform] function.
The transform is applied on the fly which is faster and consumes less disk space:
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the mean Intersection over Union (IoU) metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
import evaluate
metric = evaluate.load("mean_iou")
Then create a function to [~evaluate.EvaluationModule.compute] the metrics. Your predictions need to be converted to
logits first, and then reshaped to match the size of the labels before you can call [~evaluate.EvaluationModule.compute]:
import numpy as np
import torch
from torch import nn

def compute_metrics(eval_pred):
    with torch.no_grad():
        logits, labels = eval_pred
        logits_tensor = torch.from_numpy(logits)
        # rescale the logits to the size of the label maps before taking the argmax
        logits_tensor = nn.functional.interpolate(
            logits_tensor,
            size=labels.shape[-2:],
            mode="bilinear",
            align_corners=False,
        ).argmax(dim=1)

        pred_labels = logits_tensor.detach().cpu().numpy()
        metrics = metric.compute(
            predictions=pred_labels,
            references=labels,
            num_labels=num_labels,
            ignore_index=255,
            reduce_labels=False,
        )
        for key, value in metrics.items():
            if type(value) is np.ndarray:
                metrics[key] = value.tolist()
        return metrics
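If you want to smoke-test the function by hand, a minimal sketch with synthetic inputs looks like this: the logits are class scores at a reduced resolution, while the labels are full-resolution segmentation maps.

```python
import numpy as np

fake_logits = np.random.randn(1, num_labels, 32, 32).astype("float32")  # (batch, classes, h/4, w/4)
fake_labels = np.random.randint(0, num_labels, (1, 128, 128))           # (batch, H, W)
print(compute_metrics((fake_logits, fake_labels))["mean_iou"])
```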
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    logits = tf.transpose(logits, perm=[0, 2, 3, 1])
    logits_resized = tf.image.resize(
        logits,
        size=tf.shape(labels)[1:],
        method="bilinear",
    )
    pred_labels = tf.argmax(logits_resized, axis=-1)
    metrics = metric.compute(
        predictions=pred_labels,
        references=labels,
        num_labels=num_labels,
        ignore_index=-1,
        reduce_labels=image_processor.do_reduce_labels,
    )
    per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
    per_category_iou = metrics.pop("per_category_iou").tolist()
    metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
    metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
    return {"val_" + k: v for k, v in metrics.items()}
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load SegFormer with [AutoModelForSemanticSegmentation], and pass the model the mapping between label ids and label classes:
from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer
model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id)
At this point, only three steps remain:
1. Define your training hyperparameters in [TrainingArguments]. It is important you don't remove unused columns because this'll drop the image column. Without the image column, you can't create pixel_values. Set remove_unused_columns=False to prevent this behavior! The only other required parameter is output_dir, which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the IoU metric and save the training checkpoint.
2. Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
3. Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
output_dir="segformer-b0-scene-parse-150",
learning_rate=6e-5,
num_train_epochs=50,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
save_total_limit=3,
evaluation_strategy="steps",
save_strategy="steps",
save_steps=20,
eval_steps=20,
logging_steps=1,
eval_accumulation_steps=5,
remove_unused_columns=False,
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
trainer.train()
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you are unfamiliar with fine-tuning a model with Keras, check out the basic tutorial first!
To fine-tune a model in TensorFlow, follow these steps:
1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pretrained model.
3. Convert a 🤗 Dataset to a tf.data.Dataset.
4. Compile your model.
5. Add callbacks to calculate metrics and upload your model to the 🤗 Hub
6. Use the fit() method to run the training.
Start by defining the hyperparameters, optimizer and learning rate schedule:
from transformers import create_optimizer
batch_size = 2
num_epochs = 50
num_train_steps = len(train_ds) * num_epochs
learning_rate = 6e-5
weight_decay_rate = 0.01
optimizer, lr_schedule = create_optimizer(
init_lr=learning_rate,
num_train_steps=num_train_steps,
weight_decay_rate=weight_decay_rate,
num_warmup_steps=0,
)
Then, load SegFormer with [TFAutoModelForSemanticSegmentation] along with the label mappings, and compile it with the
optimizer. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
from transformers import TFAutoModelForSemanticSegmentation
model = TFAutoModelForSemanticSegmentation.from_pretrained(
checkpoint,
id2label=id2label,
label2id=label2id,
)
model.compile(optimizer=optimizer) # No loss argument!
Convert your datasets to the tf.data.Dataset format using the [~datasets.Dataset.to_tf_dataset] and the [DefaultDataCollator]:
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator(return_tensors="tf")
tf_train_dataset = train_ds.to_tf_dataset(
columns=["pixel_values", "label"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
tf_eval_dataset = test_ds.to_tf_dataset(
columns=["pixel_values", "label"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
To compute the accuracy from the predictions and push your model to the 🤗 Hub, use Keras callbacks.
Pass your compute_metrics function to [KerasMetricCallback],
and use the [PushToHubCallback] to upload the model:
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
metric_callback = KerasMetricCallback(
metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"]
)
push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", tokenizer=image_processor)
callbacks = [metric_callback, push_to_hub_callback]
Finally, you are ready to train your model! Call fit() with your training and validation datasets, the number of epochs,
and your callbacks to fine-tune the model:
model.fit(
tf_train_dataset,
validation_data=tf_eval_dataset,
callbacks=callbacks,
epochs=num_epochs,
)
Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference!
Inference
Great, now that you've finetuned a model, you can use it for inference!
Load an image for inference:
image = ds[0]["image"]
image
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for image segmentation with your model, and pass your image to it:
from transformers import pipeline
segmenter = pipeline("image-segmentation", model="my_awesome_seg_model")
segmenter(image)
[{'score': None,
'label': 'wall',
'mask': },
{'score': None,
'label': 'sky',
'mask': },
{'score': None,
'label': 'floor',
'mask': },
{'score': None,
'label': 'ceiling',
'mask': },
{'score': None,
'label': 'bed ',
'mask': },
{'score': None,
'label': 'windowpane',
'mask': },
{'score': None,
'label': 'cabinet',
'mask': },
{'score': None,
'label': 'chair',
'mask': },
{'score': None,
'label': 'armchair',
'mask': }]
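Each mask in the output is a PIL image covering one predicted class. A minimal sketch to look at one of them, assuming matplotlib is installed:

```python
import matplotlib.pyplot as plt

results = segmenter(image)
plt.imshow(results[0]["mask"], cmap="gray")  # binary mask for the first predicted label
plt.title(results[0]["label"])
plt.axis("off")
plt.show()
```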
You can also manually replicate the results of the pipeline if you'd like. Process the image with an image processor and place the pixel_values on a GPU:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # use GPU if available, otherwise use a CPU
encoding = image_processor(image, return_tensors="pt")
pixel_values = encoding.pixel_values.to(device)
Pass your input to the model and return the logits:
outputs = model(pixel_values=pixel_values)
logits = outputs.logits.cpu()
Next, rescale the logits to the original image size:
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1],
mode="bilinear",
align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]
Load an image processor to preprocess the image and return the input as TensorFlow tensors:
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("MariaK/scene_segmentation")
inputs = image_processor(image, return_tensors="tf")
Pass your input to the model and return the logits:
from transformers import TFAutoModelForSemanticSegmentation
model = TFAutoModelForSemanticSegmentation.from_pretrained("MariaK/scene_segmentation")
logits = model(**inputs).logits
Next, rescale the logits to the original image size and apply argmax on the class dimension:
logits = tf.transpose(logits, [0, 2, 3, 1])
upsampled_logits = tf.image.resize(
logits,
# We reverse the shape of image because image.size returns width and height.
image.size[::-1],
)
pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0]
To visualize the results, load the dataset color palette as ade_palette() that maps each class to their RGB values. Then you can combine and plot your image and the predicted segmentation map:
import matplotlib.pyplot as plt
import numpy as np
color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
palette = np.array(ade_palette())
for label, color in enumerate(palette):
    color_seg[pred_seg == label, :] = color

color_seg = color_seg[..., ::-1]  # convert to BGR
img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map
img = img.astype(np.uint8)
plt.figure(figsize=(15, 10))
plt.imshow(img)
plt.show()
Multiple choice
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
Load SWAG dataset
Start by loading the regular configuration of the SWAG dataset from the 🤗 Datasets library:
from datasets import load_dataset
swag = load_dataset("swag", "regular")
Then take a look at an example:
swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
'ending1': 'has heard approaching them.',
'ending2': "arrives and they're outside dancing and asleep.",
'ending3': 'turns the lead singer watches the performance.',
'fold-ind': '3416',
'gold-source': 'gold',
'label': 0,
'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
'sent2': 'A drum line',
'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
'video-id': 'anetv_jkn6uvmqwh4'}
While it looks like there are a lot of fields here, it is actually pretty straightforward:
- sent1 and sent2: these fields show how a sentence starts, and if you put the two together, you get the startphrase field.
- ending0, ending1, ending2, ending3: suggest a possible ending for how the sentence can end, but only one of them is correct.
- label: identifies the correct sentence ending.
Preprocess
The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
The preprocessing function you want to create needs to:
1. Make four copies of the sent1 field and combine each of them with sent2 to recreate how a sentence starts.
2. Combine sent2 with each of the four possible sentence endings.
3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding input_ids, attention_mask, and labels field.
ending_names = ["ending0", "ending1", "ending2", "ending3"]
def preprocess_function(examples):
    first_sentences = [[context] * 4 for context in examples["sent1"]]
    question_headers = examples["sent2"]
    second_sentences = [
        [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
    ]

    first_sentences = sum(first_sentences, [])
    second_sentences = sum(second_sentences, [])

    tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
    return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once:
tokenized_swag = swag.map(preprocess_function, batched=True)
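A minimal sketch to confirm that each example now holds one encoding per candidate ending:

```python
example = tokenized_swag["train"][0]
print(len(example["input_ids"]))     # 4 candidate endings per example
print(len(example["input_ids"][0]))  # number of tokens in the first ending's encoding
```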
🤗 Transformers doesn't have a data collator for multiple choice, so you'll need to adapt the [DataCollatorWithPadding] to create a batch of examples. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
DataCollatorForMultipleChoice flattens all the model inputs, applies padding, and then unflattens the results:
from dataclasses import dataclass
from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
from typing import Optional, Union
import torch
@dataclass
class DataCollatorForMultipleChoice:
    """
    Data collator that will dynamically pad the inputs for multiple choice received.
    """

    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None

    def __call__(self, features):
        label_name = "label" if "label" in features[0].keys() else "labels"
        labels = [feature.pop(label_name) for feature in features]
        batch_size = len(features)
        num_choices = len(features[0]["input_ids"])
        flattened_features = [
            [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
        ]
        flattened_features = sum(flattened_features, [])

        batch = self.tokenizer.pad(
            flattened_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="pt",
        )

        batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
        batch["labels"] = torch.tensor(labels, dtype=torch.int64)
        return batch
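A minimal sketch to try the collator by hand, keeping only the fields the tokenizer produced plus the label:

```python
keep = ("input_ids", "token_type_ids", "attention_mask", "label")
features = [
    {k: v for k, v in tokenized_swag["train"][i].items() if k in keep} for i in range(2)
]

collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
batch = collator(features)
print(batch["input_ids"].shape)  # torch.Size([2, 4, sequence_length])
```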
from dataclasses import dataclass
from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
from typing import Optional, Union
import tensorflow as tf
@dataclass
class DataCollatorForMultipleChoice:
    """
    Data collator that will dynamically pad the inputs for multiple choice received.
    """

    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None

    def __call__(self, features):
        label_name = "label" if "label" in features[0].keys() else "labels"
        labels = [feature.pop(label_name) for feature in features]
        batch_size = len(features)
        num_choices = len(features[0]["input_ids"])
        flattened_features = [
            [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
        ]
        flattened_features = sum(flattened_features, [])

        batch = self.tokenizer.pad(
            flattened_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="tf",
        )

        batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
        batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
        return batch
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the accuracy metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
import evaluate
accuracy = evaluate.load("accuracy")
Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the accuracy:
import numpy as np
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
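For example, two toy predictions whose argmax matches the labels give an accuracy of 1.0:

```python
fake_predictions = np.array([[0.1, 0.9], [0.8, 0.2]])  # scores for two examples, two choices each
fake_labels = np.array([1, 0])
print(compute_metrics((fake_predictions, fake_labels)))  # {'accuracy': 1.0}
```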
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load BERT with [AutoModelForMultipleChoice]:
from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer
model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
At this point, only three steps remain:
1. Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir, which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
3. Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
output_dir="my_awesome_swag_model",
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
learning_rate=5e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_swag["train"],
eval_dataset=tokenized_swag["validation"],
tokenizer=tokenizer,
data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
compute_metrics=compute_metrics,
)
trainer.train()
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer
batch_size = 16
num_train_epochs = 2
total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
Then you can load BERT with [TFAutoModelForMultipleChoice]:
from transformers import TFAutoModelForMultipleChoice
model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
tf_train_set = model.prepare_tf_dataset(
tokenized_swag["train"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
tf_validation_set = model.prepare_tf_dataset(
tokenized_swag["validation"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator,
)
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
model.compile(optimizer=optimizer) # No loss argument!
The last two things to set up before you start training are computing the accuracy from the predictions and providing a way to push your model to the Hub. Both are done with Keras callbacks.
Pass your compute_metrics function to [~transformers.KerasMetricCallback]:
from transformers.keras_callbacks import KerasMetricCallback
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
Specify where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
from transformers.keras_callbacks import PushToHubCallback
push_to_hub_callback = PushToHubCallback(
output_dir="my_awesome_model",
tokenizer=tokenizer,
)
Then bundle your callbacks together:
callbacks = [metric_callback, push_to_hub_callback]
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding
PyTorch notebook
or TensorFlow notebook.
Inference
Great, now that you've finetuned a model, you can use it for inference!
Come up with some text and two candidate answers:
prompt = "France has a bread law, Le Dรฉcret Pain, with strict rules on what is allowed in a traditional baguette."
candidate1 = "The law does not apply to croissants and brioche."
candidate2 = "The law applies to baguettes."
Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some labels:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
labels = torch.tensor(0).unsqueeze(0)
Pass your inputs and labels to the model and return the logits:
from transformers import AutoModelForMultipleChoice
model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
logits = outputs.logits
Get the class with the highest probability:
predicted_class = logits.argmax().item()
predicted_class
0
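The index corresponds to the order of the candidates you tokenized, so you can map it back to the text:

```python
candidates = [candidate1, candidate2]
print(candidates[predicted_class])  # "The law does not apply to croissants and brioche."
```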
Tokenize each prompt and candidate answer pair and return TensorFlow tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
Pass your inputs to the model and return the logits:
from transformers import TFAutoModelForMultipleChoice
model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
outputs = model(inputs)
logits = outputs.logits
Get the class with the highest probability:
predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
predicted_class
0
Utilities for pipelines
This page lists all the utility functions the library provides for pipelines.
Most of those are only useful if you are studying the code of the models in the library.
Argument handling
[[autodoc]] pipelines.ArgumentHandler
[[autodoc]] pipelines.ZeroShotClassificationArgumentHandler
[[autodoc]] pipelines.QuestionAnsweringArgumentHandler
Data format
[[autodoc]] pipelines.PipelineDataFormat
[[autodoc]] pipelines.CsvPipelineDataFormat
[[autodoc]] pipelines.JsonPipelineDataFormat
[[autodoc]] pipelines.PipedPipelineDataFormat
Utilities
[[autodoc]] pipelines.PipelineException
Train with a script
Along with the 🤗 Transformers notebooks, there are also example scripts demonstrating how to train a model for a task with PyTorch, TensorFlow, or JAX/Flax.
You will also find scripts we've used in our research projects and legacy examples which are mostly community contributed. These scripts are not actively maintained and require a specific version of 🤗 Transformers that will most likely be incompatible with the latest version of the library.
The example scripts are not expected to work out-of-the-box on every problem, and you may need to adapt the script to the problem you're trying to solve. To help you with this, most of the scripts fully expose how data is preprocessed, allowing you to edit it as necessary for your use case.
For any feature you'd like to implement in an example script, please discuss it on the forum or in an issue before submitting a Pull Request. While we welcome bug fixes, it is unlikely we will merge a Pull Request that adds more functionality at the cost of readability.
This guide will show you how to run an example summarization training script in PyTorch and TensorFlow. All examples are expected to work with both frameworks unless otherwise specified.
Setup
To successfully run the latest version of the example scripts, you have to install 🤗 Transformers from source in a new virtual environment:
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
For older versions of the example scripts, click on the toggle below:
Examples for older versions of 🤗 Transformers
v4.5.1
v4.4.2
v4.3.3
v4.2.2
v4.1.1
v4.0.1
v3.5.1
v3.4.0
v3.3.1
v3.2.0
v3.1.0
v3.0.2
v2.11.0
v2.10.0
v2.9.1
v2.8.0
v2.7.0
v2.6.0
v2.5.1
v2.4.0
v2.3.0
v2.2.0
v2.1.1
v2.0.0
v1.2.0
v1.1.0
v1.0.0
Then switch your current clone of ๐ค Transformers to a specific version, like v3.5.1 for example:
git checkout tags/v3.5.1
After you've setup the correct library version, navigate to the example folder of your choice and install the example specific requirements:
pip install -r requirements.txt
Run a script
The example script downloads and preprocesses a dataset from the 🤗 Datasets library. Then the script fine-tunes a model on the dataset with the Trainer, using an architecture that supports summarization. The following example shows how to fine-tune T5-small on the CNN/DailyMail dataset. The T5 model requires an additional source_prefix argument due to how it was trained. This prompt lets T5 know this is a summarization task.
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
The example script downloads and preprocesses a dataset from the 🤗 Datasets library. Then the script fine-tunes a model on the dataset using Keras, with an architecture that supports summarization. The following example shows how to fine-tune T5-small on the CNN/DailyMail dataset. The T5 model requires an additional source_prefix argument due to how it was trained. This prompt lets T5 know this is a summarization task.
python examples/tensorflow/summarization/run_summarization.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
Distributed training and mixed precision
The Trainer supports distributed training and mixed precision, which means you can also use it in a script. To enable both of these features:
- Add the fp16 argument to enable mixed precision.
- Set the number of GPUs to use with the nproc_per_node argument.
python -m torch.distributed.launch \
--nproc_per_node 8 pytorch/summarization/run_summarization.py \
--fp16 \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
TensorFlow scripts utilize a MirroredStrategy for distributed training, and you don't need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available.
Run a script on a TPU
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the XLA deep learning compiler (see here for more details). To use a TPU, launch the xla_spawn.py script and use the num_cores argument to set the number of TPU cores you want to use.
python xla_spawn.py --num_cores 8 \
summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts utilize a TPUStrategy for training on TPUs. To use a TPU, pass the name of the TPU resource to the tpu argument.
python run_summarization.py \
--tpu name_of_tpu_resource \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
Run a script with 🤗 Accelerate
🤗 Accelerate is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have 🤗 Accelerate installed if you don't already have it:
Note: As Accelerate is rapidly developing, the git version of accelerate must be installed to run the scripts
pip install git+https://github.com/huggingface/accelerate
Instead of the run_summarization.py script, you need to use the run_summarization_no_trainer.py script. 🤗 Accelerate supported scripts will have a task_no_trainer.py file in the folder. Begin by running the following command to create and save a configuration file:
accelerate config
Test your setup to make sure it is configured correctly:
accelerate test
Now you are ready to launch the training:
accelerate launch run_summarization_no_trainer.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir ~/tmp/tst-summarization
Use a custom dataset
The summarization script supports custom datasets as long as they are a CSV or JSON Line file. When you use your own dataset, you need to specify several additional arguments:
- train_file and validation_file specify the path to your training and validation files.
- text_column is the input text to summarize.
- summary_column is the target text to output.
A summarization script using a custom dataset would look like this:
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--train_file path_to_csv_or_jsonlines_file \
--validation_file path_to_csv_or_jsonlines_file \
--text_column text_column_name \
--summary_column summary_column_name \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--overwrite_output_dir \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--predict_with_generate
Test a script
It is often a good idea to run your script on a smaller number of dataset examples to ensure everything works as expected before committing to an entire dataset which may take hours to complete. Use the following arguments to truncate the dataset to a maximum number of samples:
- max_train_samples
- max_eval_samples
- max_predict_samples
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--max_train_samples 50 \
--max_eval_samples 50 \
--max_predict_samples 50 \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
Not all example scripts support the max_predict_samples argument. If you aren't sure whether your script supports this argument, add the -h argument to check:
python examples/pytorch/summarization/run_summarization.py -h
Resume training from checkpoint
Another helpful option to enable is resuming training from a previous checkpoint. This will ensure you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint.
The first method uses the output_dir previous_output_dir argument to resume training from the latest checkpoint stored in output_dir. In this case, you should remove overwrite_output_dir:
python examples/pytorch/summarization/run_summarization.py
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--output_dir previous_output_dir \
--predict_with_generate
The second method uses the resume_from_checkpoint path_to_specific_checkpoint argument to resume training from a specific checkpoint folder.
python examples/pytorch/summarization/run_summarization.py
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--resume_from_checkpoint path_to_specific_checkpoint \
--predict_with_generate
Share your model
All scripts can upload your final model to the Model Hub. Make sure you are logged into Hugging Face before you begin:
huggingface-cli login
Then add the push_to_hub argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in output_dir.
To give your repository a specific name, use the push_to_hub_model_id argument to add it. The repository will be automatically listed under your namespace.
The following example shows how to upload a model with a specific repository name:
python examples/pytorch/summarization/run_summarization.py
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--push_to_hub \
--push_to_hub_model_id finetuned-t5-cnn_dailymail \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
Padding and truncation
Batched inputs are often different lengths, so they can't be converted to fixed-size tensors. Padding and truncation are strategies for dealing with this problem, to create rectangular tensors from batches of varying lengths. Padding adds a special padding token to ensure shorter sequences will have the same length as either the longest sequence in a batch or the maximum length accepted by the model. Truncation works in the other direction by truncating long sequences.
In most cases, padding your batch to the length of the longest sequence and truncating to the maximum length a model can accept works pretty well. However, the API supports more strategies if you need them. The three arguments you need to know are: padding, truncation and max_length.
The padding argument controls padding. It can be a boolean or a string:
- True or 'longest': pad to the longest sequence in the batch (no padding is applied if you only provide a single sequence).
- 'max_length': pad to a length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). Padding will still be applied if you only provide a single sequence.
- False or 'do_not_pad': no padding is applied. This is the default behavior.
The truncation argument controls truncation. It can be a boolean or a string:
- True or 'longest_first': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will truncate token by token, removing a token from the longest sequence in the pair until the proper length is reached.
- 'only_second': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the second sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- 'only_first': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- False or 'do_not_truncate': no truncation is applied. This is the default behavior.
The max_length argument controls the length of the padding and truncation. It can be an integer or None, in which case it will default to the maximum length the model can accept. If the model has no specific maximum input length, truncation or padding to max_length is deactivated.
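For example, a minimal sketch with a BERT tokenizer (any pretrained tokenizer behaves the same way) that pads and truncates a small batch to a fixed length:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch_sentences = ["A short sentence.", "A noticeably longer sentence that will be truncated."]

# Pad short sequences and truncate long ones so every encoding has exactly 8 tokens.
encoded = tokenizer(batch_sentences, padding="max_length", truncation=True, max_length=8)
print([len(ids) for ids in encoded["input_ids"]])  # [8, 8]
```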
The following table summarizes the recommended way to setup padding and truncation. If you use pairs of input sequences in any of the following examples, you can replace truncation=True by a STRATEGY selected in
['only_first', 'only_second', 'longest_first'], i.e. truncation='only_second' or truncation='longest_first' to control how both sequences in the pair are truncated as detailed before.
| Truncation | Padding | Instruction |
|--------------------------------------|-----------------------------------|---------------------------------------------------------------------------------------------|
| no truncation | no padding | tokenizer(batch_sentences) |
| | padding to max sequence in batch | tokenizer(batch_sentences, padding=True) or |
| | | tokenizer(batch_sentences, padding='longest') |
| | padding to max model input length | tokenizer(batch_sentences, padding='max_length') |
| | padding to specific length | tokenizer(batch_sentences, padding='max_length', max_length=42) |
| | padding to a multiple of a value | tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8) |
| truncation to max model input length | no padding                        | tokenizer(batch_sentences, truncation=True) or                                               |
|                                      |                                   | tokenizer(batch_sentences, truncation=STRATEGY)                                              |
|                                      | padding to max sequence in batch  | tokenizer(batch_sentences, padding=True, truncation=True) or                                 |
|                                      |                                   | tokenizer(batch_sentences, padding=True, truncation=STRATEGY)                                |
|                                      | padding to max model input length | tokenizer(batch_sentences, padding='max_length', truncation=True) or                         |
|                                      |                                   | tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)                        |
|                                      | padding to specific length        | Not possible                                                                                 |
| truncation to specific length        | no padding                        | tokenizer(batch_sentences, truncation=True, max_length=42) or                                |
|                                      |                                   | tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)                               |
|                                      | padding to max sequence in batch  | tokenizer(batch_sentences, padding=True, truncation=True, max_length=42) or                  |
|                                      |                                   | tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)                 |
|                                      | padding to max model input length | Not possible                                                                                 |
|                                      | padding to specific length        | tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42) or          |
|                                      |                                   | tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)         |
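For sequence pairs, the STRATEGY values control which member of the pair is shortened. A minimal sketch (the question/context strings and the bert-base-cased tokenizer are only illustrative assumptions):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
question = "What does the truncation argument do?"
context = "The truncation argument controls how sequences longer than max_length are shortened before they are fed to the model."

# Truncate only the second sequence of the pair and pad everything to exactly 32 tokens.
encoded = tokenizer(question, context, padding="max_length", truncation="only_second", max_length=32)
print(len(encoded["input_ids"]))  # 32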
Preprocess
[[open-in-colab]]
Before you can train a model on a dataset, it needs to be preprocessed into the expected model input format. Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors. 🤗 Transformers provides a set of preprocessing classes to help prepare your data for the model. In this tutorial, you'll learn that for:
Text, use a Tokenizer to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors.
Speech and audio, use a Feature extractor to extract sequential features from audio waveforms and convert them into tensors.
Image inputs, use an ImageProcessor to convert images into tensors.
Multimodal inputs, use a Processor to combine a tokenizer and a feature extractor or image processor.
AutoProcessor always works and automatically chooses the correct class for the model you're using, whether you're using a tokenizer, image processor, feature extractor or processor.
Before you begin, install 🤗 Datasets so you can load some datasets to experiment with:
pip install datasets
Natural Language Processing
The main tool for preprocessing textual data is a tokenizer. A tokenizer splits text into tokens according to a set of rules. The tokens are converted into numbers and then tensors, which become the model inputs. Any additional inputs required by the model are added by the tokenizer.
If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus, and uses the same corresponding tokens-to-index (usually referred to as the vocab) during pretraining.
Get started by loading a pretrained tokenizer with the [AutoTokenizer.from_pretrained] method. This downloads the vocab a model was pretrained with:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
Then pass your text to the tokenizer:
encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
The tokenizer returns a dictionary with three important items:
input_ids are the indices corresponding to each token in the sentence.
attention_mask indicates whether a token should be attended to or not.
token_type_ids identifies which sequence a token belongs to when there is more than one sequence.
Return your input by decoding the input_ids:
tokenizer.decode(encoded_input["input_ids"])
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]'
As you can see, the tokenizer added two special tokens - CLS and SEP (classifier and separator) - to the sentence. Not all models need
special tokens, but if they do, the tokenizer automatically adds them for you.
If there are several sentences you want to preprocess, pass them as a list to the tokenizer:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_inputs = tokenizer(batch_sentences)
print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1]]}
Pad
Sentences aren't always the same length, which can be an issue because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special padding token to shorter sentences.
Set the padding parameter to True to pad the shorter sequences in the batch to match the longest sequence:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True)
print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
The first and third sentences are now padded with 0's because they are shorter.
Truncation
On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length.
Set the truncation parameter to True to truncate a sequence to the maximum length accepted by the model:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
Check out the Padding and truncation concept guide to learn more about the different padding and truncation arguments.
Build tensors
Finally, you want the tokenizer to return the actual tensors that get fed to the model.
Set the return_tensors parameter to either pt for PyTorch, or tf for TensorFlow:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
print(encoded_input)
{'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])}
And with return_tensors="tf" for TensorFlow:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
print(encoded_input)
{'input_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=...>,
 'token_type_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=...>,
 'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=...>}
Audio
For audio tasks, you'll need a feature extractor to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data, and convert them into tensors.
Load the MInDS-14 dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use a feature extractor with audio datasets:
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
Access the first element of the audio column to take a look at the input. Calling the audio column automatically loads and resamples the audio file:
dataset[0]["audio"]
{'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414,
 0. , 0. ], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 8000}
This returns three items:
array is the speech signal loaded - and potentially resampled - as a 1D array.
path points to the location of the audio file.
sampling_rate refers to how many data points in the speech signal are measured per second.
For this tutorial, you'll use the Wav2Vec2 model. Take a look at the model card, and you'll learn Wav2Vec2 is pretrained on 16kHz sampled speech audio. It is important your audio data's sampling rate matches the sampling rate of the dataset used to pretrain the model. If your data's sampling rate isn't the same, then you need to resample your data.
Use 🤗 Datasets' [~datasets.Dataset.cast_column] method to upsample the sampling rate to 16kHz:
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
Call the audio column again to resample the audio file:
dataset[0]["audio"]
{'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ...,
 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 16000}
Next, load a feature extractor to normalize and pad the input. When padding textual data, a 0 is added for shorter sequences. The same idea applies to audio data. The feature extractor adds a 0 - interpreted as silence - to array.
Load the feature extractor with [AutoFeatureExtractor.from_pretrained]:
from transformers import AutoFeatureExtractor
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
Pass the audio array to the feature extractor. We also recommend adding the sampling_rate argument in the feature extractor in order to better debug any silent errors that may occur.
audio_input = [dataset[0]["audio"]["array"]]
feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ...,
 5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]}
Just like the tokenizer, you can apply padding or truncation to handle variable sequences in a batch. Take a look at the sequence length of these two audio samples:
dataset[0]["audio"]["array"].shape
(173398,)
dataset[1]["audio"]["array"].shape
(106496,)
Create a function to preprocess the dataset so the audio samples are the same lengths. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it:
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
inputs = feature_extractor(
audio_arrays,
sampling_rate=16000,
padding=True,
max_length=100000,
truncation=True,
)
return inputs
Apply the preprocess_function to the first few examples in the dataset:
processed_dataset = preprocess_function(dataset[:5])
The sample lengths are now the same and match the specified maximum length. You can pass your processed dataset to the model now!
processed_dataset["input_values"][0].shape
(100000,)
processed_dataset["input_values"][1].shape
(100000,)
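To preprocess the full dataset rather than a slice, you could map the same function over it with 🤗 Datasets (a sketch reusing the preprocess_function defined above):
# Apply the preprocessing function to every example, in batches.
dataset = dataset.map(preprocess_function, batched=True)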
Computer vision
For computer vision tasks, you'll need an image processor to prepare your dataset for the model.
Image preprocessing consists of several steps that convert images into the input expected by the model. These steps
include but are not limited to resizing, normalizing, color channel correction, and converting images to tensors.
Image preprocessing often follows some form of image augmentation. Both image preprocessing and image augmentation
transform image data, but they serve different purposes:
Image augmentation alters images in a way that can help prevent overfitting and increase the robustness of the model. You can get creative in how you augment your data - adjust brightness and colors, crop, rotate, resize, zoom, etc. However, be mindful not to change the meaning of the images with your augmentations.
Image preprocessing guarantees that the images match the model's expected input format. When fine-tuning a computer vision model, images must be preprocessed exactly as when the model was initially trained.
You can use any library you like for image augmentation. For image preprocessing, use the ImageProcessor associated with the model.
Load the food101 dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use an image processor with computer vision datasets:
Use the 🤗 Datasets split parameter to only load a small sample from the training split since the dataset is quite large!
from datasets import load_dataset
dataset = load_dataset("food101", split="train[:100]")
Next, take a look at the image with the 🤗 Datasets Image feature:
dataset[0]["image"]
Load the image processor with [AutoImageProcessor.from_pretrained]:
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
First, let's add some image augmentation. You can use any library you prefer, but in this tutorial, we'll use torchvision's transforms module. If you're interested in using another data augmentation library, learn how in the Albumentations or Kornia notebooks.
Here we use Compose to chain together a couple of
transforms - RandomResizedCrop and ColorJitter.
Note that for resizing, we can get the image size requirements from the image_processor. For some models, an exact height and
width are expected, for others only the shortest_edge is defined.
from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose
size = (
image_processor.size["shortest_edge"]
if "shortest_edge" in image_processor.size
else (image_processor.size["height"], image_processor.size["width"])
)
_transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
The model accepts pixel_values
as its input. ImageProcessor can take care of normalizing the images, and generating appropriate tensors.
Create a function that combines image augmentation and image preprocessing for a batch of images and generates pixel_values:
def transforms(examples):
images = [_transforms(img.convert("RGB")) for img in examples["image"]]
examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"]
return examples
In the example above we set do_resize=False because we have already resized the images in the image augmentation transformation,
and leveraged the size attribute from the appropriate image_processor. If you do not resize images during image augmentation,
leave this parameter out. By default, ImageProcessor will handle the resizing.
If you wish to normalize images as a part of the augmentation transformation, use the image_processor.image_mean,
and image_processor.image_std values.
Then use 🤗 Datasets set_transform to apply the transforms on the fly:
dataset.set_transform(transforms)
Now when you access the image, you'll notice the image processor has added pixel_values. You can pass your processed dataset to the model now!
dataset[0].keys()
Here is what the image looks like after the transforms are applied. The image has been randomly cropped and its color properties are different.
import numpy as np
import matplotlib.pyplot as plt
img = dataset[0]["pixel_values"]
plt.imshow(img.permute(1, 2, 0))
For tasks like object detection, semantic segmentation, instance segmentation, and panoptic segmentation, ImageProcessor
offers post-processing methods. These methods convert the model's raw outputs into meaningful predictions such as bounding boxes,
or segmentation maps.
Pad
In some cases, for instance, when fine-tuning DETR, the model applies scale augmentation at training
time. This may cause images to be different sizes in a batch. You can use [DetrImageProcessor.pad]
from [DetrImageProcessor] and define a custom collate_fn to batch images together.
def collate_fn(batch):
pixel_values = [item["pixel_values"] for item in batch]
encoding = image_processor.pad(pixel_values, return_tensors="pt")
labels = [item["labels"] for item in batch]
batch = {}
batch["pixel_values"] = encoding["pixel_values"]
batch["pixel_mask"] = encoding["pixel_mask"]
batch["labels"] = labels
return batch
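As a usage sketch (assuming each item of dataset already provides pixel_values and labels, as in a typical DETR fine-tuning setup), the collate_fn can be passed straight to a PyTorch DataLoader:
from torch.utils.data import DataLoader

# pad() pads every image in the batch to the largest height/width and returns a pixel_mask marking real pixels.
dataloader = DataLoader(dataset, batch_size=4, shuffle=True, collate_fn=collate_fn)
batch = next(iter(dataloader))
print(batch["pixel_values"].shape, batch["pixel_mask"].shape)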
Multimodal
For tasks involving multimodal inputs, you'll need a processor to prepare your dataset for the model. A processor couples together two processing objects, such as a tokenizer and a feature extractor.
Load the LJ Speech dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use a processor for automatic speech recognition (ASR):
from datasets import load_dataset
lj_speech = load_dataset("lj_speech", split="train")
For ASR, you're mainly focused on audio and text so you can remove the other columns:
lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
Now take a look at the audio and text columns:
lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
 7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
'sampling_rate': 22050}
lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
Remember you should always resample your audio dataset's sampling rate to match the sampling rate of the dataset used to pretrain a model!
lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
Load a processor with [AutoProcessor.from_pretrained]:
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
Create a function to process the audio data contained in array to input_values, and tokenize text to labels. These are the inputs to the model:
def prepare_dataset(example):
audio = example["audio"]
example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
return example
Apply the prepare_dataset function to a sample:
prepare_dataset(lj_speech[0])
The processor has now added input_values and labels, and the sampling rate has also been correctly downsampled to 16kHz. You can pass your processed dataset to the model now!
Training on Specialized Hardware
Note: Most of the strategies introduced in the single GPU section (such as mixed precision training or gradient accumulation) and the multi-GPU section are generic and apply to training models in general, so make sure to take a look at them before diving into this section.
This document will be completed soon with information on how to train on specialized hardware.
Feature Extractor
A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction
from sequences, e.g., pre-processing audio files to Log-Mel Spectrogram features, and feature extraction from images,
e.g., cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow
tensors.
FeatureExtractionMixin
[[autodoc]] feature_extraction_utils.FeatureExtractionMixin
- from_pretrained
- save_pretrained
SequenceFeatureExtractor
[[autodoc]] SequenceFeatureExtractor
- pad
BatchFeature
[[autodoc]] BatchFeature
ImageFeatureExtractionMixin
[[autodoc]] image_utils.ImageFeatureExtractionMixin
FocalNet
Overview
The FocalNet model was proposed in Focal Modulation Networks by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
FocalNets completely replace self-attention (used in models like ViT and Swin) by a focal modulation mechanism for modeling token interactions in vision.
The authors claim that FocalNets outperform self-attention based models with similar computational costs on the tasks of image classification, object detection, and segmentation.
The abstract from the paper is the following:
We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision. Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its
content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224 and 384, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1\times outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3\times schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and Mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new SoTA on top of much larger attention-based models like Swinv2-G and BEIT-3.
Tips:
One can use the [AutoImageProcessor] class to prepare images for the model.
This model was contributed by nielsr.
The original code can be found here.
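For example, image classification with a FocalNet checkpoint could look like the following sketch (the microsoft/focalnet-tiny checkpoint name and the example image URL are assumptions):
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, FocalNetForImageClassification

# "microsoft/focalnet-tiny" is an assumed checkpoint name; swap in the FocalNet checkpoint you want to use.
checkpoint = "microsoft/focalnet-tiny"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = FocalNetForImageClassification.from_pretrained(checkpoint)

# Any RGB image works; this COCO image URL is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])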
FocalNetConfig
[[autodoc]] FocalNetConfig
FocalNetModel
[[autodoc]] FocalNetModel
- forward
FocalNetForMaskedImageModeling
[[autodoc]] FocalNetForMaskedImageModeling
- forward
FocalNetForImageClassification
[[autodoc]] FocalNetForImageClassification
- forward
RoFormer
Overview
The RoFormer model was proposed in RoFormer: Enhanced Transformer with Rotary Position Embedding by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
The abstract from the paper is the following:
Position encoding in transformer architecture provides supervision for dependency modeling between elements at
different positions in the sequence. We investigate various methods to encode positional information in
transformer-based language models and propose a novel implementation named Rotary Position Embedding(RoPE). The
proposed RoPE encodes absolute positional information with rotation matrix and naturally incorporates explicit relative
position dependency in self-attention formulation. Notably, RoPE comes with valuable properties such as flexibility of
being expand to any sequence lengths, decaying inter-token dependency with increasing relative distances, and
capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced
transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We
release the theoretical analysis along with some preliminary experiment results on Chinese data. The undergoing
experiment for English benchmark will soon be updated.
Tips:
RoFormer is a BERT-like autoencoding model with rotary position embeddings. Rotary position embeddings have shown
improved performance on classification tasks with long texts.
This model was contributed by junnyu. The original code can be found here.
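A short masked language modeling sketch follows; the junnyu/roformer_chinese_base checkpoint name is an assumption, and the tokenizer for the Chinese checkpoints additionally relies on the rjieba package:
import torch
from transformers import RoFormerTokenizer, RoFormerForMaskedLM

# Assumed checkpoint; any RoFormer checkpoint from the Hub works the same way.
checkpoint = "junnyu/roformer_chinese_base"
tokenizer = RoFormerTokenizer.from_pretrained(checkpoint)
model = RoFormerForMaskedLM.from_pretrained(checkpoint)

inputs = tokenizer("今天[MASK]很好。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the most likely token for the masked position.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_index].argmax(-1)))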
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
RoFormerConfig
[[autodoc]] RoFormerConfig
RoFormerTokenizer
[[autodoc]] RoFormerTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
RoFormerTokenizerFast
[[autodoc]] RoFormerTokenizerFast
- build_inputs_with_special_tokens
RoFormerModel
[[autodoc]] RoFormerModel
- forward
RoFormerForCausalLM
[[autodoc]] RoFormerForCausalLM
- forward
RoFormerForMaskedLM
[[autodoc]] RoFormerForMaskedLM
- forward
RoFormerForSequenceClassification
[[autodoc]] RoFormerForSequenceClassification
- forward
RoFormerForMultipleChoice
[[autodoc]] RoFormerForMultipleChoice
- forward
RoFormerForTokenClassification
[[autodoc]] RoFormerForTokenClassification
- forward
RoFormerForQuestionAnswering
[[autodoc]] RoFormerForQuestionAnswering
- forward
TFRoFormerModel
[[autodoc]] TFRoFormerModel
- call
TFRoFormerForMaskedLM
[[autodoc]] TFRoFormerForMaskedLM
- call
TFRoFormerForCausalLM
[[autodoc]] TFRoFormerForCausalLM
- call
TFRoFormerForSequenceClassification
[[autodoc]] TFRoFormerForSequenceClassification
- call
TFRoFormerForMultipleChoice
[[autodoc]] TFRoFormerForMultipleChoice
- call
TFRoFormerForTokenClassification
[[autodoc]] TFRoFormerForTokenClassification
- call
TFRoFormerForQuestionAnswering
[[autodoc]] TFRoFormerForQuestionAnswering
- call
FlaxRoFormerModel
[[autodoc]] FlaxRoFormerModel
- call
FlaxRoFormerForMaskedLM
[[autodoc]] FlaxRoFormerForMaskedLM
- call
FlaxRoFormerForSequenceClassification
[[autodoc]] FlaxRoFormerForSequenceClassification
- call
FlaxRoFormerForMultipleChoice
[[autodoc]] FlaxRoFormerForMultipleChoice
- call
FlaxRoFormerForTokenClassification
[[autodoc]] FlaxRoFormerForTokenClassification
- call
FlaxRoFormerForQuestionAnswering
[[autodoc]] FlaxRoFormerForQuestionAnswering
- call
RoBERTa-PreLayerNorm
Overview
The RoBERTa-PreLayerNorm model was proposed in fairseq: A Fast, Extensible Toolkit for Sequence Modeling by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
It is identical to using the --encoder-normalize-before flag in fairseq.
The abstract from the paper is the following:
fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs.
Tips:
The implementation is the same as RoBERTa except that instead of using Add and Norm it uses Norm and Add. Add and Norm refers to the Addition and LayerNormalization as described in Attention Is All You Need.
This is identical to using the --encoder-normalize-before flag in fairseq.
This model was contributed by andreasmaden.
The original code can be found here.
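As a short usage sketch (the andreasmadsen/efficient_mlm_m0.40 checkpoint name is an assumption; any RoBERTa-PreLayerNorm checkpoint works the same way):
import torch
from transformers import AutoTokenizer, RobertaPreLayerNormForMaskedLM

checkpoint = "andreasmadsen/efficient_mlm_m0.40"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = RobertaPreLayerNormForMaskedLM.from_pretrained(checkpoint)

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Most likely token at the masked position.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_index].argmax(-1)))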
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
RobertaPreLayerNormConfig
[[autodoc]] RobertaPreLayerNormConfig
RobertaPreLayerNormModel
[[autodoc]] RobertaPreLayerNormModel
- forward
RobertaPreLayerNormForCausalLM
[[autodoc]] RobertaPreLayerNormForCausalLM
- forward
RobertaPreLayerNormForMaskedLM
[[autodoc]] RobertaPreLayerNormForMaskedLM
- forward
RobertaPreLayerNormForSequenceClassification
[[autodoc]] RobertaPreLayerNormForSequenceClassification
- forward
RobertaPreLayerNormForMultipleChoice
[[autodoc]] RobertaPreLayerNormForMultipleChoice
- forward
RobertaPreLayerNormForTokenClassification
[[autodoc]] RobertaPreLayerNormForTokenClassification
- forward
RobertaPreLayerNormForQuestionAnswering
[[autodoc]] RobertaPreLayerNormForQuestionAnswering
- forward
TFRobertaPreLayerNormModel
[[autodoc]] TFRobertaPreLayerNormModel
- call
TFRobertaPreLayerNormForCausalLM
[[autodoc]] TFRobertaPreLayerNormForCausalLM
- call
TFRobertaPreLayerNormForMaskedLM
[[autodoc]] TFRobertaPreLayerNormForMaskedLM
- call
TFRobertaPreLayerNormForSequenceClassification
[[autodoc]] TFRobertaPreLayerNormForSequenceClassification
- call
TFRobertaPreLayerNormForMultipleChoice
[[autodoc]] TFRobertaPreLayerNormForMultipleChoice
- call
TFRobertaPreLayerNormForTokenClassification
[[autodoc]] TFRobertaPreLayerNormForTokenClassification
- call
TFRobertaPreLayerNormForQuestionAnswering
[[autodoc]] TFRobertaPreLayerNormForQuestionAnswering
- call
FlaxRobertaPreLayerNormModel
[[autodoc]] FlaxRobertaPreLayerNormModel
- call
FlaxRobertaPreLayerNormForCausalLM
[[autodoc]] FlaxRobertaPreLayerNormForCausalLM
- call
FlaxRobertaPreLayerNormForMaskedLM
[[autodoc]] FlaxRobertaPreLayerNormForMaskedLM
- call
FlaxRobertaPreLayerNormForSequenceClassification
[[autodoc]] FlaxRobertaPreLayerNormForSequenceClassification
- call
FlaxRobertaPreLayerNormForMultipleChoice
[[autodoc]] FlaxRobertaPreLayerNormForMultipleChoice
- call
FlaxRobertaPreLayerNormForTokenClassification
[[autodoc]] FlaxRobertaPreLayerNormForTokenClassification
- call
FlaxRobertaPreLayerNormForQuestionAnswering
[[autodoc]] FlaxRobertaPreLayerNormForQuestionAnswering
- call
SpeechT5
Overview
The SpeechT5 model was proposed in SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
The abstract from the paper is the following:
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
This model was contributed by Matthijs. The original code can be found here.
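A minimal text-to-speech sketch is shown below; the microsoft/speecht5_tts and microsoft/speecht5_hifigan checkpoint names refer to the released models, and the random speaker embedding is only a placeholder (real applications load x-vector speaker embeddings):
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, my dog is cute.", return_tensors="pt")

# A random 512-dimensional speaker embedding keeps the sketch self-contained;
# in practice you would use embeddings extracted with a speaker-verification model.
speaker_embeddings = torch.randn(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
print(speech.shape)  # 1D waveform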
SpeechT5Config
[[autodoc]] SpeechT5Config
SpeechT5HifiGanConfig
[[autodoc]] SpeechT5HifiGanConfig
SpeechT5Tokenizer
[[autodoc]] SpeechT5Tokenizer
- call
- save_vocabulary
- decode
- batch_decode
SpeechT5FeatureExtractor
[[autodoc]] SpeechT5FeatureExtractor
- call
SpeechT5Processor
[[autodoc]] SpeechT5Processor
- call
- pad
- from_pretrained
- save_pretrained
- batch_decode
- decode
SpeechT5Model
[[autodoc]] SpeechT5Model
- forward
SpeechT5ForSpeechToText
[[autodoc]] SpeechT5ForSpeechToText
- forward
SpeechT5ForTextToSpeech
[[autodoc]] SpeechT5ForTextToSpeech
- forward
- generate_speech
SpeechT5ForSpeechToSpeech
[[autodoc]] SpeechT5ForSpeechToSpeech
- forward
- generate_speech
SpeechT5HifiGan
[[autodoc]] SpeechT5HifiGan
- forward
BertGeneration
Overview
The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using
[EncoderDecoderModel] as proposed in Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
The abstract from the paper is the following:
Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By
warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple
benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language
Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We
developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT,
GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both
encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation,
Text Summarization, Sentence Splitting, and Sentence Fusion.
Usage:
The model can be used in combination with the [EncoderDecoderModel] to leverage two pretrained
BERT checkpoints for subsequent fine-tuning.
from transformers import BertGenerationDecoder, BertGenerationEncoder, BertTokenizer, EncoderDecoderModel

# leverage checkpoints for Bert2Bert model
# use BERT's cls token as BOS token and sep token as EOS token
encoder = BertGenerationEncoder.from_pretrained("bert-large-uncased", bos_token_id=101, eos_token_id=102)

# add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token
decoder = BertGenerationDecoder.from_pretrained(
    "bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
)
bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)

# create tokenizer
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")

input_ids = tokenizer(
    "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
).input_ids
labels = tokenizer("This is a short summary", return_tensors="pt").input_ids

# train
loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward()
Pretrained [EncoderDecoderModel] checkpoints are also directly available on the model hub, e.g.:
from transformers import AutoTokenizer, EncoderDecoderModel

# instantiate sentence fusion model
sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

input_ids = tokenizer(
    "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = sentence_fuser.generate(input_ids)
print(tokenizer.decode(outputs[0]))
Tips:
[BertGenerationEncoder] and [BertGenerationDecoder] should be used in
combination with [EncoderDecoderModel].
For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input.
Therefore, no EOS token should be added to the end of the input.
This model was contributed by patrickvonplaten. The original code can be
found here.
BertGenerationConfig
[[autodoc]] BertGenerationConfig
BertGenerationTokenizer
[[autodoc]] BertGenerationTokenizer
- save_vocabulary
BertGenerationEncoder
[[autodoc]] BertGenerationEncoder
- forward
BertGenerationDecoder
[[autodoc]] BertGenerationDecoder
- forward
VisualBERT
Overview
The VisualBERT model was proposed in VisualBERT: A Simple and Performant Baseline for Vision and Language by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
VisualBERT is a neural network trained on a variety of (image, text) pairs.
The abstract from the paper is the following:
We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks.
VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an
associated input image with self-attention. We further propose two visually-grounded language model objectives for
pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2,
and Flickr30K show that VisualBERT outperforms or rivals with state-of-the-art models while being significantly
simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any
explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between
verbs and image regions corresponding to their arguments.
Tips:
Most of the checkpoints provided work with the [VisualBertForPreTraining] configuration. Other
checkpoints provided are the fine-tuned checkpoints for down-stream tasks - VQA ('visualbert-vqa'), VCR
('visualbert-vcr'), NLVR2 ('visualbert-nlvr2'). Hence, if you are not working on these downstream tasks, it is
recommended that you use the pretrained checkpoints.
For the VCR task, the authors use a fine-tuned detector for generating visual embeddings, for all the checkpoints.
We do not provide the detector and its weights as a part of the package, but it will be available in the research
projects, and the states can be loaded directly into the detector provided.
Usage
VisualBERT is a multi-modal vision and language model. It can be used for visual question answering, multiple choice,
visual reasoning and region-to-phrase correspondence tasks. VisualBERT uses a BERT-like transformer to prepare
embeddings for image-text pairs. Both the text and visual features are then projected to a latent space with identical
dimension.
To feed images to the model, each image is passed through a pre-trained object detector and the regions and the
bounding boxes are extracted. The authors use the features generated after passing these regions through a pre-trained
CNN like ResNet as visual embeddings. They also add absolute position embeddings, and feed the resulting sequence of
vectors to a standard BERT model. The text input is concatenated in front of the visual embeddings in the embedding
layer, and is expected to be bounded by [CLS] and [SEP] tokens, as in BERT. The segment IDs must also be set
appropriately for the textual and visual parts.
The [BertTokenizer] is used to encode the text. A custom detector/image processor must be used
to get the visual embeddings. The following example notebooks show how to use VisualBERT with Detectron-like models:
VisualBERT VQA demo notebook : This notebook
contains an example on VisualBERT VQA.
Generate Embeddings for VisualBERT (Colab Notebook) : This notebook contains
an example on how to generate visual embeddings.
The following example shows how to get the last hidden state using [VisualBertModel]:
import torch
from transformers import BertTokenizer, VisualBertModel
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("What is the man eating?", return_tensors="pt")
# this is a custom function that returns the visual embeddings given the image path
visual_embeds = get_visual_embeddings(image_path)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update(
{
"visual_embeds": visual_embeds,
"visual_token_type_ids": visual_token_type_ids,
"visual_attention_mask": visual_attention_mask,
}
)
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
This model was contributed by gchhablani. The original code can be found here.
VisualBertConfig
[[autodoc]] VisualBertConfig
VisualBertModel
[[autodoc]] VisualBertModel
- forward
VisualBertForPreTraining
[[autodoc]] VisualBertForPreTraining
- forward
VisualBertForQuestionAnswering
[[autodoc]] VisualBertForQuestionAnswering
- forward
VisualBertForMultipleChoice
[[autodoc]] VisualBertForMultipleChoice
- forward
VisualBertForVisualReasoning
[[autodoc]] VisualBertForVisualReasoning
- forward
VisualBertForRegionToPhraseAlignment
[[autodoc]] VisualBertForRegionToPhraseAlignment
- forward
VisionTextDualEncoder
Overview
The [VisionTextDualEncoderModel] can be used to initialize a vision-text dual encoder model with
any pretrained vision autoencoding model as the vision encoder (e.g. ViT, BEiT, DeiT) and any pretrained text autoencoding model as the text encoder (e.g. RoBERTa, BERT). Two projection layers are added on top of both the vision and text encoder to project the output embeddings
to a shared latent space. The projection layers are randomly initialized so the model should be fine-tuned on a
downstream task. This model can be used to align the vision-text embeddings using CLIP-like contrastive image-text
training, and can then be used for zero-shot vision tasks such as image classification or retrieval.
In LiT: Zero-Shot Transfer with Locked-image Text Tuning it is shown how
leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvements on
new zero-shot vision tasks such as image classification or retrieval.
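A minimal sketch of building such a dual encoder with [~VisionTextDualEncoderModel.from_vision_text_pretrained]; the google/vit-base-patch16-224 and bert-base-uncased backbone checkpoints are only example choices:
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Combine a pretrained vision encoder and a pretrained text encoder; the projection layers are new and random,
# so the resulting model still needs contrastive fine-tuning before it is useful for retrieval.
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)

# Pair the matching image processor and tokenizer into a single processor.
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)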
VisionTextDualEncoderConfig
[[autodoc]] VisionTextDualEncoderConfig
VisionTextDualEncoderProcessor
[[autodoc]] VisionTextDualEncoderProcessor
VisionTextDualEncoderModel
[[autodoc]] VisionTextDualEncoderModel
- forward
FlaxVisionTextDualEncoderModel
[[autodoc]] FlaxVisionTextDualEncoderModel
- call
TFVisionTextDualEncoderModel
[[autodoc]] TFVisionTextDualEncoderModel
- call
MegatronGPT2
Overview
The MegatronGPT2 model was proposed in Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
Jared Casper and Bryan Catanzaro.
The abstract from the paper is the following:
Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
of 89.4%).
Tips:
We have provided pretrained GPT2-345M checkpoints
for evaluation or for finetuning downstream tasks.
To access these checkpoints, first sign up for and set up the NVIDIA GPU Cloud (NGC)
Registry CLI. Further documentation for downloading models can be found in the NGC documentation.
Alternatively, you can directly download the checkpoints using:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_gpt2_345m_v0_0.zip
Once you have obtained the checkpoint from NVIDIA GPU Cloud (NGC), you have to convert it to a format that will easily
be loaded by Hugging Face Transformers GPT2 implementation.
The following command allows you to do the conversion. We assume that the folder models/megatron_gpt2 contains
megatron_gpt2_345m_v0_0.zip and that the command is run from that folder:
python3 $PATH_TO_TRANSFORMERS/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_gpt2_345m_v0_0.zip
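After conversion, the resulting folder can presumably be loaded with the regular GPT-2 classes; the "models/megatron_gpt2" output path below is an assumption, and the converted model reuses the standard GPT-2 vocabulary:
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# "models/megatron_gpt2" is the assumed folder holding the converted checkpoint from the command above.
model = GPT2LMHeadModel.from_pretrained("models/megatron_gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")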
This model was contributed by jdemouth. The original code can be found here. That repository contains a multi-GPU and multi-node implementation of the
Megatron Language models. In particular, it contains a hybrid model parallel approach using "tensor parallel" and
"pipeline parallel" techniques. |
LongT5
Overview
The LongT5 model was proposed in LongT5: Efficient Text-To-Text Transformer for Long Sequences
by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung and Yinfei Yang. It's an
encoder-decoder transformer pre-trained in a text-to-text denoising generative setting. The LongT5 model is an extension of
the T5 model, and it enables using one of two different efficient attention mechanisms - (1) Local attention, or (2)
Transient-Global attention.
The abstract from the paper is the following:
Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the
performance of Transformer-based neural models. In this paper, we present a new model, called LongT5, with which we
explore the effects of scaling both the input length and model size at the same time. Specifically, we integrated
attention ideas from long-input transformers (ETC), and adopted pre-training strategies from summarization pre-training
(PEGASUS) into the scalable T5 architecture. The result is a new attention mechanism we call {\em Transient Global}
(TGlobal), which mimics ETC's local/global attention mechanism, but without requiring additional side-inputs. We are
able to achieve state-of-the-art results on several summarization tasks and outperform the original T5 models on
question answering tasks.
Tips:
[LongT5ForConditionalGeneration] is an extension of [T5ForConditionalGeneration] exchanging the traditional
encoder self-attention layer with efficient either local attention or transient-global (tglobal) attention.
Unlike the T5 model, LongT5 does not use a task prefix. Furthermore, it uses a different pre-training objective
inspired by the pre-training of [PegasusForConditionalGeneration].
The LongT5 model is designed to work efficiently and very well on long-range sequence-to-sequence tasks where the
input sequence exceeds the commonly used 512 tokens. It is capable of handling input sequences up to 16,384 tokens long.
For Local Attention, the sparse sliding-window local attention operation allows a given token to attend only r
tokens to the left and right of it (with r=127 by default). Local Attention does not introduce any new parameters
to the model. The complexity of the mechanism is linear in input sequence length l: O(l*r).
Transient Global Attention is an extension of the Local Attention. It, furthermore, allows each input token to
interact with all other tokens in the layer. This is achieved via splitting an input sequence into blocks of a fixed
length k (with a default k=16). Then, a global token for such a block is obtained via summing and normalizing the embeddings of every token
in the block. Thanks to this, the attention allows each token to attend to both nearby tokens like in Local attention, and
also every global token like in the case of standard global attention (transient represents the fact the global tokens
are constructed dynamically within each attention operation). As a consequence, TGlobal attention introduces
a few new parameters -- global relative position biases and a layer normalization for global token's embedding.
The complexity of this mechanism is O(l(r + l/k)).
An example showing how to evaluate a fine-tuned LongT5 model on the pubmed dataset is below.
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, LongT5ForConditionalGeneration
dataset = load_dataset("scientific_papers", "pubmed", split="validation")
model = (
LongT5ForConditionalGeneration.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
.to("cuda")
.half()
)
tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
def generate_answers(batch):
inputs_dict = tokenizer(
batch["article"], max_length=16384, padding="max_length", truncation=True, return_tensors="pt"
)
input_ids = inputs_dict.input_ids.to("cuda")
attention_mask = inputs_dict.attention_mask.to("cuda")
output_ids = model.generate(input_ids, attention_mask=attention_mask, max_length=512, num_beams=2)
batch["predicted_abstract"] = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
return batch
result = dataset.map(generate_answers, batched=True, batch_size=2)
rouge = evaluate.load("rouge")
rouge.compute(predictions=result["predicted_abstract"], references=result["abstract"])
This model was contributed by stancld.
The original code can be found here.
Documentation resources
Translation task guide
Summarization task guide
LongT5Config
[[autodoc]] LongT5Config
LongT5Model
[[autodoc]] LongT5Model
- forward
LongT5ForConditionalGeneration
[[autodoc]] LongT5ForConditionalGeneration
- forward
LongT5EncoderModel
[[autodoc]] LongT5EncoderModel
- forward
FlaxLongT5Model
[[autodoc]] FlaxLongT5Model
- call
- encode
- decode
FlaxLongT5ForConditionalGeneration
[[autodoc]] FlaxLongT5ForConditionalGeneration
- call
- encode
- decode
CTRL
Overview
The CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong and
Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus
of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.).
The abstract from the paper is the following:
Large-scale language models show promising text generation capabilities, but users cannot easily control particular
aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model,
trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were
derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while
providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the
training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data
via model-based source attribution.
Tips:
CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences
or links to generate coherent text. Refer to the original implementation for
more information.
CTRL is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text as it can be
observed in the run_generation.py example script.
The PyTorch models can take past_key_values as input, which is the previously computed key/value attention pairs.
TensorFlow models accept past as input. Using the past_key_values value prevents the model from re-computing
pre-computed values in the context of text generation. See the forward
method for more information on the usage of this argument.
This model was contributed by keskarnitishr. The original code can be found
here.
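A minimal generation sketch using a control code; the Salesforce/ctrl checkpoint name is an assumption (the model was also published simply as ctrl), and the sampling settings are illustrative:
from transformers import CTRLLMHeadModel, CTRLTokenizer

# Assumed Hub checkpoint name for the released 1.63B-parameter model.
tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# Start the prompt with a control code (here "Books") so the generation is conditioned on it.
inputs = tokenizer("Books Knowledge is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))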
Documentation resources
Text classification task guide
Causal language modeling task guide
CTRLConfig
[[autodoc]] CTRLConfig
CTRLTokenizer
[[autodoc]] CTRLTokenizer
- save_vocabulary
CTRLModel
[[autodoc]] CTRLModel
- forward
CTRLLMHeadModel
[[autodoc]] CTRLLMHeadModel
- forward
CTRLForSequenceClassification
[[autodoc]] CTRLForSequenceClassification
- forward
TFCTRLModel
[[autodoc]] TFCTRLModel
- call
TFCTRLLMHeadModel
[[autodoc]] TFCTRLLMHeadModel
- call
TFCTRLForSequenceClassification
[[autodoc]] TFCTRLForSequenceClassification
- call
MBart and MBart-50
DISCLAIMER: If you see something strange, file a Github Issue and assign
@patrickvonplaten
Overview of MBart
The MBart model was presented in Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan
Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual
corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete
sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only
on the encoder, decoder, or reconstructing parts of the text.
This model was contributed by valhalla. The Authors' code can be found here
Training of MBart
MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for translation tasks. As the
model is multilingual it expects the sequences in a different format. A special language id token is added in both the
source and target text. The source text format is X [eos, src_lang_code] where X is the source text. The
target text format is [tgt_lang_code] X [eos]. bos is never used.
The regular [~MBartTokenizer.__call__] will encode the source text format passed as the first argument or with the text
keyword, and the target text format passed with the text_target keyword argument.
Supervised training
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
# forward pass
model(**inputs)
Generation
While generating the target text set the decoder_start_token_id to the target language id. The following
example shows how to translate English to Romanian using the facebook/mbart-large-en-ro model.
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
article = "UN Chief Says There Is No Military Solution in Syria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"ลeful ONU declarฤ cฤ nu existฤ o soluลฃie militarฤ รฎn Siria"
Overview of MBart-50
MBart-50 was introduced in the Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav
Chaudhary, Jiatao Gu, Angela Fan. MBart-50 is created using the original mbart-large-cc25 checkpoint by extending
its embedding layers with randomly initialized vectors for an extra set of 25 language tokens and then pretraining on 50
languages.
According to the abstract
Multilingual translation models can be created through multilingual finetuning. Instead of finetuning on one
direction, a pretrained model is finetuned on many directions at the same time. It demonstrates that pretrained models
can be extended to incorporate additional languages without loss of performance. Multilingual finetuning improves on
average 1 BLEU over the strongest baselines (being either multilingual from scratch or bilingual finetuning) while
improving 9.3 BLEU on average over bilingual baselines from scratch.
Training of MBart-50
The text format for MBart-50 is slightly different from mBART. For MBart-50 the language id token is used as a prefix
for both source and target text, i.e. the text format is [lang_code] X [eos], where lang_code is the source
language id for source text and target language id for target text, with X being the source or target text
respectively.
MBart-50 has its own tokenizer [MBart50Tokenizer].
Supervised training
thon
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
model(**model_inputs) # forward pass
Generation
To generate using the mBART-50 multilingual translation models, eos_token_id is used as the
decoder_start_token_id and the target language id is forced as the first generated token by passing the
forced_bos_token_id parameter to the generate method. The following example shows how to translate Hindi to French
and Arabic to English using the facebook/mbart-large-50-many-to-many-mmt checkpoint.
thon
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
translate Hindi to French
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
=> "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria."
translate Arabic to English
tokenizer.src_lang = "ar_AR"
encoded_ar = tokenizer(article_ar, return_tensors="pt")
generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
=> "The Secretary-General of the United Nations says there is no military solution in Syria."
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Translation task guide
Summarization task guide
MBartConfig
[[autodoc]] MBartConfig
MBartTokenizer
[[autodoc]] MBartTokenizer
- build_inputs_with_special_tokens
MBartTokenizerFast
[[autodoc]] MBartTokenizerFast
MBart50Tokenizer
[[autodoc]] MBart50Tokenizer
MBart50TokenizerFast
[[autodoc]] MBart50TokenizerFast
MBartModel
[[autodoc]] MBartModel
MBartForConditionalGeneration
[[autodoc]] MBartForConditionalGeneration
MBartForQuestionAnswering
[[autodoc]] MBartForQuestionAnswering
MBartForSequenceClassification
[[autodoc]] MBartForSequenceClassification
MBartForCausalLM
[[autodoc]] MBartForCausalLM
- forward
TFMBartModel
[[autodoc]] TFMBartModel
- call
TFMBartForConditionalGeneration
[[autodoc]] TFMBartForConditionalGeneration
- call
FlaxMBartModel
[[autodoc]] FlaxMBartModel
- call
- encode
- decode
FlaxMBartForConditionalGeneration
[[autodoc]] FlaxMBartForConditionalGeneration
- call
- encode
- decode
FlaxMBartForSequenceClassification
[[autodoc]] FlaxMBartForSequenceClassification
- call
- encode
- decode
FlaxMBartForQuestionAnswering
[[autodoc]] FlaxMBartForQuestionAnswering
- call
- encode
- decode |
BLOOM
Overview
The BLOOM model, in its various versions, was proposed through the BigScience Workshop. BigScience is inspired by other open science initiatives where researchers have pooled their time and resources to collectively achieve a higher impact.
The architecture of BLOOM is essentially similar to GPT-3 (an auto-regressive model for next-token prediction), but it has been trained on 46 different languages and 13 programming languages.
Several smaller versions of the models have been trained on the same dataset. BLOOM is available in the following versions:
bloom-560m
bloom-1b1
bloom-1b7
bloom-3b
bloom-7b1
bloom (176B parameters)
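All of these checkpoints can be used like any other causal language model in the library. Below is a minimal, illustrative generation sketch using the smallest checkpoint listed above; the prompt and generation settings are arbitrary placeholders.
thon
from transformers import AutoTokenizer, BloomForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")

# Simple greedy generation from a short prompt
inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))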
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLOOM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
[BloomForCausalLM] is supported by this causal language modeling example script and notebook.
See also:
- Causal language modeling task guide
- Text classification task guide
- Token classification task guide
- Question answering task guide
⚡️ Inference
- A blog on Optimization story: Bloom inference.
- A blog on Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate.
⚙️ Training
- A blog on The Technology Behind BLOOM Training.
BloomConfig
[[autodoc]] BloomConfig
- all
BloomModel
[[autodoc]] BloomModel
- forward
BloomTokenizerFast
[[autodoc]] BloomTokenizerFast
- all
BloomForCausalLM
[[autodoc]] BloomForCausalLM
- forward
BloomForSequenceClassification
[[autodoc]] BloomForSequenceClassification
- forward
BloomForTokenClassification
[[autodoc]] BloomForTokenClassification
- forward
BloomForQuestionAnswering
[[autodoc]] BloomForQuestionAnswering
- forward |
Llama2
Overview
The Llama2 model was proposed in Llama 2: Open Foundation and Fine-Tuned Chat Models by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom. It is a collection of foundation language models ranging from 7B to 70B parameters, with checkpoints finetuned for chat applications.
The abstract from the paper is the following:
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
Check out all Llama2 models here
Tips:
Weights for the Llama2 models can be obtained by filling out this form
The architecture is very similar to the first Llama, with the addition of Grouped Query Attention (GQA) following this paper
Setting config.pretraining_tp to a value other than 1 will activate the more accurate but slower computation of the linear layers, which should better match the original logits.
The original model uses pad_id = -1, meaning there is no padding token. The same logic cannot be used here, so make sure to add a padding token using tokenizer.add_special_tokens({"pad_token":"<pad>"}) and resize the token embeddings accordingly. You should also set model.config.pad_token_id. The embed_tokens layer of the model is initialized with self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx), which makes sure that encoding the padding token outputs zeros, so passing it when initializing is recommended (a short sketch of this setup is given after these tips).
After filling out the form and gaining access to the model checkpoints, you should be able to use the already converted checkpoints. Otherwise, if you are converting your own model, feel free to use the conversion script. The script can be called with the following (example) command:
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
After conversion, the model and tokenizer can be loaded via:
thon
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions
come in several checkpoints, each of them contains a part of every weight of the model, so they all need to be loaded in RAM). For the 70B model, this amounts to roughly 140GB of RAM.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
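The snippet below is a minimal sketch of the padding-token setup described in the tips above; the local path is the placeholder output of the conversion script and the chosen pad token string is only an example.
thon
from transformers import LlamaForCausalLM, LlamaTokenizer

# "/output/path" is the directory produced by the conversion script above.
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")

# Add a padding token (the original checkpoints ship without one), resize the
# token embeddings, and tell the model which id to use for padding.
tokenizer.add_special_tokens({"pad_token": "<pad>"})
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = tokenizer.pad_token_id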
This model was contributed by Arthur Zucker with contributions from Lysandre Debut. The code of the implementation in Hugging Face is based on GPT-NeoX here. The original code of the authors can be found here.
LlamaConfig
[[autodoc]] LlamaConfig
LlamaTokenizer
[[autodoc]] LlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LlamaTokenizerFast
[[autodoc]] LlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
LlamaModel
[[autodoc]] LlamaModel
- forward
LlamaForCausalLM
[[autodoc]] LlamaForCausalLM
- forward
LlamaForSequenceClassification
[[autodoc]] LlamaForSequenceClassification
- forward |
OWL-ViT
Overview
The OWL-ViT (short for Vision Transformer for Open-World Localization) was proposed in Simple Open-Vocabulary Object Detection with Vision Transformers by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. OWL-ViT is an open-vocabulary object detection network trained on a variety of (image, text) pairs. It can be used to query an image with one or multiple text queries to search for and detect target objects described in text.
The abstract from the paper is the following:
Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.
Usage
OWL-ViT is a zero-shot text-conditioned object detection model. OWL-ViT uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.
[OwlViTImageProcessor] can be used to resize (or rescale) and normalize images for the model and [CLIPTokenizer] is used to encode the text. [OwlViTProcessor] wraps [OwlViTImageProcessor] and [CLIPTokenizer] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [OwlViTProcessor] and [OwlViTForObjectDetection].
thon
import requests
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)
Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.1)
i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]
for box, score, label in zip(boxes, scores, labels):
box = [round(i, 2) for i in box.tolist()]
print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
Detected a photo of a cat with confidence 0.707 at location [324.97, 20.44, 640.58, 373.29]
Detected a photo of a cat with confidence 0.717 at location [1.46, 55.26, 315.55, 472.17]
This model was contributed by adirik. The original code can be found here.
OwlViTConfig
[[autodoc]] OwlViTConfig
- from_text_vision_configs
OwlViTTextConfig
[[autodoc]] OwlViTTextConfig
OwlViTVisionConfig
[[autodoc]] OwlViTVisionConfig
OwlViTImageProcessor
[[autodoc]] OwlViTImageProcessor
- preprocess
- post_process_object_detection
- post_process_image_guided_detection
OwlViTFeatureExtractor
[[autodoc]] OwlViTFeatureExtractor
- call
- post_process
- post_process_image_guided_detection
OwlViTProcessor
[[autodoc]] OwlViTProcessor
OwlViTModel
[[autodoc]] OwlViTModel
- forward
- get_text_features
- get_image_features
OwlViTTextModel
[[autodoc]] OwlViTTextModel
- forward
OwlViTVisionModel
[[autodoc]] OwlViTVisionModel
- forward
OwlViTForObjectDetection
[[autodoc]] OwlViTForObjectDetection
- forward
- image_guided_detection |
MarkupLM
Overview
The MarkupLM model was proposed in MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document
Understanding by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. MarkupLM is BERT, but
applied to HTML pages instead of raw text documents. The model incorporates additional embedding layers to improve
performance, similar to LayoutLM.
The model can be used for tasks like question answering on web pages or information extraction from web pages. It obtains
state-of-the-art results on 2 important benchmarks:
- WebSRC, a dataset for Web-Based Structural Reading Comprehension (a bit like SQuAD but for web pages)
- SWDE, a dataset
for information extraction from web pages (basically named-entity recognition on web pages)
The abstract from the paper is the following:
Multimodal pre-training with text, layout, and image has made significant progress for Visually-rich Document
Understanding (VrDU), especially the fixed-layout documents such as scanned document images. While, there are still a
large number of digital documents where the layout information is not fixed and needs to be interactively and
dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply. In this
paper, we propose MarkupLM for document understanding tasks with markup languages as the backbone such as
HTML/XML-based documents, where text and markup information is jointly pre-trained. Experiment results show that the
pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding
tasks. The pre-trained model and code will be publicly available.
Tips:
- In addition to input_ids, [~MarkupLMModel.forward] expects 2 additional inputs, namely xpath_tags_seq and xpath_subs_seq.
These are the XPATH tags and subscripts respectively for each token in the input sequence.
- One can use [MarkupLMProcessor] to prepare all data for the model. Refer to the usage guide for more info.
- Demo notebooks can be found here.
MarkupLM architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Usage: MarkupLMProcessor
The easiest way to prepare data for the model is to use [MarkupLMProcessor], which internally combines a feature extractor
([MarkupLMFeatureExtractor]) and a tokenizer ([MarkupLMTokenizer] or [MarkupLMTokenizerFast]). The feature extractor is
used to extract all nodes and xpaths from the HTML strings, which are then provided to the tokenizer, which turns them into the
token-level inputs of the model (input_ids etc.). Note that you can still use the feature extractor and tokenizer separately,
if you only want to handle one of the two tasks.
thon
from transformers import MarkupLMFeatureExtractor, MarkupLMTokenizerFast, MarkupLMProcessor
feature_extractor = MarkupLMFeatureExtractor()
tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base")
processor = MarkupLMProcessor(feature_extractor, tokenizer)
In short, one can provide HTML strings (and possibly additional data) to [MarkupLMProcessor],
and it will create the inputs expected by the model. Internally, the processor first uses
[MarkupLMFeatureExtractor] to get a list of nodes and corresponding xpaths. The nodes and
xpaths are then provided to [MarkupLMTokenizer] or [MarkupLMTokenizerFast], which converts them
to token-level input_ids, attention_mask, token_type_ids, xpath_subs_seq, xpath_tags_seq.
Optionally, one can provide node labels to the processor, which are turned into token-level labels.
[MarkupLMFeatureExtractor] uses Beautiful Soup, a Python library for
pulling data out of HTML and XML files, under the hood. Note that you can still use your own parsing solution of
choice, and provide the nodes and xpaths yourself to [MarkupLMTokenizer] or [MarkupLMTokenizerFast].
In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these
use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs).
Use case 1: web page classification (training, inference) + token classification (inference), parse_html = True
This is the simplest case, in which the processor will use the feature extractor to get all nodes and xpaths from the HTML.
thon
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
html_string = """
<!DOCTYPE html>
<html>
<head>
<title>Hello world</title>
</head>
<body>
<h1>Welcome</h1>
<p>Here is my website.</p>
</body>
</html>
"""
note that you can also provide all tokenizer parameters here, such as padding and truncation
encoding = processor(html_string, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
Use case 2: web page classification (training, inference) + token classification (inference), parse_html=False
In case one already has obtained all nodes and xpaths, one doesn't need the feature extractor. In that case, one should
provide the nodes and corresponding xpaths themselves to the processor, and make sure to set parse_html to False.
thon
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False
nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
encoding = processor(nodes=nodes, xpaths=xpaths, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
Use case 3: token classification (training), parse_html=False
For token classification tasks (such as SWDE), one can also provide the
corresponding node labels in order to train a model. The processor will then convert these into token-level labels.
By default, it will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the
ignore_index of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can
initialize the tokenizer with only_label_first_subword set to False.
thon
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False
nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
node_labels = [1, 2, 2, 1]
encoding = processor(nodes=nodes, xpaths=xpaths, node_labels=node_labels, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq', 'labels'])
Use case 4: web page question answering (inference), parse_html=True
For question answering tasks on web pages, you can provide a question to the processor. By default, the
processor will use the feature extractor to get all nodes and xpaths, and create [CLS] question tokens [SEP] word tokens [SEP].
thon
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
html_string = """
<!DOCTYPE html>
<html>
<head>
<title>Hello world</title>
</head>
<body>
<h1>Welcome</h1>
<p>My name is Niels.</p>
</body>
</html>
"""
question = "What's his name?"
encoding = processor(html_string, questions=question, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
Use case 5: web page question answering (inference), parse_html=False
For question answering tasks (such as WebSRC), you can provide a question to the processor. If you have extracted
all nodes and xpaths yourself, you can provide them directly to the processor. Make sure to set parse_html to False.
thon
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False
nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
question = "What's his name?"
encoding = processor(nodes=nodes, xpaths=xpaths, questions=question, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
MarkupLMConfig
[[autodoc]] MarkupLMConfig
- all
MarkupLMFeatureExtractor
[[autodoc]] MarkupLMFeatureExtractor
- call
MarkupLMTokenizer
[[autodoc]] MarkupLMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
MarkupLMTokenizerFast
[[autodoc]] MarkupLMTokenizerFast
- all
MarkupLMProcessor
[[autodoc]] MarkupLMProcessor
- call
MarkupLMModel
[[autodoc]] MarkupLMModel
- forward
MarkupLMForSequenceClassification
[[autodoc]] MarkupLMForSequenceClassification
- forward
MarkupLMForTokenClassification
[[autodoc]] MarkupLMForTokenClassification
- forward
MarkupLMForQuestionAnswering
[[autodoc]] MarkupLMForQuestionAnswering
- forward |
ERNIE
Overview
ERNIE is a series of powerful models proposed by Baidu, performing especially well on Chinese tasks,
including ERNIE1.0, ERNIE2.0,
ERNIE3.0, ERNIE-Gram, ERNIE-health, etc.
These models were contributed by nghuyong, and the official code can be found in PaddleNLP (in PaddlePaddle).
How to use
Take ernie-1.0-base-zh as an example:
Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh")
Supported Models
| Model Name | Language | Description |
|:-------------------:|:--------:|:-------------------------------:|
| ernie-1.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-2.0-base-en | English | Layer:12, Heads:12, Hidden:768 |
| ernie-2.0-large-en | English | Layer:24, Heads:16, Hidden:1024 |
| ernie-3.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-3.0-medium-zh | Chinese | Layer:6, Heads:12, Hidden:768 |
| ernie-3.0-mini-zh | Chinese | Layer:6, Heads:12, Hidden:384 |
| ernie-3.0-micro-zh | Chinese | Layer:4, Heads:12, Hidden:384 |
| ernie-3.0-nano-zh | Chinese | Layer:4, Heads:12, Hidden:312 |
| ernie-health-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-gram-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
You can find all the supported models on Hugging Face's model hub: huggingface.co/nghuyong, and model details in Paddle's official
repo: PaddleNLP
and ERNIE.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
ErnieConfig
[[autodoc]] ErnieConfig
- all
Ernie specific outputs
[[autodoc]] models.ernie.modeling_ernie.ErnieForPreTrainingOutput
ErnieModel
[[autodoc]] ErnieModel
- forward
ErnieForPreTraining
[[autodoc]] ErnieForPreTraining
- forward
ErnieForCausalLM
[[autodoc]] ErnieForCausalLM
- forward
ErnieForMaskedLM
[[autodoc]] ErnieForMaskedLM
- forward
ErnieForNextSentencePrediction
[[autodoc]] ErnieForNextSentencePrediction
- forward
ErnieForSequenceClassification
[[autodoc]] ErnieForSequenceClassification
- forward
ErnieForMultipleChoice
[[autodoc]] ErnieForMultipleChoice
- forward
ErnieForTokenClassification
[[autodoc]] ErnieForTokenClassification
- forward
ErnieForQuestionAnswering
[[autodoc]] ErnieForQuestionAnswering
- forward |
ConvNeXt V2
Overview
The ConvNeXt V2 model was proposed in ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, and a successor of ConvNeXT.
The abstract from the paper is the following:
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
Tips:
See the code examples below each model regarding usage.
ConvNeXt V2 architecture. Taken from the original paper.
This model was contributed by adirik. The original code can be found here.
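As a minimal illustration of image classification with ConvNeXt V2, the sketch below assumes the facebook/convnextv2-tiny-1k-224 checkpoint; any other classification checkpoint should work the same way.
thon
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-1k-224")

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Print the predicted ImageNet-1k class
print(model.config.id2label[logits.argmax(-1).item()])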
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2.
[ConvNextV2ForImageClassification] is supported by this example script and notebook.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ConvNextV2Config
[[autodoc]] ConvNextV2Config
ConvNextV2Model
[[autodoc]] ConvNextV2Model
- forward
ConvNextV2ForImageClassification
[[autodoc]] ConvNextV2ForImageClassification
- forward |
SwitchTransformers
Overview
The SwitchTransformers model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, Noam Shazeer.
The Switch Transformer model uses a sparse T5 encoder-decoder architecture, where the MLPs are replaced by a Mixture of Experts (MoE). A routing mechanism (top-1 in this case) associates each token with one of the experts, where each expert is a dense MLP. While Switch Transformers have many more weights than their equivalent dense models, the sparsity allows better scaling and better finetuning performance at scale.
During a forward pass, only a fraction of the weights are used. The routing mechanism allows the model to select relevant weights on the fly which increases the model capacity without increasing the number of operations.
The abstract from the paper is the following:
In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.
Tips:
SwitchTransformers uses the [T5Tokenizer], which can be loaded directly from each model's repository.
The released weights are pretrained on an English masked language modeling task and should be finetuned for downstream use; see the usage sketch below.
This model was contributed by Younes Belkada and Arthur Zucker.
The original code can be found here.
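The sketch below assumes the google/switch-base-8 checkpoint; since the released weights were pretrained with T5-style span corruption, the example fills in sentinel spans rather than generating free-form text.
thon
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")

# Ask the model to fill in the masked sentinel spans, as in T5 pretraining.
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))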
Resources
Translation task guide
Summarization task guide
SwitchTransformersConfig
[[autodoc]] SwitchTransformersConfig
SwitchTransformersTop1Router
[[autodoc]] SwitchTransformersTop1Router
- _compute_router_probabilities
- forward
SwitchTransformersSparseMLP
[[autodoc]] SwitchTransformersSparseMLP
- forward
SwitchTransformersModel
[[autodoc]] SwitchTransformersModel
- forward
SwitchTransformersForConditionalGeneration
[[autodoc]] SwitchTransformersForConditionalGeneration
- forward
SwitchTransformersEncoderModel
[[autodoc]] SwitchTransformersEncoderModel
- forward |
MEGA
Overview
The MEGA model was proposed in Mega: Moving Average Equipped Gated Attention by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
MEGA proposes a new approach to self-attention with each encoder layer having a multi-headed exponential moving average in addition to a single head of standard dot-product attention, giving the attention mechanism
stronger positional biases. This allows MEGA to perform competitively with Transformers on standard benchmarks, including LRA,
while also having significantly fewer parameters. MEGA's compute efficiency allows it to scale to very long sequences, making it an
attractive option for long-document NLP tasks.
The abstract from the paper is the following:
*The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with (exponential) moving average to incorporate inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear time and space complexity yet yields only minimal quality loss, by efficiently splitting the whole sequence into multiple chunks with fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models. *
Tips:
MEGA can perform quite well with relatively few parameters. See Appendix D in the MEGA paper for examples of architectural specs which perform well in various settings. If using MEGA as a decoder, be sure to set bidirectional=False to avoid errors with the default bidirectional setting.
Mega-chunk is a variant of MEGA that reduces time and space complexity from quadratic to linear. Enable chunking with MegaConfig.use_chunking and control the chunk size with MegaConfig.chunk_size; a configuration sketch is given after the implementation notes below.
This model was contributed by mnaylor.
The original code can be found here.
Implementation Notes:
The original implementation of MEGA had an inconsistent expectation of attention masks for padding and causal self-attention between the softmax attention and Laplace/squared ReLU method. This implementation addresses that inconsistency.
The original implementation did not include token type embeddings; this implementation adds support for these, with the option controlled by MegaConfig.add_token_type_embeddings.
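The snippet below is a purely illustrative configuration sketch for the options mentioned above; the values are arbitrary and do not correspond to a released checkpoint.
thon
from transformers import MegaConfig, MegaModel

# Arbitrary example: Mega-chunk (linear complexity) with a chunk size of 64,
# unidirectional processing for decoder-style use, and the optional token type
# embeddings mentioned in the implementation notes.
config = MegaConfig(
    use_chunking=True,
    chunk_size=64,
    bidirectional=False,
    add_token_type_embeddings=True,
)
model = MegaModel(config)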
MegaConfig
[[autodoc]] MegaConfig
MegaModel
[[autodoc]] MegaModel
- forward
MegaForCausalLM
[[autodoc]] MegaForCausalLM
- forward
MegaForMaskedLM
[[autodoc]] MegaForMaskedLM
- forward
MegaForSequenceClassification
[[autodoc]] MegaForSequenceClassification
- forward
MegaForMultipleChoice
[[autodoc]] MegaForMultipleChoice
- forward
MegaForTokenClassification
[[autodoc]] MegaForTokenClassification
- forward
MegaForQuestionAnswering
[[autodoc]] MegaForQuestionAnswering
- forward |
REALM
Overview
The REALM model was proposed in REALM: Retrieval-Augmented Language Model Pre-Training by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. It's a
retrieval-augmented language model that firstly retrieves documents from a textual knowledge corpus and then
utilizes retrieved documents to process question answering tasks.
The abstract from the paper is the following:
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks
such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network,
requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we
augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend
over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the
first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language
modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We
demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the
challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both
explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous
methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as
interpretability and modularity.
This model was contributed by qqaatw. The original code can be found
here.
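A minimal open-domain QA sketch is shown below, assuming the google/realm-orqa-nq-openqa checkpoint; the question and candidate answer are illustrative only.
thon
from transformers import RealmForOpenQA, RealmRetriever, RealmTokenizer

retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa")
tokenizer = RealmTokenizer.from_pretrained("google/realm-orqa-nq-openqa")
model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa", retriever=retriever)

question = "Who is the pioneer in modern computer science?"
question_ids = tokenizer([question], return_tensors="pt")
answer_ids = tokenizer(
    ["alan mathison turing"],
    add_special_tokens=False,
    return_token_type_ids=False,
    return_attention_mask=False,
).input_ids

# The model retrieves supporting blocks and reads out a predicted answer span.
reader_output, predicted_answer_ids = model(**question_ids, answer_ids=answer_ids, return_dict=False)
print(tokenizer.decode(predicted_answer_ids))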
RealmConfig
[[autodoc]] RealmConfig
RealmTokenizer
[[autodoc]] RealmTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
- batch_encode_candidates
RealmTokenizerFast
[[autodoc]] RealmTokenizerFast
- batch_encode_candidates
RealmRetriever
[[autodoc]] RealmRetriever
RealmEmbedder
[[autodoc]] RealmEmbedder
- forward
RealmScorer
[[autodoc]] RealmScorer
- forward
RealmKnowledgeAugEncoder
[[autodoc]] RealmKnowledgeAugEncoder
- forward
RealmReader
[[autodoc]] RealmReader
- forward
RealmForOpenQA
[[autodoc]] RealmForOpenQA
- block_embedding_to
- forward |
XGLM
Overview
The XGLM model was proposed in Few-shot Learning with Multilingual Language Models
by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal,
Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo,
Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
The abstract from the paper is the following:
Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language
tasks without fine-tuning. While these models are known to be able to jointly represent many different languages,
their training data is dominated by English, potentially limiting their cross-lingual generalization.
In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages,
and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters
sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size
in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings)
and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark,
our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the
official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails,
showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement
on surface form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models
in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.
This model was contributed by Suraj. The original code can be found here.
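A minimal generation sketch is shown below, assuming the facebook/xglm-564M checkpoint; the prompt and generation settings are arbitrary.
thon
from transformers import AutoTokenizer, XGLMForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")

# The same checkpoint can be prompted in many languages.
inputs = tokenizer("La Terre tourne autour du", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))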
Documentation resources
Causal language modeling task guide
XGLMConfig
[[autodoc]] XGLMConfig
XGLMTokenizer
[[autodoc]] XGLMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XGLMTokenizerFast
[[autodoc]] XGLMTokenizerFast
XGLMModel
[[autodoc]] XGLMModel
- forward
XGLMForCausalLM
[[autodoc]] XGLMForCausalLM
- forward
TFXGLMModel
[[autodoc]] TFXGLMModel
- call
TFXGLMForCausalLM
[[autodoc]] TFXGLMForCausalLM
- call
FlaxXGLMModel
[[autodoc]] FlaxXGLMModel
- call
FlaxXGLMForCausalLM
[[autodoc]] FlaxXGLMForCausalLM
- call |
Splinter
Overview
The Splinter model was proposed in Few-Shot Question Answering by Pretraining Span Selection by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. Splinter
is an encoder-only transformer (similar to BERT) pretrained using the recurring span selection task on a large corpus
comprising Wikipedia and the Toronto Book Corpus.
The abstract from the paper is the following:
In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order
of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred
training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between
current pretraining objectives and question answering. We propose a new pretraining scheme tailored for question
answering: recurring span selection. Given a passage with multiple sets of recurring spans, we mask in each set all
recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans
are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select
the answer span. The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD
with only 128 training examples), while maintaining competitive performance in the high-resource setting.
Tips:
Splinter was trained to predict answer spans conditioned on a special [QUESTION] token. These tokens contextualize
to question representations which are used to predict the answers. This layer is called QASS, and is the default
behaviour in the [SplinterForQuestionAnswering] class. Therefore:
Use [SplinterTokenizer] (rather than [BertTokenizer]), as it already
contains this special token. Also, its default behavior is to use this token when two sequences are given (for
example, in the run_qa.py script).
If you plan on using Splinter outside run_qa.py, please keep in mind the question token - it might be important for
the success of your model, especially in a few-shot setting.
Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that
one also has the pretrained weights of the QASS layer (tau/splinter-base-qass and tau/splinter-large-qass) and one
doesn't (tau/splinter-base and tau/splinter-large). This is done to support randomly initializing this layer at
fine-tuning, as it is shown to yield better results for some cases in the paper.
This model was contributed by yuvalkirstain and oriram. The original code can be found here.
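A minimal extractive QA sketch is shown below, assuming the tau/splinter-base-qass checkpoint; the question and context are illustrative only.
thon
import torch
from transformers import SplinterForQuestionAnswering, SplinterTokenizer

tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base-qass")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base-qass")

question = "Who wrote Pride and Prejudice?"
context = "Pride and Prejudice is a novel written by Jane Austen in 1813."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end positions of the answer span.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))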
Documentation resources
Question answering task guide
SplinterConfig
[[autodoc]] SplinterConfig
SplinterTokenizer
[[autodoc]] SplinterTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
SplinterTokenizerFast
[[autodoc]] SplinterTokenizerFast
SplinterModel
[[autodoc]] SplinterModel
- forward
SplinterForQuestionAnswering
[[autodoc]] SplinterForQuestionAnswering
- forward
SplinterForPreTraining
[[autodoc]] SplinterForPreTraining
- forward |
DistilBERT
Overview
The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a
distilled version of BERT, and the paper DistilBERT, a
distilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a
small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than
bert-base-uncased and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language
understanding benchmark.
The abstract from the paper is the following:
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP),
operating these large models in on-the-edge and/or under constrained computational training or inference budgets
remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation
model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger
counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage
knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by
40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive
biases learned by larger models during pretraining, we introduce a triple loss combining language modeling,
distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we
demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device
study.
Tips:
DistilBERT doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment. Just
separate your segments with the separation token tokenizer.sep_token (or [SEP]).
DistilBERT doesn't have options to select the input positions (position_ids input). This could be added if
necessary though, just let us know if you need this option.
Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it's been trained to predict the same probabilities as the larger model. The actual objective is a combination of:
finding the same probabilities as the teacher model
predicting the masked tokens correctly (but no next-sentence objective)
a cosine similarity between the hidden states of the student and the teacher model
This model was contributed by victorsanh. This model jax version was
contributed by kamalkraj. The original code can be found here.
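A minimal masked language modeling sketch with the distilbert-base-uncased checkpoint is shown below; note that, as mentioned in the tips, the encoding contains no token_type_ids.
thon
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the most likely token at the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_index].argmax(-1)))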
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DistilBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on Getting Started with Sentiment Analysis using Python with DistilBERT.
A blog post on how to train DistilBERT with Blurr for sequence classification.
A blog post on how to use Ray to tune DistilBERT hyperparameters.
A blog post on how to train DistilBERT with Hugging Face and Amazon SageMaker.
A notebook on how to finetune DistilBERT for multi-label classification. 🌎
A notebook on how to finetune DistilBERT for multiclass classification with PyTorch. 🌎
A notebook on how to finetune DistilBERT for text classification in TensorFlow. 🌎
[DistilBertForSequenceClassification] is supported by this example script and notebook.
[TFDistilBertForSequenceClassification] is supported by this example script and notebook.
[FlaxDistilBertForSequenceClassification] is supported by this example script and notebook.
Text classification task guide
[DistilBertForTokenClassification] is supported by this example script and notebook.
[TFDistilBertForTokenClassification] is supported by this example script and notebook.
[FlaxDistilBertForTokenClassification] is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide
[DistilBertForMaskedLM] is supported by this example script and notebook.
[TFDistilBertForMaskedLM] is supported by this example script and notebook.
[FlaxDistilBertForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
[DistilBertForQuestionAnswering] is supported by this example script and notebook.
[TFDistilBertForQuestionAnswering] is supported by this example script and notebook.
[FlaxDistilBertForQuestionAnswering] is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
Multiple choice
- [DistilBertForMultipleChoice] is supported by this example script and notebook.
- [TFDistilBertForMultipleChoice] is supported by this example script and notebook.
- Multiple choice task guide
⚗️ Optimization
A blog post on how to quantize DistilBERT with 🤗 Optimum and Intel.
A blog post on Optimizing Transformers for GPUs with 🤗 Optimum.
A blog post on Optimizing Transformers with Hugging Face Optimum.
⚡️ Inference
A blog post on how to Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia with DistilBERT.
A blog post on Serverless Inference with Hugging Face's Transformers, DistilBERT and Amazon SageMaker.
🚀 Deploy
A blog post on how to deploy DistilBERT on Google Cloud.
A blog post on how to deploy DistilBERT with Amazon SageMaker.
A blog post on how to Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module.
DistilBertConfig
[[autodoc]] DistilBertConfig
DistilBertTokenizer
[[autodoc]] DistilBertTokenizer
DistilBertTokenizerFast
[[autodoc]] DistilBertTokenizerFast
DistilBertModel
[[autodoc]] DistilBertModel
- forward
DistilBertForMaskedLM
[[autodoc]] DistilBertForMaskedLM
- forward
DistilBertForSequenceClassification
[[autodoc]] DistilBertForSequenceClassification
- forward
DistilBertForMultipleChoice
[[autodoc]] DistilBertForMultipleChoice
- forward
DistilBertForTokenClassification
[[autodoc]] DistilBertForTokenClassification
- forward
DistilBertForQuestionAnswering
[[autodoc]] DistilBertForQuestionAnswering
- forward
TFDistilBertModel
[[autodoc]] TFDistilBertModel
- call
TFDistilBertForMaskedLM
[[autodoc]] TFDistilBertForMaskedLM
- call
TFDistilBertForSequenceClassification
[[autodoc]] TFDistilBertForSequenceClassification
- call
TFDistilBertForMultipleChoice
[[autodoc]] TFDistilBertForMultipleChoice
- call
TFDistilBertForTokenClassification
[[autodoc]] TFDistilBertForTokenClassification
- call
TFDistilBertForQuestionAnswering
[[autodoc]] TFDistilBertForQuestionAnswering
- call
FlaxDistilBertModel
[[autodoc]] FlaxDistilBertModel
- call
FlaxDistilBertForMaskedLM
[[autodoc]] FlaxDistilBertForMaskedLM
- call
FlaxDistilBertForSequenceClassification
[[autodoc]] FlaxDistilBertForSequenceClassification
- call
FlaxDistilBertForMultipleChoice
[[autodoc]] FlaxDistilBertForMultipleChoice
- call
FlaxDistilBertForTokenClassification
[[autodoc]] FlaxDistilBertForTokenClassification
- call
FlaxDistilBertForQuestionAnswering
[[autodoc]] FlaxDistilBertForQuestionAnswering
- call |
Speech Encoder Decoder Models
The [SpeechEncoderDecoderModel] can be used to initialize a speech-to-text model
with any pretrained speech autoencoding model as the encoder (e.g. Wav2Vec2, Hubert) and any pretrained autoregressive model as the decoder.
The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech
recognition and speech translation has e.g. been shown in Large-Scale Self- and Semi-Supervised Learning for Speech
Translation by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli,
Alexis Conneau.
An example of how to use a [SpeechEncoderDecoderModel] for inference can be seen in Speech2Text2.
Randomly initializing SpeechEncoderDecoderModel from model configurations.
[SpeechEncoderDecoderModel] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [Wav2Vec2Model] configuration for the encoder
and the default [BertForCausalLM] configuration for the decoder.
thon
from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel
config_encoder = Wav2Vec2Config()
config_decoder = BertConfig()
config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = SpeechEncoderDecoderModel(config=config)
Initializing SpeechEncoderDecoderModel from a pretrained encoder and a pretrained decoder.
[SpeechEncoderDecoderModel] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, e.g. Wav2Vec2, Hubert can serve as the encoder and both pretrained auto-encoding models, e.g. BERT, pretrained causal language models, e.g. GPT2, as well as the pretrained decoder part of sequence-to-sequence models, e.g. decoder of BART, can be used as the decoder.
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [SpeechEncoderDecoderModel] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post.
To do so, the SpeechEncoderDecoderModel class provides a [SpeechEncoderDecoderModel.from_encoder_decoder_pretrained] method.
thon
from transformers import SpeechEncoderDecoderModel
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
"facebook/hubert-large-ll60k", "bert-base-uncased"
)
Loading an existing SpeechEncoderDecoderModel checkpoint and performing inference.
To load fine-tuned checkpoints of the SpeechEncoderDecoderModel class, [SpeechEncoderDecoderModel] provides the from_pretrained() method just like any other model architecture in Transformers.
To perform inference, one uses the [generate] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
thon
from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import torch
load a fine-tuned speech translation model and corresponding processor
model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
let's perform inference on a piece of English speech (which we'll translate to German)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
autoregressively generate transcription (uses greedy decoding by default)
generated_ids = model.generate(input_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.
Training
Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs.
As you can see, only 2 inputs are required for the model in order to compute a loss: input_values (which are the
speech inputs) and labels (which are the input_ids of the encoded target sequence).
thon
from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel
from datasets import load_dataset
encoder_id = "facebook/wav2vec2-base-960h" # acoustic model encoder
decoder_id = "bert-base-uncased" # text decoder
feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)
Combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
load an audio input and pre-process (normalise mean/std to 0/1)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values
load its corresponding transcription and tokenize to generate labels
labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids
the forward function automatically creates the correct decoder_input_ids
loss = model(input_values=input_values, labels=labels).loss
loss.backward()
SpeechEncoderDecoderConfig
[[autodoc]] SpeechEncoderDecoderConfig
SpeechEncoderDecoderModel
[[autodoc]] SpeechEncoderDecoderModel
- forward
- from_encoder_decoder_pretrained
FlaxSpeechEncoderDecoderModel
[[autodoc]] FlaxSpeechEncoderDecoderModel
- call
- from_encoder_decoder_pretrained |
LiLT
Overview
The LiLT model was proposed in LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding by Jiapeng Wang, Lianwen Jin, Kai Ding.
LiLT allows combining any pre-trained RoBERTa text encoder with a lightweight Layout Transformer, enabling LayoutLM-like document understanding for many
languages.
The abstract from the paper is the following:
Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure.
Tips:
To combine the Language-Independent Layout Transformer with a new RoBERTa checkpoint from the hub, refer to this guide.
The script will result in config.json and pytorch_model.bin files being stored locally. After doing this, one can do the following (assuming you're logged in with your HuggingFace account):
from transformers import LiltModel
model = LiltModel.from_pretrained("path_to_your_files")
model.push_to_hub("name_of_repo_on_the_hub")
When preparing data for the model, make sure to use the token vocabulary that corresponds to the RoBERTa checkpoint you combined with the Layout Transformer.
As lilt-roberta-en-base uses the same vocabulary as LayoutLMv3, one can use [LayoutLMv3TokenizerFast] to prepare data for the model.
The same is true for lilt-xlm-roberta-base: one can use [LayoutXLMTokenizerFast] for that model.
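As a minimal sketch of data preparation (assuming the SCUT-DLVCLab/lilt-roberta-en-base checkpoint and word/box inputs produced by your own OCR step), one could do the following:
thon
from transformers import LayoutLMv3TokenizerFast, LiltModel

# words and normalized (0-1000 range) bounding boxes, e.g. obtained from your own OCR step
words = ["Hello", "world"]
boxes = [[100, 100, 200, 150], [210, 100, 320, 150]]

tokenizer = LayoutLMv3TokenizerFast.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = LiltModel.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")

# the tokenizer returns input_ids, attention_mask and bbox, which LiltModel consumes directly
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
last_hidden_state = outputs.last_hidden_state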
LiLT architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LiLT.
Demo notebooks for LiLT can be found here.
Documentation resources
- Text classification task guide
- Token classification task guide
- Question answering task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LiltConfig
[[autodoc]] LiltConfig
LiltModel
[[autodoc]] LiltModel
- forward
LiltForSequenceClassification
[[autodoc]] LiltForSequenceClassification
- forward
LiltForTokenClassification
[[autodoc]] LiltForTokenClassification
- forward
LiltForQuestionAnswering
[[autodoc]] LiltForQuestionAnswering
- forward |
Open-Llama
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.31.0.
You can do so by running the following command: pip install -U transformers==4.31.0.
This model differs from the OpenLLaMA models on the Hugging Face Hub, which primarily use the LLaMA architecture.
Overview
The Open-Llama model was proposed in the Open-Llama project by community developer s-JoL.
The model is mainly based on LLaMA with some modifications, incorporating memory-efficient attention from Xformers, stable embedding from Bloom, and shared input-output embedding from PaLM.
The model is pre-trained on both Chinese and English, which gives it better performance on Chinese language tasks.
This model was contributed by s-JoL.
The original code can be found at Open-Llama.
Checkpoint and usage can be found at s-JoL/Open-Llama-V1.
OpenLlamaConfig
[[autodoc]] OpenLlamaConfig
OpenLlamaModel
[[autodoc]] OpenLlamaModel
- forward
OpenLlamaForCausalLM
[[autodoc]] OpenLlamaForCausalLM
- forward
OpenLlamaForSequenceClassification
[[autodoc]] OpenLlamaForSequenceClassification
- forward |
BertJapanese
Overview
These are BERT models trained on Japanese text.
There are models with two different tokenization methods:
Tokenize with MeCab and WordPiece. This requires some extra dependencies: fugashi, which is a wrapper around MeCab.
Tokenize into characters.
To use MecabTokenizer, you should pip install transformers["ja"] (or pip install -e .["ja"] if you install
from source) to install dependencies.
See details on cl-tohoku repository.
Example of using a model with MeCab and WordPiece tokenization:
thon
import torch
from transformers import AutoModel, AutoTokenizer
bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese")
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
Input Japanese Text
line = "吾輩は猫である。"
inputs = tokenizer(line, return_tensors="pt")
print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾輩 は 猫 で ある 。 [SEP]
outputs = bertjapanese(**inputs)
Example of using a model with Character tokenization:
thon
bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char")
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")
Input Japanese Text
line = "吾輩は猫である。"
inputs = tokenizer(line, return_tensors="pt")
print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾 輩 は 猫 で あ る 。 [SEP]
outputs = bertjapanese(**inputs)
Tips:
This implementation is the same as BERT, except for the tokenization method. Refer to the documentation of BERT for more usage examples.
This model was contributed by cl-tohoku.
BertJapaneseTokenizer
[[autodoc]] BertJapaneseTokenizer |
ConvNeXT
Overview
The ConvNeXT model was proposed in A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.
The abstract from the paper is the following:
The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.
A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers
(e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide
variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive
biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design
of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models
dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy
and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.
Tips:
See the code examples below each model regarding usage.
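For instance, a minimal image classification sketch (assuming the facebook/convnext-tiny-224 checkpoint and a COCO sample image) could look as follows:
thon
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConvNextForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# predict one of the 1,000 ImageNet-1k classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])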
ConvNeXT architecture. Taken from the original paper.
This model was contributed by nielsr. TensorFlow version of the model was contributed by ariG23498,
gante, and sayakpaul (equal contribution). The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT.
[ConvNextForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ConvNextConfig
[[autodoc]] ConvNextConfig
ConvNextFeatureExtractor
[[autodoc]] ConvNextFeatureExtractor
ConvNextImageProcessor
[[autodoc]] ConvNextImageProcessor
- preprocess
ConvNextModel
[[autodoc]] ConvNextModel
- forward
ConvNextForImageClassification
[[autodoc]] ConvNextForImageClassification
- forward
TFConvNextModel
[[autodoc]] TFConvNextModel
- call
TFConvNextForImageClassification
[[autodoc]] TFConvNextForImageClassification
- call |
MobileViTV2
Overview
The MobileViTV2 model was proposed in Separable Self-attention for Mobile Vision Transformers by Sachin Mehta and Mohammad Rastegari.
MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention.
The abstract from the paper is the following:
Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k²) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e. O(k). A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTV2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTV2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2× faster on a mobile device.
Tips:
MobileViTV2 is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map.
One can use [MobileViTImageProcessor] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB).
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes).
The segmentation model uses a DeepLabV3 head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
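As an illustration, here is a minimal classification sketch; the apple/mobilevitv2-1.0-imagenet1k-256 checkpoint name is an assumption, so swap in whichever MobileViTV2 classification checkpoint you use:
thon
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the image processor takes care of resizing and of the BGR channel order mentioned above
processor = AutoImageProcessor.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
model = MobileViTV2ForImageClassification.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])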
This model was contributed by shehan97.
The original code can be found here.
MobileViTV2Config
[[autodoc]] MobileViTV2Config
MobileViTV2Model
[[autodoc]] MobileViTV2Model
- forward
MobileViTV2ForImageClassification
[[autodoc]] MobileViTV2ForImageClassification
- forward
MobileViTV2ForSemanticSegmentation
[[autodoc]] MobileViTV2ForSemanticSegmentation
- forward |
YOSO
Overview
The YOSO model was proposed in You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling
by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. YOSO approximates standard softmax self-attention
via a Bernoulli sampling scheme based on Locality Sensitive Hashing (LSH). In principle, all the Bernoulli random variables can be sampled with
a single hash.
The abstract from the paper is the following:
Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is
the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically
on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling
attention mechanism based on Locality Sensitive Hashing (LSH), decreases the quadratic complexity of such models to linear.
We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random
variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant).
This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of
LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with standard 512 sequence
length where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark,
for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable
speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at this https URL
Tips:
The YOSO attention algorithm is implemented through custom CUDA kernels, functions written in CUDA C++ that can be executed multiple times
in parallel on a GPU.
The kernels provide a fast_hash function, which approximates the random projections of the queries and keys using the Fast Hadamard Transform. Using these
hash codes, the lsh_cumulation function approximates self-attention via LSH-based Bernoulli sampling.
To use the custom kernels, the user should set config.use_expectation = False. To ensure that the kernels are compiled successfully,
the user must install the correct version of PyTorch and cudatoolkit. By default, config.use_expectation = True, which uses YOSO-E and
does not require compiling CUDA kernels.
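A minimal sketch of switching between YOSO-E and the kernel-based attention (a randomly initialized model is used here purely for illustration):
thon
from transformers import YosoConfig, YosoModel

# default: use_expectation=True (YOSO-E), which does not require compiling the custom CUDA kernels
config = YosoConfig()

# opt into the LSH-based Bernoulli sampling kernels; requires a matching PyTorch/cudatoolkit setup
config.use_expectation = False

model = YosoModel(config)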
YOSO Attention Algorithm. Taken from the original paper.
This model was contributed by novice03. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
YosoConfig
[[autodoc]] YosoConfig
YosoModel
[[autodoc]] YosoModel
- forward
YosoForMaskedLM
[[autodoc]] YosoForMaskedLM
- forward
YosoForSequenceClassification
[[autodoc]] YosoForSequenceClassification
- forward
YosoForMultipleChoice
[[autodoc]] YosoForMultipleChoice
- forward
YosoForTokenClassification
[[autodoc]] YosoForTokenClassification
- forward
YosoForQuestionAnswering
[[autodoc]] YosoForQuestionAnswering
- forward |
DeBERTa-v2
Overview
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's
BERT model released in 2018 and Facebook's RoBERTa model released in 2019.
It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in
RoBERTa.
The abstract from the paper is the following:
Recent progress in pre-trained neural language models has significantly improved the performance of many natural
language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with
disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
disentangled attention mechanism, where each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed using disentangled matrices on their
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency
of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of
the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9%
(90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and
pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.
The following information is visible directly on the original implementation
repository. DeBERTa v2 is the second version of the DeBERTa model. It includes
the 1.5B model used for the SuperGLUE single-model submission, which achieved a score of 89.9 versus the human baseline of 89.8. You can
find more details about this submission in the authors'
blog
New in v2:
Vocabulary In v2 the tokenizer is changed to use a new vocabulary of size 128K built from the training data.
Instead of a GPT2-based tokenizer, the tokenizer is now a SentencePiece-based tokenizer.
nGiE (nGram Induced Input Encoding) The DeBERTa-v2 model uses an additional convolution layer alongside the first
transformer layer to better learn the local dependency of input tokens.
Sharing position projection matrix with content projection matrix in attention layer Based on previous
experiments, this can save parameters without affecting the performance.
Apply bucket to encode relative positions The DeBERTa-v2 model uses log buckets to encode relative positions,
similar to T5.
900M model & 1.5B model Two additional model sizes are available: 900M and 1.5B, which significantly improve performance on downstream tasks.
This model was contributed by DeBERTa. The TF 2.0 implementation of this model was
contributed by kamalkraj. The original code can be found here.
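A minimal usage sketch, assuming the microsoft/deberta-v2-xlarge checkpoint (note the SentencePiece-based tokenization described above):
thon
import torch
from transformers import AutoTokenizer, DebertaV2Model

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge")

inputs = tokenizer("DeBERTa-v2 uses a SentencePiece-based tokenizer.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# contextual token embeddings of shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)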
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
DebertaV2Config
[[autodoc]] DebertaV2Config
DebertaV2Tokenizer
[[autodoc]] DebertaV2Tokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
DebertaV2TokenizerFast
[[autodoc]] DebertaV2TokenizerFast
- build_inputs_with_special_tokens
- create_token_type_ids_from_sequences
DebertaV2Model
[[autodoc]] DebertaV2Model
- forward
DebertaV2PreTrainedModel
[[autodoc]] DebertaV2PreTrainedModel
- forward
DebertaV2ForMaskedLM
[[autodoc]] DebertaV2ForMaskedLM
- forward
DebertaV2ForSequenceClassification
[[autodoc]] DebertaV2ForSequenceClassification
- forward
DebertaV2ForTokenClassification
[[autodoc]] DebertaV2ForTokenClassification
- forward
DebertaV2ForQuestionAnswering
[[autodoc]] DebertaV2ForQuestionAnswering
- forward
DebertaV2ForMultipleChoice
[[autodoc]] DebertaV2ForMultipleChoice
- forward
TFDebertaV2Model
[[autodoc]] TFDebertaV2Model
- call
TFDebertaV2PreTrainedModel
[[autodoc]] TFDebertaV2PreTrainedModel
- call
TFDebertaV2ForMaskedLM
[[autodoc]] TFDebertaV2ForMaskedLM
- call
TFDebertaV2ForSequenceClassification
[[autodoc]] TFDebertaV2ForSequenceClassification
- call
TFDebertaV2ForTokenClassification
[[autodoc]] TFDebertaV2ForTokenClassification
- call
TFDebertaV2ForQuestionAnswering
[[autodoc]] TFDebertaV2ForQuestionAnswering
- call |
MVP
Overview
The MVP model was proposed in MVP: Multi-task Supervised Pre-training for Natural Language Generation by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
According to the abstract,
MVP follows a standard Transformer encoder-decoder architecture.
MVP is supervised pre-trained using labeled datasets.
MVP also has task-specific soft prompts to stimulate the model's capacity in performing a certain task.
MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue system, story generation, question answering, question generation, task-oriented dialogue system, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering.
Tips:
- We have released a series of models here, including MVP, MVP with task-specific prompts, and multi-task pre-trained variants.
- If you want to use a model without prompts (standard Transformer), you can load it through MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp').
- If you want to use a model with task-specific prompts, such as summarization, you can load it through MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp-summarization').
- Our model supports lightweight prompt tuning following Prefix-tuning with method set_lightweight_tuning().
This model was contributed by Tianyi Tang. The detailed information and instructions can be found here.
Examples
For summarization, here is an example of using MVP and MVP with summarization-specific prompts.
thon
from transformers import MvpTokenizer, MvpForConditionalGeneration
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_prompt = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-summarization")
inputs = tokenizer(
"Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
return_tensors="pt",
)
generated_ids = model.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"]
generated_ids = model_with_prompt.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Don't do it if these are your reasons"]
For data-to-text generation, here is an example of using MVP and multi-task pre-trained variants.
thon
from transformers import MvpTokenizerFast, MvpForConditionalGeneration
tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_mtl = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
inputs = tokenizer(
"Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
return_tensors="pt",
)
generated_ids = model.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Stan Lee created the character of Iron Man, a fictional superhero appearing in American comic']
generated_ids = model_with_mtl.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
For lightweight tuning, i.e., fixing the model and only tuning prompts, you can load MVP with randomly initialized prompts or with task-specific prompts. Our code also supports Prefix-tuning with BART following the original paper.
thon
from transformers import MvpForConditionalGeneration
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp", use_prompt=True)
the number of trainable parameters (full tuning)
sum(p.numel() for p in model.parameters() if p.requires_grad)
468116832
lightweight tuning with randomly initialized prompts
model.set_lightweight_tuning()
the number of trainable parameters (lightweight tuning)
sum(p.numel() for p in model.parameters() if p.requires_grad)
61823328
lightweight tuning with task-specific prompts
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
model.set_lightweight_tuning()
original lightweight Prefix-tuning
model = MvpForConditionalGeneration.from_pretrained("facebook/bart-large", use_prompt=True)
model.set_lightweight_tuning()
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Translation task guide
Summarization task guide
MvpConfig
[[autodoc]] MvpConfig
MvpTokenizer
[[autodoc]] MvpTokenizer
MvpTokenizerFast
[[autodoc]] MvpTokenizerFast
MvpModel
[[autodoc]] MvpModel
- forward
MvpForConditionalGeneration
[[autodoc]] MvpForConditionalGeneration
- forward
MvpForSequenceClassification
[[autodoc]] MvpForSequenceClassification
- forward
MvpForQuestionAnswering
[[autodoc]] MvpForQuestionAnswering
- forward
MvpForCausalLM
[[autodoc]] MvpForCausalLM
- forward |
FSMT
DISCLAIMER: If you see something strange, file a Github Issue and assign
@stas00.
Overview
FSMT (FairSeq MachineTranslation) models were introduced in Facebook FAIR's WMT19 News Translation Task Submission by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov.
The abstract of the paper is the following:
This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two
language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from
last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling
toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes,
as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific
data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the
human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations.
This system improves upon our WMT'18 submission by 4.5 BLEU points.
This model was contributed by stas. The original code can be found
here.
Implementation Notes
FSMT uses source and target vocabulary pairs that aren't combined into one. It doesn't share embedding tokens
either. Its tokenizer is very similar to [XLMTokenizer] and the main model is derived from
[BartModel].
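A minimal translation sketch, assuming the facebook/wmt19-en-de checkpoint:
thon
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-en-de"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

input_text = "Machine learning is great, isn't it?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))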
FSMTConfig
[[autodoc]] FSMTConfig
FSMTTokenizer
[[autodoc]] FSMTTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
FSMTModel
[[autodoc]] FSMTModel
- forward
FSMTForConditionalGeneration
[[autodoc]] FSMTForConditionalGeneration
- forward |
VAN
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The VAN model was proposed in Visual Attention Network by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.
The abstract from the paper is the following:
While originally designed for natural language processing tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple, VAN outperforms the state-of-the-art vision transformers and convolutional neural networks with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. Code is available at this https URL.
Tips:
VAN does not have an embedding layer, thus the hidden_states will have a length equal to the number of stages.
The figure below illustrates the architecture of a Visual Attention layer. Taken from the original paper.
This model was contributed by Francesco. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VAN.
[VanForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
VanConfig
[[autodoc]] VanConfig
VanModel
[[autodoc]] VanModel
- forward
VanForImageClassification
[[autodoc]] VanForImageClassification
- forward |
DeiT
This is a recently introduced model so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes that need fixing in the future. If you see something strange, file a Github Issue.
Overview
The DeiT model was proposed in Training data-efficient image transformers & distillation through attention by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre
Sablayrolles, Hervé Jégou. The Vision Transformer (ViT) introduced in Dosovitskiy et al., 2020 has shown that one can match or even outperform existing convolutional neural
networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on
expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more
efficiently trained transformers for image classification, requiring far less data and far less computing resources
compared to the original ViT models.
The abstract from the paper is the following:
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image
classification. However, these visual transformers are pre-trained with hundreds of millions of images using an
expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free
transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision
transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external
data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation
token ensuring that the student learns from the teacher through attention. We show the interest of this token-based
distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets
for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and
models.
Tips:
Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the
DeiT paper, is a ResNet-like model). The distillation token is learned through backpropagation, by interacting with
the class ([CLS]) and patch tokens through the self-attention layers.
There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
of the final hidden state of the class token and not using the distillation signal, or (2) by placing both a
prediction head on top of the class token and on top of the distillation token. In that case, the [CLS] prediction
head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the
distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the
distillation head and the label predicted by the teacher). At inference time, one takes the average prediction
between both heads as final prediction. (2) is also called "fine-tuning with distillation", because one relies on a
teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to
[DeiTForImageClassification] and (2) corresponds to
[DeiTForImageClassificationWithTeacher].
Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is
trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results.
All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in
contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
pre-training.
The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into
[ViTModel] or [ViTForImageClassification]. Techniques like data
augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset
(while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes):
facebook/deit-tiny-patch16-224, facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and
facebook/deit-base-patch16-384. Note that one should use [DeiTImageProcessor] in order to
prepare images for the model.
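To make the options above concrete, here is a minimal inference sketch with a distilled checkpoint (assuming facebook/deit-base-distilled-patch16-224 and a COCO sample image); [DeiTForImageClassificationWithTeacher] averages the predictions of the class-token and distillation-token heads:
thon
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, DeiTForImageClassificationWithTeacher

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    # average of the class-token and distillation-token predictions
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])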
This model was contributed by nielsr. The TensorFlow version of this model was added by amyeroberts.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeiT.
[DeiTForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
Besides that:
[DeiTForMaskedImageModeling] is supported by this example script.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DeiTConfig
[[autodoc]] DeiTConfig
DeiTFeatureExtractor
[[autodoc]] DeiTFeatureExtractor
- call
DeiTImageProcessor
[[autodoc]] DeiTImageProcessor
- preprocess
DeiTModel
[[autodoc]] DeiTModel
- forward
DeiTForMaskedImageModeling
[[autodoc]] DeiTForMaskedImageModeling
- forward
DeiTForImageClassification
[[autodoc]] DeiTForImageClassification
- forward
DeiTForImageClassificationWithTeacher
[[autodoc]] DeiTForImageClassificationWithTeacher
- forward
TFDeiTModel
[[autodoc]] TFDeiTModel
- call
TFDeiTForMaskedImageModeling
[[autodoc]] TFDeiTForMaskedImageModeling
- call
TFDeiTForImageClassification
[[autodoc]] TFDeiTForImageClassification
- call
TFDeiTForImageClassificationWithTeacher
[[autodoc]] TFDeiTForImageClassificationWithTeacher
- call |
MatCha
Overview
MatCha has been proposed in the paper MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering, from Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos.
The abstract of the paper states the following:
Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models' capabilities in jointly modeling charts/plots and language data. Specifically, we propose several pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language modeling. We perform the MatCha pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MatCha model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures and observe overall improvement, verifying the usefulness of MatCha pretraining on broader visual language tasks.
Model description
MatCha is a model that is trained using the Pix2Struct architecture. You can find more information about Pix2Struct in the Pix2Struct documentation.
MatCha is a Visual Question Answering subset of the Pix2Struct architecture. It renders the input question on the image and predicts the answer.
Usage
Currently 6 checkpoints are available for MatCha:
google/matcha: the base MatCha model, used to fine-tune MatCha on downstream tasks
google/matcha-chartqa: MatCha model fine-tuned on ChartQA dataset. It can be used to answer questions about charts.
google/matcha-plotqa-v1: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots.
google/matcha-plotqa-v2: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots.
google/matcha-chart2text-statista: MatCha model fine-tuned on Statista dataset.
google/matcha-chart2text-pew: MatCha model fine-tuned on Pew dataset.
The models finetuned on chart2text-pew and chart2text-statista are more suited for summarization, whereas the models finetuned on plotqa and chartqa are more suited for question answering.
You can use these models as follows (example on a ChartQA dataset):
thon
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-chartqa").to(0)
processor = AutoProcessor.from_pretrained("google/matcha-chartqa")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Is the sum of all 4 places greater than Laos?", return_tensors="pt").to(0)
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
Fine-tuning
To fine-tune MatCha, refer to the pix2struct fine-tuning notebook. For Pix2Struct models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence:
thon
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup
optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
``` |
BigBirdPegasus
Overview
The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention
based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.
Tips:
For an in-detail explanation on how BigBird's attention works, see this blog post.
BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths < 1024, using
original_full is advised as there is no benefit in using block_sparse attention.
The code currently uses a window size of 3 blocks and 2 global blocks.
Sequence length must be divisible by block size.
Current implementation supports only ITC.
Current implementation doesn't support num_random_blocks = 0.
BigBirdPegasus uses the PegasusTokenizer.
BigBird is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
The original code can be found here.
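A minimal summarization sketch, assuming the google/bigbird-pegasus-large-arxiv checkpoint; attention_type is shown only to illustrate the block_sparse/original_full choice described above:
thon
from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
# for short inputs (< 1024 tokens), attention_type="original_full" is advised instead
model = BigBirdPegasusForConditionalGeneration.from_pretrained(
    "google/bigbird-pegasus-large-arxiv", attention_type="block_sparse"
)

long_text = "Replace this placeholder with a long scientific article to summarize. " * 100
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))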
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Translation task guide
Summarization task guide
BigBirdPegasusConfig
[[autodoc]] BigBirdPegasusConfig
- all
BigBirdPegasusModel
[[autodoc]] BigBirdPegasusModel
- forward
BigBirdPegasusForConditionalGeneration
[[autodoc]] BigBirdPegasusForConditionalGeneration
- forward
BigBirdPegasusForSequenceClassification
[[autodoc]] BigBirdPegasusForSequenceClassification
- forward
BigBirdPegasusForQuestionAnswering
[[autodoc]] BigBirdPegasusForQuestionAnswering
- forward
BigBirdPegasusForCausalLM
[[autodoc]] BigBirdPegasusForCausalLM
- forward |
TVLT
Overview
The TVLT model was proposed in TVLT: Textless Vision-Language Transformer
by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal (the first three authors contributed equally). The Textless Vision-Language Transformer (TVLT) is a model that uses raw visual and audio inputs for vision-and-language representation learning, without using text-specific modules such as tokenization or automatic speech recognition (ASR). It can perform various audiovisual and vision-language tasks like retrieval, question answering, etc.
The abstract from the paper is the following:
In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text.
Tips:
TVLT is a model that takes both pixel_values and audio_values as input. One can use [TvltProcessor] to prepare data for the model.
This processor wraps an image processor (for the image/video modality) and an audio feature extractor (for the audio modality) into one.
TVLT is trained with images/videos and audios of various sizes: the authors resize and crop the input images/videos to 224 and limit the length of audio spectrogram to 2048. To make batching of videos and audios possible, the authors use a pixel_mask that indicates which pixels are real/padding and audio_mask that indicates which audio values are real/padding.
The design of TVLT is very similar to that of a standard Vision Transformer (ViT) and masked autoencoder (MAE) as in ViTMAE. The difference is that the model includes embedding layers for the audio modality.
The PyTorch version of this model is only available in torch 1.10 and higher.
TVLT architecture. Taken from the original paper.
The original code can be found here. This model was contributed by Zineng Tang.
TvltConfig
[[autodoc]] TvltConfig
TvltProcessor
[[autodoc]] TvltProcessor
- call
TvltImageProcessor
[[autodoc]] TvltImageProcessor
- preprocess
TvltFeatureExtractor
[[autodoc]] TvltFeatureExtractor
- call
TvltModel
[[autodoc]] TvltModel
- forward
TvltForPreTraining
[[autodoc]] TvltForPreTraining
- forward
TvltForAudioVisualClassification
[[autodoc]] TvltForAudioVisualClassification
- forward |
GroupViT
Overview
The GroupViT model was proposed in GroupViT: Semantic Segmentation Emerges from Text Supervision by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
Inspired by CLIP, GroupViT is a vision-language model that can perform zero-shot semantic segmentation on any given vocabulary categories.
The abstract from the paper is the following:
Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.
Tips:
You may specify output_segmentation=True in the forward of GroupViTModel to get the segmentation logits of input texts.
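A minimal zero-shot segmentation sketch, assuming the nvidia/groupvit-gcc-yfcc checkpoint and that the per-text segmentation logits are exposed as segmentation_logits when output_segmentation=True is passed:
thon
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, GroupViTModel

processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a remote control"]

inputs = processor(text=texts, images=image, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_segmentation=True)

# one segmentation map per input text
segmentation_logits = outputs.segmentation_logits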
This model was contributed by xvjiarui. The TensorFlow version was contributed by ariG23498 with the help of Yih-Dar SHIEH, Amy Roberts, and Joao Gante.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GroupViT.
The quickest way to get started with GroupViT is by checking the example notebooks (which showcase zero-shot segmentation inference).
One can also check out the HuggingFace Spaces demo to play with GroupViT.
GroupViTConfig
[[autodoc]] GroupViTConfig
- from_text_vision_configs
GroupViTTextConfig
[[autodoc]] GroupViTTextConfig
GroupViTVisionConfig
[[autodoc]] GroupViTVisionConfig
GroupViTModel
[[autodoc]] GroupViTModel
- forward
- get_text_features
- get_image_features
GroupViTTextModel
[[autodoc]] GroupViTTextModel
- forward
GroupViTVisionModel
[[autodoc]] GroupViTVisionModel
- forward
TFGroupViTModel
[[autodoc]] TFGroupViTModel
- call
- get_text_features
- get_image_features
TFGroupViTTextModel
[[autodoc]] TFGroupViTTextModel
- call
TFGroupViTVisionModel
[[autodoc]] TFGroupViTVisionModel
- call |
DINOv2
Overview
The DINOv2 model was proposed in DINOv2: Learning Robust Visual Features without Supervision by
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
DINOv2 is an upgrade of DINO, a self-supervised method applied on Vision Transformers. This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning.
The abstract from the paper is the following:
The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.
Tips:
One can use [AutoImageProcessor] class to prepare images for the model.
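A minimal feature-extraction sketch, assuming the facebook/dinov2-base checkpoint and a COCO sample image:
thon
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, Dinov2Model

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = Dinov2Model.from_pretrained("facebook/dinov2-base")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# the [CLS] token embedding can serve as an all-purpose image feature
image_features = outputs.last_hidden_state[:, 0]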
This model was contributed by nielsr.
The original code can be found here.
Dinov2Config
[[autodoc]] Dinov2Config
Dinov2Model
[[autodoc]] Dinov2Model
- forward
Dinov2ForImageClassification
[[autodoc]] Dinov2ForImageClassification
- forward |
ELECTRA
Overview
The ELECTRA model was proposed in the paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than
Generators. ELECTRA is a new pretraining approach which trains two
transformer models: the generator and the discriminator. The generator's role is to replace tokens in a sequence, and
is therefore trained as a masked language model. The discriminator, which is the model we're interested in, tries to
identify which tokens were replaced by the generator in the sequence.
The abstract from the paper is the following:
Masked language modeling (MLM) pretraining methods such as BERT corrupt the input by replacing some tokens with [MASK]
and then train a model to reconstruct the original tokens. While they produce good results when transferred to
downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a
more sample-efficient pretraining task called replaced token detection. Instead of masking the input, our approach
corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead
of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that
predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments
demonstrate this new pretraining task is more efficient than MLM because the task is defined over all input tokens
rather than just the small subset that was masked out. As a result, the contextual representations learned by our
approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are
particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained
using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale,
where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when
using the same amount of compute.
Tips:
ELECTRA is the pretraining approach, therefore there are nearly no changes made to the underlying model: BERT. The
only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller,
while the hidden size is larger. An additional projection layer (linear) is used to project the embeddings from their
embedding size to the hidden size. In the case where the embedding size is the same as the hidden size, no projection
layer is used.
ELECTRA is a transformer model pretrained with the use of another (small) masked language model. The inputs are corrupted by that language model, which takes an input text that is randomly masked and outputs a text in which ELECTRA has to predict which token is an original and which one has been replaced. Like for GAN training, the small language model is trained for a few steps (but with the original texts as objective, not to fool the ELECTRA model like in a traditional GAN setting) then the ELECTRA model is trained for a few steps.
The ELECTRA checkpoints saved using Google Research's implementation
contain both the generator and discriminator. The conversion script requires the user to name which model to export
into the correct architecture. Once converted to the HuggingFace format, these checkpoints may be loaded into all
available ELECTRA models, however. This means that the discriminator may be loaded in the
[ElectraForMaskedLM] model, and the generator may be loaded in the
[ElectraForPreTraining] model (the classification head will be randomly initialized as it
doesn't exist in the generator).
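A minimal sketch of the discriminator's replaced-token-detection objective, assuming the google/electra-small-discriminator checkpoint:
thon
import torch
from transformers import ElectraTokenizerFast, ElectraForPreTraining

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

# a sentence in which one token ("fake") has been swapped in
fake_sentence = "The quick brown fox fake over the lazy dog"
inputs = tokenizer(fake_sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits

# positive logits mark tokens the discriminator believes were replaced
predictions = (logits > 0).long().squeeze().tolist()
print(list(zip(tokenizer.tokenize(fake_sentence, add_special_tokens=True), predictions)))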
This model was contributed by lysandre. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
ElectraConfig
[[autodoc]] ElectraConfig
ElectraTokenizer
[[autodoc]] ElectraTokenizer
ElectraTokenizerFast
[[autodoc]] ElectraTokenizerFast
Electra specific outputs
[[autodoc]] models.electra.modeling_electra.ElectraForPreTrainingOutput
[[autodoc]] models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput
ElectraModel
[[autodoc]] ElectraModel
- forward
ElectraForPreTraining
[[autodoc]] ElectraForPreTraining
- forward
ElectraForCausalLM
[[autodoc]] ElectraForCausalLM
- forward
ElectraForMaskedLM
[[autodoc]] ElectraForMaskedLM
- forward
ElectraForSequenceClassification
[[autodoc]] ElectraForSequenceClassification
- forward
ElectraForMultipleChoice
[[autodoc]] ElectraForMultipleChoice
- forward
ElectraForTokenClassification
[[autodoc]] ElectraForTokenClassification
- forward
ElectraForQuestionAnswering
[[autodoc]] ElectraForQuestionAnswering
- forward
TFElectraModel
[[autodoc]] TFElectraModel
- call
TFElectraForPreTraining
[[autodoc]] TFElectraForPreTraining
- call
TFElectraForMaskedLM
[[autodoc]] TFElectraForMaskedLM
- call
TFElectraForSequenceClassification
[[autodoc]] TFElectraForSequenceClassification
- call
TFElectraForMultipleChoice
[[autodoc]] TFElectraForMultipleChoice
- call
TFElectraForTokenClassification
[[autodoc]] TFElectraForTokenClassification
- call
TFElectraForQuestionAnswering
[[autodoc]] TFElectraForQuestionAnswering
- call
FlaxElectraModel
[[autodoc]] FlaxElectraModel
- call
FlaxElectraForPreTraining
[[autodoc]] FlaxElectraForPreTraining
- call
FlaxElectraForCausalLM
[[autodoc]] FlaxElectraForCausalLM
- call
FlaxElectraForMaskedLM
[[autodoc]] FlaxElectraForMaskedLM
- call
FlaxElectraForSequenceClassification
[[autodoc]] FlaxElectraForSequenceClassification
- call
FlaxElectraForMultipleChoice
[[autodoc]] FlaxElectraForMultipleChoice
- call
FlaxElectraForTokenClassification
[[autodoc]] FlaxElectraForTokenClassification
- call
FlaxElectraForQuestionAnswering
[[autodoc]] FlaxElectraForQuestionAnswering
- call |
VideoMAE
Overview
The VideoMAE model was proposed in VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
VideoMAE extends masked autoencoders (MAE) to video, claiming state-of-the-art performance on several video classification benchmarks.
The abstract from the paper is the following:
Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking and reconstruction. These simple designs turn out to be effective for overcoming information leakage caused by the temporal correlation during video reconstruction. We obtain three important findings on SSVP: (1) An extremely high proportion of masking ratio (i.e., 90% to 95%) still yields favorable performance of VideoMAE. The temporally redundant video content enables higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction to enforce high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets are important issues in SSVP. Notably, our VideoMAE with the vanilla ViT backbone can achieve 83.9% on Kinects-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51 without using any extra data.
Tips:
One can use [VideoMAEImageProcessor] to prepare videos for the model. It will resize and normalize all frames of a video for you (see the sketch after these tips).
[VideoMAEForPreTraining] includes the decoder on top for self-supervised pre-training.
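A minimal inference sketch (the MCG-NJU/videomae-base-finetuned-kinetics checkpoint and the 16 random frames below are placeholders for your own checkpoint and video):
python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# 16 dummy frames standing in for a real video (channels-first, 3 x 224 x 224 each)
video = list(np.random.randn(16, 3, 224, 224))

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])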
VideoMAE pre-training. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VideoMAE. If
you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll
review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Video classification
- A notebook that shows how
to fine-tune a VideoMAE model on a custom dataset.
- Video classification task guide
- A 🤗 Space showing how to perform inference with a video classification model.
VideoMAEConfig
[[autodoc]] VideoMAEConfig
VideoMAEFeatureExtractor
[[autodoc]] VideoMAEFeatureExtractor
- call
VideoMAEImageProcessor
[[autodoc]] VideoMAEImageProcessor
- preprocess
VideoMAEModel
[[autodoc]] VideoMAEModel
- forward
VideoMAEForPreTraining
[[autodoc]] transformers.VideoMAEForPreTraining
- forward
VideoMAEForVideoClassification
[[autodoc]] transformers.VideoMAEForVideoClassification
- forward |
LUKE
Overview
The LUKE model was proposed in LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda and Yuji Matsumoto.
It is based on RoBERTa and adds entity embeddings as well as an entity-aware self-attention mechanism, which helps
improve performance on various downstream tasks involving reasoning about entities such as named entity recognition,
extractive and cloze-style question answering, entity typing, and relation classification.
The abstract from the paper is the following:
Entity representations are useful in natural language tasks involving entities. In this paper, we propose new
pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed
model treats words and entities in a given text as independent tokens, and outputs contextualized representations of
them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves
predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also
propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the
transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model
achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains
state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification),
CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question
answering).
Tips:
This implementation is the same as [RobertaModel] with the addition of entity embeddings as well
as an entity-aware self-attention mechanism, which improves performance on tasks involving reasoning about entities.
LUKE treats entities as input tokens; therefore, it takes entity_ids, entity_attention_mask,
entity_token_type_ids and entity_position_ids as extra input. You can obtain those using
[LukeTokenizer].
[LukeTokenizer] takes entities and entity_spans (character-based start and end
positions of the entities in the input text) as extra input. entities typically consist of [MASK] entities or
Wikipedia entities. A brief description of how to input these entities follows:
Inputting [MASK] entities to compute entity representations: The [MASK] entity is used to mask entities to be
predicted during pretraining. When LUKE receives the [MASK] entity, it tries to predict the original entity by
gathering the information about the entity from the input text. Therefore, the [MASK] entity can be used to address
downstream tasks requiring the information of entities in text such as entity typing, relation classification, and
named entity recognition.
Inputting Wikipedia entities to compute knowledge-enhanced token representations: LUKE learns rich information
(or knowledge) about Wikipedia entities during pretraining and stores the information in its entity embedding. By
using Wikipedia entities as input tokens, LUKE outputs token representations enriched by the information stored in
the embeddings of these entities. This is particularly effective for tasks requiring real-world knowledge, such as
question answering.
There are three head models for the former use case:
[LukeForEntityClassification], for tasks to classify a single entity in an input text such as
entity typing, e.g. the Open Entity dataset.
This model places a linear head on top of the output entity representation.
[LukeForEntityPairClassification], for tasks to classify the relationship between two entities
such as relation classification, e.g. the TACRED dataset. This
model places a linear head on top of the concatenated output representation of the pair of given entities.
[LukeForEntitySpanClassification], for tasks to classify the sequence of entity spans, such as
named entity recognition (NER). This model places a linear head on top of the output entity representations. You
can address NER using this model by inputting all possible entity spans in the text to the model.
[LukeTokenizer] has a task argument, which enables you to easily create an input to these
head models by specifying task="entity_classification", task="entity_pair_classification", or
task="entity_span_classification". Please refer to the example code of each head models.
A demo notebook on how to fine-tune [LukeForEntityPairClassification] for relation
classification can be found here.
There are also 3 notebooks available, which showcase how you can reproduce the results as reported in the paper with
the HuggingFace implementation of LUKE. They can be found here.
Example:
python
from transformers import LukeTokenizer, LukeModel, LukeForEntityPairClassification
model = LukeModel.from_pretrained("studio-ousia/luke-base")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
# Example 1: Computing the contextualized entity representation corresponding to the entity mention "Beyoncé"
text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]  # character-based entity span corresponding to "Beyoncé"
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs = model(**inputs)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 2: Inputting Wikipedia entities to obtain enriched contextualized representations
entities = [
    "Beyoncé",
    "Los Angeles",
]  # Wikipedia entity titles corresponding to the entity mentions "Beyoncé" and "Los Angeles"
entity_spans = [(0, 7), (17, 28)]  # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entities=entities, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs = model(**inputs)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 3: Classifying the relationship between two entities using the LukeForEntityPairClassification head model
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
entity_spans = [(0, 7), (17, 28)]  # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = int(logits[0].argmax())
print("Predicted class:", model.config.id2label[predicted_class_idx])
This model was contributed by ikuyamada and nielsr. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
LukeConfig
[[autodoc]] LukeConfig
LukeTokenizer
[[autodoc]] LukeTokenizer
- call
- save_vocabulary
LukeModel
[[autodoc]] LukeModel
- forward
LukeForMaskedLM
[[autodoc]] LukeForMaskedLM
- forward
LukeForEntityClassification
[[autodoc]] LukeForEntityClassification
- forward
LukeForEntityPairClassification
[[autodoc]] LukeForEntityPairClassification
- forward
LukeForEntitySpanClassification
[[autodoc]] LukeForEntitySpanClassification
- forward
LukeForSequenceClassification
[[autodoc]] LukeForSequenceClassification
- forward
LukeForMultipleChoice
[[autodoc]] LukeForMultipleChoice
- forward
LukeForTokenClassification
[[autodoc]] LukeForTokenClassification
- forward
LukeForQuestionAnswering
[[autodoc]] LukeForQuestionAnswering
- forward |
Auto Classes
In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you
are supplying to the from_pretrained() method. AutoClasses are here to do this job for you so that you
automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary.
Instantiating one of [AutoConfig], [AutoModel], and
[AutoTokenizer] will directly create a class of the relevant architecture. For instance
python
model = AutoModel.from_pretrained("bert-base-cased")
will create a model that is an instance of [BertModel].
There is one class of AutoModel for each task, and for each backend (PyTorch, TensorFlow, or Flax).
Extending the Auto Classes
Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a
custom class of model NewModel, make sure you have a NewModelConfig then you can add those to the auto
classes like this:
python
from transformers import AutoConfig, AutoModel
AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
You will then be able to use the auto classes like you would usually do!
If your NewModelConfig is a subclass of [~transformers.PretrainedConfig], make sure its
model_type attribute is set to the same key you use when registering the config (here "new-model").
Likewise, if your NewModel is a subclass of [PreTrainedModel], make sure its
config_class attribute is set to the same class you use when registering the model (here
NewModelConfig).
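As a rough sketch (NewModel and NewModelConfig are placeholder names, not classes that ship with the library), the custom classes could look like this before registration:
python
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel

class NewModelConfig(PretrainedConfig):
    # must match the key passed to AutoConfig.register("new-model", NewModelConfig)
    model_type = "new-model"

    def __init__(self, hidden_size=64, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size

class NewModel(PreTrainedModel):
    # must match the config class passed to AutoModel.register(NewModelConfig, NewModel)
    config_class = NewModelConfig

    def __init__(self, config):
        super().__init__(config)
        self.layer = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, hidden_states):
        return self.layer(hidden_states)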
AutoConfig
[[autodoc]] AutoConfig
AutoTokenizer
[[autodoc]] AutoTokenizer
AutoFeatureExtractor
[[autodoc]] AutoFeatureExtractor
AutoImageProcessor
[[autodoc]] AutoImageProcessor
AutoProcessor
[[autodoc]] AutoProcessor
Generic model classes
The following auto classes are available for instantiating a base model class without a specific head.
AutoModel
[[autodoc]] AutoModel
TFAutoModel
[[autodoc]] TFAutoModel
FlaxAutoModel
[[autodoc]] FlaxAutoModel
Generic pretraining classes
The following auto classes are available for instantiating a model with a pretraining head.
AutoModelForPreTraining
[[autodoc]] AutoModelForPreTraining
TFAutoModelForPreTraining
[[autodoc]] TFAutoModelForPreTraining
FlaxAutoModelForPreTraining
[[autodoc]] FlaxAutoModelForPreTraining
Natural Language Processing
The following auto classes are available for the following natural language processing tasks.
AutoModelForCausalLM
[[autodoc]] AutoModelForCausalLM
TFAutoModelForCausalLM
[[autodoc]] TFAutoModelForCausalLM
FlaxAutoModelForCausalLM
[[autodoc]] FlaxAutoModelForCausalLM
AutoModelForMaskedLM
[[autodoc]] AutoModelForMaskedLM
TFAutoModelForMaskedLM
[[autodoc]] TFAutoModelForMaskedLM
FlaxAutoModelForMaskedLM
[[autodoc]] FlaxAutoModelForMaskedLM
AutoModelForMaskGeneration
[[autodoc]] AutoModelForMaskGeneration
TFAutoModelForMaskGeneration
[[autodoc]] TFAutoModelForMaskGeneration
AutoModelForSeq2SeqLM
[[autodoc]] AutoModelForSeq2SeqLM
TFAutoModelForSeq2SeqLM
[[autodoc]] TFAutoModelForSeq2SeqLM
FlaxAutoModelForSeq2SeqLM
[[autodoc]] FlaxAutoModelForSeq2SeqLM
AutoModelForSequenceClassification
[[autodoc]] AutoModelForSequenceClassification
TFAutoModelForSequenceClassification
[[autodoc]] TFAutoModelForSequenceClassification
FlaxAutoModelForSequenceClassification
[[autodoc]] FlaxAutoModelForSequenceClassification
AutoModelForMultipleChoice
[[autodoc]] AutoModelForMultipleChoice
TFAutoModelForMultipleChoice
[[autodoc]] TFAutoModelForMultipleChoice
FlaxAutoModelForMultipleChoice
[[autodoc]] FlaxAutoModelForMultipleChoice
AutoModelForNextSentencePrediction
[[autodoc]] AutoModelForNextSentencePrediction
TFAutoModelForNextSentencePrediction
[[autodoc]] TFAutoModelForNextSentencePrediction
FlaxAutoModelForNextSentencePrediction
[[autodoc]] FlaxAutoModelForNextSentencePrediction
AutoModelForTokenClassification
[[autodoc]] AutoModelForTokenClassification
TFAutoModelForTokenClassification
[[autodoc]] TFAutoModelForTokenClassification
FlaxAutoModelForTokenClassification
[[autodoc]] FlaxAutoModelForTokenClassification
AutoModelForQuestionAnswering
[[autodoc]] AutoModelForQuestionAnswering
TFAutoModelForQuestionAnswering
[[autodoc]] TFAutoModelForQuestionAnswering
FlaxAutoModelForQuestionAnswering
[[autodoc]] FlaxAutoModelForQuestionAnswering
AutoModelForTextEncoding
[[autodoc]] AutoModelForTextEncoding
TFAutoModelForTextEncoding
[[autodoc]] TFAutoModelForTextEncoding
Computer vision
The following auto classes are available for the following computer vision tasks.
AutoModelForDepthEstimation
[[autodoc]] AutoModelForDepthEstimation
AutoModelForImageClassification
[[autodoc]] AutoModelForImageClassification
TFAutoModelForImageClassification
[[autodoc]] TFAutoModelForImageClassification
FlaxAutoModelForImageClassification
[[autodoc]] FlaxAutoModelForImageClassification
AutoModelForVideoClassification
[[autodoc]] AutoModelForVideoClassification
AutoModelForMaskedImageModeling
[[autodoc]] AutoModelForMaskedImageModeling
TFAutoModelForMaskedImageModeling
[[autodoc]] TFAutoModelForMaskedImageModeling
AutoModelForObjectDetection
[[autodoc]] AutoModelForObjectDetection
AutoModelForImageSegmentation
[[autodoc]] AutoModelForImageSegmentation
AutoModelForSemanticSegmentation
[[autodoc]] AutoModelForSemanticSegmentation
TFAutoModelForSemanticSegmentation
[[autodoc]] TFAutoModelForSemanticSegmentation
AutoModelForInstanceSegmentation
[[autodoc]] AutoModelForInstanceSegmentation
AutoModelForUniversalSegmentation
[[autodoc]] AutoModelForUniversalSegmentation
AutoModelForZeroShotImageClassification
[[autodoc]] AutoModelForZeroShotImageClassification
TFAutoModelForZeroShotImageClassification
[[autodoc]] TFAutoModelForZeroShotImageClassification
AutoModelForZeroShotObjectDetection
[[autodoc]] AutoModelForZeroShotObjectDetection
Audio
The following auto classes are available for the following audio tasks.
AutoModelForAudioClassification
[[autodoc]] AutoModelForAudioClassification
AutoModelForAudioFrameClassification
[[autodoc]] AutoModelForAudioFrameClassification
TFAutoModelForAudioClassification
[[autodoc]] TFAutoModelForAudioClassification
AutoModelForCTC
[[autodoc]] AutoModelForCTC
AutoModelForSpeechSeq2Seq
[[autodoc]] AutoModelForSpeechSeq2Seq
TFAutoModelForSpeechSeq2Seq
[[autodoc]] TFAutoModelForSpeechSeq2Seq
FlaxAutoModelForSpeechSeq2Seq
[[autodoc]] FlaxAutoModelForSpeechSeq2Seq
AutoModelForAudioXVector
[[autodoc]] AutoModelForAudioXVector
Multimodal
The following auto classes are available for the following multimodal tasks.
AutoModelForTableQuestionAnswering
[[autodoc]] AutoModelForTableQuestionAnswering
TFAutoModelForTableQuestionAnswering
[[autodoc]] TFAutoModelForTableQuestionAnswering
AutoModelForDocumentQuestionAnswering
[[autodoc]] AutoModelForDocumentQuestionAnswering
TFAutoModelForDocumentQuestionAnswering
[[autodoc]] TFAutoModelForDocumentQuestionAnswering
AutoModelForVisualQuestionAnswering
[[autodoc]] AutoModelForVisualQuestionAnswering
AutoModelForVision2Seq
[[autodoc]] AutoModelForVision2Seq
TFAutoModelForVision2Seq
[[autodoc]] TFAutoModelForVision2Seq
FlaxAutoModelForVision2Seq
[[autodoc]] FlaxAutoModelForVision2Seq |
GPT-J
Overview
The GPT-J model was released in the kingoflolz/mesh-transformer-jax repository by Ben Wang and Aran Komatsuzaki. It is a GPT-2-like
causal language model trained on the Pile dataset.
This model was contributed by Stella Biderman.
Tips:
To load GPT-J in float32 one would need at least 2x model size
RAM: 1x for initial weights and another 1x to load the checkpoint. So for GPT-J it would take at least 48GB
RAM to just load the model. To reduce the RAM usage there are a few options. The torch_dtype argument can be
used to initialize the model in half-precision on a CUDA device only. There is also a "float16" revision which stores the fp16 weights,
which could be used to further minimize the RAM usage:
python
from transformers import GPTJForCausalLM
import torch
device = "cuda"
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",
    torch_dtype=torch.float16,
).to(device)
The model should fit on a 16GB GPU for inference. For training/fine-tuning it would take much more GPU RAM. The Adam
optimizer for example makes four copies of the model: model, gradients, average and squared average of the gradients.
So it would need at least 4x model size GPU memory, even with mixed precision as gradient updates are in fp32. This
is not including the activations and data batches, which would again require some more GPU RAM. So one should explore
solutions such as DeepSpeed, to train/fine-tune the model. Another option is to use the original codebase to
train/fine-tune the model on TPU and then convert the model to Transformers format for inference. Instructions for
that can be found here.
Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra
tokens are added for the sake of efficiency on TPUs. To avoid the mismatch between embedding matrix size and vocab
size, the tokenizer for GPT-J contains 143 extra tokens
(<|extratoken_1|> through <|extratoken_143|>), so the vocab_size of the tokenizer also becomes 50400.
Generation
The [~generation.GenerationMixin.generate] method can be used to generate text using the GPT-J
model.
python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
prompt = (
    "In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
    "previously unexplored valley, in the Andes Mountains. Even more surprising to the "
    "researchers was the fact that the unicorns spoke perfect English."
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
or in float16 precision:
python
from transformers import GPTJForCausalLM, AutoTokenizer
import torch
device = "cuda"
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to(device)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
prompt = (
    "In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
    "previously unexplored valley, in the Andes Mountains. Even more surprising to the "
    "researchers was the fact that the unicorns spoke perfect English."
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT-J. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Description of GPT-J.
A blog on how to Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker.
A blog on how to Accelerate GPT-J inference with DeepSpeed-Inference on GPUs.
A blog post introducing GPT-J-6B: 6B JAX-Based Transformer. 🌎
A notebook for GPT-J-6B Inference Demo. 🌎
Another notebook demonstrating Inference with GPT-J-6B.
Causal language modeling chapter of the 🤗 Hugging Face Course.
[GPTJForCausalLM] is supported by this causal language modeling example script, text generation example script, and notebook.
[TFGPTJForCausalLM] is supported by this causal language modeling example script and notebook.
[FlaxGPTJForCausalLM] is supported by this causal language modeling example script and notebook.
Documentation resources
- Text classification task guide
- Question answering task guide
- Causal language modeling task guide
GPTJConfig
[[autodoc]] GPTJConfig
- all
GPTJModel
[[autodoc]] GPTJModel
- forward
GPTJForCausalLM
[[autodoc]] GPTJForCausalLM
- forward
GPTJForSequenceClassification
[[autodoc]] GPTJForSequenceClassification
- forward
GPTJForQuestionAnswering
[[autodoc]] GPTJForQuestionAnswering
- forward
TFGPTJModel
[[autodoc]] TFGPTJModel
- call
TFGPTJForCausalLM
[[autodoc]] TFGPTJForCausalLM
- call
TFGPTJForSequenceClassification
[[autodoc]] TFGPTJForSequenceClassification
- call
TFGPTJForQuestionAnswering
[[autodoc]] TFGPTJForQuestionAnswering
- call
FlaxGPTJModel
[[autodoc]] FlaxGPTJModel
- call
FlaxGPTJForCausalLM
[[autodoc]] FlaxGPTJForCausalLM
- call |
ViLT
Overview
The ViLT model was proposed in ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision
by Wonjae Kim, Bokyung Son, Ildoo Kim. ViLT incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design
for Vision-and-Language Pre-training (VLP).
The abstract from the paper is the following:
Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks.
Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision
(e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we
find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more
computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive
power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model,
Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically
simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of
times faster than previous VLP models, yet with competitive or better downstream task performance.
Tips:
The quickest way to get started with ViLT is by checking the example notebooks
(which showcase both inference and fine-tuning on custom data).
ViLT is a model that takes both pixel_values and input_ids as input. One can use [ViltProcessor] to prepare data for the model.
This processor wraps an image processor (for the image modality) and a tokenizer (for the language modality) into one (see the sketch after these tips).
ViLT is trained with images of various sizes: the authors resize the shorter edge of input images to 384 and limit the longer edge to
under 640 while preserving the aspect ratio. To make batching of images possible, the authors use a pixel_mask that indicates
which pixel values are real and which are padding. [ViltProcessor] automatically creates this for you.
The design of ViLT is very similar to that of a standard Vision Transformer (ViT). The only difference is that the model includes
additional embedding layers for the language modality.
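A minimal visual question answering sketch (assuming the dandelin/vilt-b32-finetuned-vqa checkpoint and a sample COCO image):
python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# one call produces pixel_values, pixel_mask and input_ids
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits
print("Predicted answer:", model.config.id2label[logits.argmax(-1).item()])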
ViLT architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Tips:
The PyTorch version of this model is only available in torch 1.10 and higher.
ViltConfig
[[autodoc]] ViltConfig
ViltFeatureExtractor
[[autodoc]] ViltFeatureExtractor
- call
ViltImageProcessor
[[autodoc]] ViltImageProcessor
- preprocess
ViltProcessor
[[autodoc]] ViltProcessor
- call
ViltModel
[[autodoc]] ViltModel
- forward
ViltForMaskedLM
[[autodoc]] ViltForMaskedLM
- forward
ViltForQuestionAnswering
[[autodoc]] ViltForQuestionAnswering
- forward
ViltForImagesAndTextClassification
[[autodoc]] ViltForImagesAndTextClassification
- forward
ViltForImageAndTextRetrieval
[[autodoc]] ViltForImageAndTextRetrieval
- forward
ViltForTokenClassification
[[autodoc]] ViltForTokenClassification
- forward |
MPNet
Overview
The MPNet model was proposed in MPNet: Masked and Permuted Pre-training for Language Understanding by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
MPNet adopts a novel pre-training method, named masked and permuted language modeling, to inherit the advantages of
masked language modeling and permuted language modeling for natural language understanding.
The abstract from the paper is the following:
BERT adopts masked language modeling (MLM) for pre-training and is one of the most successful pre-training models.
Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for
pre-training to address this problem. However, XLNet does not leverage the full position information of a sentence and
thus suffers from position discrepancy between pre-training and fine-tuning. In this paper, we propose MPNet, a novel
pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations. MPNet leverages the
dependency among predicted tokens through permuted language modeling (vs. MLM in BERT), and takes auxiliary position
information as input to make the model see a full sentence and thus reducing the position discrepancy (vs. PLM in
XLNet). We pre-train MPNet on a large-scale dataset (over 160GB text corpora) and fine-tune on a variety of
down-streaming tasks (GLUE, SQuAD, etc). Experimental results show that MPNet outperforms MLM and PLM by a large
margin, and achieves better results on these tasks compared with previous state-of-the-art pre-trained methods (e.g.,
BERT, XLNet, RoBERTa) under the same model setting.
Tips:
MPNet doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment. Just
separate your segments with the separation token tokenizer.sep_token (or [sep]), as shown below.
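For example (a small sketch assuming the microsoft/mpnet-base checkpoint):
python
from transformers import MPNetTokenizer

tokenizer = MPNetTokenizer.from_pretrained("microsoft/mpnet-base")

# pass the two segments directly; the tokenizer inserts the separation token for you
encoding = tokenizer("This is the first segment.", "This is the second segment.")
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))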
The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
MPNetConfig
[[autodoc]] MPNetConfig
MPNetTokenizer
[[autodoc]] MPNetTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
MPNetTokenizerFast
[[autodoc]] MPNetTokenizerFast
MPNetModel
[[autodoc]] MPNetModel
- forward
MPNetForMaskedLM
[[autodoc]] MPNetForMaskedLM
- forward
MPNetForSequenceClassification
[[autodoc]] MPNetForSequenceClassification
- forward
MPNetForMultipleChoice
[[autodoc]] MPNetForMultipleChoice
- forward
MPNetForTokenClassification
[[autodoc]] MPNetForTokenClassification
- forward
MPNetForQuestionAnswering
[[autodoc]] MPNetForQuestionAnswering
- forward
TFMPNetModel
[[autodoc]] TFMPNetModel
- call
TFMPNetForMaskedLM
[[autodoc]] TFMPNetForMaskedLM
- call
TFMPNetForSequenceClassification
[[autodoc]] TFMPNetForSequenceClassification
- call
TFMPNetForMultipleChoice
[[autodoc]] TFMPNetForMultipleChoice
- call
TFMPNetForTokenClassification
[[autodoc]] TFMPNetForTokenClassification
- call
TFMPNetForQuestionAnswering
[[autodoc]] TFMPNetForQuestionAnswering
- call |
EfficientNet
Overview
The EfficientNet model was proposed in EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models that achieve state-of-the-art accuracy while being an order of magnitude smaller and faster than previous models.
The abstract from the paper is the following:
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.
To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters.
This model was contributed by adirik.
The original code can be found here.
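A minimal image classification sketch (the google/efficientnet-b7 checkpoint and the sample COCO image are assumptions; substitute your own):
python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, EfficientNetForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/efficientnet-b7")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b7")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])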
EfficientNetConfig
[[autodoc]] EfficientNetConfig
EfficientNetImageProcessor
[[autodoc]] EfficientNetImageProcessor
- preprocess
EfficientNetModel
[[autodoc]] EfficientNetModel
- forward
EfficientNetForImageClassification
[[autodoc]] EfficientNetForImageClassification
- forward |
Hubert
Overview
Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan
Salakhutdinov, Abdelrahman Mohamed.
The abstract from the paper is the following:
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are
multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training
phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we
propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an
offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our
approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined
acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised
clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means
teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the
state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h,
10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER
reduction on the more challenging dev-other and test-other evaluation subsets.
Tips:
Hubert is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Hubert model was fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded
using [Wav2Vec2CTCTokenizer] (see the sketch after these tips).
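A minimal ASR sketch (the facebook/hubert-large-ls960-ft checkpoint and the dummy LibriSpeech split are assumptions for illustration):
python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, HubertForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: most likely token per frame, then the tokenizer collapses repeats/blanks
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))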
This model was contributed by patrickvonplaten.
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
HubertConfig
[[autodoc]] HubertConfig
HubertModel
[[autodoc]] HubertModel
- forward
HubertForCTC
[[autodoc]] HubertForCTC
- forward
HubertForSequenceClassification
[[autodoc]] HubertForSequenceClassification
- forward
TFHubertModel
[[autodoc]] TFHubertModel
- call
TFHubertForCTC
[[autodoc]] TFHubertForCTC
- call |
FlauBERT
Overview
The FlauBERT model was proposed in the paper FlauBERT: Unsupervised Language Model Pre-training for French by Hang Le et al. It's a transformer model pretrained using a masked language
modeling (MLM) objective (like BERT).
The abstract from the paper is the following:
Language models have become a key step to achieve state-of-the art results in many different Natural Language
Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way
to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their
contextualization at the sentence level. This has been widely demonstrated for English using contextualized
representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al.,
2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and
heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for
Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text
classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the
time they outperform other pretraining approaches. Different versions of FlauBERT as well as a unified evaluation
protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research
community for further reproducible experiments in French NLP.
This model was contributed by formiel. The original code can be found here.
Tips:
- Like RoBERTa, without the sentence ordering prediction (so just trained on the MLM objective).
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FlaubertConfig
[[autodoc]] FlaubertConfig
FlaubertTokenizer
[[autodoc]] FlaubertTokenizer
FlaubertModel
[[autodoc]] FlaubertModel
- forward
FlaubertWithLMHeadModel
[[autodoc]] FlaubertWithLMHeadModel
- forward
FlaubertForSequenceClassification
[[autodoc]] FlaubertForSequenceClassification
- forward
FlaubertForMultipleChoice
[[autodoc]] FlaubertForMultipleChoice
- forward
FlaubertForTokenClassification
[[autodoc]] FlaubertForTokenClassification
- forward
FlaubertForQuestionAnsweringSimple
[[autodoc]] FlaubertForQuestionAnsweringSimple
- forward
FlaubertForQuestionAnswering
[[autodoc]] FlaubertForQuestionAnswering
- forward
TFFlaubertModel
[[autodoc]] TFFlaubertModel
- call
TFFlaubertWithLMHeadModel
[[autodoc]] TFFlaubertWithLMHeadModel
- call
TFFlaubertForSequenceClassification
[[autodoc]] TFFlaubertForSequenceClassification
- call
TFFlaubertForMultipleChoice
[[autodoc]] TFFlaubertForMultipleChoice
- call
TFFlaubertForTokenClassification
[[autodoc]] TFFlaubertForTokenClassification
- call
TFFlaubertForQuestionAnsweringSimple
[[autodoc]] TFFlaubertForQuestionAnsweringSimple
- call |
SqueezeBERT
Overview
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt W. Keutzer. It's a
bidirectional transformer similar to the BERT model. The key difference between the BERT architecture and the
SqueezeBERT architecture is that SqueezeBERT uses grouped convolutions
instead of fully-connected layers for the Q, K, V and FFN layers.
The abstract from the paper is the following:
Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets,
large computing systems, and better neural network models, natural language processing (NLP) technology has made
significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant
opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we
consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's
highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with
BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods
such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these
techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in
self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called
SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test
set. The SqueezeBERT code will be released.
Tips:
SqueezeBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right
rather than the left.
SqueezeBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
with a causal language modeling (CLM) objective are better in that regard.
For best results when finetuning on sequence classification tasks, it is recommended to start with the
squeezebert/squeezebert-mnli-headless checkpoint, as shown below.
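For instance, a minimal sketch of setting up such a fine-tune (num_labels=2 is a placeholder for your own task, and the tokenizer files are assumed to ship with the checkpoint):
python
from transformers import SqueezeBertForSequenceClassification, SqueezeBertTokenizer

tokenizer = SqueezeBertTokenizer.from_pretrained("squeezebert/squeezebert-mnli-headless")
# the "headless" checkpoint has no classification head, so this head is freshly initialized
model = SqueezeBertForSequenceClassification.from_pretrained(
    "squeezebert/squeezebert-mnli-headless", num_labels=2
)

inputs = tokenizer("SqueezeBERT runs efficiently on mobile devices.", return_tensors="pt")
logits = model(**inputs).logits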
This model was contributed by forresti.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
SqueezeBertConfig
[[autodoc]] SqueezeBertConfig
SqueezeBertTokenizer
[[autodoc]] SqueezeBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
SqueezeBertTokenizerFast
[[autodoc]] SqueezeBertTokenizerFast
SqueezeBertModel
[[autodoc]] SqueezeBertModel
SqueezeBertForMaskedLM
[[autodoc]] SqueezeBertForMaskedLM
SqueezeBertForSequenceClassification
[[autodoc]] SqueezeBertForSequenceClassification
SqueezeBertForMultipleChoice
[[autodoc]] SqueezeBertForMultipleChoice
SqueezeBertForTokenClassification
[[autodoc]] SqueezeBertForTokenClassification
SqueezeBertForQuestionAnswering
[[autodoc]] SqueezeBertForQuestionAnswering |
GPTSAN-japanese
Overview
The GPTSAN-japanese model was released in the repository by Toshiyuki Sakamoto (tanreinama).
GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM
in the T5 paper, and supports both Text Generation and Masked Language Modeling tasks. These basic tasks can similarly be
fine-tuned for translation or summarization.
Generation
The generate() method can be used to generate text using the GPTSAN-Japanese model.
python
from transformers import AutoModel, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda()
x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
torch.manual_seed(0)
gen_tok = model.generate(x_tok.input_ids.cuda(), token_type_ids=x_tok.token_type_ids.cuda(), max_new_tokens=20)
tokenizer.decode(gen_tok[0])
'織田信長は、2004年に「戦国BASARA」のために、豊臣秀吉'
GPTSAN Features
GPTSAN has some unique features. It has the model structure of a Prefix-LM: it works as a shifted Masked Language Model for prefix input tokens, while un-prefixed inputs behave like a normal generative model.
The Spout vector is a GPTSAN specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text.
GPTSAN has a sparse Feed Forward based on Switch-Transformer. You can also add other layers and train them partially. See the original GPTSAN repository for details.
Prefix-LM Model
GPTSAN has the structure of the model named Prefix-LM in the T5 paper. (The original GPTSAN repository calls it hybrid)
In GPTSAN, the Prefix part of the Prefix-LM, that is, the input positions that can be attended to from both directions, can be specified with any length.
Arbitrary lengths can also be specified differently for each batch.
This length applies to the text entered in prefix_text for the tokenizer.
The tokenizer returns the mask of the Prefix part of Prefix-LM as token_type_ids.
The model treats the part where token_type_ids is 1 as the Prefix part, that is, positions that can be attended to from both before and after.
Tips:
Specifying the Prefix part is done with a mask passed to self-attention.
When token_type_ids=None or all zero, it is equivalent to a regular causal mask.
For example:
x_token = tokenizer("๏ฝฑ๏ฝฒ๏ฝณ๏ฝด")
input_ids: | SOT | SEG | ๏ฝฑ | ๏ฝฒ | ๏ฝณ | ๏ฝด |
token_type_ids: | 1 | 0 | 0 | 0 | 0 | 0 |
prefix_lm_mask:
SOT | 1 0 0 0 0 0 |
SEG | 1 1 0 0 0 0 |
๏ฝฑ | 1 1 1 0 0 0 |
๏ฝฒ | 1 1 1 1 0 0 |
๏ฝณ | 1 1 1 1 1 0 |
๏ฝด | 1 1 1 1 1 1 |
x_token = tokenizer("", prefix_text="๏ฝฑ๏ฝฒ๏ฝณ๏ฝด")
input_ids: | SOT | ๏ฝฑ | ๏ฝฒ | ๏ฝณ | ๏ฝด | SEG |
token_type_ids: | 1 | 1 | 1 | 1 | 1 | 0 |
prefix_lm_mask:
SOT | 1 1 1 1 1 0 |
๏ฝฑ | 1 1 1 1 1 0 |
๏ฝฒ | 1 1 1 1 1 0 |
๏ฝณ | 1 1 1 1 1 0 |
๏ฝด | 1 1 1 1 1 0 |
SEG | 1 1 1 1 1 1 |
x_token = tokenizer("๏ฝณ๏ฝด", prefix_text="๏ฝฑ๏ฝฒ")
input_ids: | SOT | ๏ฝฑ | ๏ฝฒ | SEG | ๏ฝณ | ๏ฝด |
token_type_ids: | 1 | 1 | 1 | 0 | 0 | 0 |
prefix_lm_mask:
SOT | 1 1 1 0 0 0 |
๏ฝฑ | 1 1 1 0 0 0 |
๏ฝฒ | 1 1 1 0 0 0 |
SEG | 1 1 1 1 0 0 |
๏ฝณ | 1 1 1 1 1 0 |
๏ฝด | 1 1 1 1 1 1 |
Spout Vector
A Spout Vector is a special vector for controlling text generation.
This vector is treated as the first embedding in self-attention to bring external attention to the generated tokens.
In the pre-trained model published from Tanrei/GPTSAN-japanese, the Spout Vector is a 128-dimensional vector that passes through 8 fully connected layers in the model and is projected into the space acting as external attention.
The Spout Vector projected by the fully connected layer is split to be passed to all self-attentions.
GPTSanJapaneseConfig
[[autodoc]] GPTSanJapaneseConfig
GPTSanJapaneseTokenizer
[[autodoc]] GPTSanJapaneseTokenizer
GPTSanJapaneseModel
[[autodoc]] GPTSanJapaneseModel
GPTSanJapaneseForConditionalGeneration
[[autodoc]] GPTSanJapaneseForConditionalGeneration
- forward |
Dilated Neighborhood Attention Transformer
Overview
DiNAT was proposed in Dilated Neighborhood Attention Transformer
by Ali Hassani and Humphrey Shi.
It extends NAT by adding a Dilated Neighborhood Attention pattern to capture global context,
and shows significant performance improvements over it.
The abstract from the paper is the following:
Transformers are quickly becoming one of the most heavily applied deep learning architectures across modalities,
domains, and tasks. In vision, on top of ongoing efforts into plain transformers, hierarchical transformers have
also gained significant attention, thanks to their performance and easy integration into existing frameworks.
These models typically employ localized attention mechanisms, such as the sliding-window Neighborhood Attention (NA)
or Swin Transformer's Shifted Window Self Attention. While effective at reducing self attention's quadratic complexity,
local attention weakens two of the most desirable properties of self attention: long range inter-dependency modeling,
and global receptive field. In this paper, we introduce Dilated Neighborhood Attention (DiNA), a natural, flexible and
efficient extension to NA that can capture more global context and expand receptive fields exponentially at no
additional cost. NA's local attention and DiNA's sparse global attention complement each other, and therefore we
introduce Dilated Neighborhood Attention Transformer (DiNAT), a new hierarchical vision transformer built upon both.
DiNAT variants enjoy significant improvements over strong baselines such as NAT, Swin, and ConvNeXt.
Our large model is faster and ahead of its Swin counterpart by 1.5% box AP in COCO object detection,
1.3% mask AP in COCO instance segmentation, and 1.1% mIoU in ADE20K semantic segmentation.
Paired with new frameworks, our large variant is the new state of the art panoptic segmentation model on COCO (58.2 PQ)
and ADE20K (48.5 PQ), and instance segmentation model on Cityscapes (44.5 AP) and ADE20K (35.4 AP) (no extra data).
It also matches the state of the art specialized semantic segmentation models on ADE20K (58.2 mIoU),
and ranks second on Cityscapes (84.5 mIoU) (no extra data).
Tips:
- One can use the [AutoImageProcessor] API to prepare images for the model (see the sketch after these tips).
- DiNAT can be used as a backbone. When output_hidden_states = True,
it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch, num_channels, height, width) rather than (batch_size, height, width, num_channels).
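A minimal image classification sketch (the shi-labs/dinat-mini-in1k-224 checkpoint is an assumption; NATTEN must be installed):
python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DinatForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-mini-in1k-224")  # requires natten

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])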
Notes:
- DiNAT depends on NATTEN's implementation of Neighborhood Attention and Dilated Neighborhood Attention.
You can install it with pre-built wheels for Linux by referring to shi-labs.com/natten, or build on your system by running pip install natten.
Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet.
- Patch size of 4 is only supported at the moment.
Neighborhood Attention with different dilation values.
Taken from the original paper.
This model was contributed by Ali Hassani.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiNAT.
[DinatForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DinatConfig
[[autodoc]] DinatConfig
DinatModel
[[autodoc]] DinatModel
- forward
DinatForImageClassification
[[autodoc]] DinatForImageClassification
- forward |
Wav2Vec2
Overview
The Wav2Vec2 model was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
The abstract from the paper is the following:
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on
transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks
the speech input in the latent space and solves a contrastive task defined over a quantization of the latent
representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the
clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state
of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and
pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech
recognition with limited amounts of labeled data.
Tips:
Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Wav2Vec2 model was fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded
using [Wav2Vec2CTCTokenizer] (see the sketch after these tips).
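A minimal ASR sketch (assuming the facebook/wav2vec2-base-960h checkpoint and the dummy LibriSpeech split used later on this page):
python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: most likely token per frame, then the tokenizer collapses repeats/blanks
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))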
This model was contributed by patrickvonplaten.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook on how to leverage a pretrained Wav2Vec2 model for emotion classification. 🌎
[Wav2Vec2ForCTC] is supported by this example script and notebook.
Audio classification task guide
A blog post on boosting Wav2Vec2 with n-grams in 🤗 Transformers.
A blog post on how to finetune Wav2Vec2 for English ASR with 🤗 Transformers.
A blog post on finetuning XLS-R for Multi-Lingual ASR with 🤗 Transformers.
A notebook on how to create YouTube captions from any video by transcribing audio with Wav2Vec2. 🌎
[Wav2Vec2ForCTC] is supported by a notebook on how to finetune a speech recognition model in English, and how to finetune a speech recognition model in any language.
Automatic speech recognition task guide
🚀 Deploy
A blog post on how to deploy Wav2Vec2 for Automatic Speech Recognition with Hugging Face's Transformers & Amazon SageMaker.
Wav2Vec2Config
[[autodoc]] Wav2Vec2Config
Wav2Vec2CTCTokenizer
[[autodoc]] Wav2Vec2CTCTokenizer
- call
- save_vocabulary
- decode
- batch_decode
- set_target_lang
Wav2Vec2FeatureExtractor
[[autodoc]] Wav2Vec2FeatureExtractor
- call
Wav2Vec2Processor
[[autodoc]] Wav2Vec2Processor
- call
- pad
- from_pretrained
- save_pretrained
- batch_decode
- decode
Wav2Vec2ProcessorWithLM
[[autodoc]] Wav2Vec2ProcessorWithLM
- call
- pad
- from_pretrained
- save_pretrained
- batch_decode
- decode
Decoding multiple audios
If you are planning to decode multiple batches of audios, you should consider using [~Wav2Vec2ProcessorWithLM.batch_decode] and passing an instantiated multiprocessing.Pool.
Otherwise, [~Wav2Vec2ProcessorWithLM.batch_decode] performance will be slower than calling [~Wav2Vec2ProcessorWithLM.decode] for each audio individually, as it internally instantiates a new Pool for every call. See the example below:
python
# Let's see how to use a user-managed pool for batch decoding multiple audios
from multiprocessing import get_context
from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC
from datasets import load_dataset
import datasets
import torch
# import model, feature extractor, tokenizer
model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda")
processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
# load example dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))
def map_to_array(batch):
    batch["speech"] = batch["audio"]["array"]
    return batch
# prepare speech data for batch inference
dataset = dataset.map(map_to_array, remove_columns=["audio"])
def map_to_pred(batch, pool):
    inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt")
    inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.no_grad():
        logits = model(**inputs).logits
    transcription = processor.batch_decode(logits.cpu().numpy(), pool).text
    batch["transcription"] = transcription
    return batch
# note: pool should be instantiated after Wav2Vec2ProcessorWithLM.
# otherwise, the LM won't be available to the pool's sub-processes
# select number of processes and batch_size based on number of CPU cores available and on dataset size
with get_context("fork").Pool(processes=2) as pool:
result = dataset.map(
map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"]
)
result["transcription"][:2]
['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER"]
Wav2Vec2 specific outputs
[[autodoc]] models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2BaseModelOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput
Wav2Vec2Model
[[autodoc]] Wav2Vec2Model
- forward
Wav2Vec2ForCTC
[[autodoc]] Wav2Vec2ForCTC
- forward
- load_adapter
Wav2Vec2ForSequenceClassification
[[autodoc]] Wav2Vec2ForSequenceClassification
- forward
Wav2Vec2ForAudioFrameClassification
[[autodoc]] Wav2Vec2ForAudioFrameClassification
- forward
Wav2Vec2ForXVector
[[autodoc]] Wav2Vec2ForXVector
- forward
Wav2Vec2ForPreTraining
[[autodoc]] Wav2Vec2ForPreTraining
- forward
TFWav2Vec2Model
[[autodoc]] TFWav2Vec2Model
- call
TFWav2Vec2ForSequenceClassification
[[autodoc]] TFWav2Vec2ForSequenceClassification
- call
TFWav2Vec2ForCTC
[[autodoc]] TFWav2Vec2ForCTC
- call
FlaxWav2Vec2Model
[[autodoc]] FlaxWav2Vec2Model
- call
FlaxWav2Vec2ForCTC
[[autodoc]] FlaxWav2Vec2ForCTC
- call
FlaxWav2Vec2ForPreTraining
[[autodoc]] FlaxWav2Vec2ForPreTraining
- call |
DialoGPT
Overview
DialoGPT was proposed in DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao,
Jianfeng Gao, Jingjing Liu, Bill Dolan. It's a GPT2 Model trained on 147M conversation-like exchanges extracted from
Reddit.
The abstract from the paper is the following:
We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained
transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning
from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human
both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems
that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline
systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response
generation and the development of more intelligent open-domain dialogue systems.
Tips:
DialoGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather
than the left.
DialoGPT was trained with a causal language modeling (CLM) objective on conversational data and is therefore powerful
at response generation in open-domain dialogue systems.
DialoGPT enables the user to create a chat bot in just 10 lines of code as shown on DialoGPT's model card.
Training:
In order to train or fine-tune DialoGPT, one can use causal language modeling training. To cite the official paper: We
follow the OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language
modeling. We first concatenate all dialog turns within a dialogue session into a long text x_1, ..., x_N (N is the
sequence length), ended by the end-of-text token. For more information please refer to the original paper.
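As a minimal generation sketch of the same idea, dialogue turns are concatenated with the end-of-text token (assuming the microsoft/DialoGPT-medium checkpoint):
python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# a dialogue session is modeled as one long text, with turns separated by the end-of-text token
history = "Does money buy happiness?" + tokenizer.eos_token
input_ids = tokenizer.encode(history, return_tensors="pt")

output_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(response)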
DialoGPT's architecture is based on the GPT2 model, so one can refer to GPT2's documentation page.
The original code can be found here. |
Mask2Former
Overview
The Mask2Former model was proposed in Masked-attention Mask Transformer for Universal Image Segmentation by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. Mask2Former is a unified framework for panoptic, instance and semantic segmentation and features significant performance and efficiency improvements over MaskFormer.
The abstract from the paper is the following:
Image segmentation groups pixels with different semantics, e.g., category or instance membership. Each choice
of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K).
Tips:
- Mask2Former uses the same preprocessing and postprocessing steps as MaskFormer. Use [Mask2FormerImageProcessor] or [AutoImageProcessor] to prepare images and optional targets for the model.
- To get the final segmentation, depending on the task, you can call [~Mask2FormerImageProcessor.post_process_semantic_segmentation], [~Mask2FormerImageProcessor.post_process_instance_segmentation] or [~Mask2FormerImageProcessor.post_process_panoptic_segmentation]. All three tasks can be solved using [Mask2FormerForUniversalSegmentation] output; panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object(s) (e.g. sky) together. A minimal inference sketch follows below.
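The following is a hedged sketch of that pre/post-processing flow; the checkpoint name is one of the published Mask2Former checkpoints and is used here purely for illustration:
python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

checkpoint = "facebook/mask2former-swin-small-coco-instance"  # assumed checkpoint, for illustration
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process into per-instance masks; use the semantic or panoptic variants for the other tasks.
results = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(results["segmentation"].shape)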
Mask2Former architecture. Taken from the original paper.
This model was contributed by Shivalika Singh and Alara Dirik. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mask2Former.
Demo notebooks regarding inference + fine-tuning Mask2Former on custom data can be found here.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
MaskFormer specific outputs
[[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerModelOutput
[[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput
Mask2FormerConfig
[[autodoc]] Mask2FormerConfig
Mask2FormerModel
[[autodoc]] Mask2FormerModel
- forward
Mask2FormerForUniversalSegmentation
[[autodoc]] Mask2FormerForUniversalSegmentation
- forward
Mask2FormerImageProcessor
[[autodoc]] Mask2FormerImageProcessor
- preprocess
- encode_inputs
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation |
UPerNet
Overview
The UPerNet model was proposed in Unified Perceptual Parsing for Scene Understanding
by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. UPerNet is a general framework to effectively segment
a wide range of concepts from images, leveraging any vision backbone like ConvNeXt or Swin.
The abstract from the paper is the following:
Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes.
UPerNet framework. Taken from the original paper.
This model was contributed by nielsr. The original code is based on OpenMMLab's mmsegmentation here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with UPerNet.
Demo notebooks for UPerNet can be found here.
[UperNetForSemanticSegmentation] is supported by this example script and notebook.
See also: Semantic segmentation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Usage
UPerNet is a general framework for semantic segmentation. It can be used with any vision backbone, like so:
from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
To use another vision backbone, like ConvNeXt, simply instantiate the model with the appropriate backbone:
from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
Note that this will randomly initialize all the weights of the model.
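To run inference with already trained weights instead, load a converted checkpoint from the Hub with from_pretrained. A minimal sketch, assuming the openmmlab/upernet-convnext-tiny checkpoint:
python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

checkpoint = "openmmlab/upernet-convnext-tiny"  # assumed checkpoint, for illustration
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = UperNetForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch_size, num_labels, height, width)
print(logits.shape)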
UperNetConfig
[[autodoc]] UperNetConfig
UperNetForSemanticSegmentation
[[autodoc]] UperNetForSemanticSegmentation
- forward |
Blenderbot Small
Note that [BlenderbotSmallModel] and
[BlenderbotSmallForConditionalGeneration] are only used in combination with the checkpoint
facebook/blenderbot-90M. Larger Blenderbot checkpoints should
instead be used with [BlenderbotModel] and
[BlenderbotForConditionalGeneration]
Overview
The Blender chatbot model was proposed in Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu,
Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.
The abstract of the paper is the following:
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.
Tips:
Blenderbot Small is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
This model was contributed by patrickvonplaten. The authors' code can be
found here.
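As a quick illustration of response generation with the facebook/blenderbot-90M checkpoint mentioned above (a hedged sketch that mirrors the usage of the larger Blenderbot models):
python
from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

mname = "facebook/blenderbot-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)

UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))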
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
BlenderbotSmallConfig
[[autodoc]] BlenderbotSmallConfig
BlenderbotSmallTokenizer
[[autodoc]] BlenderbotSmallTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
BlenderbotSmallTokenizerFast
[[autodoc]] BlenderbotSmallTokenizerFast
BlenderbotSmallModel
[[autodoc]] BlenderbotSmallModel
- forward
BlenderbotSmallForConditionalGeneration
[[autodoc]] BlenderbotSmallForConditionalGeneration
- forward
BlenderbotSmallForCausalLM
[[autodoc]] BlenderbotSmallForCausalLM
- forward
TFBlenderbotSmallModel
[[autodoc]] TFBlenderbotSmallModel
- call
TFBlenderbotSmallForConditionalGeneration
[[autodoc]] TFBlenderbotSmallForConditionalGeneration
- call
FlaxBlenderbotSmallModel
[[autodoc]] FlaxBlenderbotSmallModel
- call
- encode
- decode
FlaxBlenderbotSmallForConditionalGeneration
[[autodoc]] FlaxBlenderbotSmallForConditionalGeneration
- call
- encode
- decode |
Pyramid Vision Transformer (PVT)
Overview
The PVT model was proposed in
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. The PVT is a type of
vision transformer that utilizes a pyramid structure to make it an effective backbone for dense prediction tasks. Specifically,
it allows for more fine-grained inputs (4 x 4 pixels per patch) to be used, while simultaneously shrinking the sequence length
of the Transformer as it deepens - reducing the computational cost. Additionally, a spatial-reduction attention (SRA) layer
is used to further reduce the resource consumption when learning high-resolution features.
The abstract from the paper is the following:
Although convolutional neural networks (CNNs) have achieved great success in computer vision, this work investigates a
simpler, convolution-free backbone network useful for many dense prediction tasks. Unlike the recently proposed Vision
Transformer (ViT) that was designed for image classification specifically, we introduce the Pyramid Vision Transformer
(PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several
merits compared to current state of the arts. Different from ViT that typically yields low resolution outputs and
incurs high computational and memory costs, PVT not only can be trained on dense partitions of an image to achieve high
output resolution, which is important for dense prediction, but also uses a progressive shrinking pyramid to reduce the
computations of large feature maps. PVT inherits the advantages of both CNN and Transformer, making it a unified
backbone for various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones.
We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including
object detection, instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet
achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope
that PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.
This model was contributed by Xrenya. The original code can be found here.
PVTv1 on ImageNet-1K
| Model variant | Size | Acc@1 | Params (M) |
|---------------|:----:|:-----:|:----------:|
| PVT-Tiny      | 224  | 75.1  | 13.2       |
| PVT-Small     | 224  | 79.8  | 24.5       |
| PVT-Medium    | 224  | 81.2  | 44.2       |
| PVT-Large     | 224  | 81.7  | 61.4       |
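As a quick illustration of image classification with PVT, the following hedged sketch assumes the Zetatech/pvt-tiny-224 checkpoint (one of the converted PVTv1 checkpoints):
python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, PvtForImageClassification

checkpoint = "Zetatech/pvt-tiny-224"  # assumed checkpoint, for illustration
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = PvtForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])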
PvtConfig
[[autodoc]] PvtConfig
PvtImageProcessor
[[autodoc]] PvtImageProcessor
- preprocess
PvtForImageClassification
[[autodoc]] PvtForImageClassification
- forward
PvtModel
[[autodoc]] PvtModel
- forward |
LED
Overview
The LED model was proposed in Longformer: The Long-Document Transformer by Iz
Beltagy, Matthew E. Peters, Arman Cohan.
The abstract from the paper is the following:
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or
longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting
long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization
dataset.
Tips:
[LEDForConditionalGeneration] is an extension of
[BartForConditionalGeneration] exchanging the traditional self-attention layer with
Longformer's chunked self-attention layer. [LEDTokenizer] is an alias of
[BartTokenizer].
LED works very well on long-range sequence-to-sequence tasks where the input_ids largely exceed a length of
1024 tokens.
LED pads the input_ids to be a multiple of config.attention_window if required. Therefore a small speed-up is
gained when [LEDTokenizer] is used with the pad_to_multiple_of argument.
LED makes use of global attention by means of the global_attention_mask (see
[LongformerModel]). For summarization, it is advised to put global attention only on the first
<s> token. For question answering, it is advised to put global attention on all tokens of the question.
To fine-tune LED on all 16384 tokens, gradient checkpointing can be enabled in case training leads to out-of-memory (OOM)
errors. This can be done by executing model.gradient_checkpointing_enable().
Moreover, the use_cache=False
flag can be used to disable the caching mechanism to save memory.
A notebook showing how to evaluate LED can be accessed here.
A notebook showing how to fine-tune LED can be accessed here.
LED is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
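The following is a hedged summarization sketch that puts global attention on the first token only, as advised above, using the allenai/led-base-16384 checkpoint:
python
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

ARTICLE = "Transformers have become the dominant architecture in NLP ..."  # placeholder long document
inputs = tokenizer(ARTICLE, return_tensors="pt")

# Put global attention on the first (<s>) token only, as advised for summarization.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    **inputs, global_attention_mask=global_attention_mask, max_length=64, num_beams=2
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))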
This model was contributed by patrickvonplaten.
Documentation resources
Text classification task guide
Question answering task guide
Translation task guide
Summarization task guide
LEDConfig
[[autodoc]] LEDConfig
LEDTokenizer
[[autodoc]] LEDTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LEDTokenizerFast
[[autodoc]] LEDTokenizerFast
LED specific outputs
[[autodoc]] models.led.modeling_led.LEDEncoderBaseModelOutput
[[autodoc]] models.led.modeling_led.LEDSeq2SeqModelOutput
[[autodoc]] models.led.modeling_led.LEDSeq2SeqLMOutput
[[autodoc]] models.led.modeling_led.LEDSeq2SeqSequenceClassifierOutput
[[autodoc]] models.led.modeling_led.LEDSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] models.led.modeling_tf_led.TFLEDEncoderBaseModelOutput
[[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput
[[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqLMOutput
LEDModel
[[autodoc]] LEDModel
- forward
LEDForConditionalGeneration
[[autodoc]] LEDForConditionalGeneration
- forward
LEDForSequenceClassification
[[autodoc]] LEDForSequenceClassification
- forward
LEDForQuestionAnswering
[[autodoc]] LEDForQuestionAnswering
- forward
TFLEDModel
[[autodoc]] TFLEDModel
- call
TFLEDForConditionalGeneration
[[autodoc]] TFLEDForConditionalGeneration
- call |
SEW
Overview
SEW (Squeezed and Efficient Wav2Vec) was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training
for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q.
Weinberger, Yoav Artzi.
The abstract from the paper is the following:
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
time, SEW reduces word error rate by 25-50% across different model sizes.
Tips:
SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
SEWForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using
[Wav2Vec2CTCTokenizer].
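The following is a hedged speech-recognition sketch of the CTC decoding flow described above; the checkpoint name (asapp/sew-tiny-100k-ft-ls100h) and the demo dataset are assumptions used for illustration:
python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, SEWForCTC

checkpoint = "asapp/sew-tiny-100k-ft-ls100h"  # assumed fine-tuned SEW checkpoint
processor = AutoProcessor.from_pretrained(checkpoint)
model = SEWForCTC.from_pretrained(checkpoint)

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sample = dataset[0]["audio"]

inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: take the most likely token at each frame, then collapse with the tokenizer.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))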
This model was contributed by anton-l.
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
SEWConfig
[[autodoc]] SEWConfig
SEWModel
[[autodoc]] SEWModel
- forward
SEWForCTC
[[autodoc]] SEWForCTC
- forward
SEWForSequenceClassification
[[autodoc]] SEWForSequenceClassification
- forward |
CodeGen
Overview
The CodeGen model was proposed in A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong.
CodeGen is an autoregressive language model for program synthesis trained sequentially on The Pile, BigQuery, and BigPython.
The abstract from the paper is the following:
Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We make the training library JaxFormer including checkpoints available as open source contribution: this https URL.
This model was contributed by Hiroaki Hayashi.
The original code can be found here.
Checkpoint Naming
CodeGen model checkpoints are available on different pre-training data with variable sizes.
The format is: Salesforce/codegen-{size}-{data}, where
size: 350M, 2B, 6B, 16B
data:
nl: Pre-trained on the Pile
multi: Initialized with nl, then further pre-trained on multiple programming languages data
mono: Initialized with multi, then further pre-trained on Python data
For example, Salesforce/codegen-350M-mono offers a 350 million-parameter checkpoint pre-trained sequentially on the Pile, multiple programming languages, and Python.
How to use
python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "Salesforce/codegen-350M-mono"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
text = "def hello_world():"
completion = model.generate(**tokenizer(text, return_tensors="pt"))
print(tokenizer.decode(completion[0]))
def hello_world():
print("Hello World")
hello_world()
Documentation resources
Causal language modeling task guide
CodeGenConfig
[[autodoc]] CodeGenConfig
- all
CodeGenTokenizer
[[autodoc]] CodeGenTokenizer
- save_vocabulary
CodeGenTokenizerFast
[[autodoc]] CodeGenTokenizerFast
CodeGenModel
[[autodoc]] CodeGenModel
- forward
CodeGenForCausalLM
[[autodoc]] CodeGenForCausalLM
- forward |
XLM-ProphetNet
DISCLAIMER: If you see something strange, file a Github Issue and assign
@patrickvonplaten
Overview
The XLM-ProphetNet model was proposed in ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei
Zhang, Ming Zhou on 13 Jan, 2020.
XLM-ProphetNet is an encoder-decoder model and can predict n-future tokens for "ngram" language modeling instead of
just the next token. Its architecture is identical to ProphetNet, but the model was trained on the multi-lingual
"wiki100" Wikipedia dump.
The abstract from the paper is the following:
In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel
self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for
abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.
The Authors' code can be found here.
Tips:
XLM-ProphetNet's model architecture and pretraining objective are the same as ProphetNet's, but XLM-ProphetNet was pre-trained on the cross-lingual dataset XGLUE.
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
XLMProphetNetConfig
[[autodoc]] XLMProphetNetConfig
XLMProphetNetTokenizer
[[autodoc]] XLMProphetNetTokenizer
XLMProphetNetModel
[[autodoc]] XLMProphetNetModel
XLMProphetNetEncoder
[[autodoc]] XLMProphetNetEncoder
XLMProphetNetDecoder
[[autodoc]] XLMProphetNetDecoder
XLMProphetNetForConditionalGeneration
[[autodoc]] XLMProphetNetForConditionalGeneration
XLMProphetNetForCausalLM
[[autodoc]] XLMProphetNetForCausalLM |
BERTweet
Overview
The BERTweet model was proposed in BERTweet: A pre-trained language model for English Tweets by Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen.
The abstract from the paper is the following:
We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having
the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et
al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al.,
2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks:
Part-of-speech tagging, Named-entity recognition and text classification.
Example of use:
python
import torch
from transformers import AutoModel, AutoTokenizer
bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
# For transformers v4.x+:
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
# For transformers v3.x:
# tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
    features = bertweet(input_ids)  # Models outputs are now tuples
# With TensorFlow 2.0+:
from transformers import TFAutoModel
bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
This model was contributed by dqnguyen. The original code can be found here.
BertweetTokenizer
[[autodoc]] BertweetTokenizer |
CPM
Overview
The CPM model was proposed in CPM: A Large-scale Generative Chinese Pre-trained Language Model by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin,
Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen,
Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
The abstract from the paper is the following:
Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3,
with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even
zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus
of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the
Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best
of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained
language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation,
cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many
NLP tasks in the settings of few-shot (even zero-shot) learning.
This model was contributed by canwenxu. The original implementation can be found
here: https://github.com/TsinghuaAI/CPM-Generate
Note: We only have a tokenizer here, since the model architecture is the same as GPT-2.
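A minimal sketch of loading the tokenizer, assuming the converted files are hosted in the TsinghuaAI/CPM-Generate repository linked above (the CpmTokenizer additionally requires the jieba and sentencepiece packages; the model weights themselves load through the regular GPT-2 classes):
python
from transformers import CpmTokenizer

# Assumed repository name, taken from the link above; requires `jieba` and `sentencepiece`.
tokenizer = CpmTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
ids = tokenizer("清华大学创建于1911年。", return_tensors="pt").input_ids
print(tokenizer.decode(ids[0]))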
CpmTokenizer
[[autodoc]] CpmTokenizer
CpmTokenizerFast
[[autodoc]] CpmTokenizerFast |
XLM-V
Overview
XLM-V is a multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (same as XLM-R).
It was introduced in the XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models
paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa.
From the abstract of the XLM-V paper:
Large multilingual language models typically rely on a single vocabulary shared across 100+ languages.
As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged.
This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R.
In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by
de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity
to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically
more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V,
a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we
tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and
named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).
Tips:
XLM-V is compatible with the XLM-RoBERTa model architecture; only the model weights from the fairseq
library had to be converted.
The XLMTokenizer implementation is used to load the vocab and performs tokenization.
An XLM-V (base size) model is available under the facebook/xlm-v-base identifier.
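A hedged fill-mask sketch using that facebook/xlm-v-base checkpoint through the XLM-RoBERTa classes (note that the one million token vocabulary makes this a fairly large download):
python
from transformers import pipeline

# The checkpoint is loaded through the XLM-RoBERTa architecture, as noted above.
mask_filler = pipeline("fill-mask", model="facebook/xlm-v-base")
print(mask_filler("Paris is the <mask> of France.")[0]["token_str"])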
This model was contributed by stefan-it, including detailed experiments with XLM-V on downstream tasks.
The experiments repository can be found here. |
Funnel Transformer
Overview
The Funnel Transformer model was proposed in the paper Funnel-Transformer: Filtering out Sequential Redundancy for
Efficient Language Processing. It is a bidirectional transformer model, like
BERT, but with a pooling operation after each block of layers, a bit like in traditional convolutional neural networks
(CNN) in computer vision.
The abstract from the paper is the following:
With the success of language pretraining, it is highly desirable to develop more efficient architectures of good
scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the
much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only
require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which
gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More
importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further
improve the model capacity. In addition, to perform token-level predictions as required by common pretraining
objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence
via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on
a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading
comprehension.
Tips:
Since Funnel Transformer uses pooling, the sequence length of the hidden states changes after each block of layers. This way, their length is divided by 2, which speeds up the computation of the next hidden states.
The base model therefore has a final sequence length that is a quarter of the original one. This model can be used
directly for tasks that just require a sentence summary (like sequence classification or multiple choice). For other
tasks, the full model is used; this full model has a decoder that upsamples the final hidden states to the same
sequence length as the input.
For tasks such as classification, this is not a problem, but for tasks like masked language modeling or token classification, we need a hidden state with the same sequence length as the original input. In those cases, the final hidden states are upsampled to the input sequence length and go through two additional layers. That's why there are two versions of each checkpoint. The version suffixed with "-base" contains only the three blocks, while the version without that suffix contains the three blocks and the upsampling head with its additional layers.
The Funnel Transformer checkpoints are all available with a full version and a base version. The first ones should be
used for [FunnelModel], [FunnelForPreTraining],
[FunnelForMaskedLM], [FunnelForTokenClassification] and
[FunnelForQuestionAnswering]. The second ones should be used for
[FunnelBaseModel], [FunnelForSequenceClassification] and
[FunnelForMultipleChoice].
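The following hedged sketch contrasts the two flavours, assuming the funnel-transformer/small and funnel-transformer/small-base checkpoints:
python
from transformers import FunnelBaseModel, FunnelModel, FunnelTokenizerFast

tokenizer = FunnelTokenizerFast.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# Base version: three blocks only, hidden states are pooled to roughly a quarter of the input length.
base_model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
print(base_model(**inputs).last_hidden_state.shape)

# Full version: adds the upsampling "decoder", so hidden states match the input length again.
full_model = FunnelModel.from_pretrained("funnel-transformer/small")
print(full_model(**inputs).last_hidden_state.shape)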
This model was contributed by sgugger. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FunnelConfig
[[autodoc]] FunnelConfig
FunnelTokenizer
[[autodoc]] FunnelTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
FunnelTokenizerFast
[[autodoc]] FunnelTokenizerFast
Funnel specific outputs
[[autodoc]] models.funnel.modeling_funnel.FunnelForPreTrainingOutput
[[autodoc]] models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput
FunnelBaseModel
[[autodoc]] FunnelBaseModel
- forward
FunnelModel
[[autodoc]] FunnelModel
- forward
FunnelForPreTraining
[[autodoc]] FunnelForPreTraining
- forward
FunnelForMaskedLM
[[autodoc]] FunnelForMaskedLM
- forward
FunnelForSequenceClassification
[[autodoc]] FunnelForSequenceClassification
- forward
FunnelForMultipleChoice
[[autodoc]] FunnelForMultipleChoice
- forward
FunnelForTokenClassification
[[autodoc]] FunnelForTokenClassification
- forward
FunnelForQuestionAnswering
[[autodoc]] FunnelForQuestionAnswering
- forward
TFFunnelBaseModel
[[autodoc]] TFFunnelBaseModel
- call
TFFunnelModel
[[autodoc]] TFFunnelModel
- call
TFFunnelForPreTraining
[[autodoc]] TFFunnelForPreTraining
- call
TFFunnelForMaskedLM
[[autodoc]] TFFunnelForMaskedLM
- call
TFFunnelForSequenceClassification
[[autodoc]] TFFunnelForSequenceClassification
- call
TFFunnelForMultipleChoice
[[autodoc]] TFFunnelForMultipleChoice
- call
TFFunnelForTokenClassification
[[autodoc]] TFFunnelForTokenClassification
- call
TFFunnelForQuestionAnswering
[[autodoc]] TFFunnelForQuestionAnswering
- call |
NLLB
DISCLAIMER: The default behaviour for the tokenizer has recently been fixed (and thus changed)!
The previous version adds [self.eos_token_id, self.cur_lang_code] at the end of the token sequence for both target and source tokenization. This is wrong, as the NLLB paper mentions (page 48, 6.1.1. Model Architecture):
Note that we prefix the source sequence with the source language, as opposed to the target
language as previously done in several works (Arivazhagan et al., 2019; Johnson et al.,
2017). This is primarily because we prioritize optimizing zero-shot performance of our
model on any pair of 200 languages at a minor cost to supervised performance.
Previous behaviour:
python
from transformers import NllbTokenizer
tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer("How was your day?").input_ids
[13374, 1398, 4260, 4039, 248130, 2, 256047]
# 2: '</s>'
# 256047 : 'eng_Latn'
New behaviour
python
from transformers import NllbTokenizer
tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer("How was your day?").input_ids
[256047, 13374, 1398, 4260, 4039, 248130, 2]
Enabling the old behaviour can be done as follows:
python
from transformers import NllbTokenizer
tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour=True)
For more details, feel free to check the linked PR and Issue.
Overview of NLLB
The NLLB model was presented in No Language Left Behind: Scaling Human-Centered Machine Translation by Marta R. Costa-jussà, James Cross, Onur Çelebi,
Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula,
Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews,
Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers,
Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
The abstract of the paper is the following:
Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today.
However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the
200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by
first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed
at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of
Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training
improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using
a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety.
Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.
This implementation contains the dense models available on release.
The sparse model NLLB-MoE (Mixture-of-Experts) is now available! More details here
This model was contributed by Lysandre. The authors' code can be found here.
Generating with NLLB
While generating the target text, set the forced_bos_token_id to the target language id. The following
example shows how to translate English to French using the facebook/nllb-200-distilled-600M model.
Note that we're using the BCP-47 code for French fra_Latn. See here
for the list of all BCP-47 in the Flores 200 dataset.
python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
article = "UN Chief says there is no military solution in Syria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=30
)
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie
Generating from any other language than English
English (eng_Latn) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language,
you should specify the BCP-47 code in the src_lang keyword argument of the tokenizer initialization.
See the example below for a translation from Romanian to German:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"facebook/nllb-200-distilled-600M", use_auth_token=True, src_lang="ron_Latn"
)
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", use_auth_token=True)
article = "Şeful ONU spune că nu există o soluţie militară în Siria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30
)
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
UN-Chef sagt, es gibt keine militärische Lösung in Syrien
Documentation resources
Translation task guide
Summarization task guide
NllbTokenizer
[[autodoc]] NllbTokenizer
- build_inputs_with_special_tokens
NllbTokenizerFast
[[autodoc]] NllbTokenizerFast |
Blenderbot
DISCLAIMER: If you see something strange, file a Github Issue.
Overview
The Blender chatbot model was proposed in Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu,
Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.
The abstract of the paper is the following:
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.
Tips:
Blenderbot is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
This model was contributed by sshleifer. The authors' code can be found here.
Implementation Notes
Blenderbot uses a standard seq2seq model transformer based architecture.
Available checkpoints can be found in the model hub.
This is the default Blenderbot model class. However, some smaller checkpoints, such as
facebook/blenderbot_small_90M, have a different architecture and consequently should be used with
BlenderbotSmall.
Usage
Here is an example of model usage:
python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids))
[" That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?"]
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
BlenderbotConfig
[[autodoc]] BlenderbotConfig
BlenderbotTokenizer
[[autodoc]] BlenderbotTokenizer
- build_inputs_with_special_tokens
BlenderbotTokenizerFast
[[autodoc]] BlenderbotTokenizerFast
- build_inputs_with_special_tokens
BlenderbotModel
See [~transformers.BartModel] for arguments to forward and generate
[[autodoc]] BlenderbotModel
- forward
BlenderbotForConditionalGeneration
See [~transformers.BartForConditionalGeneration] for arguments to forward and generate
[[autodoc]] BlenderbotForConditionalGeneration
- forward
BlenderbotForCausalLM
[[autodoc]] BlenderbotForCausalLM
- forward
TFBlenderbotModel
[[autodoc]] TFBlenderbotModel
- call
TFBlenderbotForConditionalGeneration
[[autodoc]] TFBlenderbotForConditionalGeneration
- call
FlaxBlenderbotModel
[[autodoc]] FlaxBlenderbotModel
- call
- encode
- decode
FlaxBlenderbotForConditionalGeneration
[[autodoc]] FlaxBlenderbotForConditionalGeneration
- call
- encode
- decode |
InstructBLIP
Overview
The InstructBLIP model was proposed in InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
InstructBLIP leverages the BLIP-2 architecture for visual instruction tuning.
The abstract from the paper is the following:
General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.
Tips:
InstructBLIP uses the same architecture as BLIP-2 with a tiny but important difference: it also feeds the text prompt (instruction) to the Q-Former.
InstructBLIP architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
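The following is a hedged inference sketch; Salesforce/instructblip-vicuna-7b is one of the released checkpoints (a very large download) and is used here for illustration:
python
import requests
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

checkpoint = "Salesforce/instructblip-vicuna-7b"  # released checkpoint, large download
processor = InstructBlipProcessor.from_pretrained(checkpoint)
model = InstructBlipForConditionalGeneration.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "What is unusual about this image?"

# The processor feeds the instruction both to the Q-Former and to the language model.
inputs = processor(images=image, text=prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0].strip())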
InstructBlipConfig
[[autodoc]] InstructBlipConfig
- from_vision_qformer_text_configs
InstructBlipVisionConfig
[[autodoc]] InstructBlipVisionConfig
InstructBlipQFormerConfig
[[autodoc]] InstructBlipQFormerConfig
InstructBlipProcessor
[[autodoc]] InstructBlipProcessor
InstructBlipVisionModel
[[autodoc]] InstructBlipVisionModel
- forward
InstructBlipQFormerModel
[[autodoc]] InstructBlipQFormerModel
- forward
InstructBlipForConditionalGeneration
[[autodoc]] InstructBlipForConditionalGeneration
- forward
- generate |
BLIP-2
Overview
The BLIP-2 model was proposed in BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models by
Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. BLIP-2 leverages frozen pre-trained image encoders and large language models (LLMs) by training a lightweight, 12-layer Transformer
encoder in between them, achieving state-of-the-art performance on various vision-language tasks. Most notably, BLIP-2 improves upon Flamingo, an 80 billion parameter model, by 8.7%
on zero-shot VQAv2 with 54x fewer trainable parameters.
The abstract from the paper is the following:
The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
Tips:
BLIP-2 can be used for conditional text generation given an image and an optional text prompt. At inference time, it's recommended to use the [generate] method.
One can use [Blip2Processor] to prepare images for the model, and decode the predicted token IDs back to text.
BLIP-2 architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
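The following is a hedged image-captioning sketch using the Salesforce/blip2-opt-2.7b checkpoint (a large download):
python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

checkpoint = "Salesforce/blip2-opt-2.7b"  # released checkpoint, large download
processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Without a text prompt the model produces a plain caption; pass text=... for VQA-style prompting.
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())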
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLIP-2.
Demo notebooks for BLIP-2 for image captioning, visual question answering (VQA) and chat-like conversations can be found here.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Blip2Config
[[autodoc]] Blip2Config
- from_vision_qformer_text_configs
Blip2VisionConfig
[[autodoc]] Blip2VisionConfig
Blip2QFormerConfig
[[autodoc]] Blip2QFormerConfig
Blip2Processor
[[autodoc]] Blip2Processor
Blip2VisionModel
[[autodoc]] Blip2VisionModel
- forward
Blip2QFormerModel
[[autodoc]] Blip2QFormerModel
- forward
Blip2Model
[[autodoc]] Blip2Model
- forward
- get_text_features
- get_image_features
- get_qformer_features
Blip2ForConditionalGeneration
[[autodoc]] Blip2ForConditionalGeneration
- forward
- generate |
Wav2Vec2-Conformer
Overview
The Wav2Vec2-Conformer was added to an updated version of fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
The official results of the model can be found in Table 3 and Table 4 of the paper.
The Wav2Vec2-Conformer weights were released by the Meta AI team within the Fairseq library.
Tips:
Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the Attention-block with a Conformer-block
as introduced in Conformer: Convolution-augmented Transformer for Speech Recognition.
For the same number of layers, Wav2Vec2-Conformer requires more parameters than Wav2Vec2, but also yields
an improved word error rate.
Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2.
Wav2Vec2-Conformer can use either no relative position embeddings, Transformer-XL-like position embeddings, or
rotary position embeddings by setting the correct config.position_embeddings_type.
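As a quick illustration of that config switch, the following minimal sketch builds a randomly initialized model with rotary position embeddings:
python
from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel

# Select the position embedding variant via the config; this model is randomly initialized.
config = Wav2Vec2ConformerConfig(position_embeddings_type="rotary")
model = Wav2Vec2ConformerModel(config)
print(model.config.position_embeddings_type)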
This model was contributed by patrickvonplaten.
The original code can be found here.
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
Wav2Vec2ConformerConfig
[[autodoc]] Wav2Vec2ConformerConfig
Wav2Vec2Conformer specific outputs
[[autodoc]] models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput
Wav2Vec2ConformerModel
[[autodoc]] Wav2Vec2ConformerModel
- forward
Wav2Vec2ConformerForCTC
[[autodoc]] Wav2Vec2ConformerForCTC
- forward
Wav2Vec2ConformerForSequenceClassification
[[autodoc]] Wav2Vec2ConformerForSequenceClassification
- forward
Wav2Vec2ConformerForAudioFrameClassification
[[autodoc]] Wav2Vec2ConformerForAudioFrameClassification
- forward
Wav2Vec2ConformerForXVector
[[autodoc]] Wav2Vec2ConformerForXVector
- forward
Wav2Vec2ConformerForPreTraining
[[autodoc]] Wav2Vec2ConformerForPreTraining
- forward |
MusicGen
Overview
The MusicGen model was proposed in the paper Simple and Controllable Music Generation
by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez.
MusicGen is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned
on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a
sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or audio codes,
conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec,
to recover the audio waveform.
Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of
the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g.
hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass.
The abstract from the paper is the following:
We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates
over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised
of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for
cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen
can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better
controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human
studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark.
Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.
This model was contributed by sanchit-gandhi. The original code can be found
here. The pre-trained checkpoints can be found on the
Hugging Face Hub.
Generation
MusicGen is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly
better results than greedy decoding, so we encourage using sampling where possible. Sampling is enabled by default,
and can be explicitly specified by setting do_sample=True in the call to [MusicgenForConditionalGeneration.generate],
or by overriding the model's generation config (see below).
Unconditional Generation
The inputs for unconditional (or 'null') generation can be obtained through the method
[MusicgenForConditionalGeneration.get_unconditional_inputs]:
python
from transformers import MusicgenForConditionalGeneration
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
unconditional_inputs = model.get_unconditional_inputs(num_samples=1)
audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
The audio outputs are a three-dimensional Torch tensor of shape (batch_size, num_channels, sequence_length). To listen
to the generated audio samples, you can either play them in an ipynb notebook:
python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
Or save them as a .wav file using a third-party library, e.g. scipy:
python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
Text-Conditional Generation
The model can generate an audio sample conditioned on a text prompt through use of the [MusicgenProcessor] to pre-process
the inputs:
python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
The guidance_scale is used in classifier free guidance (CFG), setting the weighting between the conditional logits
(which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or
'null' prompt). Higher guidance scale encourages the model to generate samples that are more closely linked to the input
prompt, usually at the expense of poorer audio quality. CFG is enabled by setting guidance_scale > 1. For best results,
use guidance_scale=3 (default).
Audio-Prompted Generation
The same [MusicgenProcessor] can be used to pre-process an audio prompt that is used for audio continuation. In the
following example, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command
below:
pip install --upgrade pip
pip install datasets[audio]
python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]
# take the first half of the audio sample
sample["array"] = sample["array"][: len(sample["array"]) // 2]
inputs = processor(
audio=sample["array"],
sampling_rate=sample["sampling_rate"],
text=["80s blues track with groovy saxophone"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
For batched audio-prompted generation, the generated audio_values can be post-processed to remove padding by using the
[MusicgenProcessor] class:
python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]
# take the first quarter of the audio sample
sample_1 = sample["array"][: len(sample["array"]) // 4]
# take the first half of the audio sample
sample_2 = sample["array"][: len(sample["array"]) // 2]
inputs = processor(
audio=[sample_1, sample_2],
sampling_rate=sample["sampling_rate"],
text=["80s blues track with groovy saxophone", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
post-process to remove padding from the batched audio
audio_values = processor.batch_decode(audio_values, padding_mask=inputs.padding_mask)
Generation Configuration
The default parameters that control the generation process, such as sampling, guidance scale and number of generated
tokens, can be found in the model's generation config, and updated as desired:
```python
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# inspect the default generation config
model.generation_config

# increase the guidance scale to 4.0
model.generation_config.guidance_scale = 4.0

# decrease the max length to 256 tokens
model.generation_config.max_length = 256
```
Note that any arguments passed to the generate method will override those in the generation config, so setting
do_sample=False in the call to generate will take precedence over the setting of model.generation_config.do_sample in the
generation config.
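As a minimal sketch of this precedence (reusing the model loaded above together with its unconditional inputs helper):

```python
# the generation config above enables sampling, but an explicit argument wins for this call
unconditional_inputs = model.get_unconditional_inputs(num_samples=1)
audio_values = model.generate(**unconditional_inputs, do_sample=False, max_new_tokens=256)  # greedy decoding for this call only
```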
Model Structure
The MusicGen model can be decomposed into three distinct stages:
1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5
2. MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations
3. Audio encoder/decoder: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder
Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class [MusicgenForCausalLM],
or as a composite model that includes the text encoder and audio encoder/decoder, corresponding to the class
[MusicgenForConditionalGeneration].
Since the text encoder and audio encoder/decoder models are frozen during training, the MusicGen decoder [MusicgenForCausalLM]
can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can
be combined with the frozen text encoder and audio encoder/decoders to recover the composite [MusicgenForConditionalGeneration]
model.
Below, we demonstrate how to construct the composite [MusicgenForConditionalGeneration] model from its three constituent
parts, as would typically be done following training of the MusicGen decoder LM:
```python
from transformers import AutoConfig, AutoModelForTextEncoding, AutoModel, MusicgenForCausalLM, MusicgenForConditionalGeneration

text_encoder = AutoModelForTextEncoding.from_pretrained("t5-base")
audio_encoder = AutoModel.from_pretrained("facebook/encodec_32khz")
decoder_config = AutoConfig.from_pretrained("facebook/musicgen-small").decoder
decoder = MusicgenForCausalLM.from_pretrained("facebook/musicgen-small", **decoder_config)

model = MusicgenForConditionalGeneration.from_sub_models_pretrained(text_encoder, audio_encoder, decoder)
```
If only the decoder needs to be loaded from the pre-trained checkpoint for the composite model, it can be loaded by first
specifying the correct config, or be accessed through the .decoder attribute of the composite model:
```python
from transformers import AutoConfig, MusicgenForCausalLM, MusicgenForConditionalGeneration

# Option 1: get decoder config and pass to .from_pretrained
decoder_config = AutoConfig.from_pretrained("facebook/musicgen-small").decoder
decoder = MusicgenForCausalLM.from_pretrained("facebook/musicgen-small", **decoder_config)

# Option 2: load the entire composite model, but only return the decoder
decoder = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small").decoder
```
Tips:
* MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model.
* Sampling mode tends to deliver better results than greedy decoding; you can toggle sampling with the do_sample argument in the call to [MusicgenForConditionalGeneration.generate].
MusicgenDecoderConfig
[[autodoc]] MusicgenDecoderConfig
MusicgenConfig
[[autodoc]] MusicgenConfig
MusicgenProcessor
[[autodoc]] MusicgenProcessor
MusicgenModel
[[autodoc]] MusicgenModel
- forward
MusicgenForCausalLM
[[autodoc]] MusicgenForCausalLM
- forward
MusicgenForConditionalGeneration
[[autodoc]] MusicgenForConditionalGeneration
- forward |
XLS-R
Overview
The XLS-R model was proposed in XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman
Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
The abstract from the paper is the following:
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0.
We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128
languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range
of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation
benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into
English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as
VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107
language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform
English-only pretraining when translating English speech into other languages, a setting which favors monolingual
pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
Tips:
XLS-R is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
XLS-R model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using
[Wav2Vec2CTCTokenizer] (see the sketch below).
Relevant checkpoints can be found under https://huggingface.co/models?other=xls_r.
XLS-R's architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2's documentation page.
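The following is a rough sketch of that CTC decoding flow. The checkpoint name is a placeholder for any XLS-R model fine-tuned with a CTC head, and the silent array stands in for real 16 kHz speech:

```python
import torch
from transformers import AutoModelForCTC, AutoProcessor

checkpoint = "path-to-xls-r-ctc-checkpoint"  # placeholder: any XLS-R checkpoint fine-tuned for CTC
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForCTC.from_pretrained(checkpoint)

raw_audio = [0.0] * 16_000  # stand-in for one second of real speech sampled at 16 kHz
inputs = processor(raw_audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```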
The original code can be found here. |
XLNet
Overview
The XLNet model was proposed in XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov,
Quoc V. Le. XLNet is an extension of the Transformer-XL model pre-trained using an autoregressive method to learn
bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization
order.
The abstract from the paper is the following:
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves
better performance than pretraining approaches based on autoregressive language modeling. However, relying on
corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a
pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive
pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all
permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive
formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into
pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large
margin, including question answering, natural language inference, sentiment analysis, and document ranking.
Tips:
The specific attention pattern can be controlled at training and test time using the perm_mask input.
Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained
using only a sub-set of the output tokens as targets, which are selected with the target_mapping input.
To use XLNet for sequential decoding (i.e. not in a fully bi-directional setting), use the perm_mask and
target_mapping inputs to control the attention span and outputs (see examples in
examples/pytorch/text-generation/run_generation.py and the sketch below).
XLNet is one of the few models that has no sequence length limit.
XLNet is not a traditional autoregressive model but uses a training strategy that builds on that. It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict the token n+1. Since this is all done with a mask, the sentence is actually fed to the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1,…,sequence length.
XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies.
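A minimal sketch of the perm_mask/target_mapping mechanics, using the public xlnet-base-cased checkpoint (the prompt is illustrative):

```python
import torch
from transformers import AutoTokenizer, XLNetLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = tokenizer("Hello, my dog is very", add_special_tokens=False, return_tensors="pt").input_ids
seq_len = input_ids.shape[1]

# perm_mask[b, i, j] = 1 means token i cannot attend to token j;
# here no token may look at the last position, which is the one we want to predict
perm_mask = torch.zeros((1, seq_len, seq_len))
perm_mask[:, :, -1] = 1.0

# only produce logits for the last position
target_mapping = torch.zeros((1, 1, seq_len))
target_mapping[:, 0, -1] = 1.0

with torch.no_grad():
    outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)

next_token_logits = outputs.logits  # shape (1, 1, vocab_size)
```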
This model was contributed by thomwolf. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Multiple choice task guide
XLNetConfig
[[autodoc]] XLNetConfig
XLNetTokenizer
[[autodoc]] XLNetTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XLNetTokenizerFast
[[autodoc]] XLNetTokenizerFast
XLNet specific outputs
[[autodoc]] models.xlnet.modeling_xlnet.XLNetModelOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput
XLNetModel
[[autodoc]] XLNetModel
- forward
XLNetLMHeadModel
[[autodoc]] XLNetLMHeadModel
- forward
XLNetForSequenceClassification
[[autodoc]] XLNetForSequenceClassification
- forward
XLNetForMultipleChoice
[[autodoc]] XLNetForMultipleChoice
- forward
XLNetForTokenClassification
[[autodoc]] XLNetForTokenClassification
- forward
XLNetForQuestionAnsweringSimple
[[autodoc]] XLNetForQuestionAnsweringSimple
- forward
XLNetForQuestionAnswering
[[autodoc]] XLNetForQuestionAnswering
- forward
TFXLNetModel
[[autodoc]] TFXLNetModel
- call
TFXLNetLMHeadModel
[[autodoc]] TFXLNetLMHeadModel
- call
TFXLNetForSequenceClassification
[[autodoc]] TFXLNetForSequenceClassification
- call
TFXLNetForMultipleChoice
[[autodoc]] TFXLNetForMultipleChoice
- call
TFXLNetForTokenClassification
[[autodoc]] TFXLNetForTokenClassification
- call
TFXLNetForQuestionAnsweringSimple
[[autodoc]] TFXLNetForQuestionAnsweringSimple
- call |
Pix2Struct
Overview
The Pix2Struct model was proposed in Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
The abstract from the paper is the following:
Visually-situated language is ubiquitous -- sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.
Tips:
Pix2Struct has been fine-tuned on a variety of tasks and datasets, ranging from image captioning and visual question answering (VQA) over different inputs (books, charts, science diagrams) to captioning UI components. The full list can be found in Table 1 of the paper.
We therefore advise you to use these models for the tasks they have been fine-tuned on. For instance, if you want to use Pix2Struct for UI captioning, you should use the model fine-tuned on the UI dataset; if you want to use Pix2Struct for image captioning, you should use the model fine-tuned on the natural images captioning dataset, and so on.
If you want to use the model to perform conditional text captioning, make sure to use the processor with add_special_tokens=False.
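A hedged sketch of conditional captioning along these lines; the checkpoint is assumed to be a natural-image captioning fine-tune, so substitute whichever Pix2Struct checkpoint matches your task:

```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

checkpoint = "google/pix2struct-textcaps-base"  # assumed captioning fine-tune
processor = Pix2StructProcessor.from_pretrained(checkpoint)
model = Pix2StructForConditionalGeneration.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# for conditional captioning, pass the text prompt with add_special_tokens=False
inputs = processor(images=image, text="A picture of", return_tensors="pt", add_special_tokens=False)
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(generated_ids[0], skip_special_tokens=True))
```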
This model was contributed by ybelkada.
The original code can be found here.
Resources
Fine-tuning Notebook
All models
Pix2StructConfig
[[autodoc]] Pix2StructConfig
- from_text_vision_configs
Pix2StructTextConfig
[[autodoc]] Pix2StructTextConfig
Pix2StructVisionConfig
[[autodoc]] Pix2StructVisionConfig
Pix2StructProcessor
[[autodoc]] Pix2StructProcessor
Pix2StructImageProcessor
[[autodoc]] Pix2StructImageProcessor
- preprocess
Pix2StructTextModel
[[autodoc]] Pix2StructTextModel
- forward
Pix2StructVisionModel
[[autodoc]] Pix2StructVisionModel
- forward
Pix2StructForConditionalGeneration
[[autodoc]] Pix2StructForConditionalGeneration
- forward |
GPTBigCode
Overview
The GPTBigCode model was proposed in SantaCoder: don't reach for the stars! by BigCode. The listed authors are: Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo Garcรญa del Rรญo, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
The abstract from the paper is the following:
The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at this https URL.
The model is an optimized GPT2 model with support for Multi-Query Attention.
Technical details
The main differences compared to GPT2 are:
- Added support for Multi-Query Attention.
- Use gelu_pytorch_tanh instead of classic gelu.
- Avoid unnecessary synchronizations (this has since been added to GPT2 in #20061, but wasn't in the reference codebase).
- Use Linear layers instead of Conv1D (good speedup but makes the checkpoints incompatible).
- Merge _attn and _upcast_and_reordered_attn. Always merge the matmul with scaling. Rename reorder_and_upcast_attn -> attention_softmax_in_fp32.
- Cache the attention mask value to avoid recreating it every time.
- Use jit to fuse the attention fp32 casting, masking, softmax, and scaling.
- Combine the attention and causal masks into a single one, pre-computed for the whole model instead of every layer.
- Merge the key and value caches into one (this changes the format of layer_past/present).
- Use the memory layout (self.num_heads, 3, self.head_dim) instead of (3, self.num_heads, self.head_dim) for the QKV tensor with MHA. (prevents an overhead with the merged key and values, but makes the checkpoints incompatible with the original gpt2 model).
You can read more about the optimizations in the original pull request
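As a brief sketch of the points above: Multi-Query Attention is controlled by the multi_query flag on the config, and text generation goes through the usual causal-LM API. The checkpoint name below is an assumption; use any gpt_bigcode checkpoint you have access to:

```python
from transformers import AutoTokenizer, GPTBigCodeConfig, GPTBigCodeForCausalLM

# Multi-Query Attention is on by default in the config
config = GPTBigCodeConfig()
print(config.multi_query)  # True

checkpoint = "bigcode/gpt_bigcode-santacoder"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTBigCodeForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```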
GPTBigCodeConfig
[[autodoc]] GPTBigCodeConfig
GPTBigCodeModel
[[autodoc]] GPTBigCodeModel
- forward
GPTBigCodeForCausalLM
[[autodoc]] GPTBigCodeForCausalLM
- forward
GPTBigCodeForSequenceClassification
[[autodoc]] GPTBigCodeForSequenceClassification
- forward
GPTBigCodeForTokenClassification
[[autodoc]] GPTBigCodeForTokenClassification
- forward |
LayoutLM
Overview
The LayoutLM model was proposed in the paper LayoutLM: Pre-training of Text and Layout for Document Image
Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and
Ming Zhou. It's a simple but effective pretraining method of text and layout for document image understanding and
information extraction tasks, such as form understanding and receipt understanding. It obtains state-of-the-art results
on several downstream tasks:
form understanding: the FUNSD dataset (a collection of 199 annotated
forms comprising more than 30,000 words).
receipt understanding: the SROIE dataset (a collection of 626 receipts for
training and 347 receipts for testing).
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
The abstract from the paper is the following:
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the
widespread use of pretraining models for NLP applications, they almost exclusively focus on text-level manipulation,
while neglecting layout and style information that is vital for document image understanding. In this paper, we propose
the LayoutLM to jointly model interactions between text and layout information across scanned document images, which is
beneficial for a great number of real-world document image understanding tasks such as information extraction from
scanned documents. Furthermore, we also leverage image features to incorporate words' visual information into LayoutLM.
To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for
document-level pretraining. It achieves new state-of-the-art results in several downstream tasks, including form
understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification
(from 93.07 to 94.42).
Tips:
In addition to input_ids, [~transformers.LayoutLMModel.forward] also expects the input bbox, which are
the bounding boxes (i.e. 2D-positions) of the input tokens. These can be obtained using an external OCR engine such
as Google's Tesseract (there's a Python wrapper available). Each bounding box should be in (x0, y0, x1, y1) format, where
(x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1) represents the
position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on a 0-1000
scale. To normalize, you can use the following function:
```python
def normalize_bbox(bbox, width, height):
    return [
        int(1000 * (bbox[0] / width)),
        int(1000 * (bbox[1] / height)),
        int(1000 * (bbox[2] / width)),
        int(1000 * (bbox[3] / height)),
    ]
```
Here, width and height correspond to the width and height of the original document in which the token
occurs. Those can be obtained using the Python Imaging Library (PIL), for example as follows:
```python
from PIL import Image

# Document can be a png, jpg, etc. PDFs must be converted to images.
image = Image.open(name_of_your_document).convert("RGB")

width, height = image.size
```
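Putting the pieces together, the following is a hedged end-to-end sketch that expands word-level boxes to token level and feeds them to a token-classification head (the words, boxes and num_labels are illustrative):

```python
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizer

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("microsoft/layoutlm-base-uncased", num_labels=2)

words = ["Hello", "world"]
# word-level boxes from your OCR engine, already normalized to the 0-1000 scale
normalized_boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]

token_boxes = []
for word, box in zip(words, normalized_boxes):
    # repeat each word's box for every word piece produced by the tokenizer
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
# add boxes for the [CLS] and [SEP] special tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(" ".join(words), return_tensors="pt")
bbox = torch.tensor([token_boxes])

outputs = model(
    input_ids=encoding["input_ids"],
    attention_mask=encoding["attention_mask"],
    token_type_ids=encoding["token_type_ids"],
    bbox=bbox,
)
logits = outputs.logits
```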
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on fine-tuning
LayoutLM for document-understanding using Keras & Hugging Face
Transformers.
A blog post on how to fine-tune LayoutLM for document-understanding using only Hugging Face Transformers.
A notebook on how to fine-tune LayoutLM on the FUNSD dataset with image embeddings.
See also: Document question answering task guide
A notebook on how to fine-tune LayoutLM for sequence classification on the RVL-CDIP dataset.
Text classification task guide
A notebook on how to fine-tune LayoutLM for token classification on the FUNSD dataset.
Token classification task guide
Other resources
- Masked language modeling task guide
🚀 Deploy
A blog post on how to Deploy LayoutLM with Hugging Face Inference Endpoints.
LayoutLMConfig
[[autodoc]] LayoutLMConfig
LayoutLMTokenizer
[[autodoc]] LayoutLMTokenizer
LayoutLMTokenizerFast
[[autodoc]] LayoutLMTokenizerFast
LayoutLMModel
[[autodoc]] LayoutLMModel
LayoutLMForMaskedLM
[[autodoc]] LayoutLMForMaskedLM
LayoutLMForSequenceClassification
[[autodoc]] LayoutLMForSequenceClassification
LayoutLMForTokenClassification
[[autodoc]] LayoutLMForTokenClassification
LayoutLMForQuestionAnswering
[[autodoc]] LayoutLMForQuestionAnswering
TFLayoutLMModel
[[autodoc]] TFLayoutLMModel
TFLayoutLMForMaskedLM
[[autodoc]] TFLayoutLMForMaskedLM
TFLayoutLMForSequenceClassification
[[autodoc]] TFLayoutLMForSequenceClassification
TFLayoutLMForTokenClassification
[[autodoc]] TFLayoutLMForTokenClassification
TFLayoutLMForQuestionAnswering
[[autodoc]] TFLayoutLMForQuestionAnswering |
BigBird
Overview
The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird, is a sparse-attention
based transformer which extends Transformer based models, such as BERT to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.
Tips:
For an in-detail explanation on how BigBird's attention works, see this blog post.
BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths < 1024, using
original_full is advised as there is no benefit in using block_sparse attention (see the example below).
The code currently uses a window size of 3 blocks and 2 global blocks.
Sequence length must be divisible by the block size.
The current implementation supports only ITC.
The current implementation doesn't support num_random_blocks = 0.
BigBird is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
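For illustration, the attention implementation can be selected when loading the model; a minimal sketch with the public google/bigbird-roberta-base checkpoint:

```python
from transformers import BigBirdModel

# block_sparse attention (the default) pays off for long sequences ...
model = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-base", attention_type="block_sparse", block_size=64, num_random_blocks=3
)

# ... while original_full is advised for sequence lengths < 1024
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")
```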
This model was contributed by vasudevgupta. The original code can be found
here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
BigBirdConfig
[[autodoc]] BigBirdConfig
BigBirdTokenizer
[[autodoc]] BigBirdTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
BigBirdTokenizerFast
[[autodoc]] BigBirdTokenizerFast
BigBird specific outputs
[[autodoc]] models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput
BigBirdModel
[[autodoc]] BigBirdModel
- forward
BigBirdForPreTraining
[[autodoc]] BigBirdForPreTraining
- forward
BigBirdForCausalLM
[[autodoc]] BigBirdForCausalLM
- forward
BigBirdForMaskedLM
[[autodoc]] BigBirdForMaskedLM
- forward
BigBirdForSequenceClassification
[[autodoc]] BigBirdForSequenceClassification
- forward
BigBirdForMultipleChoice
[[autodoc]] BigBirdForMultipleChoice
- forward
BigBirdForTokenClassification
[[autodoc]] BigBirdForTokenClassification
- forward
BigBirdForQuestionAnswering
[[autodoc]] BigBirdForQuestionAnswering
- forward
FlaxBigBirdModel
[[autodoc]] FlaxBigBirdModel
- call
FlaxBigBirdForPreTraining
[[autodoc]] FlaxBigBirdForPreTraining
- call
FlaxBigBirdForCausalLM
[[autodoc]] FlaxBigBirdForCausalLM
- call
FlaxBigBirdForMaskedLM
[[autodoc]] FlaxBigBirdForMaskedLM
- call
FlaxBigBirdForSequenceClassification
[[autodoc]] FlaxBigBirdForSequenceClassification
- call
FlaxBigBirdForMultipleChoice
[[autodoc]] FlaxBigBirdForMultipleChoice
- call
FlaxBigBirdForTokenClassification
[[autodoc]] FlaxBigBirdForTokenClassification
- call
FlaxBigBirdForQuestionAnswering
[[autodoc]] FlaxBigBirdForQuestionAnswering
- call |
Table Transformer
Overview
The Table Transformer model was proposed in PubTables-1M: Towards comprehensive table extraction from unstructured documents by
Brandon Smock, Rohith Pesala, Robin Abraham. The authors introduce a new dataset, PubTables-1M, to benchmark progress in table extraction from unstructured documents,
as well as table structure recognition and functional analysis. The authors train 2 DETR models, one for table detection and one for table structure recognition, dubbed Table Transformers.
The abstract from the paper is the following:
Recently, significant progress has been made applying machine learning to the problem of table structure inference and extraction from unstructured documents.
However, one of the greatest challenges remains the creation of datasets with complete, unambiguous ground truth at scale. To address this, we develop a new, more
comprehensive dataset for table extraction, called PubTables-1M. PubTables-1M contains nearly one million tables from scientific articles, supports multiple input
modalities, and contains detailed header and location information for table structures, making it useful for a wide variety of modeling approaches. It also addresses a significant
source of ground truth inconsistency observed in prior datasets called oversegmentation, using a novel canonicalization procedure. We demonstrate that these improvements lead to a
significant increase in training performance and a more reliable estimate of model performance at evaluation for table structure recognition. Further, we show that transformer-based
object detection models trained on PubTables-1M produce excellent results for all three tasks of detection, structure recognition, and functional analysis without the need for any
special customization for these tasks.
Tips:
The authors released 2 models, one for table detection in documents, one for table structure recognition (the task of recognizing the individual rows, columns etc. in a table).
One can use the [AutoImageProcessor] API to prepare images and optional targets for the model. This will load a [DetrImageProcessor] behind the scenes (see the example below).
Table detection and table structure recognition clarified. Taken from the original paper.
This model was contributed by nielsr. The original code can be
found here.
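A minimal detection sketch along these lines, assuming the publicly released table-detection checkpoint and a document page image of your own:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

image_processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")

image = Image.open("page.png").convert("RGB")  # placeholder: a scanned document page
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert the raw outputs to boxes in (x0, y0, x1, y1) format
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```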
Resources
A demo notebook for the Table Transformer can be found here.
It turns out padding of images is quite important for detection. An interesting Github thread with replies from the authors can be found here.
TableTransformerConfig
[[autodoc]] TableTransformerConfig
TableTransformerModel
[[autodoc]] TableTransformerModel
- forward
TableTransformerForObjectDetection
[[autodoc]] TableTransformerForObjectDetection
- forward |
XLSR-Wav2Vec2
Overview
The XLSR-Wav2Vec2 model was proposed in Unsupervised Cross-Lingual Representation Learning For Speech Recognition by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael
Auli.
The abstract from the paper is the following:
This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw
waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over
masked latent speech representations and jointly learns a quantization of the latents shared across languages. The
resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly
outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction
of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to
a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong
individual models. Analysis shows that the latent discrete speech representations are shared across languages with
increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing
XLSR-53, a large model pretrained in 53 languages.
Tips:
XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
XLSR-Wav2Vec2 model was trained using connectionist temporal classification (CTC) so the model output has to be
decoded using [Wav2Vec2CTCTokenizer].
XLSR-Wav2Vec2's architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2's documentation page.
The original code can be found here. |
CLIP
Overview
The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,
Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP
(Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be
instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing
for the task, similarly to the zero-shot capabilities of GPT-2 and 3.
The abstract from the paper is the following:
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This
restricted form of supervision limits their generality and usability since additional labeled data is needed to specify
any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a
much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes
with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400
million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference
learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study
the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks
such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The
model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need
for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot
without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained
model weights at this https URL.
Usage
CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image
classification. CLIP uses a ViT like transformer to get visual features and a causal language model to get the text
features. Both the text and visual features are then projected to a latent space with identical dimension. The dot
product between the projected image and text features is then used as a similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors
also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder.
The [CLIPImageProcessor] can be used to resize (or rescale) and normalize images for the model.
The [CLIPTokenizer] is used to encode the text. The [CLIPProcessor] wraps
[CLIPImageProcessor] and [CLIPTokenizer] into a single instance to both
encode the text and prepare the images. The following example shows how to get the image-text similarity scores using
[CLIPProcessor] and [CLIPModel].
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```
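If you only need the projected embeddings rather than the similarity logits, the same model exposes [~CLIPModel.get_text_features] and [~CLIPModel.get_image_features]; a short sketch reusing the objects above:

```python
import torch

text_inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)     # shape (2, projection_dim)
    image_embeds = model.get_image_features(**image_inputs)  # shape (1, projection_dim)

# cosine similarity between the image embedding and each text embedding
similarity = torch.nn.functional.cosine_similarity(image_embeds, text_embeds)
```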
This model was contributed by valhalla. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP.
A blog post on How to fine-tune CLIP on 10,000 image-text pairs.
CLIP is supported by this example script.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
CLIPConfig
[[autodoc]] CLIPConfig
- from_text_vision_configs
CLIPTextConfig
[[autodoc]] CLIPTextConfig
CLIPVisionConfig
[[autodoc]] CLIPVisionConfig
CLIPTokenizer
[[autodoc]] CLIPTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
CLIPTokenizerFast
[[autodoc]] CLIPTokenizerFast
CLIPImageProcessor
[[autodoc]] CLIPImageProcessor
- preprocess
CLIPFeatureExtractor
[[autodoc]] CLIPFeatureExtractor
CLIPProcessor
[[autodoc]] CLIPProcessor
CLIPModel
[[autodoc]] CLIPModel
- forward
- get_text_features
- get_image_features
CLIPTextModel
[[autodoc]] CLIPTextModel
- forward
CLIPTextModelWithProjection
[[autodoc]] CLIPTextModelWithProjection
- forward
CLIPVisionModelWithProjection
[[autodoc]] CLIPVisionModelWithProjection
- forward
CLIPVisionModel
[[autodoc]] CLIPVisionModel
- forward
TFCLIPModel
[[autodoc]] TFCLIPModel
- call
- get_text_features
- get_image_features
TFCLIPTextModel
[[autodoc]] TFCLIPTextModel
- call
TFCLIPVisionModel
[[autodoc]] TFCLIPVisionModel
- call
FlaxCLIPModel
[[autodoc]] FlaxCLIPModel
- call
- get_text_features
- get_image_features
FlaxCLIPTextModel
[[autodoc]] FlaxCLIPTextModel
- call
FlaxCLIPVisionModel
[[autodoc]] FlaxCLIPVisionModel
- call |
Vision Transformer (ViT)
Overview
The Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition
at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob
Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining
very good results compared to familiar convolutional architectures.
The abstract from the paper is the following:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its
applications to computer vision remain limited. In vision, attention is either applied in conjunction with
convolutional networks, or used to replace certain components of convolutional networks while keeping their overall
structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to
sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of
data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.),
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring
substantially fewer computational resources to train.
Tips:
Demo notebooks regarding inference as well as fine-tuning ViT on custom data can be found here.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image, which can be
used for classification. The authors also add absolute position embeddings, and feed the resulting sequence of
vectors to a standard Transformer encoder.
As the Vision Transformer expects each image to be of the same size (resolution), one can use
[ViTImageProcessor] to resize (or rescale) and normalize images for the model (a short classification sketch follows these tips).
Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of
each checkpoint. For example, google/vit-base-patch16-224 refers to a base-sized architecture with patch
resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the hub.
The available checkpoints are either (1) pre-trained on ImageNet-21k (a collection of
14 million images and 21k classes) only, or (2) also fine-tuned on ImageNet (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
The Vision Transformer was pre-trained using a resolution of 224x224. During fine-tuning, it is often beneficial to
use a higher resolution than pre-training (Touvron et al., 2019), (Kolesnikov
et al., 2020). In order to fine-tune at higher resolution, the authors perform
2D interpolation of the pre-trained position embeddings, according to their location in the original image.
The best results are obtained with supervised pre-training, which is not the case in NLP. The authors also performed
an experiment with a self-supervised pre-training objective, namely masked patched prediction (inspired by masked
language modeling). With this approach, the smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant
improvement of 2% over training from scratch, but still 4% behind supervised pre-training.
ViT architecture. Taken from the original paper.
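A minimal classification sketch using an ImageNet fine-tuned checkpoint:

```python
import requests
import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```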
Following the original Vision Transformer, some follow-up works have been made:
DeiT (Data-efficient Image Transformers) by Facebook AI. DeiT models are distilled vision transformers.
The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [ViTModel] or
[ViTForImageClassification]. There are 4 variants available (in 3 different sizes): facebook/deit-tiny-patch16-224,
facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and facebook/deit-base-patch16-384. Note that one should
use [DeiTImageProcessor] in order to prepare images for the model.
BEiT (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained
vision transformers using a self-supervised method inspired by BERT (masked image modeling) and based on a VQ-VAE.
DINO (a method for self-supervised training of Vision Transformers) by Facebook AI. Vision Transformers trained using
the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting
objects, without having ever been trained to do so. DINO checkpoints can be found on the hub.
MAE (Masked Autoencoders) by Facebook AI. By pre-training Vision Transformers to reconstruct pixel values for a high portion
(75%) of masked patches (using an asymmetric encoder-decoder architecture), the authors show that this simple method outperforms
supervised pre-training after fine-tuning.
This model was contributed by nielsr. The original code (written in JAX) can be
found here.
Note that we converted the weights from Ross Wightman's timm library, who already converted the weights from JAX to PyTorch. Credits
go to him!
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
[ViTForImageClassification] is supported by this example script and notebook.
A blog on fine-tuning [ViTForImageClassification] on a custom dataset can be found here.
More demo notebooks to fine-tune [ViTForImageClassification] can be found here.
Image classification task guide
Besides that:
[ViTForMaskedImageModeling] is supported by this example script.
ViTForImageClassification is supported by:
A blog post on how to Fine-Tune ViT for Image Classification with Hugging Face Transformers
A blog post on Image Classification with Hugging Face Transformers and Keras
A notebook on Fine-tuning for Image Classification with Hugging Face Transformers
A notebook on how to Fine-tune the Vision Transformer on CIFAR-10 with the Hugging Face Trainer
A notebook on how to Fine-tune the Vision Transformer on CIFAR-10 with PyTorch Lightning
⚗️ Optimization
A blog post on how to Accelerate Vision Transformer (ViT) with Quantization using Optimum
⚡️ Inference
A notebook on Quick demo: Vision Transformer (ViT) by Google Brain
🚀 Deploy
A blog post on Deploying Tensorflow Vision Models in Hugging Face with TF Serving
A blog post on Deploying Hugging Face ViT on Vertex AI
A blog post on Deploying Hugging Face ViT on Kubernetes with TF Serving
ViTConfig
[[autodoc]] ViTConfig
ViTFeatureExtractor
[[autodoc]] ViTFeatureExtractor
- call
ViTImageProcessor
[[autodoc]] ViTImageProcessor
- preprocess
ViTModel
[[autodoc]] ViTModel
- forward
ViTForMaskedImageModeling
[[autodoc]] ViTForMaskedImageModeling
- forward
ViTForImageClassification
[[autodoc]] ViTForImageClassification
- forward
TFViTModel
[[autodoc]] TFViTModel
- call
TFViTForImageClassification
[[autodoc]] TFViTForImageClassification
- call
FlaxViTModel
[[autodoc]] FlaxViTModel
- call
FlaxViTForImageClassification
[[autodoc]] FlaxViTForImageClassification
- call |
OPT
Overview
The OPT model was proposed in Open Pre-trained Transformer Language Models by Meta AI.
OPT is a series of open-sourced large causal language models which perform similarly in performance to GPT-3.
The abstract from the paper is the following:
Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
Tips:
- OPT has the same architecture as [BartDecoder].
- Contrary to GPT2, OPT adds the EOS token </s> to the beginning of every prompt.
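A short generation sketch with the facebook/opt-350m checkpoint (the prompt and sampling parameters are illustrative):

```python
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")

prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```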
This model was contributed by Arthur Zucker, Younes Belkada, and Patrick Von Platen.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OPT. If you're
interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook on fine-tuning OPT with PEFT, bitsandbytes, and Transformers. 🌎
A blog post on decoding strategies with OPT.
Causal language modeling chapter of the 🤗 Hugging Face Course.
[OPTForCausalLM] is supported by this causal language modeling example script and notebook.
[TFOPTForCausalLM] is supported by this causal language modeling example script and notebook.
[FlaxOPTForCausalLM] is supported by this causal language modeling example script.
Text classification task guide
[OPTForSequenceClassification] is supported by this example script and notebook.
[OPTForQuestionAnswering] is supported by this question answering example script and notebook.
Question answering chapter
of the 🤗 Hugging Face Course.
⚡️ Inference
A blog post on How 🤗 Accelerate runs very large models thanks to PyTorch with OPT.
OPTConfig
[[autodoc]] OPTConfig
OPTModel
[[autodoc]] OPTModel
- forward
OPTForCausalLM
[[autodoc]] OPTForCausalLM
- forward
TFOPTModel
[[autodoc]] TFOPTModel
- call
TFOPTForCausalLM
[[autodoc]] TFOPTForCausalLM
- call
OPTForSequenceClassification
[[autodoc]] OPTForSequenceClassification
- forward
OPTForQuestionAnswering
[[autodoc]] OPTForQuestionAnswering
- forward
FlaxOPTModel
[[autodoc]] FlaxOPTModel
- call
FlaxOPTForCausalLM
[[autodoc]] FlaxOPTForCausalLM
- call |
Graphormer
Overview
The Graphormer model was proposed in Do Transformers Really Perform Bad for Graph Representation? by
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen and Tie-Yan Liu. It is a Graph Transformer model, modified to allow computations on graphs instead of text sequences by generating embeddings and features of interest during preprocessing and collation, then using a modified attention.
The abstract from the paper is the following:
The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture, and could attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight to utilizing Transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and exhibit that with our ways of encoding the structural information of graphs, many popular GNN variants could be covered as the special cases of Graphormer.
Tips:
This model will not work well on large graphs (more than 100 nodes/edges), as memory usage will explode.
You can reduce the batch size, increase your RAM, or decrease the UNREACHABLE_NODE_DISTANCE parameter in algos_graphormer.pyx, but it will be hard to go above 700 nodes/edges.
This model does not use a tokenizer, but instead a special collator during training.
This model was contributed by clefourrier. The original code can be found here.
GraphormerConfig
[[autodoc]] GraphormerConfig
GraphormerModel
[[autodoc]] GraphormerModel
- forward
GraphormerForGraphClassification
[[autodoc]] GraphormerForGraphClassification
- forward |
ProphetNet
DISCLAIMER: If you see something strange, file a Github Issue and assign
@patrickvonplaten
Overview
The ProphetNet model was proposed in ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei
Zhang, Ming Zhou on 13 Jan, 2020.
ProphetNet is an encoder-decoder model and can predict n-future tokens for "ngram" language modeling instead of just
the next token.
The abstract from the paper is the following:
In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel
self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for
abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.
Tips:
ProphetNet is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
The model architecture is based on the original Transformer, but replaces the "standard" self-attention mechanism in the decoder by a main self-attention mechanism and a self and n-stream (predict) self-attention mechanism.
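A rough usage sketch with the pretrained checkpoint (the input text is illustrative, and a summarization fine-tuned variant would produce better summaries):

```python
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

article = (
    "the us state department said wednesday it had received no formal word from bolivia "
    "that it was expelling the us ambassador there."
)
inputs = tokenizer(article, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```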
The Authors' code can be found here.
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
ProphetNetConfig
[[autodoc]] ProphetNetConfig
ProphetNetTokenizer
[[autodoc]] ProphetNetTokenizer
ProphetNet specific outputs
[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput
[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput
[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput
[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput
ProphetNetModel
[[autodoc]] ProphetNetModel
- forward
ProphetNetEncoder
[[autodoc]] ProphetNetEncoder
- forward
ProphetNetDecoder
[[autodoc]] ProphetNetDecoder
- forward
ProphetNetForConditionalGeneration
[[autodoc]] ProphetNetForConditionalGeneration
- forward
ProphetNetForCausalLM
[[autodoc]] ProphetNetForCausalLM
- forward |
Transformer XL
Overview
The Transformer-XL model was proposed in Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan
Salakhutdinov. It's a causal (uni-directional) transformer with relative positioning (sinusoidal) embeddings which can
reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax
inputs and outputs (tied).
The abstract from the paper is the following:
Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the
setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency
beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a
novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the
context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450%
longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+
times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of
bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn
Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably
coherent, novel text articles with thousands of tokens.
Tips:
Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The
original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left.
Transformer-XL is one of the few models that has no sequence length limit.
Same as a regular GPT model, but introduces a recurrence mechanism for two consecutive segments (similar to a regular RNN with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span across multiple documents, and segments are fed in order to the model.
Basically, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments.
This changes the positional embeddings to positional relative embeddings (as the regular positional embeddings would give the same results in the current input and the current hidden state at a given position) and needs to make some adjustments in the way attention scores are computed.
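A minimal sketch of the recurrence mechanism: the mems returned for one segment are passed back in when processing the next one (the sentences are illustrative):

```python
import torch
from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

segment_1 = tokenizer("The cathedral towers over the old town", return_tensors="pt").input_ids
segment_2 = tokenizer("and its bells can be heard for miles", return_tensors="pt").input_ids

with torch.no_grad():
    outputs_1 = model(segment_1)                       # no memory yet
    outputs_2 = model(segment_2, mems=outputs_1.mems)  # attends to the cached states of segment 1
```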
This model was contributed by thomwolf. The original code can be found here.
TransformerXL does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035
Documentation resources
Text classification task guide
Causal language modeling task guide
TransfoXLConfig
[[autodoc]] TransfoXLConfig
TransfoXLTokenizer
[[autodoc]] TransfoXLTokenizer
- save_vocabulary
TransfoXL specific outputs
[[autodoc]] models.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput
[[autodoc]] models.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput
[[autodoc]] models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput
[[autodoc]] models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput
TransfoXLModel
[[autodoc]] TransfoXLModel
- forward
TransfoXLLMHeadModel
[[autodoc]] TransfoXLLMHeadModel
- forward
TransfoXLForSequenceClassification
[[autodoc]] TransfoXLForSequenceClassification
- forward
TFTransfoXLModel
[[autodoc]] TFTransfoXLModel
- call
TFTransfoXLLMHeadModel
[[autodoc]] TFTransfoXLLMHeadModel
- call
TFTransfoXLForSequenceClassification
[[autodoc]] TFTransfoXLForSequenceClassification
- call
Internal Layers
[[autodoc]] AdaptiveEmbedding
[[autodoc]] TFAdaptiveEmbedding
Swin Transformer V2
Overview
The Swin Transformer V2 model was proposed in Swin Transformer V2: Scaling Up Capacity and Resolution by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
The abstract from the paper is the following:
Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time.
Tips:
- One can use the [AutoImageProcessor] API to prepare images for the model.
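For instance, a minimal image-classification sketch with [AutoImageProcessor] might look as follows. The checkpoint name is an assumption; any Swin Transformer V2 checkpoint fine-tuned for image classification works the same way:

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, Swinv2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# assumed checkpoint; swap in any Swin V2 image classification checkpoint
checkpoint = "microsoft/swinv2-tiny-patch4-window8-256"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Swinv2ForImageClassification.from_pretrained(checkpoint)

inputs = image_processor(image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```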
This model was contributed by nandwalritik.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer v2.
[Swinv2ForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
Besides that:
[Swinv2ForMaskedImageModeling] is supported by this example script.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Swinv2Config
[[autodoc]] Swinv2Config
Swinv2Model
[[autodoc]] Swinv2Model
- forward
Swinv2ForMaskedImageModeling
[[autodoc]] Swinv2ForMaskedImageModeling
- forward
Swinv2ForImageClassification
[[autodoc]] transformers.Swinv2ForImageClassification
- forward
LLaMA
Overview
The LLaMA model was proposed in LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. It is a collection of foundation language models ranging from 7B to 65B parameters.
The abstract from the paper is the following:
*We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.*
Tips:
Weights for the LLaMA models can be obtained by filling out this form
After downloading the weights, they will need to be converted to the Hugging Face Transformers format using the conversion script. The script can be called with the following (example) command:
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```
After conversion, the model and tokenizer can be loaded via:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
```
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions
come in several checkpoints, each checkpoint contains a part of every weight of the model, so they all need to be loaded in RAM). For the 65B model, this means 130GB of RAM is needed.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
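A short illustration of that decoding behavior, assuming the converted tokenizer from the step above is available at /output/path:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/output/path")

ids = tokenizer.encode("Banana split", add_special_tokens=False)
# no prefix space is prepended when the first token starts a word
print(repr(tokenizer.decode(ids)))  # 'Banana split'
```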
This model was contributed by zphang with contributions from BlackSamorez. The code of the implementation in Hugging Face is based on GPT-NeoX here. The original code of the authors can be found here.
Based on the original LLaMA model, Meta AI has released some follow-up works:
Llama2: Llama2 is an improved version of Llama with some architectural tweaks (Grouped Query Attention) and is pre-trained on 2 trillion tokens. Refer to the documentation of Llama2 which can be found here.
LlamaConfig
[[autodoc]] LlamaConfig
LlamaTokenizer
[[autodoc]] LlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LlamaTokenizerFast
[[autodoc]] LlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
LlamaModel
[[autodoc]] LlamaModel
- forward
LlamaForCausalLM
[[autodoc]] LlamaForCausalLM
- forward
LlamaForSequenceClassification
[[autodoc]] LlamaForSequenceClassification
- forward
Vision Encoder Decoder Models
Overview
The [VisionEncoderDecoderModel] can be used to initialize an image-to-text model with any
pretrained Transformer-based vision model as the encoder (e.g. ViT, BEiT, DeiT, Swin)
and any pretrained language model as the decoder (e.g. RoBERTa, GPT2, BERT, DistilBERT).
The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for
example) TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang,
Zhoujun Li, Furu Wei.
After such a [VisionEncoderDecoderModel] has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples below
for more information).
An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates
the caption. Another example is optical character recognition. Refer to TrOCR, which is an instance of [VisionEncoderDecoderModel].
Randomly initializing VisionEncoderDecoderModel from model configurations.
[VisionEncoderDecoderModel] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [ViTModel] configuration for the encoder
and the default [BertForCausalLM] configuration for the decoder.
```python
from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

config_encoder = ViTConfig()
config_decoder = BertConfig()

config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = VisionEncoderDecoderModel(config=config)
```
Initializing VisionEncoderDecoderModel from a pretrained encoder and a pretrained decoder.
[VisionEncoderDecoderModel] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, e.g. Swin, can serve as the encoder and both pretrained auto-encoding models, e.g. BERT, pretrained causal language models, e.g. GPT2, as well as the pretrained decoder part of sequence-to-sequence models, e.g. decoder of BART, can be used as the decoder.
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [VisionEncoderDecoderModel] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post.
To do so, the VisionEncoderDecoderModel class provides a [VisionEncoderDecoderModel.from_encoder_decoder_pretrained] method.
```python
from transformers import VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/swin-base-patch4-window7-224-in22k", "bert-base-uncased"
)
```
Loading an existing VisionEncoderDecoderModel checkpoint and performing inference.
To load fine-tuned checkpoints of the VisionEncoderDecoderModel class, [VisionEncoderDecoderModel] provides the from_pretrained() method just like any other model architecture in Transformers.
To perform inference, one uses the [generate] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
```python
import requests
from PIL import Image
from transformers import GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel

# load a fine-tuned image captioning model and corresponding tokenizer and image processor
model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
tokenizer = GPT2TokenizerFast.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

# let's perform inference on an image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = image_processor(image, return_tensors="pt").pixel_values

# autoregressively generate caption (uses greedy decoding by default)
generated_ids = model.generate(pixel_values)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
# a cat laying on a blanket next to a cat laying on a bed
```
Loading a PyTorch checkpoint into TFVisionEncoderDecoderModel.
[TFVisionEncoderDecoderModel.from_pretrained] currently doesn't support initializing the model from a
PyTorch checkpoint. Passing from_pt=True to this method will throw an exception. If there are only PyTorch
checkpoints for a particular vision encoder-decoder model, a workaround is:
```python
from transformers import VisionEncoderDecoderModel, TFVisionEncoderDecoderModel

_model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

_model.encoder.save_pretrained("./encoder")
_model.decoder.save_pretrained("./decoder")

model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
)

# This is only for copying some specific attributes of this particular model.
model.config = _model.config
```
Training
Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs.
As you can see, only 2 inputs are required for the model in order to compute a loss: pixel_values (which are the
images) and labels (which are the input_ids of the encoded target sequence).
```python
from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel
from datasets import load_dataset

image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "bert-base-uncased"
)

model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
pixel_values = image_processor(image, return_tensors="pt").pixel_values

labels = tokenizer(
    "an image of two cats chilling on a couch",
    return_tensors="pt",
).input_ids

# the forward function automatically creates the correct decoder_input_ids
loss = model(pixel_values=pixel_values, labels=labels).loss
```
This model was contributed by nielsr. This model's TensorFlow and Flax versions
were contributed by ydshieh.
VisionEncoderDecoderConfig
[[autodoc]] VisionEncoderDecoderConfig
VisionEncoderDecoderModel
[[autodoc]] VisionEncoderDecoderModel
- forward
- from_encoder_decoder_pretrained
TFVisionEncoderDecoderModel
[[autodoc]] TFVisionEncoderDecoderModel
- call
- from_encoder_decoder_pretrained
FlaxVisionEncoderDecoderModel
[[autodoc]] FlaxVisionEncoderDecoderModel
- call
- from_encoder_decoder_pretrained
Data2Vec
Overview
The Data2Vec model was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.
Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images.
Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets.
The abstract from the paper is the following:
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and
objectives differ widely because they were developed with a single modality in mind. To get us closer to general
self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech,
NLP or computer vision. The core idea is to predict latent representations of the full input data based on a
masked view of the input in a self-distillation setup using a standard Transformer architecture.
Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which
are local in nature, data2vec predicts contextualized latent representations that contain information from
the entire input. Experiments on the major benchmarks of speech recognition, image classification, and
natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec.
Tips:
Data2VecAudio, Data2VecText, and Data2VecVision have all been trained using the same self-supervised learning method.
For Data2VecAudio, preprocessing is identical to [Wav2Vec2Model], including feature extraction (see the sketch after these tips).
For Data2VecText, preprocessing is identical to [RobertaModel], including tokenization.
For Data2VecVision, preprocessing is identical to [BeitModel], including feature extraction.
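Because preprocessing mirrors Wav2Vec2, RoBERTa and BEiT respectively, the usual pipelines apply directly. Here is a minimal speech-recognition sketch with Data2VecAudio; the checkpoint name is an assumption, and any Data2VecAudio checkpoint fine-tuned for CTC works the same way:

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, Data2VecAudioForCTC

# assumed checkpoint fine-tuned for CTC speech recognition
checkpoint = "facebook/data2vec-audio-base-960h"
processor = AutoProcessor.from_pretrained(checkpoint)
model = Data2VecAudioForCTC.from_pretrained(checkpoint)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```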
This model was contributed by edugp and patrickvonplaten.
sayakpaul and Rocketknight1 contributed Data2Vec for vision in TensorFlow.
The original code (for NLP and Speech) can be found here.
The original code for vision can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Data2Vec.
[Data2VecVisionForImageClassification] is supported by this example script and notebook.
To fine-tune [TFData2VecVisionForImageClassification] on a custom dataset, see this notebook.
Data2VecText documentation resources
- Text classification task guide
- Token classification task guide
- Question answering task guide
- Causal language modeling task guide
- Masked language modeling task guide
- Multiple choice task guide
Data2VecAudio documentation resources
- Audio classification task guide
- Automatic speech recognition task guide
Data2VecVision documentation resources
- Image classification
- Semantic segmentation
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Data2VecTextConfig
[[autodoc]] Data2VecTextConfig
Data2VecAudioConfig
[[autodoc]] Data2VecAudioConfig
Data2VecVisionConfig
[[autodoc]] Data2VecVisionConfig
Data2VecAudioModel
[[autodoc]] Data2VecAudioModel
- forward
Data2VecAudioForAudioFrameClassification
[[autodoc]] Data2VecAudioForAudioFrameClassification
- forward
Data2VecAudioForCTC
[[autodoc]] Data2VecAudioForCTC
- forward
Data2VecAudioForSequenceClassification
[[autodoc]] Data2VecAudioForSequenceClassification
- forward
Data2VecAudioForXVector
[[autodoc]] Data2VecAudioForXVector
- forward
Data2VecTextModel
[[autodoc]] Data2VecTextModel
- forward
Data2VecTextForCausalLM
[[autodoc]] Data2VecTextForCausalLM
- forward
Data2VecTextForMaskedLM
[[autodoc]] Data2VecTextForMaskedLM
- forward
Data2VecTextForSequenceClassification
[[autodoc]] Data2VecTextForSequenceClassification
- forward
Data2VecTextForMultipleChoice
[[autodoc]] Data2VecTextForMultipleChoice
- forward
Data2VecTextForTokenClassification
[[autodoc]] Data2VecTextForTokenClassification
- forward
Data2VecTextForQuestionAnswering
[[autodoc]] Data2VecTextForQuestionAnswering
- forward
Data2VecVisionModel
[[autodoc]] Data2VecVisionModel
- forward
Data2VecVisionForImageClassification
[[autodoc]] Data2VecVisionForImageClassification
- forward
Data2VecVisionForSemanticSegmentation
[[autodoc]] Data2VecVisionForSemanticSegmentation
- forward
TFData2VecVisionModel
[[autodoc]] TFData2VecVisionModel
- call
TFData2VecVisionForImageClassification
[[autodoc]] TFData2VecVisionForImageClassification
- call
TFData2VecVisionForSemanticSegmentation
[[autodoc]] TFData2VecVisionForSemanticSegmentation
- call