Models
The base classes [PreTrainedModel], [TFPreTrainedModel], and
[FlaxPreTrainedModel] implement the common methods for loading/saving a model either from a local
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
S3 repository).
[PreTrainedModel] and [TFPreTrainedModel] also implement a few methods which
are common among all the models to:
resize the input token embeddings when new tokens are added to the vocabulary
prune the attention heads of the model.
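For example, resizing the input token embeddings after adding new tokens is typically a two-step call, as in this minimal sketch (the BERT checkpoint is only an illustrative choice):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# add the new tokens to the tokenizer, then grow the model's input embedding matrix to match
tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])
model.resize_token_embeddings(len(tokenizer))
```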
The other methods that are common to each model are defined in [~modeling_utils.ModuleUtilsMixin]
(for the PyTorch models) and [~modeling_tf_utils.TFModelUtilsMixin] (for the TensorFlow models) or
for text generation, [~generation.GenerationMixin] (for the PyTorch models),
[~generation.TFGenerationMixin] (for the TensorFlow models) and
[~generation.FlaxGenerationMixin] (for the Flax/JAX models).
PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all
Large model loading
In Transformers 4.20.0, the [~PreTrainedModel.from_pretrained] method has been reworked to accommodate large models using Accelerate. This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model, then loading the pretrained weights inside it (which takes twice the size of the model in RAM, one for the randomly initialized model, one for the weights), there is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded.
This option can be activated with low_cpu_mem_usage=True. The model is first created on the Meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only.
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (only works for inference for now). With device_map="auto", Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
When passing a device_map, low_cpu_mem_usage is automatically set to True, so you don't need to specify it:
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
You can inspect how the model was split across devices by looking at its hf_device_map attribute:
py
t0pp.hf_device_map
python out
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
'lm_head': 'cpu'}
You can also write your own device map following the same format (a dictionary mapping layer names to devices). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory):
python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
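Such a custom map is then passed to from_pretrained in place of "auto"; a short sketch reusing the T0pp checkpoint from the snippets above:

```python
from transformers import AutoModelForSeq2SeqLM

device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map=device_map)
```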
Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like torch.float16) or use direct quantization techniques as described below.
Model Instantiation dtype
Under PyTorch, a model is normally instantiated in torch.float32. This can be an issue if one tries to
load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can
either explicitly pass the desired dtype using the torch_dtype argument:
python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
or, if you want the model to always load in the most optimal memory pattern, you can use the special value "auto",
and then dtype will be automatically derived from the model's weights:
python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
Models instantiated from scratch can also be told which dtype to use with:
python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config, torch_dtype=torch.float16)
Due to PyTorch design, this functionality is only available for floating-point dtypes.
ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint
Speech2Text2
Overview
The Speech2Text2 model is used together with Wav2Vec2 for Speech Translation models proposed in
Large-Scale Self- and Semi-Supervised Learning for Speech Translation by
Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
Speech2Text2 is a decoder-only transformer model that can be used with any encoder-only speech model, such as
Wav2Vec2 or HuBERT, for speech-to-text tasks. Please refer to the
SpeechEncoderDecoder class for how to combine Speech2Text2 with any encoder-only speech
model.
This model was contributed by Patrick von Platen.
The original code can be found here.
Tips:
Speech2Text2 achieves state-of-the-art results on the CoVoST Speech Translation dataset. For more information, see
the official models.
Speech2Text2 is always used within the SpeechEncoderDecoder framework.
Speech2Text2's tokenizer is based on fastBPE.
Inference
Speech2Text2's [SpeechEncoderDecoderModel] model accepts raw waveform input values from speech and
makes use of [~generation.GenerationMixin.generate] to translate the input speech
autoregressively to the target language.
The [Wav2Vec2FeatureExtractor] class is responsible for preprocessing the input speech and
[Speech2Text2Tokenizer] decodes the generated target tokens to the target string. The
[Speech2Text2Processor] wraps [Wav2Vec2FeatureExtractor] and
[Speech2Text2Tokenizer] into a single instance to both extract the input features and decode the
predicted token ids.
Step-by-step Speech Translation
python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs=inputs["input_values"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
Speech Translation via Pipelines
The automatic speech recognition pipeline can also be used to translate speech in just a couple of lines of code:
python
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline(
"automatic-speech-recognition",
model="facebook/s2t-wav2vec2-large-en-de",
feature_extractor="facebook/s2t-wav2vec2-large-en-de",
)
translation_de = asr(librispeech_en[0]["file"])
See the model hub to look for Speech2Text2 checkpoints.
Documentation resources
Causal language modeling task guide
Speech2Text2Config
[[autodoc]] Speech2Text2Config
Speech2Text2Tokenizer
[[autodoc]] Speech2Text2Tokenizer
- batch_decode
- decode
- save_vocabulary
Speech2Text2Processor
[[autodoc]] Speech2Text2Processor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
Speech2Text2ForCausalLM
[[autodoc]] Speech2Text2ForCausalLM
- forward
Perceiver
Overview
The Perceiver IO model was proposed in Perceiver IO: A General Architecture for Structured Inputs &
Outputs by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch,
Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M.
Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
Perceiver IO is a generalization of Perceiver to handle arbitrary outputs in
addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to
classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio.
This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is
linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process
inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example,
Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs.
The abstract from the paper is the following:
The recently-proposed Perceiver model obtains good results on several domains (images, audio, multimodal, point
clouds) while scaling linearly in compute and memory with the input size. While the Perceiver supports many kinds of
inputs, it can only produce very simple outputs such as class scores. Perceiver IO overcomes this limitation without
sacrificing the original's appealing properties by learning to flexibly query the model's latent space to produce
outputs of arbitrary size and semantics. Perceiver IO still decouples model depth from data size and still scales
linearly with data size, but now with respect to both input and output sizes. The full Perceiver IO model achieves
strong results on tasks with highly structured output spaces, such as natural language and visual understanding,
StarCraft II, and multi-task and multi-modal domains. As highlights, Perceiver IO matches a Transformer-based BERT
baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art
performance on Sintel optical flow estimation.
Here's a TLDR explaining how Perceiver works:
The main problem with the self-attention mechanism of the Transformer is that the time and memory requirements scale
quadratically with the sequence length. Hence, models like BERT and RoBERTa are limited to a max sequence length of 512
tokens. Perceiver aims to solve this issue by, instead of performing self-attention on the inputs, perform it on a set
of latent variables, and only use the inputs for cross-attention. In this way, the time and memory requirements don't
depend on the length of the inputs anymore, as one uses a fixed amount of latent variables, like 256 or 512. These are
randomly initialized, after which they are trained end-to-end using backpropagation.
Internally, [PerceiverModel] will create the latents, which is a tensor of shape (batch_size, num_latents,
d_latents). One must provide inputs (which could be text, images, audio, you name it!) to the model, which it will
use to perform cross-attention with the latents. The output of the Perceiver encoder is a tensor of the same shape. One
can then, similar to BERT, convert the last hidden states of the latents to classification logits by averaging along
the sequence dimension, and placing a linear layer on top of that to project the d_latents to num_labels.
This was the idea of the original Perceiver paper. However, it could only output classification logits. In a follow-up
work, PerceiverIO, they generalized it to let the model also produce outputs of arbitrary size. How, you might ask? The
idea is actually relatively simple: one defines outputs of an arbitrary size, and then applies cross-attention with the
last hidden states of the latents, using the outputs as queries, and the latents as keys and values.
So let's say one wants to perform masked language modeling (BERT-style) with the Perceiver. As the Perceiver's input
length will not have an impact on the computation time of the self-attention layers, one can provide raw bytes,
providing inputs of length 2048 to the model. If one now masks out certain of these 2048 tokens, one can define the
outputs as being of shape: (batch_size, 2048, 768). Next, one performs cross-attention with the final hidden states
of the latents to update the outputs tensor. After cross-attention, one still has a tensor of shape (batch_size,
2048, 768). One can then place a regular language modeling head on top, to project the last dimension to the
vocabulary size of the model, i.e. creating logits of shape (batch_size, 2048, 262) (as Perceiver uses a vocabulary
size of 262 byte IDs).
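To make the byte-level masked language modeling described above concrete, here is a minimal sketch using the public deepmind/language-perceiver checkpoint (the byte offsets 52:61 correspond to " missing." in this exact example sentence, accounting for the leading special token):

```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

text = "This is an incomplete sentence where some words are missing."
inputs = tokenizer(text, padding="max_length", return_tensors="pt")

# mask the bytes corresponding to " missing." (the tokenizer operates on raw UTF-8 bytes)
inputs["input_ids"][0, 52:61] = tokenizer.mask_token_id

with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch_size, 2048, 262)

predicted_ids = logits[0, 52:61].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```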
Perceiver IO architecture. Taken from the original paper
This model was contributed by nielsr. The original code can be found
here.
Tips:
The quickest way to get started with the Perceiver is by checking the tutorial
notebooks.
Refer to the blog post if you want to fully understand how the model works and
is implemented in the library. Note that the models available in the library only showcase some examples of what you can do
with the Perceiver. There are many more use cases, including question answering, named-entity recognition, object detection,
audio classification, video classification, etc.
Note:
Perceiver does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035
Documentation resources
Text classification task guide
Masked language modeling task guide
Image classification task guide
Perceiver specific outputs
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverModelOutput
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverDecoderOutput
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassifierOutput
PerceiverConfig
[[autodoc]] PerceiverConfig
PerceiverTokenizer
[[autodoc]] PerceiverTokenizer
- call
PerceiverFeatureExtractor
[[autodoc]] PerceiverFeatureExtractor
- call
PerceiverImageProcessor
[[autodoc]] PerceiverImageProcessor
- preprocess
PerceiverTextPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverTextPreprocessor
PerceiverImagePreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverImagePreprocessor
PerceiverOneHotPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverOneHotPreprocessor
PerceiverAudioPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor
PerceiverMultimodalPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor
PerceiverProjectionDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionDecoder
PerceiverBasicDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicDecoder
PerceiverClassificationDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationDecoder
PerceiverOpticalFlowDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder
PerceiverBasicVideoAutoencodingDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicVideoAutoencodingDecoder
PerceiverMultimodalDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder
PerceiverProjectionPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor
PerceiverAudioPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor
PerceiverClassificationPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor
PerceiverMultimodalPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor
PerceiverModel
[[autodoc]] PerceiverModel
- forward
PerceiverForMaskedLM
[[autodoc]] PerceiverForMaskedLM
- forward
PerceiverForSequenceClassification
[[autodoc]] PerceiverForSequenceClassification
- forward
PerceiverForImageClassificationLearned
[[autodoc]] PerceiverForImageClassificationLearned
- forward
PerceiverForImageClassificationFourier
[[autodoc]] PerceiverForImageClassificationFourier
- forward
PerceiverForImageClassificationConvProcessing
[[autodoc]] PerceiverForImageClassificationConvProcessing
- forward
PerceiverForOpticalFlow
[[autodoc]] PerceiverForOpticalFlow
- forward
PerceiverForMultimodalAutoencoding
[[autodoc]] PerceiverForMultimodalAutoencoding
- forward
RAG
Overview
Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and
sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize to generate
outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing
both retrieval and generation to adapt to downstream tasks.
It is based on the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks by Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir
Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
The abstract from the paper is the following:
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve
state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely
manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind
task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge
remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric
memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a
general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained
parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a
pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a
pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages
across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our
models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks,
outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation
tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art
parametric-only seq2seq baseline.
This model was contributed by ola13.
Tips:
- Retrieval-augmented generation (“RAG”) models combine the powers of pretrained dense retrieval (DPR) and Seq2Seq models. RAG models retrieve docs, pass them to a seq2seq model, then marginalize to generate outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt to downstream tasks.
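For illustration, a minimal end-to-end generation sketch could look as follows (assuming the facebook/rag-sequence-nq checkpoint; use_dummy_dataset=True avoids downloading the full Wikipedia index and the retriever additionally requires the datasets and faiss libraries):

```python
import torch
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
# use_dummy_dataset=True loads a small dummy index instead of the full Wikipedia index
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```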
RagConfig
[[autodoc]] RagConfig
RagTokenizer
[[autodoc]] RagTokenizer
Rag specific outputs
[[autodoc]] models.rag.modeling_rag.RetrievAugLMMarginOutput
[[autodoc]] models.rag.modeling_rag.RetrievAugLMOutput
RagRetriever
[[autodoc]] RagRetriever
RagModel
[[autodoc]] RagModel
- forward
RagSequenceForGeneration
[[autodoc]] RagSequenceForGeneration
- forward
- generate
RagTokenForGeneration
[[autodoc]] RagTokenForGeneration
- forward
- generate
TFRagModel
[[autodoc]] TFRagModel
- call
TFRagSequenceForGeneration
[[autodoc]] TFRagSequenceForGeneration
- call
- generate
TFRagTokenForGeneration
[[autodoc]] TFRagTokenForGeneration
- call
- generate
FLAN-T5
Overview
FLAN-T5 was released in the paper Scaling Instruction-Finetuned Language Models - it is an enhanced version of T5 that has been finetuned on a mixture of tasks.
One can directly use FLAN-T5 weights without finetuning the model:
python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Pour a cup of bolognese into a large bowl and add the pasta']
FLAN-T5 includes the same improvements as T5 version 1.1 (see here for the full details of the model's improvements).
Google has released the following variants:
google/flan-t5-small
google/flan-t5-base
google/flan-t5-large
google/flan-t5-xl
google/flan-t5-xxl.
One can refer to T5's documentation page for all tips, code examples and notebooks, as well as the FLAN-T5 model card for more details regarding training and evaluation of the model.
The original checkpoints can be found here.
DPT
Overview
The DPT model was proposed in Vision Transformers for Dense Prediction by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
DPT is a model that leverages the Vision Transformer (ViT) as backbone for dense prediction tasks like semantic segmentation and depth estimation.
The abstract from the paper is the following:
We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art.
DPT architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
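As a quick illustration of monocular depth estimation, a sketch using the Intel/dpt-large checkpoint might look like this (see the resources below for full notebooks):

```python
import torch
import requests
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth

# resize the prediction back to the original image size
depth = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False
)
```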
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT.
Demo notebooks for [DPTForDepthEstimation] can be found here.
Semantic segmentation task guide
Monocular depth estimation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DPTConfig
[[autodoc]] DPTConfig
DPTFeatureExtractor
[[autodoc]] DPTFeatureExtractor
- call
- post_process_semantic_segmentation
DPTImageProcessor
[[autodoc]] DPTImageProcessor
- preprocess
- post_process_semantic_segmentation
DPTModel
[[autodoc]] DPTModel
- forward
DPTForDepthEstimation
[[autodoc]] DPTForDepthEstimation
- forward
DPTForSemanticSegmentation
[[autodoc]] DPTForSemanticSegmentation
- forward
M-CTC-T
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The M-CTC-T model was proposed in Pseudo-Labeling For Massively Multilingual Speech Recognition by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16kHz audio signal.
The abstract from the paper is the following:
Semi-supervised learning through pseudo-labeling has become a staple of state-of-the-art monolingual
speech recognition systems. In this work, we extend pseudo-labeling to massively multilingual speech
recognition with 60 languages. We propose a simple pseudo-labeling recipe that works well even
with low-resource languages: train a supervised multilingual model, fine-tune it with semi-supervised
learning on a target language, generate pseudo-labels for that language, and train a final model using
pseudo-labels for all languages, either from scratch or by fine-tuning. Experiments on the labeled
Common Voice and unlabeled VoxPopuli datasets show that our recipe can yield a model with better
performance for many languages that also transfers well to LibriSpeech.
This model was contributed by cwkeam. The original code can be found here.
Documentation resources
Automatic speech recognition task guide
Tips:
The PyTorch version of this model is only available in torch 1.9 and higher.
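For reference, a minimal CTC inference sketch (assuming the speechbrain/m-ctc-t-large checkpoint and transformers v4.30.0, per the maintenance note above) could look as follows:

```python
import torch
from datasets import load_dataset
from transformers import MCTCTForCTC, MCTCTProcessor

processor = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large")
model = MCTCTForCTC.from_pretrained("speechbrain/m-ctc-t-large")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# the feature extractor turns the raw waveform into Mel filterbank features
input_features = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt").input_features

with torch.no_grad():
    logits = model(input_features).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```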
MCTCTConfig
[[autodoc]] MCTCTConfig
MCTCTFeatureExtractor
[[autodoc]] MCTCTFeatureExtractor
- call
MCTCTProcessor
[[autodoc]] MCTCTProcessor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
MCTCTModel
[[autodoc]] MCTCTModel
- forward
MCTCTForCTC
[[autodoc]] MCTCTForCTC
- forward
Hybrid Vision Transformer (ViT Hybrid)
Overview
The hybrid Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition
at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob
Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining
very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the plain Vision Transformer,
by leveraging a convolutional backbone (specifically, BiT) whose features are used as initial "tokens" for the Transformer.
The abstract from the paper is the following:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its
applications to computer vision remain limited. In vision, attention is either applied in conjunction with
convolutional networks, or used to replace certain components of convolutional networks while keeping their overall
structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to
sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of
data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.),
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring
substantially fewer computational resources to train.
This model was contributed by nielsr. The original code (written in JAX) can be
found here.
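For a quick sketch, image classification works just as with the plain ViT (assuming the google/vit-hybrid-base-bit-384 checkpoint):

```python
import torch
import requests
from PIL import Image
from transformers import ViTHybridImageProcessor, ViTHybridForImageClassification

processor = ViTHybridImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
model = ViTHybridForImageClassification.from_pretrained("google/vit-hybrid-base-bit-384")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```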
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT Hybrid.
[ViTHybridForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTHybridConfig
[[autodoc]] ViTHybridConfig
ViTHybridImageProcessor
[[autodoc]] ViTHybridImageProcessor
- preprocess
ViTHybridModel
[[autodoc]] ViTHybridModel
- forward
ViTHybridForImageClassification
[[autodoc]] ViTHybridForImageClassification
- forward
MMS
Overview
The MMS model was proposed in Scaling Speech Technology to 1,000+ Languages
by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli
The abstract from the paper is the following:
Expanding the language coverage of speech technology has the potential to improve access to information for many more people.
However, current speech technology is restricted to about one hundred languages which is a small fraction of the over 7,000
languages spoken around the world.
The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task.
The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging
self-supervised learning. We built pre-trained wav2vec 2.0 models covering 1,406 languages,
a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models
for the same number of languages, as well as a language identification model for 4,017 languages.
Experiments show that our multilingual speech recognition model more than halves the word error rate of
Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.
Here are the different models open sourced in the MMS project. The models and code are originally released here. We have added them to the Transformers framework, making them easier to use.
Automatic Speech Recognition (ASR)
The ASR model checkpoints can be found here: mms-1b-fl102, mms-1b-l1107, mms-1b-all. For best accuracy, use the mms-1b-all model.
Tips:
All ASR models accept a float array corresponding to the raw waveform of the speech signal. The raw waveform should be pre-processed with [Wav2Vec2FeatureExtractor].
The models were trained using connectionist temporal classification (CTC) so the model output has to be decoded using
[Wav2Vec2CTCTokenizer].
You can load different language adapter weights for different languages via [~Wav2Vec2PreTrainedModel.load_adapter]. Language adapters only consist of roughly 2 million parameters
and can therefore be efficiently loaded on the fly when needed.
Loading
By default MMS loads adapter weights for English. If you want to load adapter weights of another language,
make sure to specify target_lang=<your-chosen-target-lang> as well as ignore_mismatched_sizes=True.
The ignore_mismatched_sizes=True keyword has to be passed to allow the language model head to be resized according
to the vocabulary of the specified language.
Similarly, the processor should be loaded with the same target language:
from transformers import Wav2Vec2ForCTC, AutoProcessor
model_id = "facebook/mms-1b-all"
target_lang = "fra"
processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True)
You can safely ignore a warning such as:
text
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/mms-1b-all and are newly initialized because the shapes did not match:
- lm_head.bias: found shape torch.Size([154]) in the checkpoint and torch.Size([314]) in the model instantiated
- lm_head.weight: found shape torch.Size([154, 1280]) in the checkpoint and torch.Size([314, 1280]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
If you want to use the ASR pipeline, you can load your chosen target language as such:
from transformers import pipeline
model_id = "facebook/mms-1b-all"
target_lang = "fra"
pipe = pipeline(model=model_id, model_kwargs={"target_lang": "fra", "ignore_mismatched_sizes": True})
Inference
Next, let's look at how we can run MMS in inference and change adapter layers after having called [~PreTrainedModel.from_pretrained].
First, we load audio data in different languages using the Datasets library.
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# French
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "fr", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
fr_sample = next(iter(stream_data))["audio"]["array"]
Next, we load the model and processor
from transformers import Wav2Vec2ForCTC, AutoProcessor
import torch
model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
Now we process the audio data, pass the processed audio data to the model and transcribe the model output,
just like we usually do for [Wav2Vec2ForCTC].
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
'joe keton disapproved of films and buster also had reservations about the media'
We can now keep the same model in memory and simply switch out the language adapters by
calling the convenient [~Wav2Vec2ForCTC.load_adapter] function for the model and [~Wav2Vec2CTCTokenizer.set_target_lang] for the tokenizer.
We pass the target language as an input - "fra" for French.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")
inputs = processor(fr_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
"ce dernier est volé tout au long de l'histoire romaine"
In the same way the language can be switched out for all other supported languages. Please have a look at:
py
processor.tokenizer.vocab.keys()
to see all supported languages.
To further improve performance from ASR models, language model decoding can be used. See the documentation here for further details.
Speech Synthesis (TTS)
Individual TTS models are available for each of the 1100+ languages. The models and inference documentation can be found here.
Language Identification (LID)
Different LID models are available based on the number of languages they can recognize - 126, 256, 512, 1024, 2048, 4017.
Inference
First, we install transformers and some other libraries
bash
pip install torch accelerate torchaudio datasets
pip install --upgrade transformers

Next, we load a couple of audio samples via datasets. Make sure that the audio data is sampled at 16 kHz.
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
Next, we load the model and processor
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch
model_id = "facebook/mms-lid-126"
processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
Now we process the audio data and pass the processed audio data to the model to classify it into a language, just like we usually do for Wav2Vec2 audio classification models such as ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition.
# English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
'eng'
# Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
'ara'
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
py
processor.id2label.values()
Audio Pretrained Models
Pretrained models are available in two different sizes - 300M and 1B parameters. The architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2's documentation page for further details on how to finetune the models for various downstream tasks.
ALIGN
Overview
The ALIGN model was proposed in Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ALIGN is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. ALIGN features a dual-encoder architecture with EfficientNet as its vision encoder and BERT as its text encoder, and learns to align visual and text representations with contrastive learning. Unlike previous work, ALIGN leverages a massive noisy dataset and shows that the scale of the corpus can be used to achieve SOTA representations with a simple recipe.
The abstract from the paper is the following:
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
Usage
ALIGN uses EfficientNet to get visual features and BERT to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.
[AlignProcessor] wraps [EfficientNetImageProcessor] and [BertTokenizer] into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using [AlignProcessor] and [AlignModel].
python
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["an image of a cat", "an image of a dog"]
inputs = processor(text=candidate_labels, images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# this is the image-text similarity score
logits_per_image = outputs.logits_per_image
# we can take the softmax to get the label probabilities
probs = logits_per_image.softmax(dim=1)
print(probs)
This model was contributed by Alara Dirik.
The original code is not released; this implementation is based on the Kakao Brain implementation of the original paper.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALIGN.
A blog post on ALIGN and the COYO-700M dataset.
A zero-shot image classification demo.
Model card of kakaobrain/align-base model.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource.
AlignConfig
[[autodoc]] AlignConfig
- from_text_vision_configs
AlignTextConfig
[[autodoc]] AlignTextConfig
AlignVisionConfig
[[autodoc]] AlignVisionConfig
AlignProcessor
[[autodoc]] AlignProcessor
AlignModel
[[autodoc]] AlignModel
- forward
- get_text_features
- get_image_features
AlignTextModel
[[autodoc]] AlignTextModel
- forward
AlignVisionModel
[[autodoc]] AlignVisionModel
- forward
Big Transfer (BiT)
Overview
The BiT model was proposed in Big Transfer (BiT): General Visual Representation Learning by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
BiT is a simple recipe for scaling up pre-training of ResNet-like architectures (specifically, ResNetv2). The method results in significant improvements for transfer learning.
The abstract from the paper is the following:
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
Tips:
BiT models are equivalent to ResNetv2 in terms of architecture, except that: 1) all batch normalization layers are replaced by group normalization,
2) weight standardization is used for convolutional layers. The authors show that the combination of both is useful for training with large batch sizes, and has a significant
impact on transfer learning.
This model was contributed by nielsr.
The original code can be found here.
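As a quick sanity check, image classification with a public BiT checkpoint can be sketched as follows (assuming the google/bit-50 checkpoint):

```python
import torch
import requests
from PIL import Image
from transformers import BitImageProcessor, BitForImageClassification

processor = BitImageProcessor.from_pretrained("google/bit-50")
model = BitForImageClassification.from_pretrained("google/bit-50")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```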
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BiT.
[BitForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
BitConfig
[[autodoc]] BitConfig
BitImageProcessor
[[autodoc]] BitImageProcessor
- preprocess
BitModel
[[autodoc]] BitModel
- forward
BitForImageClassification
[[autodoc]] BitForImageClassification
- forward
Wav2Vec2Phoneme
Overview
The Wav2Vec2Phoneme model was proposed in Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al.,
2021) by Qiantong Xu, Alexei Baevski, Michael Auli.
The abstract from the paper is the following:
Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech
recognition systems without any labeled data. However, in many cases there is labeled data available for related
languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer
learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by
mapping phonemes of the training languages to the target language using articulatory features. Experiments show that
this simple method significantly outperforms prior work which introduced task-specific architectures and used only part
of a monolingually pretrained model.
Tips:
Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2.
Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC), so the model output has to be
decoded using [Wav2Vec2PhonemeCTCTokenizer].
Wav2Vec2Phoneme can be fine-tuned on multiple languages at once and decode unseen languages in a single forward pass
to a sequence of phonemes.
By default the model outputs a sequence of phonemes. In order to transform the phonemes to a sequence of words one
should make use of a dictionary and language model.
Relevant checkpoints can be found under https://huggingface.co/models?other=phoneme-recognition.
This model was contributed by patrickvonplaten.
The original code can be found here.
Wav2Vec2Phoneme's architecture is based on the Wav2Vec2 model, so one can refer to [Wav2Vec2]'s documentation page except for the tokenizer.
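A minimal phoneme recognition sketch (assuming the facebook/wav2vec2-lv-60-espeak-cv-ft checkpoint) could look like this:

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# the output is a sequence of phonemes, not words
phonemes = processor.batch_decode(predicted_ids)
```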
Wav2Vec2PhonemeCTCTokenizer
[[autodoc]] Wav2Vec2PhonemeCTCTokenizer
- call
- batch_decode
- decode
- phonemize
DePlot
Overview
DePlot was proposed in the paper DePlot: One-shot visual language reasoning by plot-to-table translation by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
The abstract of the paper states the following:
Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.
Model description
DePlot is a model that is trained using Pix2Struct architecture. You can find more information about Pix2Struct in the Pix2Struct documentation.
DePlot is a Visual Question Answering subset of Pix2Struct architecture. It renders the input question on the image and predicts the answer.
Usage
Currently one checkpoint is available for DePlot:
google/deplot: DePlot fine-tuned on ChartQA dataset
python
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")
processor = AutoProcessor.from_pretrained("google/deplot")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
Fine-tuning
To fine-tune DePlot, refer to the pix2struct fine-tuning notebook. For Pix2Struct models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence:
python
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup
optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
TAPAS
Overview
The TAPAS model was proposed in TAPAS: Weakly Supervised Table Parsing via Pre-training
by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. It's a BERT-based model specifically
designed (and pre-trained) for answering questions about tabular data. Compared to BERT, TAPAS uses relative position embeddings and has 7
token types that encode tabular structure. TAPAS is pre-trained on the masked language modeling (MLM) objective on a large dataset comprising
millions of tables from English Wikipedia and corresponding texts.
For question answering, TAPAS has 2 heads on top: a cell selection head and an aggregation head, for (optionally) performing aggregations (such as counting or summing) among selected cells. TAPAS has been fine-tuned on several datasets:
- SQA (Sequential Question Answering by Microsoft)
- WTQ (Wiki Table Questions by Stanford University)
- WikiSQL (by Salesforce).
It achieves state-of-the-art on both SQA and WTQ, while having comparable performance to SOTA on WikiSQL, with a much simpler architecture.
The abstract from the paper is the following:
Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.
In addition, the authors have further pre-trained TAPAS to recognize table entailment, by creating a balanced dataset of millions of automatically created training examples which are learned in an intermediate step prior to fine-tuning. The authors of TAPAS call this further pre-training intermediate pre-training (since TAPAS is first pre-trained on MLM, and then on another dataset). They found that intermediate pre-training further improves performance on SQA, achieving a new state-of-the-art as well as state-of-the-art on TabFact, a large-scale dataset with 16k Wikipedia tables for table entailment (a binary classification task). For more details, see their follow-up paper: Understanding tables with intermediate pre-training by Julian Martin Eisenschlos, Syrine Krichene and Thomas Müller.
TAPAS architecture. Taken from the original blog post.
This model was contributed by nielsr. The Tensorflow version of this model was contributed by kamalkraj. The original code can be found here.
Tips:
TAPAS is a model that uses relative position embeddings by default (restarting the position embeddings at every cell of the table). Note that this is something that was added after the publication of the original TAPAS paper. According to the authors, this usually results in a slightly better performance, and allows you to encode longer sequences without running out of embeddings. This is reflected in the reset_position_index_per_cell parameter of [TapasConfig], which is set to True by default. The default versions of the models available on the hub all use relative position embeddings. You can still use the ones with absolute position embeddings by passing in an additional argument revision="no_reset" when calling the from_pretrained() method. Note that it's usually advised to pad the inputs on the right rather than the left.
TAPAS is based on BERT, so TAPAS-base for example corresponds to a BERT-base architecture. Of course, TAPAS-large will result in the best performance (the results reported in the paper are from TAPAS-large). Results of the various sized models are shown on the original Github repository.
TAPAS has checkpoints fine-tuned on SQA, which are capable of answering questions related to a table in a conversational set-up. This means that you can ask follow-up questions such as "what is his age?" related to the previous question. Note that the forward pass of TAPAS is a bit different in case of a conversational set-up: in that case, you have to feed every table-question pair one by one to the model, such that the prev_labels token type ids can be overwritten by the predicted labels of the model to the previous question. See "Usage" section for more info.
TAPAS is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. Note that TAPAS can be used as an encoder in the EncoderDecoderModel framework, to combine it with an autoregressive text decoder such as GPT-2.
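Before diving into fine-tuning, here is a minimal inference sketch using the table-question-answering pipeline with a fine-tuned checkpoint (note that TAPAS expects all table cells as strings):

```python
import pandas as pd
from transformers import pipeline

# TAPAS requires every table cell to be a string, including numbers
table = pd.DataFrame({"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["56", "45"]})
tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")
print(tqa(table=table, query="How old is Leonardo Di Caprio?")["answer"])
```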
Usage: fine-tuning
Here we explain how you can fine-tune [TapasForQuestionAnswering] on your own dataset.
STEP 1: Choose one of the 3 ways in which you can use TAPAS - or experiment
Basically, there are 3 different ways in which one can fine-tune [TapasForQuestionAnswering], corresponding to the different datasets on which Tapas was fine-tuned:
SQA: if you're interested in asking follow-up questions related to a table, in a conversational set-up. For example if you first ask "what's the name of the first actor?" then you can ask a follow-up question such as "how old is he?". Here, questions do not involve any aggregation (all questions are cell selection questions).
WTQ: if you're not interested in asking questions in a conversational set-up, but rather just asking questions related to a table, which might involve aggregation, such as counting a number of rows, summing up cell values or averaging cell values. You can then for example ask "what's the total number of goals Cristiano Ronaldo made in his career?". This case is also called weak supervision, since the model itself must learn the appropriate aggregation operator (SUM/COUNT/AVERAGE/NONE) given only the answer to the question as supervision.
WikiSQL-supervised: this dataset is based on WikiSQL with the model being given the ground truth aggregation operator during training. This is also called strong supervision. Here, learning the appropriate aggregation operator is much easier.
To summarize:
| Task | Example dataset | Description |
|-------------------------------------|---------------------|---------------------------------------------------------------------------------------------------------|
| Conversational | SQA | Conversational, only cell selection questions |
| Weak supervision for aggregation | WTQ | Questions might involve aggregation, and the model must learn this given only the answer as supervision |
| Strong supervision for aggregation | WikiSQL-supervised | Questions might involve aggregation, and the model must learn this given the gold aggregation operator |
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below.
from transformers import TapasConfig, TapasForQuestionAnswering

# for example, the base sized model with default SQA configuration
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base")

# or, the base sized model with WTQ configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

# or, the base sized model with WikiSQL configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wikisql-supervised")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [TapasConfig], and then create a [TapasForQuestionAnswering] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here's an example:
from transformers import TapasConfig, TapasForQuestionAnswering
# you can initialize the classification heads any way you want (see docs of TapasConfig)
config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
# initializing the pre-trained base sized model with our custom classification heads
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below. Be sure to have installed the tensorflow_probability dependency:
from transformers import TapasConfig, TFTapasForQuestionAnswering

# for example, the base sized model with default SQA configuration
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base")

# or, the base sized model with WTQ configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)

# or, the base sized model with WikiSQL configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wikisql-supervised")
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [TapasConfig], and then create a [TFTapasForQuestionAnswering] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here's an example:
from transformers import TapasConfig, TFTapasForQuestionAnswering
# you can initialize the classification heads any way you want (see docs of TapasConfig)
config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
# initializing the pre-trained base sized model with our custom classification heads
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
You can also start from a checkpoint that has already been fine-tuned instead of google/tapas-base (see the example below). Note that the checkpoint already fine-tuned on WTQ has some issues due to the L2 loss, which is somewhat brittle. See here for more info.
For a list of all pre-trained and fine-tuned TAPAS checkpoints available on HuggingFace's hub, see here.
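If you go that route, you can load the fine-tuned checkpoint directly; a minimal sketch:

from transformers import TapasForQuestionAnswering

# loads a checkpoint already fine-tuned on WTQ (weak supervision for aggregation)
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")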
STEP 2: Prepare your data in the SQA format
Second, no matter what you picked above, you should prepare your dataset in the SQA format. This format is a TSV/CSV file with the following columns:
id: optional, id of the table-question pair, for bookkeeping purposes.
annotator: optional, id of the person who annotated the table-question pair, for bookkeeping purposes.
position: integer indicating if the question is the first, second, third, etc. related to the table. Only required in case of a conversational setup (SQA). You don't need this column in case you're going for WTQ/WikiSQL-supervised.
question: string
table_file: string, name of a csv file containing the tabular data
answer_coordinates: list of one or more tuples (each tuple being a cell coordinate, i.e. row, column pair that is part of the answer)
answer_text: list of one or more strings (each string being a cell value that is part of the answer)
aggregation_label: index of the aggregation operator. Only required in case of strong supervision for aggregation (the WikiSQL-supervised case)
float_answer: the float answer to the question, if there is one (np.nan if there isn't). Only required in case of weak supervision for aggregation (such as WTQ and WikiSQL)
The tables themselves should be present in a folder, each table being a separate csv file. Note that the authors of the TAPAS algorithm used conversion scripts with some automated logic to convert the other datasets (WTQ, WikiSQL) into the SQA format. The author explains this here. A conversion of this script that works with HuggingFace's implementation can be found here. Interestingly, these conversion scripts are not perfect (the answer_coordinates and float_answer fields are populated based on the answer_text), meaning that WTQ and WikiSQL results could actually be improved.
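To make the format concrete, here is a small sketch that writes a toy dataset in this layout (all file names and values below are made up for illustration):

import pandas as pd

# two conversational turns about the same (hypothetical) table
data = {
    "id": ["nt-1", "nt-1"],
    "annotator": [0, 0],
    "position": [0, 1],  # only needed for the conversational (SQA) case
    "question": ["what's the name of the first actor?", "how old is he?"],
    "table_file": ["table_1.csv", "table_1.csv"],
    "answer_coordinates": [[(0, 0)], [(0, 2)]],
    "answer_text": [["Brad Pitt"], ["59"]],
}
pd.DataFrame(data).to_csv("train.tsv", sep="\t", index=False)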
STEP 3: Convert your data into tensors using TapasTokenizer
Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [TapasTokenizer] to convert table-question pairs into input_ids, attention_mask, token_type_ids and so on. Again, based on which of the three cases you picked above, [TapasForQuestionAnswering] requires different
inputs to be fine-tuned:
| Task | Required inputs |
|------------------------------------|---------------------------------------------------------------------------------------------------------------------|
| Conversational | input_ids, attention_mask, token_type_ids, labels |
| Weak supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, numeric_values, numeric_values_scale, float_answer |
| Strong supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, aggregation_labels |
[TapasTokenizer] creates the labels, numeric_values and numeric_values_scale based on the answer_coordinates and answer_text columns of the TSV file. The float_answer and aggregation_labels are already in the TSV file of step 2. Here's an example:
from transformers import TapasTokenizer
import pandas as pd
model_name = "google/tapas-base"
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
"What is the name of the first actor?",
"How many movies has George Clooney played in?",
"What is the total number of movies?",
]
answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
answer_text = [["Brad Pitt"], ["69"], ["209"]]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(
table=table,
queries=queries,
answer_coordinates=answer_coordinates,
answer_text=answer_text,
padding="max_length",
return_tensors="pt",
)
inputs
{'input_ids': tensor([[ ... ]]), 'attention_mask': tensor([[ ... ]]), 'token_type_ids': tensor([[[ ... ]]]),
'numeric_values': tensor([[ ... ]]), 'numeric_values_scale': tensor([[ ... ]]), 'labels': tensor([[ ... ]])}
Note that [TapasTokenizer] expects the data of the table to be text-only. You can use .astype(str) on a dataframe to turn it into text-only data.
Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:
import torch
import pandas as pd
tsv_path = "your_path_to_the_tsv_file"
table_csv_path = "your_path_to_a_directory_containing_all_csv_files"
class TableDataset(torch.utils.data.Dataset):
    def __init__(self, data, tokenizer):
        self.data = data
        self.tokenizer = tokenizer

    def __getitem__(self, idx):
        item = self.data.iloc[idx]
        table = pd.read_csv(table_csv_path + item.table_file).astype(str)  # be sure to make your table data text only
        encoding = self.tokenizer(
            table=table,
            queries=item.question,
            answer_coordinates=item.answer_coordinates,
            answer_text=item.answer_text,
            truncation=True,
            padding="max_length",
            return_tensors="pt",
        )
        # remove the batch dimension which the tokenizer adds by default
        encoding = {key: val.squeeze(0) for key, val in encoding.items()}
        # add the float_answer which is also required (weak supervision for aggregation case)
        encoding["float_answer"] = torch.tensor(item.float_answer)
        return encoding

    def __len__(self):
        return len(self.data)
data = pd.read_csv(tsv_path, sep="\t")
train_dataset = TableDataset(data, tokenizer)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)
</pt>
<tf>
Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [TapasTokenizer] to convert table-question pairs into input_ids, attention_mask, token_type_ids and so on. Again, based on which of the three cases you picked above, [TFTapasForQuestionAnswering] requires different inputs to be fine-tuned:
| Task | Required inputs |
|------------------------------------|---------------------------------------------------------------------------------------------------------------------|
| Conversational | input_ids, attention_mask, token_type_ids, labels |
| Weak supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, numeric_values, numeric_values_scale, float_answer |
| Strong supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, aggregation_labels |
[TapasTokenizer] creates the labels, numeric_values and numeric_values_scale based on the answer_coordinates and answer_text columns of the TSV file. The float_answer and aggregation_labels are already in the TSV file of step 2. Here's an example:
from transformers import TapasTokenizer
import pandas as pd
model_name = "google/tapas-base"
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
"What is the name of the first actor?",
"How many movies has George Clooney played in?",
"What is the total number of movies?",
]
answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
answer_text = [["Brad Pitt"], ["69"], ["209"]]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(
table=table,
queries=queries,
answer_coordinates=answer_coordinates,
answer_text=answer_text,
padding="max_length",
return_tensors="tf",
)
inputs
{'input_ids': tensor([[ ... ]]), 'attention_mask': tensor([[ ... ]]), 'token_type_ids': tensor([[[ ... ]]]),
'numeric_values': tensor([[ ... ]]), 'numeric_values_scale': tensor([[ ... ]]), 'labels': tensor([[ ... ]])}
Note that [TapasTokenizer] expects the data of the table to be text-only. You can use .astype(str) on a dataframe to turn it into text-only data.
Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:
import tensorflow as tf
import pandas as pd
tsv_path = "your_path_to_the_tsv_file"
table_csv_path = "your_path_to_a_directory_containing_all_csv_files"
class TableDataset:
    def __init__(self, data, tokenizer):
        self.data = data
        self.tokenizer = tokenizer

    def __iter__(self):
        for idx in range(len(self.data)):
            item = self.data.iloc[idx]
            table = pd.read_csv(table_csv_path + item.table_file).astype(str)  # be sure to make your table data text only
            encoding = self.tokenizer(
                table=table,
                queries=item.question,
                answer_coordinates=item.answer_coordinates,
                answer_text=item.answer_text,
                truncation=True,
                padding="max_length",
                return_tensors="tf",
            )
            # remove the batch dimension which the tokenizer adds by default
            encoding = {key: tf.squeeze(val, 0) for key, val in encoding.items()}
            # add the float_answer which is also required (weak supervision for aggregation case)
            encoding["float_answer"] = tf.convert_to_tensor(item.float_answer, dtype=tf.float32)
            yield (
                encoding["input_ids"],
                encoding["attention_mask"],
                encoding["numeric_values"],
                encoding["numeric_values_scale"],
                encoding["token_type_ids"],
                encoding["labels"],
                encoding["float_answer"],
            )

    def __len__(self):
        return len(self.data)
data = pd.read_csv(tsv_path, sep="\t")
train_dataset = TableDataset(data, tokenizer)
output_signature = (
tf.TensorSpec(shape=(512,), dtype=tf.int32),
tf.TensorSpec(shape=(512,), dtype=tf.int32),
tf.TensorSpec(shape=(512,), dtype=tf.float32),
tf.TensorSpec(shape=(512,), dtype=tf.float32),
tf.TensorSpec(shape=(512, 7), dtype=tf.int32),
tf.TensorSpec(shape=(512,), dtype=tf.int32),
tf.TensorSpec(shape=(512,), dtype=tf.float32),
)
train_dataloader = tf.data.Dataset.from_generator(train_dataset, output_signature=output_signature).batch(32)
Note that here, we encode each table-question pair independently. This is fine as long as your dataset is not conversational. In case your dataset involves conversational questions (such as in SQA), then you should first group together the queries, answer_coordinates and answer_text per table (in the order of their position
index) and batch encode each table with its questions. This will make sure that the prev_labels token types (see docs of [TapasTokenizer]) are set correctly. See this notebook for more info. See this notebook for more info regarding using the TensorFlow model.
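As a rough sketch of that grouping (reusing the tokenizer, tsv_path and table_csv_path from the snippets above, the column names from STEP 2, and PyTorch tensors; adapt return_tensors for TensorFlow):

import pandas as pd

data = pd.read_csv(tsv_path, sep="\t")

# group the questions per table, ordered by their position in the conversation
for table_file, group in data.sort_values("position").groupby("table_file"):
    table = pd.read_csv(table_csv_path + table_file).astype(str)
    # note: answer_coordinates/answer_text read from a TSV may need parsing first, e.g. with ast.literal_eval
    encoding = tokenizer(
        table=table,
        queries=group["question"].tolist(),
        answer_coordinates=group["answer_coordinates"].tolist(),
        answer_text=group["answer_text"].tolist(),
        truncation=True,
        padding="max_length",
        return_tensors="pt",
    )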
STEP 4: Train (fine-tune) the model
You can then fine-tune [TapasForQuestionAnswering] as follows (shown here for the weak supervision for aggregation case):
from transformers import TapasConfig, TapasForQuestionAnswering, AdamW
# this is the default WTQ configuration
config = TapasConfig(
num_aggregation_labels=4,
use_answer_as_supervision=True,
answer_loss_cutoff=0.664694,
cell_selection_preference=0.207951,
huber_loss_delta=0.121194,
init_cell_selection_weights_to_zero=True,
select_one_column=True,
allow_empty_column_selection=False,
temperature=0.0352513,
)
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(2): # loop over the dataset multiple times
for batch in train_dataloader:
# get the inputs;
input_ids = batch["input_ids"]
attention_mask = batch["attention_mask"]
token_type_ids = batch["token_type_ids"]
labels = batch["labels"]
numeric_values = batch["numeric_values"]
numeric_values_scale = batch["numeric_values_scale"]
float_answer = batch["float_answer"]
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
labels=labels,
numeric_values=numeric_values,
numeric_values_scale=numeric_values_scale,
float_answer=float_answer,
)
loss = outputs.loss
loss.backward()
optimizer.step()
</pt>
<tf>
You can then fine-tune [TFTapasForQuestionAnswering] as follows (shown here for the weak supervision for aggregation case):
import tensorflow as tf
from transformers import TapasConfig, TFTapasForQuestionAnswering
# this is the default WTQ configuration
config = TapasConfig(
num_aggregation_labels=4,
use_answer_as_supervision=True,
answer_loss_cutoff=0.664694,
cell_selection_preference=0.207951,
huber_loss_delta=0.121194,
init_cell_selection_weights_to_zero=True,
select_one_column=True,
allow_empty_column_selection=False,
temperature=0.0352513,
)
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
for epoch in range(2): # loop over the dataset multiple times
for batch in train_dataloader:
# get the inputs;
input_ids = batch[0]
attention_mask = batch[1]
token_type_ids = batch[4]
        labels = batch[5]  # labels is the sixth element of the yielded tuple (batch[-1] would be float_answer)
numeric_values = batch[2]
numeric_values_scale = batch[3]
float_answer = batch[6]
# forward + backward + optimize
with tf.GradientTape() as tape:
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
labels=labels,
numeric_values=numeric_values,
numeric_values_scale=numeric_values_scale,
float_answer=float_answer,
)
grads = tape.gradient(outputs.loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
Usage: inference
Here we explain how you can use [TapasForQuestionAnswering] or [TFTapasForQuestionAnswering] for inference (i.e. making predictions on new data). For inference, only input_ids, attention_mask and token_type_ids (which you can obtain using [TapasTokenizer]) have to be provided to the model to obtain the logits. Next, you can use the handy [~models.tapas.tokenization_tapas.convert_logits_to_predictions] method to convert these into predicted coordinates and optional aggregation indices.
However, note that inference is different depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here's an example of that:
from transformers import TapasTokenizer, TapasForQuestionAnswering
import pandas as pd
model_name = "google/tapas-base-finetuned-wtq"
model = TapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
"What is the name of the first actor?",
"How many movies has George Clooney played in?",
"What is the total number of movies?",
]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
# let's print out the results:
id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices]
answers = []
for coordinates in predicted_answer_coordinates:
if len(coordinates) == 1:
# only a single cell:
answers.append(table.iat[coordinates[0]])
else:
# multiple cells
cell_values = []
for coordinate in coordinates:
cell_values.append(table.iat[coordinate])
answers.append(", ".join(cell_values))
display(table)
print("")
for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string):
print(query)
if predicted_agg == "NONE":
print("Predicted answer: " + answer)
else:
print("Predicted answer: " + predicted_agg + " > " + answer)
What is the name of the first actor?
Predicted answer: Brad Pitt
How many movies has George Clooney played in?
Predicted answer: COUNT > 69
What is the total number of movies?
Predicted answer: SUM > 87, 53, 69
</pt>
<tf>
Here we explain how you can use [TFTapasForQuestionAnswering] for inference (i.e. making predictions on new data). For inference, only input_ids, attention_mask and token_type_ids (which you can obtain using [TapasTokenizer]) have to be provided to the model to obtain the logits. Next, you can use the handy [~models.tapas.tokenization_tapas.convert_logits_to_predictions] method to convert these into predicted coordinates and optional aggregation indices.
However, note that inference is different depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here's an example of that:
from transformers import TapasTokenizer, TFTapasForQuestionAnswering
import pandas as pd
model_name = "google/tapas-base-finetuned-wtq"
model = TFTapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
"What is the name of the first actor?",
"How many movies has George Clooney played in?",
"What is the total number of movies?",
]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
outputs = model(**inputs)
predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
inputs, outputs.logits, outputs.logits_aggregation
)
# let's print out the results:
id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices]
answers = []
for coordinates in predicted_answer_coordinates:
if len(coordinates) == 1:
# only a single cell:
answers.append(table.iat[coordinates[0]])
else:
# multiple cells
cell_values = []
for coordinate in coordinates:
cell_values.append(table.iat[coordinate])
answers.append(", ".join(cell_values))
display(table)
print("")
for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string):
print(query)
if predicted_agg == "NONE":
print("Predicted answer: " + answer)
else:
print("Predicted answer: " + predicted_agg + " > " + answer)
What is the name of the first actor?
Predicted answer: Brad Pitt
How many movies has George Clooney played in?
Predicted answer: COUNT > 69
What is the total number of movies?
Predicted answer: SUM > 87, 53, 69
In case of a conversational set-up, each table-question pair must be provided sequentially to the model, such that the prev_labels token types can be overwritten by the predicted labels of the previous table-question pair. Again, more info can be found in this notebook (for PyTorch) and this notebook (for TensorFlow).
Documentation resources
Text classification task guide
Masked language modeling task guide
TAPAS specific outputs
[[autodoc]] models.tapas.modeling_tapas.TableQuestionAnsweringOutput
TapasConfig
[[autodoc]] TapasConfig
TapasTokenizer
[[autodoc]] TapasTokenizer
- call
- convert_logits_to_predictions
- save_vocabulary
TapasModel
[[autodoc]] TapasModel
- forward
TapasForMaskedLM
[[autodoc]] TapasForMaskedLM
- forward
TapasForSequenceClassification
[[autodoc]] TapasForSequenceClassification
- forward
TapasForQuestionAnswering
[[autodoc]] TapasForQuestionAnswering
- forward
TFTapasModel
[[autodoc]] TFTapasModel
- call
TFTapasForMaskedLM
[[autodoc]] TFTapasForMaskedLM
- call
TFTapasForSequenceClassification
[[autodoc]] TFTapasForSequenceClassification
- call
TFTapasForQuestionAnswering
[[autodoc]] TFTapasForQuestionAnswering
- call
PEGASUS-X
Overview
The PEGASUS-X model was proposed in Investigating Efficiently Extending Transformers for Long Input Summarization by Jason Phang, Yao Zhao and Peter J. Liu.
PEGASUS-X (PEGASUS eXtended) extends the PEGASUS models for long input summarization through additional long input pretraining and using staggered block-local attention with global tokens in the encoder.
The abstract from the paper is the following:
While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train.
Tips:
PEGASUS-X uses the same tokenizer as PEGASUS.
This model was contributed by zphang. The original code can be found here.
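A minimal summarization sketch; the checkpoint name below (google/pegasus-x-base) is an assumption, and any PEGASUS-X checkpoint from the Hub should work the same way (a checkpoint fine-tuned for summarization will give better summaries than the pretrained base model):

from transformers import AutoTokenizer, PegasusXForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-x-base")  # PEGASUS-X reuses the PEGASUS tokenizer
model = PegasusXForConditionalGeneration.from_pretrained("google/pegasus-x-base")

text = "PEGASUS-X extends PEGASUS with staggered block-local attention and global tokens so that it can summarize long documents."
inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])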
Documentation resources
Translation task guide
Summarization task guide
PegasusXConfig
[[autodoc]] PegasusXConfig
PegasusXModel
[[autodoc]] PegasusXModel
- forward
PegasusXForConditionalGeneration
[[autodoc]] PegasusXForConditionalGeneration
- forward
Swin Transformer
Overview
The Swin Transformer was proposed in Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
The abstract from the paper is the following:
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone
for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains,
such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text.
To address these differences, we propose a hierarchical Transformer whose representation is computed with \bold{S}hifted
\bold{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping
local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at
various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it
compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense
prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation
(53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and
+2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones.
The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.
Tips:
- One can use the [AutoImageProcessor] API to prepare images for the model.
- Swin pads the inputs supporting any input height and width (if divisible by 32).
- Swin can be used as a backbone. When output_hidden_states = True, it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch, num_channels, height, width) rather than (batch_size, sequence_length, num_channels).
Swin Transformer architecture. Taken from the original paper.
This model was contributed by novice03. The TensorFlow version of this model was contributed by amyeroberts. The original code can be found here.
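As mentioned in the tips above, [AutoImageProcessor] prepares images for the model. A minimal image classification sketch, using one of the publicly available checkpoints (the image URL is just an example):

from PIL import Image
import requests
from transformers import AutoImageProcessor, SwinForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = SwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

inputs = image_processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])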
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer.
[SwinForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
Besides that:
[SwinForMaskedImageModeling] is supported by this example script.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
SwinConfig
[[autodoc]] SwinConfig
SwinModel
[[autodoc]] SwinModel
- forward
SwinForMaskedImageModeling
[[autodoc]] SwinForMaskedImageModeling
- forward
SwinForImageClassification
[[autodoc]] transformers.SwinForImageClassification
- forward
TFSwinModel
[[autodoc]] TFSwinModel
- call
TFSwinForMaskedImageModeling
[[autodoc]] TFSwinForMaskedImageModeling
- call
TFSwinForImageClassification
[[autodoc]] transformers.TFSwinForImageClassification
- call
GPT-NeoX-Japanese
Overview
We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of https://github.com/EleutherAI/gpt-neox.
Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts.
To address this distinct structure of the Japanese language, we use a special sub-word tokenizer. We are very grateful to tanreinama for open-sourcing this incredibly helpful tokenizer.
Following the recommendations from Google's research on PaLM, we have removed bias parameters from the transformer blocks, achieving better model performance. Please refer to this article for details.
Development of the model was led by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori from ABEJA, Inc.. For more information on this model-building activity, please refer here (ja).
Generation
The generate() method can be used to generate text with the GPT NeoX Japanese model.
from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer
model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b")
tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
prompt = "人とAIが協調するためには、"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0]
print(gen_text)
人とAIが協調するためには、AIと人が共存し、AIを正しく理解する必要があります。
Documentation resources
Causal language modeling task guide
GPTNeoXJapaneseConfig
[[autodoc]] GPTNeoXJapaneseConfig
GPTNeoXJapaneseTokenizer
[[autodoc]] GPTNeoXJapaneseTokenizer
GPTNeoXJapaneseModel
[[autodoc]] GPTNeoXJapaneseModel
- forward
GPTNeoXJapaneseForCausalLM
[[autodoc]] GPTNeoXJapaneseForCausalLM
- forward
LayoutXLM
Overview
LayoutXLM was proposed in LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha
Zhang, Furu Wei. It's a multilingual extension of the LayoutLMv2 model trained
on 53 languages.
The abstract from the paper is the following:
Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually-rich document
understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. In
this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to
bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also
introduce a multilingual form understanding benchmark dataset named XFUN, which includes form understanding samples in
7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and key-value pairs are manually labeled
for each language. Experiment results show that the LayoutXLM model has significantly outperformed the existing SOTA
cross-lingual pre-trained models on the XFUN dataset.
One can directly plug in the weights of LayoutXLM into a LayoutLMv2 model, like so:
from transformers import LayoutLMv2Model
model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")
Note that LayoutXLM has its own tokenizer, based on
[LayoutXLMTokenizer]/[LayoutXLMTokenizerFast]. You can initialize it as
follows:
from transformers import LayoutXLMTokenizer
tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
Similar to LayoutLMv2, you can use [LayoutXLMProcessor] (which internally applies
[LayoutLMv2ImageProcessor] and
[LayoutXLMTokenizer]/[LayoutXLMTokenizerFast] in sequence) to prepare all
data for the model.
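A minimal sketch of preparing a document image with the processor (the image path is a placeholder, and since OCR is applied by default, pytesseract and the Tesseract binary need to be installed):

from PIL import Image
from transformers import LayoutXLMProcessor

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")

# "document.png" is a placeholder path to a scanned document
image = Image.open("document.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
print(encoding.keys())  # e.g. input_ids, attention_mask, bbox, image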
As LayoutXLM's architecture is equivalent to that of LayoutLMv2, one can refer to LayoutLMv2's documentation page for all tips, code examples and notebooks.
This model was contributed by nielsr. The original code can be found here.
LayoutXLMTokenizer
[[autodoc]] LayoutXLMTokenizer
- call
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LayoutXLMTokenizerFast
[[autodoc]] LayoutXLMTokenizerFast
- call
LayoutXLMProcessor
[[autodoc]] LayoutXLMProcessor
- call
CPMAnt
Overview
CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, we also provide various compressed versions to meet the requirements of different hardware configurations. See more
Tips:
This model was contributed by OpenBMB. The original code can be found here.
⚙️ Training & Inference
- A tutorial on CPM-Live.
CpmAntConfig
[[autodoc]] CpmAntConfig
- all
CpmAntTokenizer
[[autodoc]] CpmAntTokenizer
- all
CpmAntModel
[[autodoc]] CpmAntModel
- all
CpmAntForCausalLM
[[autodoc]] CpmAntForCausalLM
- all
PhoBERT
Overview
The PhoBERT model was proposed in PhoBERT: Pre-trained language models for Vietnamese by Dat Quoc Nguyen, Anh Tuan Nguyen.
The abstract from the paper is the following:
We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual
language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent
best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple
Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and
Natural language inference.
Example of use:
import torch
from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = phobert(input_ids) # Models outputs are now tuples
# With TensorFlow 2.0+:
from transformers import TFAutoModel
phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
This model was contributed by dqnguyen. The original code can be found here.
PhobertTokenizer
[[autodoc]] PhobertTokenizer
Chinese-CLIP
Overview
The Chinese-CLIP model was proposed in Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
Chinese-CLIP is an implementation of CLIP (Radford et al., 2021) on a large-scale dataset of Chinese image-text pairs. It is capable of performing cross-modal retrieval and also playing as a vision backbone for vision tasks like zero-shot image classification, open-domain object detection, etc. The original Chinese-CLIP code is released at this link.
The abstract from the paper is the following:
The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). Our codes, pretrained models, and demos have been released.
Usage
The code snippet below shows how to compute image & text features and similarities:
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel
model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Squirtle, Bulbasaur, Charmander, Pikachu in English
texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]
# compute image feature
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute text features
inputs = processor(text=texts, padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute image-text similarity scores
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # probs: [[1.2686e-03, 5.4499e-02, 6.7968e-04, 9.4355e-01]]
Currently, we release the following scales of pretrained Chinese-CLIP models at HF Model Hub:
OFA-Sys/chinese-clip-vit-base-patch16
OFA-Sys/chinese-clip-vit-large-patch14
OFA-Sys/chinese-clip-vit-large-patch14-336px
OFA-Sys/chinese-clip-vit-huge-patch14
The Chinese-CLIP model was contributed by OFA-Sys.
ChineseCLIPConfig
[[autodoc]] ChineseCLIPConfig
- from_text_vision_configs
ChineseCLIPTextConfig
[[autodoc]] ChineseCLIPTextConfig
ChineseCLIPVisionConfig
[[autodoc]] ChineseCLIPVisionConfig
ChineseCLIPImageProcessor
[[autodoc]] ChineseCLIPImageProcessor
- preprocess
ChineseCLIPFeatureExtractor
[[autodoc]] ChineseCLIPFeatureExtractor
ChineseCLIPProcessor
[[autodoc]] ChineseCLIPProcessor
ChineseCLIPModel
[[autodoc]] ChineseCLIPModel
- forward
- get_text_features
- get_image_features
ChineseCLIPTextModel
[[autodoc]] ChineseCLIPTextModel
- forward
ChineseCLIPVisionModel
[[autodoc]] ChineseCLIPVisionModel
- forward
MegatronBERT
Overview
The MegatronBERT model was proposed in Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
Jared Casper and Bryan Catanzaro.
The abstract from the paper is the following:
Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
of 89.4%).
Tips:
We have provided pretrained BERT-345M checkpoints
for use in evaluating or fine-tuning downstream tasks.
To access these checkpoints, first sign up for and set up the NVIDIA GPU Cloud (NGC)
Registry CLI. Further documentation for downloading models can be found in the NGC documentation.
Alternatively, you can directly download the checkpoints using:
BERT-345M-uncased:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip
-O megatron_bert_345m_v0_1_uncased.zip
BERT-345M-cased:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O
megatron_bert_345m_v0_1_cased.zip
Once you have obtained the checkpoints from NVIDIA GPU Cloud (NGC), you have to convert them to a format that will
easily be loaded by Hugging Face Transformers and our port of the BERT code.
The following commands allow you to do the conversion. We assume that the folder models/megatron_bert contains
megatron_bert_345m_v0_1_{cased, uncased}.zip and that the commands are run from inside that folder:
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_uncased.zip
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_cased.zip
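Once converted, the checkpoint can be loaded like any other model in the library. A minimal sketch; the output directory name is an assumption (check what the conversion script actually wrote), as is the use of the standard uncased BERT vocabulary for the 345M uncased model:

from transformers import MegatronBertModel, BertTokenizer

# hypothetical path: point this at the folder produced by the conversion script
model = MegatronBertModel.from_pretrained("./megatron_bert_345m_v0_1_uncased")

# assumption: the uncased 345M checkpoint uses the standard uncased BERT vocabulary
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("Megatron-LM scales BERT to billions of parameters.", return_tensors="pt")
outputs = model(**inputs)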
This model was contributed by jdemouth. The original code can be found here. That repository contains a multi-GPU and multi-node implementation of the
Megatron Language models. In particular, it contains a hybrid model parallel approach using "tensor parallel" and
"pipeline parallel" techniques.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
MegatronBertConfig
[[autodoc]] MegatronBertConfig
MegatronBertModel
[[autodoc]] MegatronBertModel
- forward
MegatronBertForMaskedLM
[[autodoc]] MegatronBertForMaskedLM
- forward
MegatronBertForCausalLM
[[autodoc]] MegatronBertForCausalLM
- forward
MegatronBertForNextSentencePrediction
[[autodoc]] MegatronBertForNextSentencePrediction
- forward
MegatronBertForPreTraining
[[autodoc]] MegatronBertForPreTraining
- forward
MegatronBertForSequenceClassification
[[autodoc]] MegatronBertForSequenceClassification
- forward
MegatronBertForMultipleChoice
[[autodoc]] MegatronBertForMultipleChoice
- forward
MegatronBertForTokenClassification
[[autodoc]] MegatronBertForTokenClassification
- forward
MegatronBertForQuestionAnswering
[[autodoc]] MegatronBertForQuestionAnswering
- forward
MarianMT
Bugs: If you see something strange, file a Github Issue
and assign @patrickvonplaten.
Translations should be similar, but not identical to output in the test set linked to in each model card.
Tips:
A framework for translation models, using the same models as BART.
Implementation Notes
Each model is about 298 MB on disk, there are more than 1,000 models.
The list of supported language pairs can be found here.
Models were originally trained by Jörg Tiedemann using the Marian C++ library, which supports fast training and translation.
All models are transformer encoder-decoders with 6 layers in each component. Each model's performance is documented
in a model card.
The 80 opus models that require BPE preprocessing are not supported.
The modeling code is the same as [BartForConditionalGeneration] with a few minor modifications:
static (sinusoid) positional embeddings (MarianConfig.static_position_embeddings=True)
no layernorm_embedding (MarianConfig.normalize_embedding=False)
the model starts generating with pad_token_id (which has 0 as a token_embedding) as the prefix (Bart uses
<s/>).
Code to bulk convert models can be found in convert_marian_to_pytorch.py.
This model was contributed by sshleifer.
Naming
All model names use the following format: Helsinki-NLP/opus-mt-{src}-{tgt}
The language codes used to name models are inconsistent. Two digit codes can usually be found here, three digit codes require googling "language
code {code}".
Codes formatted like es_AR are usually code_{region}. That one is Spanish from Argentina.
The models were converted in two stages. The first 1000 models use ISO-639-2 codes to identify languages, the second
group use a combination of ISO-639-5 codes and ISO-639-2 codes.
Examples
Since Marian models are smaller than many other translation models available in the library, they can be useful for
fine-tuning experiments and integration tests.
Fine-tune on GPU
Multilingual Models
All model names use the following format: Helsinki-NLP/opus-mt-{src}-{tgt}:
If a model can output multiple languages, you should specify a language code by prepending the desired output language to the src_text.
You can see a model's supported language codes in its model card, under target constituents, like in opus-mt-en-roa.
Note that if a model is only multilingual on the source side, like Helsinki-NLP/opus-mt-roa-en, no language
codes are required.
New multi-lingual models from the Tatoeba-Challenge repo
require 3 character language codes:
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>fra<< this is a sentence in english that we want to translate to french",
">>por<< This should go to portuguese",
">>esp<< And this to Spanish",
]
model_name = "Helsinki-NLP/opus-mt-en-roa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
print(tokenizer.supported_language_codes)
['>>zlm_Latn<<', '>>mfe<<', '>>hat<<', '>>pap<<', '>>ast<<', '>>cat<<', '>>ind<<', '>>glg<<', '>>wln<<', '>>spa<<', '>>fra<<', '>>ron<<', '>>por<<', '>>ita<<', '>>oci<<', '>>arg<<', '>>min<<']
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
[tokenizer.decode(t, skip_special_tokens=True) for t in translated]
["c'est une phrase en anglais que nous voulons traduire en français",
'Isto deve ir para o português.',
'Y esto al español']
Here is the code to see all available pretrained models on the hub:
from huggingface_hub import list_models
model_list = list_models()
org = "Helsinki-NLP"
model_ids = [x.modelId for x in model_list if x.modelId.startswith(org)]
suffix = [x.split("/")[1] for x in model_ids]
old_style_multi_models = [f"{org}/{s}" for s in suffix if s != s.lower()]
Old Style Multi-Lingual Models
These are the old style multi-lingual models ported from the OPUS-MT-Train repo, along with the members of each language
group:
['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU',
'Helsinki-NLP/opus-mt-ROMANCE-en',
'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA',
'Helsinki-NLP/opus-mt-de-ZH',
'Helsinki-NLP/opus-mt-en-CELTIC',
'Helsinki-NLP/opus-mt-en-ROMANCE',
'Helsinki-NLP/opus-mt-es-NORWAY',
'Helsinki-NLP/opus-mt-fi-NORWAY',
'Helsinki-NLP/opus-mt-fi-ZH',
'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI',
'Helsinki-NLP/opus-mt-sv-NORWAY',
'Helsinki-NLP/opus-mt-sv-ZH']
GROUP_MEMBERS = {
'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],
'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],
'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],
'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],
'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']
}
Example of translating English to many Romance languages, using old-style 2-character language codes:
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>fr<< this is a sentence in english that we want to translate to french",
">>pt<< This should go to portuguese",
">>es<< And this to Spanish",
]
model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
["c'est une phrase en anglais que nous voulons traduire en français",
'Isto deve ir para o português.',
'Y esto al español']
Documentation resources
Translation task guide
Summarization task guide
Causal language modeling task guide
MarianConfig
[[autodoc]] MarianConfig
MarianTokenizer
[[autodoc]] MarianTokenizer
- build_inputs_with_special_tokens
MarianModel
[[autodoc]] MarianModel
- forward
MarianMTModel
[[autodoc]] MarianMTModel
- forward
MarianForCausalLM
[[autodoc]] MarianForCausalLM
- forward
TFMarianModel
[[autodoc]] TFMarianModel
- call
TFMarianMTModel
[[autodoc]] TFMarianMTModel
- call
FlaxMarianModel
[[autodoc]] FlaxMarianModel
- call
FlaxMarianMTModel
[[autodoc]] FlaxMarianMTModel
- call
Pegasus
DISCLAIMER: If you see something strange, file a Github Issue
and assign @patrickvonplaten.
Overview
The Pegasus model was proposed in PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019.
According to the abstract,
Pegasus' pretraining task is intentionally similar to summarization: important sentences are removed/masked from an
input document and are generated together as one output sequence from the remaining sentences, similar to an
extractive summary.
Pegasus achieves SOTA summarization performance on all 12 downstream tasks, as measured by ROUGE and human eval.
This model was contributed by sshleifer. The Authors' code can be found here.
Tips:
Sequence-to-sequence model with the same encoder-decoder model architecture as BART. Pegasus is pre-trained jointly on two self-supervised objective functions: Masked Language Modeling (MLM) and a novel summarization specific pretraining objective, called Gap Sentence Generation (GSG).
MLM: encoder input tokens are randomly replaced by a mask tokens and have to be predicted by the encoder (like in BERT)
GSG: whole encoder input sentences are replaced by a second mask token and fed to the decoder, but which has a causal mask to hide the future words like a regular auto-regressive transformer decoder.
Checkpoints
All the checkpoints are fine-tuned for summarization, besides
pegasus-large, which the other checkpoints are fine-tuned from:
Each checkpoint is 2.2 GB on disk and 568M parameters.
FP16 is not supported (help/ideas on this appreciated!).
Summarizing xsum in fp32 takes about 400ms/sample, with default parameters on a v100 GPU.
Full replication results and correctly pre-processed data can be found in this Issue.
Distilled checkpoints are described in this paper.
Examples
Script to fine-tune pegasus
on the XSUM dataset. Data download instructions at examples/pytorch/summarization/.
FP16 is not supported (help/ideas on this appreciated!).
The adafactor optimizer is recommended for pegasus fine-tuning.
Implementation Notes
All models are transformer encoder-decoders with 16 layers in each component.
The implementation is completely inherited from [BartForConditionalGeneration]
Some key configuration differences:
static, sinusoidal position embeddings
the model starts generating with pad_token_id (which has 0 token_embedding) as the prefix.
more beams are used (num_beams=8)
All pretrained pegasus checkpoints are the same besides three attributes: tokenizer.model_max_length (maximum
input size), max_length (the maximum number of tokens to generate) and length_penalty (a quick way to inspect these is shown after these notes).
The code to convert checkpoints trained in the author's repo can be
found in convert_pegasus_tf_to_pytorch.py.
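A small sketch that inspects those three attributes for a given checkpoint (shown here for the XSUM checkpoint; the values are read from the checkpoint's tokenizer and config files):

from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("google/pegasus-xsum")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-xsum")

print(tokenizer.model_max_length)  # maximum input size
print(config.max_length, config.length_penalty)  # generation defaults stored in the config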
Usage Example
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = "google/pegasus-xsum"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt").to(device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert (
tgt_text[0]
== "California's largest electricity provider has turned off power to hundreds of thousands of customers."
)
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
PegasusConfig
[[autodoc]] PegasusConfig
PegasusTokenizer
warning: add_tokens does not work at the moment.
[[autodoc]] PegasusTokenizer
PegasusTokenizerFast
[[autodoc]] PegasusTokenizerFast
PegasusModel
[[autodoc]] PegasusModel
- forward
PegasusForConditionalGeneration
[[autodoc]] PegasusForConditionalGeneration
- forward
PegasusForCausalLM
[[autodoc]] PegasusForCausalLM
- forward
TFPegasusModel
[[autodoc]] TFPegasusModel
- call
TFPegasusForConditionalGeneration
[[autodoc]] TFPegasusForConditionalGeneration
- call
FlaxPegasusModel
[[autodoc]] FlaxPegasusModel
- call
- encode
- decode
FlaxPegasusForConditionalGeneration
[[autodoc]] FlaxPegasusForConditionalGeneration
- call
- encode
- decode
X-MOD
Overview
The X-MOD model was proposed in Lifting the Curse of Multilinguality by Pre-training Modular Transformers by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe.
X-MOD extends multilingual masked language models like XLM-R to include language-specific modular components (language adapters) during pre-training. For fine-tuning, the language adapters in each transformer layer are frozen.
The abstract from the paper is the following:
Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.
Tips:
- X-MOD is similar to XLM-R, but a difference is that the input language needs to be specified so that the correct language adapter can be activated.
- The main models – base and large – have adapters for 81 languages.
This model was contributed by jvamvas.
The original code can be found here and the original documentation is found here.
Adapter Usage
Input language
There are two ways to specify the input language:
1. By setting a default language before using the model:
thon
from transformers import XmodModel
model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")
2. By explicitly passing the index of the language adapter for each sample:
thon
import torch
input_ids = torch.tensor(
[
[0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2],
[0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2],
]
)
lang_ids = torch.LongTensor(
[
0, # en_XX
8, # de_DE
]
)
output = model(input_ids, lang_ids=lang_ids)
Fine-tuning
The paper recommends freezing the embedding layer and the language adapters during fine-tuning. A method for doing this is provided:
thon
model.freeze_embeddings_and_language_adapters()
# Fine-tune the model
Cross-lingual transfer
After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language:
thon
model.set_default_language("de_DE")
# Evaluate the model on German examples
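Putting the pieces together, the following is a minimal sketch of fine-tuning on English data and then evaluating zero-shot on German. It assumes that the language-selection and adapter-freezing helpers shown above are also available on the task-specific head classes (e.g. [XmodForSequenceClassification]); the training loop itself is elided.
thon
import torch
from transformers import XLMRobertaTokenizer, XmodForSequenceClassification

# X-MOD reuses XLM-R's sentencepiece vocabulary
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XmodForSequenceClassification.from_pretrained("facebook/xmod-base", num_labels=2)

# Freeze the embeddings and all language adapters, as recommended in the paper
model.freeze_embeddings_and_language_adapters()

# Fine-tune with the English adapter active
model.set_default_language("en_XX")
# ... run your usual training loop here ...

# Zero-shot cross-lingual transfer: activate the German adapter and evaluate
model.set_default_language("de_DE")
inputs = tokenizer("Das ist ein Beispiel.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits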
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
XmodConfig
[[autodoc]] XmodConfig
XmodModel
[[autodoc]] XmodModel
- forward
XmodForCausalLM
[[autodoc]] XmodForCausalLM
- forward
XmodForMaskedLM
[[autodoc]] XmodForMaskedLM
- forward
XmodForSequenceClassification
[[autodoc]] XmodForSequenceClassification
- forward
XmodForMultipleChoice
[[autodoc]] XmodForMultipleChoice
- forward
XmodForTokenClassification
[[autodoc]] XmodForTokenClassification
- forward
XmodForQuestionAnswering
[[autodoc]] XmodForQuestionAnswering
- forward
I-BERT
Overview
The I-BERT model was proposed in I-BERT: Integer-only BERT Quantization by
Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It's a quantized version of RoBERTa running
inference up to four times faster.
The abstract from the paper is the following:
Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language
Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for
efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this,
previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot
efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM
processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes
the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for
nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT
inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using
RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to
the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0x for
INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has
been open-sourced.
This model was contributed by kssteven. The original code can be found here.
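Since I-BERT shares RoBERTa's architecture and tokenizer, it can be used as a drop-in replacement in standard pipelines. A minimal sketch, assuming the kssteven/ibert-roberta-base checkpoint:
thon
import torch
from transformers import AutoTokenizer, IBertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertForMaskedLM.from_pretrained("kssteven/ibert-roberta-base")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the most likely token for the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))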
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
IBertConfig
[[autodoc]] IBertConfig
IBertModel
[[autodoc]] IBertModel
- forward
IBertForMaskedLM
[[autodoc]] IBertForMaskedLM
- forward
IBertForSequenceClassification
[[autodoc]] IBertForSequenceClassification
- forward
IBertForMultipleChoice
[[autodoc]] IBertForMultipleChoice
- forward
IBertForTokenClassification
[[autodoc]] IBertForTokenClassification
- forward
IBertForQuestionAnswering
[[autodoc]] IBertForQuestionAnswering
- forward
Jukebox
Overview
The Jukebox model was proposed in Jukebox: A generative model for music
by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford,
Ilya Sutskever. It introduces a generative music model which can produce minute-long samples that can be conditioned on
an artist, genres, and lyrics.
The abstract from the paper is the following:
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
As shown in the following figure, Jukebox is made of 3 priors which are decoder-only models. They follow the architecture described in Generating Long Sequences with Sparse Transformers, modified to support longer context length.
First, an autoencoder is used to encode the text lyrics. Next, the first prior (also called top_prior) attends to the last hidden states extracted from the lyrics encoder. Each subsequent prior is linked to the previous one via an AudioConditioner module, which upsamples the outputs of the previous prior to raw tokens at a certain audio frames-per-second resolution.
Metadata such as artist, genre, and timing are passed to each prior, in the form of a start token and a positional embedding for the timing data. The hidden states are mapped to the closest codebook vector from the VQ-VAE in order to convert them to raw audio.
Tips:
- This model only supports inference. This is for a few reasons, mostly because training requires a prohibitive amount of memory. Feel free to open a PR and add what's missing to have a full integration with the Hugging Face Trainer!
- This model is very slow, and takes 8 hours to generate a minute of audio using the 5b top prior on a V100 GPU. To automatically handle the device on which the model should execute, use accelerate.
- Contrary to the paper, the order of the priors goes from 0 to 1 as it felt more intuitive: we sample starting from 0.
- Primed sampling (conditioning the sampling on raw audio) requires more memory than ancestral sampling and should be used with fp16 set to True.
This model was contributed by Arthur Zucker.
The original code can be found here.
JukeboxConfig
[[autodoc]] JukeboxConfig
JukeboxPriorConfig
[[autodoc]] JukeboxPriorConfig
JukeboxVQVAEConfig
[[autodoc]] JukeboxVQVAEConfig
JukeboxTokenizer
[[autodoc]] JukeboxTokenizer
- save_vocabulary
JukeboxModel
[[autodoc]] JukeboxModel
- ancestral_sample
- primed_sample
- continue_sample
- upsample
- _sample
JukeboxPrior
[[autodoc]] JukeboxPrior
- sample
- forward
JukeboxVQVAE
[[autodoc]] JukeboxVQVAE
- forward
- encode
- decode
Autoformer
Overview
The Autoformer model was proposed in Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.
The abstract from the paper is the following:
Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.
This model was contributed by elisim and kashif.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Check out the Autoformer blog post on the HuggingFace blog: Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)
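For reference, the following is a minimal sketch of a forward pass during training, loosely adapted from the model docstrings. The checkpoint name (huggingface/autoformer-tourism-monthly) and the pre-batched example data (hf-internal-testing/tourism-monthly-batch) are assumptions about available Hub artifacts, not part of this page:
thon
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoformerForPrediction

# Load a ready-made batch of monthly tourism time series (assumed example dataset)
file = hf_hub_download(
    repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
)
batch = torch.load(file)

model = AutoformerForPrediction.from_pretrained("huggingface/autoformer-tourism-monthly")

# During training, both past and future values are provided so the model returns a loss
outputs = model(
    past_values=batch["past_values"],
    past_time_features=batch["past_time_features"],
    past_observed_mask=batch["past_observed_mask"],
    static_categorical_features=batch["static_categorical_features"],
    future_values=batch["future_values"],
    future_time_features=batch["future_time_features"],
)
loss = outputs.loss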
AutoformerConfig
[[autodoc]] AutoformerConfig
AutoformerModel
[[autodoc]] AutoformerModel
- forward
AutoformerForPrediction
[[autodoc]] AutoformerForPrediction
- forward
XLM
Overview
The XLM model was proposed in Cross-lingual Language Model Pretraining by
Guillaume Lample, Alexis Conneau. It's a transformer pretrained using one of the following objectives:
a causal language modeling (CLM) objective (next token prediction),
a masked language modeling (MLM) objective (BERT-like), or
a translation language modeling (TLM) objective (an extension of BERT's MLM to multiple language inputs)
The abstract from the paper is the following:
Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding.
In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We
propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual
data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain
state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our
approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we
obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised
machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the
previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.
Tips:
XLM has many different checkpoints, which were trained using different objectives: CLM, MLM or TLM. Make sure to
select the correct objective for your task (e.g. MLM checkpoints are not suitable for generation).
XLM has multilingual checkpoints which leverage a specific lang parameter. Check out the multi-lingual page for more information.
A transformer model trained on several languages. There are three different types of training for this model and the library provides checkpoints for all of them:
Causal language modeling (CLM) which is the traditional autoregressive training (so this model could be in the previous section as well). One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages.
Masked language modeling (MLM) which is like RoBERTa. One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages, with dynamic masking of the tokens.
A combination of MLM and translation language modeling (TLM). This consists of concatenating a sentence in two different languages, with random masking. To predict one of the masked tokens, the model can use both the surrounding context in language 1 and the context given by language 2.
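To illustrate the language handling mentioned in the tips, here is a minimal sketch using the xlm-clm-enfr-1024 CLM checkpoint; the language embeddings are selected via the langs argument:
thon
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")

input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size of 1

# Map the language code to its id and build a langs tensor with the same shape as input_ids
language_id = tokenizer.lang2id["en"]
langs = torch.full_like(input_ids, language_id)

outputs = model(input_ids, langs=langs)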
This model was contributed by thomwolf. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
XLMConfig
[[autodoc]] XLMConfig
XLMTokenizer
[[autodoc]] XLMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XLM specific outputs
[[autodoc]] models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput
XLMModel
[[autodoc]] XLMModel
- forward
XLMWithLMHeadModel
[[autodoc]] XLMWithLMHeadModel
- forward
XLMForSequenceClassification
[[autodoc]] XLMForSequenceClassification
- forward
XLMForMultipleChoice
[[autodoc]] XLMForMultipleChoice
- forward
XLMForTokenClassification
[[autodoc]] XLMForTokenClassification
- forward
XLMForQuestionAnsweringSimple
[[autodoc]] XLMForQuestionAnsweringSimple
- forward
XLMForQuestionAnswering
[[autodoc]] XLMForQuestionAnswering
- forward
TFXLMModel
[[autodoc]] TFXLMModel
- call
TFXLMWithLMHeadModel
[[autodoc]] TFXLMWithLMHeadModel
- call
TFXLMForSequenceClassification
[[autodoc]] TFXLMForSequenceClassification
- call
TFXLMForMultipleChoice
[[autodoc]] TFXLMForMultipleChoice
- call
TFXLMForTokenClassification
[[autodoc]] TFXLMForTokenClassification
- call
TFXLMForQuestionAnsweringSimple
[[autodoc]] TFXLMForQuestionAnsweringSimple
- call
AltCLIP
Overview
The AltCLIP model was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP
(Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By switching CLIP's
text encoder with a pretrained multilingual text encoder, XLM-R, we could obtain very close performance to CLIP on almost all tasks, and extend the original CLIP's capabilities to tasks such as multilingual understanding.
The abstract from the paper is the following:
In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model.
Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained
multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of
teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art
performances on a bunch of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performances with
CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.
Usage
The usage of AltCLIP is very similar to that of CLIP; the difference is the text encoder. Note that we use bidirectional attention instead of causal attention
and we take the [CLS] token in XLM-R to represent the text embedding.
AltCLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image
classification. AltCLIP uses a ViT-like Transformer to get visual features and a bidirectional language model to get the text
features. Both the text and visual features are then projected to a latent space with identical dimension. The dot
product between the projected image and text features is then used as a similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors
also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder.
The [CLIPImageProcessor] can be used to resize (or rescale) and normalize images for the model.
The [AltCLIPProcessor] wraps a [CLIPImageProcessor] and a [XLMRobertaTokenizer] into a single instance to both
encode the text and prepare the images. The following example shows how to get the image-text similarity scores using
[AltCLIPProcessor] and [AltCLIPModel].
thon
from PIL import Image
import requests
from transformers import AltCLIPModel, AltCLIPProcessor
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
Tips:
This model is built on top of CLIPModel, so use it like the original CLIP.
This model was contributed by jongjyh.
AltCLIPConfig
[[autodoc]] AltCLIPConfig
- from_text_vision_configs
AltCLIPTextConfig
[[autodoc]] AltCLIPTextConfig
AltCLIPVisionConfig
[[autodoc]] AltCLIPVisionConfig
AltCLIPProcessor
[[autodoc]] AltCLIPProcessor
AltCLIPModel
[[autodoc]] AltCLIPModel
- forward
- get_text_features
- get_image_features
AltCLIPTextModel
[[autodoc]] AltCLIPTextModel
- forward
AltCLIPVisionModel
[[autodoc]] AltCLIPVisionModel
- forward
Convolutional Vision Transformer (CvT)
Overview
The CvT model was proposed in CvT: Introducing Convolutions to Vision Transformers by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs.
The abstract from the paper is the following:
We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT)
in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through
two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer
block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs)
to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention,
global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves
state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition,
performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on
ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7\% on the ImageNet-1k val set. Finally, our results show that the positional encoding,
a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks.
Tips:
CvT models are regular Vision Transformers, but trained with convolutions. They outperform the original model (ViT) when fine-tuned on ImageNet-1K and CIFAR-100.
You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace [ViTFeatureExtractor] by [AutoImageProcessor] and [ViTForImageClassification] by [CvtForImageClassification]).
The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
This model was contributed by anugunj. The original code can be found here.
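As mentioned in the tips, inference works just like ViT-based image classification. A minimal sketch, assuming the microsoft/cvt-13 checkpoint:
thon
from PIL import Image
import requests
from transformers import AutoImageProcessor, CvtForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# The checkpoint is trained on ImageNet-1k, so the predicted index maps to one of 1,000 classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])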
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT.
[CvtForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
CvtConfig
[[autodoc]] CvtConfig
CvtModel
[[autodoc]] CvtModel
- forward
CvtForImageClassification
[[autodoc]] CvtForImageClassification
- forward
TFCvtModel
[[autodoc]] TFCvtModel
- call
TFCvtForImageClassification
[[autodoc]] TFCvtForImageClassification
- call
GLPN
This is a recently introduced model so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes to fix in the future. If you see something strange, file a GitHub issue.
Overview
The GLPN model was proposed in Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
GLPN combines SegFormer's hierarchical mix-Transformer with a lightweight decoder for monocular depth estimation. The proposed decoder shows better performance than the previously proposed decoders, with considerably
less computational complexity.
The abstract from the paper is the following:
Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the development of convolutional neural networks. In this paper, we propose a novel structure and training strategy for monocular depth estimation to further improve the prediction accuracy of the network. We deploy a hierarchical transformer encoder to capture and convey the global context, and design a lightweight yet powerful decoder to generate an estimated depth map while considering local connectivity. By constructing connected paths between multi-scale local features and the global decoding stream with our proposed selective feature fusion module, the network can integrate both representations and recover fine details. In addition, the proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. Furthermore, we improve the depth-specific augmentation method by utilizing an important observation in depth estimation to enhance the model. Our network achieves state-of-the-art performance over the challenging depth dataset NYU Depth V2. Extensive experiments have been conducted to validate and show the effectiveness of the proposed approach. Finally, our model shows better generalisation ability and robustness than other comparative models.
Tips:
One can use [GLPNImageProcessor] to prepare images for the model.
Summary of the approach. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
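As noted in the tips, [GLPNImageProcessor] prepares images for the model and [GLPNForDepthEstimation] returns a predicted depth map. A minimal sketch, assuming the vinvino02/glpn-kitti checkpoint:
thon
import torch
import requests
from PIL import Image
from transformers import GLPNImageProcessor, GLPNForDepthEstimation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = GLPNImageProcessor.from_pretrained("vinvino02/glpn-kitti")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth

# Upsample the depth map back to the original image resolution
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False
).squeeze()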
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GLPN.
Demo notebooks for [GLPNForDepthEstimation] can be found here.
Monocular depth estimation task guide
GLPNConfig
[[autodoc]] GLPNConfig
GLPNFeatureExtractor
[[autodoc]] GLPNFeatureExtractor
- call
GLPNImageProcessor
[[autodoc]] GLPNImageProcessor
- preprocess
GLPNModel
[[autodoc]] GLPNModel
- forward
GLPNForDepthEstimation
[[autodoc]] GLPNForDepthEstimation
- forward
DPR
Overview
Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. It was
introduced in Dense Passage Retrieval for Open-Domain Question Answering by
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih.
The abstract from the paper is the following:
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional
sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can
be practically implemented using dense representations alone, where embeddings are learned from a small number of
questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets,
our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage
retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA
benchmarks.
This model was contributed by lhoestq. The original code can be found here.
Tips:
- DPR consists of three models (see the sketch after this list):
* Question encoder: encode questions as vectors
* Context encoder: encode contexts as vectors
* Reader: extract the answer to the questions from the retrieved contexts, along with a relevance score (high if the inferred span actually answers the question).
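The question and context encoders share the same usage pattern: tokenize, run a forward pass, and take the pooled output as the dense embedding. A minimal sketch, assuming the single-nq checkpoints:
thon
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

# Embed a question
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
question_emb = q_encoder(**q_tokenizer("Who wrote Hamlet?", return_tensors="pt")).pooler_output

# Embed a candidate passage
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
passage = "Hamlet is a tragedy written by William Shakespeare."
passage_emb = ctx_encoder(**ctx_tokenizer(passage, return_tensors="pt")).pooler_output

# Relevance is scored by the dot product between question and passage embeddings
score = (question_emb @ passage_emb.T).item()
The reader ([DPRReader]) then takes the question together with the retrieved passages and predicts answer spans and relevance scores.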
DPRConfig
[[autodoc]] DPRConfig
DPRContextEncoderTokenizer
[[autodoc]] DPRContextEncoderTokenizer
DPRContextEncoderTokenizerFast
[[autodoc]] DPRContextEncoderTokenizerFast
DPRQuestionEncoderTokenizer
[[autodoc]] DPRQuestionEncoderTokenizer
DPRQuestionEncoderTokenizerFast
[[autodoc]] DPRQuestionEncoderTokenizerFast
DPRReaderTokenizer
[[autodoc]] DPRReaderTokenizer
DPRReaderTokenizerFast
[[autodoc]] DPRReaderTokenizerFast
DPR specific outputs
[[autodoc]] models.dpr.modeling_dpr.DPRContextEncoderOutput
[[autodoc]] models.dpr.modeling_dpr.DPRQuestionEncoderOutput
[[autodoc]] models.dpr.modeling_dpr.DPRReaderOutput
DPRContextEncoder
[[autodoc]] DPRContextEncoder
- forward
DPRQuestionEncoder
[[autodoc]] DPRQuestionEncoder
- forward
DPRReader
[[autodoc]] DPRReader
- forward
TFDPRContextEncoder
[[autodoc]] TFDPRContextEncoder
- call
TFDPRQuestionEncoder
[[autodoc]] TFDPRQuestionEncoder
- call
TFDPRReader
[[autodoc]] TFDPRReader
- call