
RUDOLPH-2.7B (XL)

RUDOLPH: One Hyper-Tasking Transformer can be as creative as DALL-E and as smart as CLIP

The model was trained by the Sber AI and AIRI teams.

  • Tasks: text2image generation; self-reranking; text ranking; image ranking; image2text generation; zero-shot image classification; text2text generation; text QA; math QA; image captioning; image generation; text-in-the-wild; VQA
  • Language: Russian
  • Type: decoder
  • Num Parameters: 2.7B
  • Training Data Volume: 119 million text-image pairs; 60 million text paragraphs
  • Fine-tuning Data Volume: 43 334 text question-answer pairs; 100 000 math tasks; 85 000 text-image pairs (for captioning and generation); 85 759 visual question-answer pairs; 140 000 image-text pairs for text recognition

Model Description

RUssian Decoder On Language Picture Hyper-Tasking (RUDOLPH) 2.7B is the largest text-image-text transformer designed for easy fine-tuning on a wide range of tasks: from generating images from text descriptions and image classification to visual question answering and more. The model demonstrates the power of Hyper-modality Transformers.

*(!!!) Hyper-Tasking means generalized Multi-Tasking, i.e., a model that can solve almost any task within its supported modalities (two modalities in the case of RUDOLPH: images and Russian text).

This is a fine-tuned version of the pre-trained RuDOLPH 2.7B model.
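
For reference, here is a minimal sketch of fetching the fine-tuned checkpoint from the Hugging Face Hub. The checkpoint filename is an assumption (verify it against the repository's file list); the authors' rudolph Python package provides the full loading and generation utilities.

```python
# Minimal sketch: download this model card's checkpoint from the Hub.
# NOTE: "pytorch_model.bin" is an assumed filename; check the actual
# file list of the sberbank-ai/RUDOLPH-2.7B-FBC2 repository.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="sberbank-ai/RUDOLPH-2.7B-FBC2",
    filename="pytorch_model.bin",  # assumption, see note above
)
print(ckpt_path)  # local path to the cached file
```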

The model was prepared as a baseline for AI Journey 2022 (AIJ2) and fine-tuned on six tasks (a layout sketch follows the list):

  • Text QA – SberQuAD dataset.
  • Math QA – DeepMind Mathematics Dataset.
  • Captioning – COCO dataset.
  • VQA – COCO dataset with a prepared question set.
  • Generation – COCO dataset.
  • Text-in-the-wild – synthesized data.
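
To make the shared setup concrete, below is a hypothetical sketch of how these six tasks can all be packed into the single [left text | image | right text] sequence the decoder operates on. The pack helper and the prompt wording are illustrative assumptions, not the released training format.

```python
# Hypothetical illustration: every task fits the same
# [left text | image | right text] layout; unused slots stay empty.
# Prompts below are assumed examples, not the released format.
def pack(left_text: str, has_image: bool, right_text: str) -> dict:
    """Represent one training example in the shared hyper-tasking layout."""
    return {"left_text": left_text, "image": has_image, "right_text": right_text}

examples = [
    pack("Вопрос: ...", False, "Ответ: ..."),    # Text QA (question -> answer)
    pack("2 + 2 * 2 = ?", False, "6"),           # Math QA
    pack("", True, "Собака бежит по пляжу."),    # Captioning ("A dog runs on the beach.")
    pack("Что делает собака?", True, "Бежит."),  # VQA ("What is the dog doing?" -> "It runs.")
    pack("Собака бежит по пляжу.", True, ""),    # Image generation from a caption
    pack("", True, "ВЫХОД"),                     # Text-in-the-wild (reading "EXIT" off the image)
]
```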

Sparse Attention Mask

The primary proposed method is a modification of the sparse transformer's attention mask that takes multi-modality control up to the next level: "hyper-modality". It allows the model to compute transitions between modalities in both directions, unlike similar work such as the DALL-E transformer, which supports only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, so that text is generated autoregressively conditioned on both the image and the left text (a sketch follows the figure below).

(Figure: rudolph27b_masks.png, the sparse attention masks used by RUDOLPH 2.7B.)
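
As a rough illustration of the idea, the sketch below builds such a mask in PyTorch. It is a simplification under stated assumptions: the released model uses several DALL-E-style sparse patterns (row, column, convolutional) inside the image block, while here only a single row-band pattern is kept, and the sequence lengths are arbitrary.

```python
import torch

def rudolph_like_attention_mask(l_text: int, img_side: int, r_text: int,
                                row_sparse: bool = True) -> torch.Tensor:
    """Simplified sketch of a RUDOLPH-style attention mask over a
    [left text | image | right text] sequence; True means "may attend".
    Not the exact released mask (see rudolph27b_masks.png)."""
    img = img_side * img_side
    seq = l_text + img + r_text
    # A causal (lower-triangular) mask already yields both modality
    # transitions: image tokens see all left text ("text to image"),
    # and right-text tokens see the image plus the left text
    # ("image to right text"), generated autoregressively.
    mask = torch.tril(torch.ones(seq, seq)).bool()
    if row_sparse:
        # Sparsify inside the image block: each image token keeps only
        # the previous img_side image tokens (one image "row"); its
        # view of the left text is left untouched.
        for q in range(img):
            cutoff = q - img_side
            if cutoff > 0:
                mask[l_text + q, l_text:l_text + cutoff] = False
    return mask

# Example: 64 left-text tokens, a 16x16 image (256 tokens), 64 right-text tokens.
mask = rudolph_like_attention_mask(64, 16, 64)
print(mask.shape)  # torch.Size([384, 384])
```

With this single mask, text-to-image generation fills the image block conditioned on the left text, while captioning and VQA fill the right-text block conditioned on the image and the left text; the mask is what makes both directions possible in one decoder.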

Authors