# RUDOLPH-2.7B (XL)

RUDOLPH: One Hyper-Tasking Transformer that can be as creative as DALL-E and as smart as CLIP

<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/RUDOLPH.png" height="60" border="2"/>

The model was trained by the [Sber AI](https://github.com/ai-forever) and [AIRI](https://airi.net) teams.  
* Tasks: `text2image generation`; `self reranking`; `text ranking`; `image ranking`; `image2text generation`; `zero-shot image classification`; `text2text generation`; `text-qa`; `math-qa`; `image captioning`; `image generation`; `text-in-the-wild`; `vqa`
* Language: `Russian`
* Type: `decoder`
* Num Parameters: `2.7B`
* Training Data Volume: `119 million text-image pairs; 60 million text paragraphs`
* Fine-tuning Data Volume: `43,334 text question-answer pairs; 100,000 math tasks; 85,000 text-image pairs (for captioning, generation); 85,759 visual question-answer pairs; 140,000 image-text pairs for text recognition`


# Model Description

**RU**ssian **D**ecoder **O**n **L**anguage **P**icture **H**yper-Tasking (RUDOLPH) 2.7B is the largest text-image-text transformer designed for easy fine-tuning on a variety of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of Hyper-modality Transformers.

*(!!!) Hyper-Tasking means generalized Multi-Tasking, i.e., a model that can solve almost any task within its supported modalities (two in the case of RUDOLPH: images and Russian text).

This is a fine-tuned version of the pre-trained RUDOLPH 2.7B model.
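
For orientation, here is a minimal loading sketch. The entry points follow the [sberbank-ai/ru-dolph](https://github.com/sberbank-ai/ru-dolph) repository, but the exact function names and signatures are assumptions that may differ between releases, so treat this as a sketch rather than a verified recipe:

```python
import torch

# Assumed API of the sberbank-ai/ru-dolph package; consult the repository
# for the exact, current entry points before running.
from rudolph.model import get_rudolph_model

device = 'cuda' if torch.cuda.is_available() else 'cpu'
# '2.7B' would select this checkpoint; fp16 halves GPU memory usage.
model = get_rudolph_model('2.7B', fp16=torch.cuda.is_available(), device=device)
```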

The model was prepared as a baseline for AI Journey 2022 (AIJ2) and fine-tuned on six tasks:

* Text QA – SberQuAD dataset.
* Math QA – DeepMind Mathematics Dataset.
* Captioning – COCO dataset.
* VQA – COCO dataset with prepared question set.
* Generation – COCO dataset.
* Text-in-the-wild – synthesized data.

# Sparse Attention Mask

The primary proposed method is a modification of the sparse transformer's attention mask that gives better control over multiple modalities and takes them to the next level with "hyper-modality". It lets the model handle transitions between modalities in both directions, unlike the similar DALL-E transformer, which supports only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, enabling autoregressive text generation conditioned on both the image and the left text.

![rudolph27b_masks.png](https://s3.amazonaws.com/moonup/production/uploads/1663662426135-5f91b1208a61a359f44e1851.png)
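
To make the layout concrete, here is a toy PyTorch sketch of such a mask. The row-and-column sparsity pattern for the image block and the helper name are illustrative assumptions, not the released implementation; the point is that a causal mask over the `[left text | image | right text]` layout already yields both the "text to image" and "image to right text" directions:

```python
import torch

def rudolph_style_mask(l_text: int, img_rows: int, img_cols: int, r_text: int) -> torch.Tensor:
    """Toy hyper-modality attention mask over [left text | image | right text].

    True = query position (row) may attend to key position (column).
    Illustrative assumptions: text blocks are dense causal, while the
    image-to-image block keeps only "row" attention (the previous
    img_cols tokens) and "column" attention (the same column in earlier
    rows), a DALL-E-style sparsity pattern.
    """
    image = img_rows * img_cols
    seq = l_text + image + r_text
    mask = torch.ones(seq, seq).tril().bool()  # causal base

    # Sparsify the image-to-image block.
    for i in range(image):                       # query image token
        for j in range(i):                       # earlier key image token
            in_row_window = (i - j) < img_cols   # within the last row of tokens
            same_column = (i - j) % img_cols == 0
            if not (in_row_window or same_column):
                mask[l_text + i, l_text + j] = False
    return mask

# Right-text rows (bottom of the matrix) still see the whole image and the
# left text: that is the "image to right text" direction.
print(rudolph_style_mask(l_text=2, img_rows=3, img_cols=3, r_text=2).int())
```

Because the mask is causal over the whole sequence, every right-text query row can attend to all image and left-text positions, which is exactly the conditioning needed to generate a caption autoregressively after the image.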

# Authors

+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)