|
# RuDOLPH-2.7B (XL) |
|
|
|
RuDOLPH: One Hyper-Modal Transformer can be as creative as DALL-E and as smart as CLIP
|
|
|
<img src="https://github.com/ai-forever/ru-dolph/blob/master/pics/RUDOLPH.png" height="60" border="2"/> |
|
|
|
The model was trained by the [Sber AI](https://github.com/ai-forever) and [AIRI](https://airi.net) teams.
|
* Task: `text2image generation`; `self reranking`; `text ranking`; `image ranking`; `image2text generation`; `zero-shot image classification`; `text2text generation`; `text-qa`; `math-qa`; `image captioning`; `image generation`; `text-in-the-wild`; `vqa`
|
* Language: `Russian` |
|
* Type: `decoder` |
|
* Num Parameters: `2.7B` |
|
* Training Data Volume: `119 million text-image pairs; 60 million text paragraphs; 43,334 text question-answer pairs; 100,000 math tasks; 85,000 text-image pairs (for captioning and generation); 85,759 visual question-answer pairs; 140,000 image-text pairs for text recognition`
|
|
|
|
|
# Model Description |
|
|
|
**Ru**ssian **D**iffusion **O**n **L**anguage **P**icture **H**yper-modality (RuDOLPH) 2.7B is a fast and light text-image-text transformer designed for quick and easy fine-tuning on a wide range of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of hyper-modality transformers.
|
|
|
*(!!!) Hyper-modality means generalized multi-modality: e.g., a model that consists of two multi-modal parts, text-to-image and image-to-text, becomes a text-and-image hyper-modal model.*
|
|
|
This is a fine-tuned version of the pre-trained RuDOLPH 2.7B model. |
|
|
|
The model was prepared as a baseline for AI Journey 2022 (AIJ2) and fine-tuned on six tasks:
|
|
|
* Text QA: SberQuAD dataset.

* Math QA: DeepMind Mathematics Dataset.

* Captioning: COCO dataset.

* VQA: COCO dataset with a prepared question set.

* Generation: COCO dataset.

* Text-in-the-wild: synthesized data.
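

Below is a minimal text2image usage sketch. It assumes the `rudolph` and `rudalle` packages from the [ai-forever/ru-dolph](https://github.com/ai-forever/ru-dolph) repository; the `'2.7B'` model key and the exact pipeline signatures are assumptions and may differ between releases, so check the repository README for the API of your installed version.

```python
# Hedged sketch only: function names follow the ai-forever/ru-dolph repo;
# the '2.7B' model key and keyword arguments are assumptions.
from rudalle import get_tokenizer, get_vae
from rudolph.model import get_rudolph_model
from rudolph.pipelines import generate_codebooks

device = 'cuda'
model = get_rudolph_model('2.7B', fp16=True, device=device)  # assumed key
tokenizer = get_tokenizer()
vae = get_vae(dwt=False).to(device)

text = 'старинный город'  # "an old town"
# Sample image-token codebooks autoregressively from the text prompt...
codebooks = generate_codebooks(text, tokenizer, model,
                               top_k=2048, top_p=0.975, images_num=4)
# ...then decode the discrete image tokens back to pixels with the VAE.
images = vae.decode(codebooks)
```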
|
|
|
# Sparse Attention Mask |
|
|
|
The primary proposed method is to modify the sparse transformer's attention mask to better control multiple modalities, taking them to the next level with "hyper-modality". It allows the model to compute transitions between modalities in both directions, unlike similar work such as the DALL-E transformer, which used only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, enabling auto-regressive text generation conditioned on both the image and the left text.
|
|
|
![rudolph27b_masks.png](https://s3.amazonaws.com/moonup/production/uploads/1663662426135-5f91b1208a61a359f44e1851.png) |
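

The same idea can be illustrated with a toy mask builder. The sketch below only reproduces the coarse structure shown above: fully causal rows everywhere, plus a crude sliding-row window inside the image block as a stand-in for the DALL-E-style per-layer row/column patterns (the real RuDOLPH masks differ in detail); all names here are illustrative.

```python
import torch

def hyper_modal_mask(l_text: int, img_side: int, r_text: int,
                     window_rows: int = 2) -> torch.Tensor:
    """Toy hyper-modal attention mask (True = query may attend to key).

    Sequence layout: [left text | image (img_side x img_side) | right text].
    Right-text rows remain fully causal, so generated text is conditioned on
    both the image and the left text: the "image to right text" direction.
    """
    img = img_side * img_side
    total = l_text + img + r_text
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))  # causal base

    # Sparsify image-to-image attention: each image token keeps all left-text
    # keys but only image keys from the last `window_rows` grid rows.
    for i in range(img):
        q = l_text + i                                    # query position
        lo = l_text + max(0, i - window_rows * img_side)  # earliest image key
        mask[q, l_text:lo] = False
    return mask

mask = hyper_modal_mask(l_text=64, img_side=16, r_text=64)
print(mask.shape)  # torch.Size([384, 384])
```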
|
|
|
# Authors |
|
|
|
+ Alex Shonenkov: [GitHub](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)