
RuDOLPH-1.3B (Large)

RuDOLPH: One Hyper-Modal Transformer can be as creative as DALL-E and as smart as CLIP

The model was trained by the Sber AI and SberDevices teams.

  • Task: text2image generation; self-reranking; text ranking; image ranking; image2text generation; zero-shot image classification; text2text generation (see the usage sketch after this list)
  • Language: Russian
  • Type: decoder
  • Num Parameters: 1.3B
  • Training Data Volume: 119 million text-image pairs; 60 million text paragraphs
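
For illustration, here is a minimal text2image sketch. It assumes the companion rudolph package from the ai-forever/ru-dolph GitHub repository; the function names, arguments, and the '1.3B' checkpoint identifier follow that repository's examples and may differ between versions.

```python
# Minimal text2image sketch; all names below are assumptions based on the
# ai-forever/ru-dolph repository examples and may differ between versions.
import torch
from rudolph.model import get_rudolph_model
from rudolph.pipelines import generate_images
from rudolph import get_tokenizer, get_vae

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = get_rudolph_model('1.3B', fp16=True, device=device)  # assumed checkpoint id
tokenizer = get_tokenizer()
vae = get_vae().to(device)

# Russian prompt: "a drawing of a cat" (the model is trained on Russian text)
pil_images = generate_images('рисунок кота', tokenizer, model, vae, images_num=4)
```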

Model Description

Russian Diffusion On Language Picture Hyper-modality (RuDOLPH) 1.3B is the large version of a fast and light text-image-text transformer designed for quick and easy fine-tuning on a wide range of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of hyper-modality transformers.

(!!!) Hyper-modality means generalized multi-modality: for example, a model that consists of two multi-modal parts, text-to-image and image-to-text, becomes a text-and-image hyper-modal model.

Sparse Attention Mask

The primary proposed method is to modify the sparse transformer's attention mask so as to better control the multi-modalities and take them to the next level, "hyper-modality". This allows the model to compute transitions between modalities in both directions, unlike the similar DALL-E transformer, which uses only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, so that text is generated autoregressively conditioned on both the image and the left text.
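
To make the mask structure concrete, the sketch below builds a simplified attention mask over the hyper-modal sequence [left text | image | right text]. A plain causal mask already realizes both directions described above; the actual RuDOLPH mask additionally sparsifies the image-to-image block (e.g., with row/column patterns), which is omitted here for brevity.

```python
import torch

def hypermodal_causal_mask(n_left_text: int, n_image: int, n_right_text: int) -> torch.Tensor:
    """Simplified attention mask for the sequence [left text | image | right text].

    With a strictly causal (lower-triangular) mask, image tokens attend to the
    left text ("text to image"), and right-text tokens attend to both the image
    and the left text ("image to right text"). The real RuDOLPH mask further
    sparsifies the image-to-image block; a dense causal block stands in for it here.
    """
    n = n_left_text + n_image + n_right_text
    # True = this key position is visible to the query position.
    return torch.tril(torch.ones(n, n, dtype=torch.bool))

# Example: 64 left-text tokens, a 16x16 grid of image tokens, 64 right-text tokens.
mask = hypermodal_causal_mask(64, 256, 64)
print(mask.shape)  # torch.Size([384, 384])
```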

Figure: RuDOLPH 1.3B sparse attention masks (rudolph_masks_13b.png)

Authors