RuDOLPH-350M (Medium)

Russian Diffusion On Language Picture Hyper-modality Transformer

The model was trained by the Sber AI and SberDevices teams.

  • Task: text2image generation; self reranking; text reranking; image reranking; image2text generation; zero-shot image classification
  • Language: Russian
  • Type: encoder-decoder
  • Num Parameters: 350M
  • Training Data Volume: 35 million text-image pairs

Model Description

RuDOLPH-350M is a fast and lightweight text-image-text transformer (a 350M-parameter GPT-3-like model) designed for quick and easy fine-tuning on a wide range of tasks: from generating images from text descriptions and image classification to visual question answering and more. The model demonstrates the power of Hyper-Modality Transformers.
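
For orientation, here is a minimal loading sketch. It assumes the companion `rudolph` and `rudalle` Python packages from the project repositories and helper names such as `get_rudolph_model`, `get_tokenizer`, and `get_vae`; these names are assumptions based on the related ru-dalle API and may differ between versions, so check the project repository for the exact usage.

```python
import torch

# Assumed helpers; verify against the project repository before use.
from rudolph.model import get_rudolph_model
from rudalle import get_tokenizer, get_vae

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load the 350M checkpoint (fp16 is typically used on GPU).
model = get_rudolph_model('350M', fp16=torch.cuda.is_available(), device=device)

# Text tokenizer and VQ-VAE image codebook shared with ru-dalle.
tokenizer = get_tokenizer()
vae = get_vae().to(device)
```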

Sparse Attention Mask

The primary proposed method is to modify the sparse transformer's attention mask to better control the multiple modalities. It allows the model to handle transitions between modalities in both directions, unlike the similar DALL-E transformer, which supports only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, enabling autoregressive text generation conditioned on the image without attention to the left text.
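
As a conceptual illustration of this mask layout, the sketch below builds a toy dense block mask for a `[left text | image | right text]` sequence following the description above: the image block attends to the left text, while the right-text block attends to the image but not to the left text. The block sizes and the fully causal pattern inside each block are simplifications, not the model's actual sparse layout.

```python
import torch

def rudolph_style_mask(n_left: int, n_img: int, n_right: int) -> torch.Tensor:
    """Toy attention mask for a [left text | image | right text] sequence.

    1 = may attend, 0 = masked. Conceptual only: the real model also uses
    sparse patterns inside the image block.
    """
    n = n_left + n_img + n_right
    mask = torch.zeros(n, n)

    left = slice(0, n_left)
    img = slice(n_left, n_left + n_img)
    right = slice(n_left + n_img, n)

    # Left text: plain causal self-attention.
    mask[left, left] = torch.tril(torch.ones(n_left, n_left))

    # Image tokens: full attention to the left text, causal over the image.
    mask[img, left] = 1
    mask[img, img] = torch.tril(torch.ones(n_img, n_img))

    # Right text ("image to right text"): attends to the image and causally
    # to itself, but not to the left text.
    mask[right, img] = 1
    mask[right, right] = torch.tril(torch.ones(n_right, n_right))

    return mask

print(rudolph_style_mask(3, 4, 3).int())
```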