ai-forever committed
Commit 213baf5
1 Parent(s): 0fadc68

Update README.md

Files changed (1): README.md +19 -0
README.md CHANGED
@@ -3,3 +3,22 @@
  **Ru**ssian **D**iffusion **O**n **L**anguage **P**icture **H**yper-modality Transformer

  <img src="https://raw.githubusercontent.com/shonenkov/ru-dolph/master/pics/rudolph-generated.png?token=AHV2MCORXON3DROFL7FBBQ3B4CW4S" height="60" border="2"/>
+
+
+ The model was trained by the [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
+ * Task: `text2image generation`; `self reranking`; `text reranking`; `image reranking`; `image2text generation`; `zero-shot image classification`
+ * Language: `Russian`
+ * Type: `encoder-decoder`
+ * Num Parameters: `350M`
+ * Training Data Volume: `35 million text-image pairs`
+
+
+ # Model Description
+
+ RuDOLPH 350M is a fast and light text-image-text transformer (a 350M-parameter GPT-3-like model) designed for quick and easy fine-tuning on a wide range of tasks: from generating images from text descriptions and image classification to visual question answering and more. The model demonstrates the power of hyper-modal transformers.
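For orientation, here is a minimal loading sketch. It follows the companion [ru-dolph](https://github.com/shonenkov/ru-dolph) repository; the package and helper names (`rudolph`, `get_rudolph_model`, `rudalle.get_tokenizer`, `rudalle.get_vae`) are assumptions taken from that repo, not guarantees of this card, and may differ between releases.

```python
# Hedged sketch: all imports/names below are assumed from the companion
# github.com/shonenkov/ru-dolph repository and may change between releases.
import torch

from rudolph.model import get_rudolph_model  # assumed helper from the repo
from rudalle import get_tokenizer, get_vae   # tokenizers shared with ruDALL-E

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = get_rudolph_model('350M', fp16=True, device=device)  # 350M checkpoint
tokenizer = get_tokenizer()                  # text tokenizer
vae = get_vae(dwt=False).to(device)          # VQ image codec producing image tokens
```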
+
+ # Sparse Attention Mask
+
+ The primary proposed method is a modification of the sparse transformer's attention mask that gives better control over the modalities. It allows the model to compute transitions between modalities in both directions, unlike the similar DALL-E transformer, which supports only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, so that text is generated autoregressively, conditioned on the image, without attention to the left text.
+
+ <img src="https://raw.githubusercontent.com/shonenkov/ru-dolph/master/pics/attention_masks.png?token=AHV2MCP7BH3CQBAK74UVA7TB4CXQE" height="40" border="2"/>
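To make the mask geometry concrete, below is a minimal sketch (not the authors' code) of a dense causal approximation of the hyper-modal mask. The segment lengths are illustrative, and the sparse row/column patterns of the real mask are omitted; only the "right text does not attend to left text" constraint described above is shown.

```python
import torch

def hypermodal_attention_mask(l_text: int, image: int, r_text: int) -> torch.Tensor:
    """Boolean (seq, seq) mask; True means the query row may attend to that column.

    Sequence layout: [left text | image tokens | right text].
    """
    seq = l_text + image + r_text
    # Start from an ordinary autoregressive (lower-triangular) mask.
    mask = torch.tril(torch.ones(seq, seq, dtype=torch.bool))
    # "Image to right text": right-text tokens condition on the image and on
    # previously generated right text, but do NOT attend to the left text.
    mask[l_text + image:, :l_text] = False
    return mask

mask = hypermodal_attention_mask(l_text=64, image=256, r_text=64)
print(mask.shape)  # torch.Size([384, 384])
```

Blocking the left-text columns for the right-text rows is what lets a single sequence carry both transition directions: "text to image" uses the causal part of the mask, while "image to right text" generates text from the image condition alone.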