ai-forever committed
Commit 92613ad
1 Parent(s): bc40e9e

Update README.md

Files changed (1): README.md (+6, -1)
README.md CHANGED
@@ -23,4 +23,9 @@ RuDOLPH 350M is a fast and light text-image-text transformer (350M GPT-3) design
 
 The primary proposed method is to modify the sparse transformer's attention mask to better control the modalities and take them to the next level with "hyper-modality". It allows us to model transitions between modalities in both directions, unlike the similar DALL-E transformer, which uses only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, enabling autoregressive text generation conditioned on the image without attention to the left text.
 
- <img src="https://raw.githubusercontent.com/shonenkov/ru-dolph/master/pics/attention_masks.png?token=AHV2MCP7BH3CQBAK74UVA7TB4CXQE" height="40" border="2"/>
+ <img src="https://raw.githubusercontent.com/shonenkov/ru-dolph/master/pics/attention_masks.png?token=AHV2MCP7BH3CQBAK74UVA7TB4CXQE" height="40" border="2"/>
+
+ # Authors
+
+ + Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ + Michael Konstantinov: [Mishin Learning](https://t.me/mishin_learning), [Transformer Community](https://transformer.community/)
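
For intuition, the following is a minimal PyTorch sketch of the "hyper-modal" attention mask described in the diff above: a [left text | image | right text] sequence with a causal base mask, where the right-text block is cut off from the left text so that text is generated from the image alone. The segment lengths are placeholder assumptions, and the dense lower-triangular base stands in for the model's actual sparse image-attention patterns.

```python
import torch

# Placeholder segment lengths for illustration; not the real RuDOLPH sizes.
L_LEFT, L_IMG, L_RIGHT = 64, 256, 64
total = L_LEFT + L_IMG + L_RIGHT

# Causal (lower-triangular) base mask: mask[i, j] == True means
# token i may attend to token j.
mask = torch.tril(torch.ones(total, total, dtype=torch.bool))

# "Image to right text": right-text rows keep attention to the image block
# and to earlier right-text tokens, but their left-text columns are zeroed
# out, so right-text generation is conditioned on the image alone.
mask[L_LEFT + L_IMG:, :L_LEFT] = False
```

Under such a mask, the left-text and image blocks behave like a standard "text to image" DALL-E mask, while the extra right-text block adds the "image to right text" direction within the same forward pass.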