ai-forever committed
Commit bf14d65
1 Parent(s): 91ac3ef

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -2,7 +2,7 @@
 
 RuDOLPH: One Hyper-Modal Transformer can be as creative as DALL-E and as smart as CLIP
 
-<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/rudolph-generated.png" height="60" border="2"/>
+<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/rudolph-generated.png?token=GHSAT0AAAAAABQH6MSSZP3PVXVAVWPGGYHKYOYRU4A" height="60" border="2"/>
 
 
 Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
@@ -23,7 +23,7 @@ Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices]
 
 The primary proposed method is to modify the sparse transformer's attention mask for better control over multiple modalities, taking them to the next level with "hyper-modality". This lets the model compute transitions between modalities in both directions, unlike the similar DALL-E Transformer, which uses only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, enabling auto-regressive text generation conditioned on the image without attention to the left text (see the mask sketch after this diff).
 
-<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/attention_masks.png" height="40" border="2"/>
+<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/attention_masks.png?token=GHSAT0AAAAAABQH6MSTFSJ6ICVIJOU7S7OAYOYRVGQ" height="40" border="2"/>
 
 # Authors
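
To make the "image to right text" masking concrete, here is a minimal PyTorch sketch of a hyper-modal attention mask as described in the diffed paragraph. The token layout [left text | image | right text], the helper name `build_hypermodal_mask`, and the example lengths are illustrative assumptions, not the repository's actual implementation.

```python
# Minimal sketch (not RuDOLPH's actual code): a boolean attention mask for a
# sequence laid out as [left_text | image | right_text]. True = may attend.
import torch

def build_hypermodal_mask(n_left: int, n_img: int, n_right: int) -> torch.Tensor:
    n = n_left + n_img + n_right
    mask = torch.zeros(n, n, dtype=torch.bool)

    # Left text: standard causal attention ("text to image" starts here).
    mask[:n_left, :n_left] = torch.tril(torch.ones(n_left, n_left, dtype=torch.bool))

    # Image tokens: attend to all left text, plus causally to earlier image tokens.
    img = slice(n_left, n_left + n_img)
    mask[img, :n_left] = True
    mask[img, img] = torch.tril(torch.ones(n_img, n_img, dtype=torch.bool))

    # "Image to right text": right-text tokens attend to the image and causally
    # to earlier right-text tokens, but NOT to the left text.
    right = slice(n_left + n_img, n)
    mask[right, img] = True
    mask[right, right] = torch.tril(torch.ones(n_right, n_right, dtype=torch.bool))

    return mask

# Example: 4 left-text tokens, 6 image tokens, 4 right-text tokens.
print(build_hypermodal_mask(4, 6, 4).int())
```

Printing the mask shows the rightward extension the paragraph describes: the bottom-right block is causal over the right text and fully open toward the image columns, while its left-text columns stay zero, so caption generation is conditioned on the image alone.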