shonenkov committed
Commit ad95e5b
1 parent: 8616eeb

update weights

Files changed (2):
  1. README.md +4 -4
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -2,7 +2,7 @@
 
 RuDOLPH: One Hyper-Modal Transformer can be creative as DALL-E and smart as CLIP
 
- <img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/rudolph-generated.png?token=GHSAT0AAAAAABQH6MSSZP3PVXVAVWPGGYHKYOYRU4A" height="60" border="2"/>
+ <img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/rudolph-generated.png" height="60" border="2"/>
 
 Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
@@ -10,7 +10,7 @@ Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices]
 * Language: `Russian`
 * Type: `encoder-decoder`
 * Num Parameters: `350M`
- * Training Data Volume: `35 million text-image pairs`
+ * Training Data Volume: `156 million text-image pairs`
 
 # Model Description
@@ -23,9 +23,9 @@ Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices]
 
 The primary proposed method is to modify the sparse transformer's attention mask to better control multiple modalities and take them to the next level with "hyper-modality". It allows the model to handle transitions between modalities in both directions, unlike the similar DALL-E transformer, which uses only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right, enabling auto-regressive text generation conditioned on the image without attention to the left text.
 
- <img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/attention_masks.png?token=GHSAT0AAAAAABQH6MSTFSJ6ICVIJOU7S7OAYOYRVGQ" height="40" border="2"/>
+ <img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/attention_masks.png" height="40" border="2"/>
 
 # Authors
 
 + Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
- + Michael Konstantinov: [Mishin Learning](https://t.me/mishin_learning), [Transformer Community](https://transformer.community/)
+ + Michael Konstantinov: [Mishin Learning](https://t.me/mishin_learning), [Transformer Community](https://transformer.community/)
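The "image to right text" mask extension described in the README diff above is the core technical idea. Below is a minimal PyTorch sketch of such a mask, not the model's actual implementation: the token layout `[left text | image | right text]` and the segment sizes are illustrative assumptions, and the real model applies sparse (row/column) patterns inside the image block rather than the plain causal mask used here.

```python
# Toy illustration of a hyper-modal attention mask.
# Assumed layout: [left text | image | right text]. Attention is causal
# everywhere; image tokens may attend to left text (the classic
# "text to image" direction), while right-text tokens attend to the image
# but NOT to the left text (the "image to right text" direction).
import torch

def hyper_modal_mask(t_left: int, i: int, t_right: int) -> torch.Tensor:
    s = t_left + i + t_right
    mask = torch.tril(torch.ones(s, s, dtype=torch.bool))  # start fully causal
    # Cut "right text -> left text" attention: right-text rows must not see
    # left-text columns, so text there is generated from the image alone.
    mask[t_left + i:, :t_left] = False
    return mask  # True = attention allowed

if __name__ == "__main__":
    print(hyper_modal_mask(t_left=3, i=4, t_right=3).int())
```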
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:18cc11ff7e7911ad3e18948f51d39ba6440050815e284af9f9b14064f77b2440
+ oid sha256:60803e0119ce050d4e9a235ab574a213e490d27cef93d90f3e6dd7495adcf7e1
 size 707460385
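Since this commit replaces the LFS object behind `pytorch_model.bin` (same 707460385-byte size, new sha256 oid), one quick way to confirm you have the updated weights is to hash the downloaded file against the new oid. A small sketch, assuming the repo id `sberbank-ai/RuDOLPH-350M` (inferred from the README's Sber AI links; it is not stated in this commit):

```python
# Verify a downloaded checkpoint against the sha256 oid in the LFS pointer.
import hashlib

from huggingface_hub import hf_hub_download

# Repo id is an assumption based on the README; adjust to the actual model page.
path = hf_hub_download(repo_id="sberbank-ai/RuDOLPH-350M", filename="pytorch_model.bin")

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

expected = "60803e0119ce050d4e9a235ab574a213e490d27cef93d90f3e6dd7495adcf7e1"
assert h.hexdigest() == expected, "checksum mismatch: stale or corrupted download"
print("pytorch_model.bin matches the LFS oid from this commit")
```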