# RUDOLPH: One Hyper-Tasking Transformer Can be Creative as DALL-E and GPT-3 and Smart as CLIP
<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/RUDOLPH.png" width=60% border="2"/>
# Model Description
**RU**ssian **D**ecoder **O**n **L**anguage **P**icture **H**yper-tasking (**RUDOLPH**) **2.7B** is the largest text-image-text transformer, designed for easy fine-tuning on a range of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of Hyper-tasking Transformers.
*A hyper-tasking model is a generalized multi-tasking model, i.e., a model that can solve almost all tasks within its supported modalities, necessarily including mutual pairwise translations between modalities (two modalities in the case of RUDOLPH: images and Russian texts).*
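The pairwise modality translation above works because the decoder sees one flat token sequence: left text tokens, then discretized image tokens, then right text tokens. A minimal conceptual sketch of that layout follows; the segment lengths, PAD id, and helper names are illustrative assumptions, not the model's actual configuration:

```python
# Conceptual sketch of a text-image-text decoder sequence (RUDOLPH-style).
# Segment lengths and PAD id below are illustrative placeholders, not the
# model's real configuration.

LEFT_TEXT_LEN = 64   # slot for the input (left) text tokens
IMAGE_LEN = 256      # slot for discretized image tokens (e.g. VQ codebook ids)
RIGHT_TEXT_LEN = 64  # slot for the output (right) text tokens
PAD = 0              # assumed padding token id

def pad_segment(segment, length):
    """Right-pad a token segment to its fixed slot length."""
    if len(segment) > length:
        raise ValueError("segment longer than its slot")
    return segment + [PAD] * (length - len(segment))

def build_sequence(left_text, image, right_text):
    """Concatenate the three modality segments into one decoder input."""
    return (
        pad_segment(left_text, LEFT_TEXT_LEN)
        + pad_segment(image, IMAGE_LEN)
        + pad_segment(right_text, RIGHT_TEXT_LEN)
    )

# Text-to-image generation leaves the image slot empty for the model to fill;
# image captioning leaves the right-text slot empty instead.
seq = build_sequence([5, 6, 7], [], [])
```

Because every task is just a different pattern of filled and empty slots in the same sequence, one decoder can cover text2image, image2text, and text2text without architectural changes.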
The model was trained by the [Sber AI](https://github.com/sberbank-ai) team.
* Tasks: `text2image generation, self reranking, text ranking, image ranking, image2text generation, zero-shot image classification, text2text generation, text qa, math qa, image captioning, image generation, text recognition in the wild, visual qa, and so on`
* Language: `Russian`
* Type: `decoder`