---
license: apache-2.0
---

# Taiyi-vit-87M-D

- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)

## 简介 Brief Introduction

A ViT-base visual encoder for the English version of MAP (name tentative), specially pre-trained on COCO and VG.

## 模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 多模态 Multimodal | 太乙 Taiyi | 待定 TBD | 89M | 特殊预训练方法 D |

## 模型信息 Model Information

Based on the pre-trained clip-vit-base (patch 16, resolution 224x224), we introduce multimodal information through special pre-training tasks; "D" denotes this new pre-training method. For the special multimodal representations, we design several different training objectives in our paper. The pre-training datasets are MSCOCO and VG. Our code and the details of the pre-training tasks will be made publicly available upon paper acceptance.
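
Since the checkpoint is intended to serve as a visual encoder, a minimal sketch of extracting image features with `ViTModel` is shown below. This is an illustration rather than the authors' released code: the hub id `IDEA-CCNL/Taiyi-vit-87M-D` is assumed from this card's title, and the ImageNet classification head used in the Usage section below is simply dropped.

```python
import torch
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTModel

# Assumed hub id, taken from this card's title
ckpt = "IDEA-CCNL/Taiyi-vit-87M-D"
feature_extractor = ViTFeatureExtractor.from_pretrained(ckpt)
encoder = ViTModel.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

# [CLS] token hidden state as a single image embedding for multimodal use
image_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, hidden_size)
```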

### 下游任务 Performance

|                                      | CIFAR10 | ImageNet1k |
|--------------------------------------|:-------:|:----------:|
| clip-vit-base-patch16-224 (official) | 96.2    | 80.2       |
| Taiyi-vit-87M-D (local)              | 98.7    | 82.4       |

The local test settings are:

- learning rate = 2e-5
- batch size = 128
- num train epochs = 5
- weight decay = 0.01
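
As a rough illustration, these settings map onto a standard Hugging Face `TrainingArguments` configuration as sketched below. This is an assumption for readability only; the authors' actual fine-tuning script is not part of this card, and the output directory name is hypothetical.

```python
from transformers import TrainingArguments

# Sketch of the local test settings above; not the authors' released code.
training_args = TrainingArguments(
    output_dir="./taiyi-vit-87M-D-finetune",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    num_train_epochs=5,
    weight_decay=0.01,
)
```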

## 使用 Usage

```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

# The checkpoint id below is assumed from this card's title.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained("IDEA-CCNL/Taiyi-vit-87M-D")
model = ViTForImageClassification.from_pretrained("IDEA-CCNL/Taiyi-vit-87M-D")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
# Predicted class: Egyptian cat
```

## 引用 Citation

If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):

```text
@article{fengshenbang,
  author  = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
  title   = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal = {CoRR},
  volume  = {abs/2209.02970},
  year    = {2022}
}
```

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title        = {Fengshenbang-LM},
  author       = {IDEA-CCNL},
  year         = {2021},
  howpublished = {\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```