Update README.md #3
by Ekado · opened

README.md CHANGED
@@ -3,6 +3,14 @@ license: cc-by-nc-4.0
 tags:
 - text-to-video
 duplicated_from: diffusers/text-to-video-ms-1.7b-legacy
+datasets:
+- fka/awesome-chatgpt-prompts
+language:
+- ae
+metrics:
+- accuracy
+library_name: adapter-transformers
+pipeline_tag: text-to-video
 ---
 
 # Text-to-video-synthesis Model in Open Domain
@@ -128,5 +136,4 @@ The output mp4 file can be viewed by [VLC media player](https://www.videolan.org
 
 The training data includes [LAION5B](https://huggingface.co/datasets/laion/laion2B-en), [ImageNet](https://www.image-net.org/), [Webvid](https://m-bain.github.io/webvid-dataset/) and other public datasets. Image and video filtering is performed after pre-training such as aesthetic score, watermark score, and deduplication.
 
-_(Part of this model card has been taken from [here](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis))_
-
+_(Part of this model card has been taken from [here](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis))_
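For context, here is a minimal sketch (not part of the PR) of how the front-matter fields touched by the first hunk map onto `huggingface_hub`'s `ModelCardData`. The `license` value is taken from the hunk header, and the pre-existing `duplicated_from` key is left out because the PR passes it through unchanged:

```python
# Illustration only: the metadata keys from the first hunk, expressed with
# huggingface_hub's ModelCardData. Values are copied from the diff; `license`
# comes from the hunk header context line.
from huggingface_hub import ModelCardData

card_data = ModelCardData(
    license="cc-by-nc-4.0",
    tags=["text-to-video"],                      # pre-existing key, unchanged
    datasets=["fka/awesome-chatgpt-prompts"],    # added by this PR
    language=["ae"],                             # added by this PR
    metrics=["accuracy"],                        # added by this PR
    library_name="adapter-transformers",         # added by this PR
    pipeline_tag="text-to-video",                # added by this PR
)
print(card_data.to_yaml())  # serializes back to YAML front-matter keys
```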
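The second hunk's header quotes the README's note that the output mp4 file can be viewed in VLC media player. As background only, here is a minimal generation sketch assuming the standard `diffusers` `DiffusionPipeline` usage for the checkpoint named in `duplicated_from`; the snippet actually in the README may differ:

```python
# Hypothetical usage sketch (not taken from the diff): generate an mp4 with
# the checkpoint named in `duplicated_from`, then open the file in VLC.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "diffusers/text-to-video-ms-1.7b-legacy",  # repo id from `duplicated_from`
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

prompt = "An astronaut riding a horse"  # example prompt, not from the card
# The `.frames` layout varies across diffusers versions; here it is assumed to
# be a sequence of frames accepted directly by export_to_video.
video_frames = pipe(prompt, num_inference_steps=25).frames
video_path = export_to_video(video_frames)  # writes an .mp4 viewable in VLC
print(video_path)
```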